Proton upsets in LSI memories in space
NASA Technical Reports Server (NTRS)
Mcnulty, P. J.; Wyatt, R. C.; Filz, R. C.; Rothwell, P. L.; Farrell, G. E.
1980-01-01
Two types of large scale integrated dynamic random access memory devices were tested and found to be subject to soft errors when exposed to protons incident at energies between 18 and 130 MeV. These errors are shown to differ significantly from those induced in the same devices by alphas from an Am-241 source. There is considerable variation among devices in their sensitivity to proton-induced soft errors, even among devices of the same type. For protons incident at 130 MeV, the soft error cross sections measured in these experiments varied from 10^-8 to 10^-6 sq cm/proton. For individual devices, however, the soft error cross section consistently increased with beam energy from 18 to 130 MeV. Analysis indicates that the soft errors induced by energetic protons result from spallation interactions between the incident protons and the nuclei of the atoms comprising the device. Because energetic protons are the most numerous of both the galactic and solar cosmic rays and form the inner radiation belt, proton-induced soft errors have potentially serious implications for many electronic systems flown in space.
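As a rough illustration of how a measured per-device cross section translates into error counts, the Python sketch below multiplies a cross section by a proton fluence. The flux level and exposure time are illustrative assumptions, not values from the paper.

```python
# Rough estimate of proton-induced soft-error counts from a measured
# per-device cross section: expected upsets = cross_section * fluence.
# Flux and exposure time below are illustrative assumptions only.

def expected_upsets(cross_section_cm2, flux_p_per_cm2_s, seconds):
    """Expected soft errors for one device under a constant proton flux."""
    fluence = flux_p_per_cm2_s * seconds      # protons/cm^2
    return cross_section_cm2 * fluence        # expected upset count

if __name__ == "__main__":
    for sigma in (1e-8, 1e-6):                # spread measured at 130 MeV
        n = expected_upsets(sigma, flux_p_per_cm2_s=1e5, seconds=3600)
        print(f"sigma = {sigma:.0e} cm^2/proton -> {n:.2f} upsets/hour")
```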
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
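The role of the critical charge in such simulations can be illustrated with a toy Monte Carlo: deposited energy is converted to collected charge and counted as an upset when it exceeds Qcrit. The energy spectrum, collection efficiency, and Qcrit values below are arbitrary assumptions, not PHITS-HyENEXSS output.

```python
# Toy Monte Carlo for the critical-charge criterion used in soft-error-rate
# simulations: an event is an upset when the collected charge exceeds Qcrit.
# The deposited-energy distribution is an arbitrary stand-in.
import random

E_PER_PAIR_EV = 3.6        # mean energy per electron-hole pair in silicon (eV)
Q_ELECTRON_FC = 1.602e-4   # elementary charge in femtocoulombs

def collected_charge_fc(energy_mev, collection_eff=1.0):
    pairs = energy_mev * 1e6 / E_PER_PAIR_EV
    return pairs * Q_ELECTRON_FC * collection_eff

def upset_fraction(qcrit_fc, n_events=100_000, seed=1):
    rng = random.Random(seed)
    upsets = 0
    for _ in range(n_events):
        e_dep = rng.expovariate(1 / 0.5)      # assumed spectrum, mean 0.5 MeV
        if collected_charge_fc(e_dep) > qcrit_fc:
            upsets += 1
    return upsets / n_events

for qcrit in (20.0, 5.0, 1.0):                # shrinking critical charge
    print(f"Qcrit = {qcrit:4.1f} fC -> upset fraction {upset_fraction(qcrit):.3f}")
```

With the same assumed spectrum, the upset fraction grows as the critical charge shrinks, which is the qualitative trend the abstract attributes to secondary He and H ions.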
NASA Astrophysics Data System (ADS)
Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi
2016-08-01
This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation and the results are then analyzed by Monte Carlo based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decrease in response to an increase in BOX thickness for SOTB, while SERs in UTBB are independent of BOX thickness. We also find that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, Y., E-mail: watanabe@aees.kyushu-u.ac.jp; Abe, S.
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are becoming ever more frequent due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot well balance critical-path delay, area, and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by embedding self-checking logic into the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95% of them; the remaining 5% enter a specific trap.
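A minimal software sketch of the detect-locate-repair idea behind duplicated stage registers is shown below; it is not the SRDP RTL, and the per-copy parity scheme is an illustrative assumption.

```python
# Minimal sketch of duplicated stage registers with per-copy parity:
# a mismatch between copies detects the error, parity locates the corrupted
# copy, and the good copy restores it (the fast recovery path).
def parity(word: int) -> int:
    return bin(word).count("1") & 1

class DuplicatedStageReg:
    def __init__(self, value: int = 0):
        self.write(value)

    def write(self, value: int):
        self.a = self.b = value
        self.pa = self.pb = parity(value)

    def read(self) -> int:
        if self.a == self.b:
            return self.a                      # no error detected
        if parity(self.a) != self.pa:          # copy A corrupted
            self.a, self.pa = self.b, self.pb
        else:                                  # copy B corrupted
            self.b, self.pb = self.a, self.pa
        return self.a

reg = DuplicatedStageReg(0b1011)
reg.a ^= 0b0100                                # inject a single-bit flip
assert reg.read() == 0b1011                    # detected, located, repaired
```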
Alpha particle-induced soft errors in microelectronic devices. I
NASA Astrophysics Data System (ADS)
Redman, D. J.; Sega, R. M.; Joseph, R.
1980-03-01
The article provides a tutorial review and trend assessment of the problem of alpha particle-induced soft errors in VLSI memories. Attention is given to an analysis of the design evolution of modern ICs, and the characteristics of alpha particles and their origin in IC packaging are reviewed. Finally, the process of an alpha particle penetrating an IC is examined.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. According to the fault tree analysis for the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
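The reliability figures named in the abstract follow from standard fault-tree arithmetic; the sketch below shows the kind of calculation a tool such as Isograph performs, with placeholder failure rates rather than measured Zynq-7010 values.

```python
# Illustrative fault-tree arithmetic: blocks combined by an OR gate (any
# block failure fails the system).  Rates and MTTR are placeholders.
failure_rates_per_hour = {
    "processing system":   2.0e-7,
    "on-chip memory":      5.0e-7,
    "programmable logic":  8.0e-7,
}
mttr_hours = 24.0                              # assumed mean time to repair

lam = sum(failure_rates_per_hour.values())     # OR gate: rates add
mttf = 1.0 / lam                               # mean time to failure
unavailability = lam * mttr_hours / (1.0 + lam * mttr_hours)

print(f"system failure rate: {lam:.2e} /h")
print(f"MTTF:                {mttf:,.0f} h")
print(f"unavailability:      {unavailability:.2e}")
```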
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs
NASA Astrophysics Data System (ADS)
Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.
1982-12-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.
Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment
NASA Astrophysics Data System (ADS)
Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.
2016-11-01
This paper reports five years of real-time soft error rate experimentation conducted with the same setup at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by the daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
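Real-time SER figures of this kind are conventionally expressed in FIT (failures per 10^9 device-hours), usually normalized per Mbit; the sketch below shows the arithmetic with made-up counts, not the experiment's data.

```python
# SER extraction from a real-time test: FIT = errors per 1e9 device-hours,
# here normalized per Mbit.  Counts below are placeholders.
def ser_fit_per_mbit(error_count, bits_under_test, hours):
    mbit_hours = hours * bits_under_test / 1e6
    return error_count / mbit_hours * 1e9

errors   = 100               # assumed number of observed upsets
bits     = 7e9               # ~7 Gbit of SRAM under test
duration = 3 * 365 * 24      # ~3 years at the mountain site
print(f"SER ≈ {ser_fit_per_mbit(errors, bits, duration):.0f} FIT/Mbit")
```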
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
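The recovery path can be pictured as re-executing an idempotent region until it completes without a flagged error; the sketch below is a conceptual stand-in for Clover's compiler-formed regions and acoustic sensors, not the actual system.

```python
# Conceptual sketch: recovery by re-executing an idempotent code region from
# its beginning whenever a (late) detector flags an error during the region.
import random

def run_region(region_fn, inputs, error_flagged):
    """Repeat the region until it finishes with no error flagged."""
    while True:
        result = region_fn(inputs)     # idempotent: safe to repeat
        if not error_flagged():
            return result              # commit only when clean

rng = random.Random(0)
flag = lambda: rng.random() < 0.3      # toy detector: 30% chance of a flag
square_all = lambda xs: [x * x for x in xs]
print(run_region(square_all, [1, 2, 3], flag))
```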
Justification of Estimates for Fiscal Year 1983 Submitted to Congress.
1982-02-01
hierarchies to aid software production; completion of the components of an adaptive suspension vehicle including a storage energy unit, hydraulics, laser...and corrosion (long storage times), and radiation-induced breakdown. Solid-lubricated main engine bearings for cruise missile engines would offer...environments will cause "soft error" (computational and memory storage errors) in advanced microelectronic circuits. Research on high-speed, low-power
Hubert, G; Regis, D; Cheminet, A; Gatti, M; Lacoste, V
2014-10-01
Particles originating from primary cosmic radiation that hit the Earth's atmosphere give rise to a complex field of secondary particles. These particles include neutrons, protons, muons, pions, etc. Since the 1980s it has been known that terrestrial cosmic rays can penetrate the natural shielding of buildings, equipment and circuit packages and induce soft errors in integrated circuits. Recently, research has shown that commercial static random access memories are now so small and sufficiently sensitive that single event upsets (SEUs) may be induced by the electronic stopping of a proton. With continued advancements in process size, this downward trend in sensitivity is expected to continue, and muon-induced soft errors have been predicted for nano-electronics. This paper describes these effects in specific cases such as neutron-, proton- and muon-induced SEUs observed in complementary metal-oxide semiconductor devices. The results allow the technology-node sensitivity to be investigated along the scaling trend. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
An Investigation into Soft Error Detection Efficiency at Operating System Level
Taheri, Hassan
2014-01-01
Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in the literature at hardware and software levels for their alleviation. However, these works rest on the basic assumption that the operating system is reliable, and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance. PMID:24574894
An investigation into soft error detection efficiency at operating system level.
Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan
2014-01-01
Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in the literature at hardware and software levels for their alleviation. However, these works rest on the basic assumption that the operating system is reliable, and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
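One generic way to detect and correct a single corrupted element in the dense matrices such calculations carry is a row/column checksum scheme; the sketch below illustrates that idea and is not necessarily the paper's mechanism.

```python
# Checksum-based detection and correction of a single corrupted matrix
# element (generic ABFT-style sketch, not necessarily the paper's scheme).
import numpy as np

def add_checksums(a):
    """Append a column of row sums and a row of column sums."""
    c = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
    c[:-1, :-1] = a
    c[-1, :-1] = a.sum(axis=0)
    c[:-1, -1] = a.sum(axis=1)
    c[-1, -1] = a.sum()
    return c

def detect_and_correct(c, tol=1e-8):
    """Locate a single bad element from checksum residuals and repair it."""
    row_res = c[:-1, :-1].sum(axis=1) - c[:-1, -1]
    col_res = c[:-1, :-1].sum(axis=0) - c[-1, :-1]
    bad_rows = np.where(np.abs(row_res) > tol)[0]
    bad_cols = np.where(np.abs(col_res) > tol)[0]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        i, j = bad_rows[0], bad_cols[0]
        c[i, j] -= row_res[i]          # subtract the residual to restore
    return c[:-1, :-1]

a = np.arange(9.0).reshape(3, 3)
protected = add_checksums(a)
protected[1, 2] += 7.0                 # simulate a silent bit-flip
assert np.allclose(detect_and_correct(protected), a)
```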
Design Techniques for Power-Aware Combinational Logic SER Mitigation
NASA Astrophysics Data System (ADS)
Mahatme, Nihaar N.
The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. Along with improved performance, speed and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing radiation particle strikes on reverse-biased semiconductor junctions. These radiation-induced errors at the terrestrial level are caused by particle strikes from (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88] and, more recently, muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment, radiation-induced errors are a much bigger threat and are mainly caused by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike the semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, they can cause an inversion of the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions that are part of combinational logic gates, they produce transient voltage spikes or glitches called single-event transients (SETs) that could be latched by receiving flip-flops. As circuits are clocked faster, there are more clocking edges, which increases the likelihood of latching these transients. In older technology generations, the probability of errors in flip-flops due to SETs being latched was much lower compared to direct strikes on flip-flops or SRAMs leading to SEUs. This was mainly because the operating frequencies were much lower for older technology generations. The Intel Pentium II, for example, was fabricated in 0.35 μm technology and operated between 200 and 330 MHz. With technology scaling, however, operating frequencies have increased tremendously and the contribution of soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02] [Bu97]. Therefore there is a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily states the blessing of technology scaling to be smaller and faster transistors, it fails to highlight that the power density increases exponentially with every technology generation. The power density problem was partially solved in the 1970's and 1980's by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Following this, however, technology miniaturization that enabled high-speed, multicore and parallel computing has steadily increased the power density and the power consumption problem.
Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, all-pervasive sensor networks and future eco-bio-sensors. Low power consumption is now regularly part of design philosophies for various digital products with diverse applications from computing to communication to healthcare. Thus designers today are left grappling with both a "power wall" and a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches are invariably saddled with overheads in terms of area or speed and, more importantly, power. Thus, the cost of protecting combinational logic through the use of power-hungry mitigation approaches can disrupt the power budget significantly. Therefore there is a strong need to develop techniques that provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogunmolu, O; Gans, N; Jiang, S
Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
Evaluating Application Resilience with XRay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sui; Bronevetsky, Greg; Li, Bin
2015-05-07
The rising count and shrinking feature size of transistors within modern computers are making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view the application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
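How many injections make a fault-injection campaign "statistically well-grounded" can be bounded with textbook binomial statistics; the sketch below is that generic bound, not XRay's internal selection algorithm.

```python
# Generic sample-size bound for fault injection: number of runs so that a
# measured failure probability has margin of error e at a given confidence.
import math

def injections_needed(margin, confidence=0.95, p=0.5):
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]  # two-sided z
    return math.ceil(z * z * p * (1.0 - p) / (margin * margin))

for e in (0.05, 0.02, 0.01):
    print(f"±{e:.0%} margin -> {injections_needed(e):>6} injections")
```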
A Quatro-Based 65-nm Flip-Flop Circuit for Soft-Error Resilience
NASA Astrophysics Data System (ADS)
Li, Y.-Q.; Wang, H.-B.; Liu, R.; Chen, L.; Nofal, I.; Shi, S.-T.; He, A.-L.; Guo, G.; Baeg, S. H.; Wen, S.-J.; Wong, R.; Chen, M.; Wu, Q.
2017-06-01
A flip-flop circuit hardened against soft errors is presented in this paper. This design is an improved version of Quatro with further enhanced soft-error resilience achieved by integrating the guard-gate technique. The proposed design, as well as reference Quatro and regular flip-flops, was implemented and manufactured in a 65-nm CMOS bulk technology. Experimental characterization of their alpha and heavy-ion soft-error rates verified the superior hardening performance of the proposed design over the other two circuits.
NASA Technical Reports Server (NTRS)
Tasca, D. M.
1981-01-01
Single event upset phenomena are discussed, taking into account cosmic ray induced errors in IIL microprocessors and logic devices, single event upsets in NMOS microprocessors, a prediction model for bipolar RAMs in a high energy ion/proton environment, the search for neutron-induced hard errors in VLSI structures, soft errors due to protons in the radiation belt, and the use of an ion microbeam to study single event upsets in microcircuits. Basic mechanisms in materials and devices are examined, giving attention to gamma induced noise in CCD's, the annealing of MOS capacitors, an analysis of photobleaching techniques for the radiation hardening of fiber optic data links, a hardened field insulator, the simulation of radiation damage in solids, and the manufacturing of radiation resistant optical fibers. Energy deposition and dosimetry is considered along with SGEMP/IEMP, radiation effects in devices, space radiation effects and spacecraft charging, EMP/SREMP, and aspects of fabrication, testing, and hardness assurance.
2012-01-01
Background Although proton radiotherapy is a promising new approach for cancer patients, functional interference is a concern for patients with implantable cardioverter defibrillators (ICDs). The purpose of this study was to clarify the influence of secondary neutrons induced by proton radiotherapy on ICDs. Methods The experimental set-up simulated proton radiotherapy for a patient with an ICD. Four new ICDs were placed 0.3 cm laterally and 3 cm distally outside the radiation field in order to evaluate the influence of secondary neutrons. The cumulative in-field radiation dose was 107 Gy over 10 sessions of irradiation with a dose rate of 2 Gy/min and a field size of 10 × 10 cm2. After each radiation fraction, interference with the ICD by the therapy was analyzed by an ICD programmer. The dose distributions of secondary neutrons were estimated by Monte-Carlo simulation. Results The frequency of the power-on reset, the most serious soft error, in which the programmed pacing mode changes temporarily to a safety back-up mode, was 1 per approximately 50 Gy. The total number of soft errors logged in all devices was 29, which was a rate of 1 soft error per approximately 15 Gy. No permanent device malfunctions were detected. The calculated dose of secondary neutrons per 1 Gy proton dose in the phantom was approximately 1.3-8.9 mSv/Gy. Conclusions With the present experimental settings, the rate was approximately 1 power-on reset per 50 Gy, which is below the dose level (60-80 Gy) generally used in proton radiotherapy. Further quantitative analysis in various settings is needed to establish guidelines regarding proton radiotherapy for cancer patients with ICDs. PMID:22284700
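The quoted figure of roughly one soft error per 15 Gy is consistent with averaging the 29 logged errors over the four devices and the 107 Gy course; the line below is a reconstruction under the assumption that every device was exposed for the whole course, not a calculation stated in the abstract.

```latex
% Assumed reconstruction: 29 soft errors, 4 ICDs, 107 Gy of in-field dose
\[
\frac{4 \times 107\,\mathrm{Gy}}{29\ \text{errors}}
  \approx 14.8\,\mathrm{Gy\ per\ error\ per\ device}
  \approx 15\,\mathrm{Gy}.
\]
```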
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes more efficient in presence of asymmetry. Combination of circuit-design and coding concepts expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets" due to ionizing radiation). Integrated circuit of new type made deliberately more susceptible to one kind of bit error than to other, and associated error-correcting code adapted to exploit this asymmetry in error probabilities.
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
A device for high-throughput monitoring of degradation in soft tissue samples.
Tzeranis, D S; Panagiotopoulos, I; Gkouma, S; Kanakaris, G; Georgiou, N; Vaindirlis, N; Vasileiou, G; Neidlin, M; Gkousioudi, A; Spitas, V; Macheras, G A; Alexopoulos, L G
2018-06-06
This work describes the design and validation of a novel device, the High-Throughput Degradation Monitoring Device (HDD), for monitoring the degradation of 24 soft tissue samples over incubation periods of several days inside a cell culture incubator. The device quantifies sample degradation by monitoring its deformation induced by a static gravity load. Initial instrument design and experimental protocol development focused on quantifying cartilage degeneration. Characterization of measurement errors, caused mainly by thermal transients and by translating the instrument sensor, demonstrated that HDD can quantify sample degradation with <6 μm precision and <10 μm temperature-induced errors. HDD capabilities were evaluated in a pilot study that monitored the degradation of fresh ex vivo human cartilage samples by collagenase solutions over three days. HDD could robustly resolve the effects of collagenase concentrations as small as 0.5 mg/ml. Careful sample preparation resulted in measurements that did not suffer from donor-to-donor variation (coefficient of variation <70%). Due to its unique combination of sample throughput, measurement precision, temporal sampling and experimental versatility, HDD provides a novel biomechanics-based experimental platform for quantifying the effects of proteins (cytokines, growth factors, enzymes, antibodies) or small molecules on the degradation of soft tissues or tissue engineering constructs. Thereby, HDD can complement established tools and in vitro models in important applications including drug screening and biomaterial development. Copyright © 2018 Elsevier Ltd. All rights reserved.
Monte Carlo simulation of particle-induced bit upsets
NASA Astrophysics Data System (ADS)
Wrobel, Frédéric; Touboul, Antoine; Vaillé, Jean-Roch; Boch, Jérôme; Saigné, Frédéric
2017-09-01
We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport the particles in the device, to calculate the energy deposited in the sensitive region of the device, and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with experiments on SRAMs irradiated with neutrons, protons and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Preisig, James C
2005-07-01
Equations are derived for analyzing the performance of channel estimate based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s²) of each equalizer. This error is decomposed into two components. These are the minimum achievable error (σ_0²) and the excess error (σ_e²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel estimate based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
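In symbols, the decomposition described in the abstract reads:

```latex
% sigma_0^2: soft-decision error with perfect knowledge of the channel
%            impulse response and interfering-noise statistics
% sigma_e^2: excess error due to channel-estimation errors
\[
\sigma_s^{2} \;=\; \sigma_0^{2} + \sigma_e^{2}.
\]
```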
Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.
Mehranian, Abolfazl; Zaidi, Habib
2015-04-01
Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology with great capabilities offered by TOF to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF both corrected for attenuation using reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors between non-TOF and TOF MRAC reconstructions were compared with their reference CT-based attenuation correction reconstructions. The bias was locally and globally evaluated using volumes of interest (VOIs) defined on lesions and normal tissues and CT-derived tissue classes containing all voxels in a given tissue, respectively. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and improves significantly the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David J; Mueller, Frank; Engelmann, Christian
Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the best-suited protocols for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications without requiring any modifications to the application source by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, in most cases causing a cascading pattern of corruption that spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
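The core of such a consistency protocol is a comparison (or, with triple redundancy, a majority vote) over the message payloads produced by the replicas; the toy sketch below illustrates the vote in plain Python and is not the RedMPI library or its actual protocols.

```python
# Toy replica-message voting: with three replicas, a corrupted payload is
# both detected and outvoted.  Plain Python stand-in, not RedMPI.
from collections import Counter

def vote(payloads):
    """Return (winning payload, indices of disagreeing replicas)."""
    winner, _ = Counter(payloads).most_common(1)[0]
    bad = [i for i, p in enumerate(payloads) if p != winner]
    return winner, bad

replicas = [b"\x01\x02\x03", b"\x01\x02\x03", b"\x01\xff\x03"]  # replica 2 corrupted
value, corrupted = vote(replicas)
print(value, "disagreeing replicas:", corrupted)
```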
Practicality of Evaluating Soft Errors in Commercial sub-90 nm CMOS for Space Applications
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; LaBel, Kenneth A.
2010-01-01
The purpose of this presentation is to: highlight the evolution of space memory evaluation; review recent developments regarding low-energy proton direct ionization soft errors; assess current space memory evaluation challenges, including the increase in non-volatile technology choices; and discuss related testing and evaluation complexities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, K.; Ohmi, K.; Tottori University Electronic Display Research Center, 101 Minami4-chome, Koyama-cho, Tottori-shi, Tottori 680-8551
With the increasing density of memory devices, the issue of soft errors generated by cosmic rays is becoming more and more serious. Therefore, the irradiation resistance of resistance random access memory (ReRAM) to cosmic radiation has to be elucidated for practical use. In this paper, we investigated the data retention characteristics of ReRAM with a Pt/NiO/ITO structure against ultraviolet irradiation. Soft errors were confirmed to be caused by ultraviolet irradiation in both low- and high-resistance states. An analysis of the wavelength dependence of light irradiation on the data retention characteristics suggested that electronic excitation from the valence to the conduction band and to the energy level generated due to the introduction of oxygen vacancies caused the errors. Based on statistically estimated soft error rates, the errors were suggested to be caused by the cohesion and dispersion of oxygen vacancies owing to the generation of electron-hole pairs and valence changes by the ultraviolet irradiation.
Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.
Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan
2018-05-21
This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. This method overcomes the problem of performance degradation in the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system is developed to validate the performance of the proposed method. Simulation and experimental results as well as comparison analyses demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
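The model-error test described above can be pictured as a chi-square gate on the Mahalanobis distance of the innovation, followed by inflation of the predicted covariance; the sketch below is a simplified stand-in, not the paper's full random-weighting/strong-tracking UKF.

```python
# Simplified sketch of Mahalanobis-distance model-error detection with
# covariance inflation (stand-in for the paper's full algorithm).
import numpy as np
from scipy.stats import chi2

def check_and_inflate(innovation, S, P_pred, alpha=0.05):
    """Inflate the predicted covariance when the innovation is inconsistent."""
    d2 = (innovation.T @ np.linalg.inv(S) @ innovation).item()  # Mahalanobis^2
    threshold = chi2.ppf(1.0 - alpha, df=innovation.size)
    if d2 > threshold:                       # contact-model error detected
        P_pred = (d2 / threshold) * P_pred   # heuristic scaling factor
    return P_pred

nu = np.array([[0.8], [-0.5]])               # innovation (toy values)
S = np.diag([0.04, 0.04])                    # innovation covariance
P = np.eye(3) * 0.01                         # predicted state covariance
print(check_and_inflate(nu, S, P))
```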
Using Digital Image Correlation to Characterize Local Strains on Vascular Tissue Specimens.
Zhou, Boran; Ravindran, Suraj; Ferdous, Jahid; Kidane, Addis; Sutton, Michael A; Shazly, Tarek
2016-01-24
Characterization of the mechanical behavior of biological and engineered soft tissues is a central component of fundamental biomedical research and product development. Stress-strain relationships are typically obtained from mechanical testing data to enable comparative assessment among samples and in some cases identification of constitutive mechanical properties. However, errors may be introduced through the use of average strain measures, as significant heterogeneity in the strain field may result from geometrical non-uniformity of the sample and stress concentrations induced by mounting/gripping of soft tissues within the test system. When strain field heterogeneity is significant, accurate assessment of the sample mechanical response requires measurement of local strains. This study demonstrates a novel biomechanical testing protocol for calculating local surface strains using a mechanical testing device coupled with a high resolution camera and a digital image correlation technique. A series of sample surface images are acquired and then analyzed to quantify the local surface strain of a vascular tissue specimen subjected to ramped uniaxial loading. This approach can improve accuracy in experimental vascular biomechanics and has potential for broader use among other native soft tissues, engineered soft tissues, and soft hydrogel/polymeric materials. In the video, we demonstrate how to set up the system components and perform a complete experiment on native vascular tissue.
CLEAR: Cross-Layer Exploration for Architecting Resilience
2017-03-01
benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the...core (OoO-core) [Wang 04], across 18 benchmarks . Such extensive exploration enables us to conclusively answer the above cross-layer resilience...analysis of the effects of soft errors on application benchmarks , provides a highly effective soft error resilience approach. 3. The above
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10
There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from a plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.
NASA Astrophysics Data System (ADS)
Celik, Cihangir
Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law, 1965, which states that the number of transistors in a given space will double every two years. Most available memory architectures today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. Contrary to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in the system operation. Decreasing the 10B content (20% of natural boron) in the natural boron of Borophosphosilicate glass (BPSG) layers that are conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B(n,alpha)7Li reaction products. Both of the particles produced have the capability of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner of the semiconductor industry can provide a new neutron detection system based on the SERs in the semiconductor memories. By investigating the soft error mechanisms in the available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert all memory-using intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers to the semiconductor memory architectures.
This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with the neutron flux and memory supply voltage. Measurement results show that soft errors in the memories increase proportionally with the neutron flux, whereas they decrease with increasing supply voltage. For this dissertation, NISC design considerations include the effects of device scaling, 10B content in the BPSG layer, incoming neutron energy, and the critical charge of the node. NISCSAT simulations were performed with various memory node models to account for these effects. Device scaling simulations showed that any further increase in the thickness of the BPSG layer beyond 2 μm causes self-shielding of the incoming neutrons due to the BPSG layer and results in lower detection efficiencies. Moreover, if the BPSG layer is located more than 4 μm away from the depletion region in the node, there are no soft errors in the node due to the fact that both of the reaction products have lower ranges in the silicon or any possible node layers. Calculation results regarding the critical charge indicated that the mean charge deposition of the reaction products in the sensitive volume of the node is about 15 fC. It is evident that the NISC design should have a memory architecture with a critical charge of 15 fC or less to obtain higher detection efficiencies. Moreover, the sensitive volume should be placed in close proximity to the BPSG layers so that its location would be within the range of the alpha and 7Li particles. Results showed that the distance between the BPSG layer and the sensitive volume should be less than 2 μm to increase the detection efficiency of the NISC. Incoming neutron energy was also investigated by simulations, and the results showed that the NISC neutron detection efficiency is related to the neutron cross-sections of the 10B(n,alpha)7Li reaction, e.g., the ratio of the thermal (0.0253 eV) to fast (2 MeV) neutron detection efficiencies is approximately 8000:1. Environmental conditions and their effects on the NISC performance were also studied in this research. Cosmic rays were modeled and simulated via NISCSAT to investigate the detection reliability of the NISC. Simulation results show that cosmic rays account for less than 2% of the soft errors for thermal neutron detection. On the other hand, fast neutron detection by the NISC, which already has a poor efficiency due to the low neutron cross-sections, becomes almost impossible at higher altitudes where the cosmic ray fluxes and their energies are higher. NISCSAT simulations regarding the soft error dependency of the NISC on temperature and electromagnetic fields show that there are no significant effects on the NISC detection efficiency. Furthermore, the detection efficiency of the NISC decreases with both air humidity and the use of moderators since the incoming neutrons scatter away before reaching the memory surface.
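As a quick plausibility check on the quoted 15 fC critical charge, the sketch below converts the energy of the dominant-branch 10B(n,alpha)7Li reaction products into collected charge using the ~3.6 eV-per-pair figure for silicon; it is an illustrative calculation, not NISCSAT output.

```python
# Deposited energy -> collected charge in silicon, as a plausibility check
# on the ~15 fC critical-charge figure (illustrative, not NISCSAT output).
E_PER_PAIR_EV = 3.6          # mean energy per electron-hole pair in Si (eV)
Q_E_FC = 1.602e-4            # elementary charge in femtocoulombs

def charge_fc(e_dep_mev):
    return e_dep_mev * 1e6 / E_PER_PAIR_EV * Q_E_FC

# Dominant-branch 10B(n,alpha)7Li product energies
for name, e_mev in (("alpha (1.47 MeV)", 1.47), ("7Li (0.84 MeV)", 0.84)):
    print(f"{name:>16}: full deposition ≈ {charge_fc(e_mev):.0f} fC")

print(f"15 fC corresponds to ≈ {15 / Q_E_FC * E_PER_PAIR_EV / 1e6:.2f} MeV deposited")
```

Full-energy deposition of either product comfortably exceeds 15 fC, so the 15 fC mean quoted for a thin sensitive volume (where only part of each track is collected) is plausible.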
Oliveira-Santos, Thiago; Klaeser, Bernd; Weitzel, Thilo; Krause, Thomas; Nolte, Lutz-Peter; Peterhans, Matthias; Weber, Stefan
2011-01-01
Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
Multi-Spectral Solar Telescope Array. II - Soft X-ray/EUV reflectivity of the multilayer mirrors
NASA Technical Reports Server (NTRS)
Barbee, Troy W., Jr.; Weed, J. W.; Hoover, Richard B.; Allen, Maxwell J.; Lindblom, Joakim F.; O'Neal, Ray H.; Kankelborg, Charles C.; Deforest, Craig E.; Paris, Elizabeth S.; Walker, Arthur B. C., Jr.
1991-01-01
The Multispectral Solar Telescope Array is a rocket-borne observatory which encompasses seven compact soft X-ray/EUV, multilayer-coated, and two compact far-UV, interference film-coated, Cassegrain and Ritchey-Chretien telescopes. Extensive measurements are presented on the efficiency and spectral bandpass of the X-ray/EUV telescopes. Attention is given to systematic errors and measurement errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana
The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during the fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device where the configuration is stored in Static RAM (SRAM), functional data are stored in dedicated Block RAM (BRAM), and functional state logic in Flip-Flops. Single Event Upsets (SEUs) due to the ionizing radiation of neutrons cause soft errors, unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and soft error repair, when possible, were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEUs can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of the tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) when irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST in the neighborhood of Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)
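The single-error-correction step that such ECC sensors perform can be illustrated with a toy Hamming(7,4) code; the FPGA's BRAM ECC is a wider SECDED code, so this is an analogy, not Xilinx's implementation.

```python
# Toy Hamming(7,4) encode/decode: a single flipped bit is located by the
# syndrome and repaired in place (analogy for BRAM ECC scrubbing).
def encode(d):                        # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3        # 0 means no error detected
    if pos:
        c[pos - 1] ^= 1               # repair the flipped bit in place
    return [c[2], c[4], c[5], c[6]], (pos or None)

word = [1, 0, 1, 1]
stored = encode(word)
stored[4] ^= 1                        # SEU flips one stored bit
data, fixed_position = decode(stored)
assert data == word and fixed_position == 5
```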
Utilization of robotic-arm assisted total knee arthroplasty for soft tissue protection.
Sultan, Assem A; Piuzzi, Nicolas; Khlopas, Anton; Chughtai, Morad; Sodhi, Nipun; Mont, Michael A
2017-12-01
Despite the well-established success of total knee arthroplasty (TKA), iatrogenic ligamentous and soft tissue injuries are infrequent, but potential complications that can have devastating impact on clinical outcomes. These injuries are often related to technical errors and excessive soft tissue manipulation, particularly during bony resections. Recently, robotic-arm assisted TKA was introduced and demonstrated promising results with potential technical advantages over manual surgery in implant positioning and mechanical accuracy. Furthermore, soft tissue protection is an additional potential advantage offered by these systems that can reduce inadvertent human technical errors encountered during standard manual resections. Therefore, due to the relative paucity of literature, we attempted to answer the following questions: 1) does robotic-arm assisted TKA offer a technical advantage that allows enhanced soft tissue protection? 2) What is the available evidence about soft tissue protection? Recently introduced models of robotic-arm assisted TKA systems with advanced technology showed promising clinical outcomes and soft tissue protection in the short- and mid-term follow-up with results comparable or superior to manual TKA. In this review, we attempted to explore this dimension of robotics in TKA and investigate the soft tissue related complications currently reported in the literature.
Activation and Protection of Dendritic Cells in the Prostate Cancer Environment
2010-10-01
median survival time (in days) ± standard error of the mean (SEM). Skin graft survived for 11.0±0.7 days in control group and 15.8±1.1 days in the...potentiates allogeneic skin graft rejection and induces syngeneic graft rejection. Transplantation. 1998;65:1436-1446. 6. Jemal A, Siegel R, Xu J, Ward E...injections of ETA receptor inhibitor BQ-123 (for 10 days). Skin graft is soft and viable. 22 P.I.: Georgi Guruli; Award # W81XWH-05-1-0181
Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for Sub-130 nm Technologies
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Thomas M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.
2010-01-01
We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.
Hard sphere perturbation theory for thermodynamics of soft-sphere model liquid
NASA Astrophysics Data System (ADS)
Mon, K. K.
2001-09-01
It is a long-standing consensus in the literature that hard sphere perturbation theory (HSPT) is not accurate for dense soft sphere model liquids interacting with repulsive r^-n pair potentials for small n. In this paper, we show that if the intrinsic error of HSPT for soft sphere model liquids is accounted for, then this is not completely true. We present results for n = 4, 6, 9, 12 which indicate that even first-order variational HSPT can provide free energy upper bounds to within a few percent at densities near freezing when corrected for the intrinsic error of the HSPT.
Full temperature single event upset characterization of two microprocessor technologies
NASA Technical Reports Server (NTRS)
Nichols, Donald K.; Coss, James R.; Smith, L. S.; Rax, Bernard; Huebner, Mark
1988-01-01
Data for the 9450 I3L bipolar microprocessor and the 80C86 CMOS/epi (vintage 1985) microprocessor are presented, showing single-event soft errors for the full MIL-SPEC temperature range of -55 to 125 C. These data show for the first time that the soft-error cross sections continue to decrease with decreasing temperature at subzero temperatures. The temperature dependence of the two parts, however, is very different.
New-Sum: A Novel Online ABFT Scheme For General Iterative Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram
Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
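The checksum idea behind ABFT-protected matrix-vector multiplication can be sketched in a few lines of NumPy. This is an illustrative reduction of the technique, not the paper's encoding scheme (which decouples checksum updates from the computation and targets iterative solvers): a column-checksum row is encoded while the matrix is trusted, and a mismatch between sum(y) and the checksum product flags a soft error.

```python
import numpy as np

def encode_checksum(A):
    """Column-checksum row c = 1^T A, computed while A is still trusted."""
    return A.sum(axis=0)

def checked_matvec(A, c, x, tol=1e-8):
    """y = A @ x with an ABFT consistency check: sum(y) must equal c @ x."""
    y = A @ x
    expected = c @ x
    if abs(y.sum() - expected) > tol * (abs(expected) + 1.0):
        raise RuntimeError("soft error detected: checksum mismatch")
    return y

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
x = rng.standard_normal(64)
c = encode_checksum(A)            # encoded before any fault occurs

checked_matvec(A, c, x)           # clean run passes silently
A[10, 20] += 5.0                  # inject a memory corruption (a simulated bit flip)
try:
    checked_matvec(A, c, x)       # sum(y) no longer matches c @ x
except RuntimeError as err:
    print(err)
```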
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.
2013-01-01
We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.
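To make the hard-decision versus soft-decision distinction concrete, the following sketch simulates BPSK over an AWGN channel and contrasts the traditional hard-decision BER statistic with soft decisions expressed as log-likelihood ratios. The modulation, noise level, and LLR form are illustrative assumptions, not the SDA's processing chain.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, ebn0_db = 100_000, 4.0
bits = rng.integers(0, 2, n_bits)
symbols = 1.0 - 2.0 * bits                      # BPSK mapping: 0 -> +1, 1 -> -1
noise_std = np.sqrt(1.0 / (2.0 * 10 ** (ebn0_db / 10.0)))
received = symbols + noise_std * rng.standard_normal(n_bits)

hard = (received < 0).astype(int)               # hard decisions
ber = np.mean(hard != bits)                     # the traditional BER test statistic

llr = 2.0 * received / noise_std ** 2           # soft decisions as log-likelihood ratios
print(f"hard-decision BER = {ber:.2e}")
print(f"mean soft-decision confidence |LLR| = {np.abs(llr).mean():.1f}")
```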
NASA Astrophysics Data System (ADS)
Yoshizawa, Masasumi; Nakamura, Yuuta; Ishiguro, Masataka; Moriya, Tadashi
2007-07-01
In this paper, we describe a method of compensating the attenuation of the ultrasound caused by soft tissue in the transducer vibration method for the measurement of the acoustic impedance of in vivo bone. In the in vivo measurement, the acoustic impedance of bone is measured through soft tissue; therefore, the amplitude of the ultrasound reflected from the bone is attenuated. This attenuation causes an error of the order of -20 to -30% when the acoustic impedance is determined from the measured signals. To compensate the attenuation, the attenuation coefficient and length of the soft tissue are measured by the transducer vibration method. In the experiment using a phantom, this method allows the measurement of the acoustic impedance typically with an error as small as -8 to 10%.
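A simple way to picture the compensation the authors describe is a round-trip exponential attenuation correction: the echo reflected from bone traverses the overlying soft tissue twice, so its amplitude is restored using the measured attenuation coefficient and path length. The exponential model and the parameter values below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def compensate_reflection(amplitude_measured, alpha_np_per_mm, tissue_len_mm):
    """Undo two-way soft-tissue attenuation of an echo reflected from bone.

    alpha_np_per_mm : attenuation coefficient of the overlying soft tissue (Np/mm)
    tissue_len_mm   : one-way path length through the soft tissue (mm)
    The echo crosses the tissue twice, hence the factor of 2 in the exponent.
    """
    return amplitude_measured * np.exp(2.0 * alpha_np_per_mm * tissue_len_mm)

# Example with invented values: a ~30% round-trip loss is restored before the
# acoustic impedance is computed from the reflected signal.
print(compensate_reflection(0.70, alpha_np_per_mm=0.0089, tissue_len_mm=20.0))
```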
On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A
NASA Astrophysics Data System (ADS)
Blake, J. B.; Mandel, R.
1987-02-01
The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module, used in the subsystem containing the RAMs, consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single bit and some double bit errors which occurred within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of the bit error within the page the entire contents of the memory is dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question is verified in order to determine if the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
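The flight software's check-sum scan can be pictured with a simple per-page additive checksum; the actual on-board algorithm is not specified in the abstract, so the modulo-256 sum below is only an illustrative stand-in. A single-bit SEU changes one byte and therefore the affected page's checksum, after which a full memory dump pinpoints the flipped bit.

```python
def page_checksums(memory, page_size=256):
    """Per-page modulo-256 checksums for a byte-addressable memory image."""
    return [sum(memory[i:i + page_size]) % 256
            for i in range(0, len(memory), page_size)]

memory = bytearray(48 * 1024)          # 48K bytes, as in the flight memory module
reference = page_checksums(memory)     # computed when the memory is loaded

memory[5000] ^= 0x10                   # simulate an SEU flipping one stored bit
current = page_checksums(memory)
bad_pages = [i for i, (a, b) in enumerate(zip(reference, current)) if a != b]
print(bad_pages)                       # the affected 256-byte page; a dump then locates the bit
```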
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei
2016-09-26
High-order modulation and demodulation technology can reconcile the frequency requirements of wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high-order modulation technology for a visual prosthesis, this work proposes a Reed-Solomon (RS) error correcting code (ECC) circuit based on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex to implement in hardware, an improved phase soft demodulation algorithm is put forward to reduce hardware complexity. Based on this new algorithm, an improved RS soft decoding method is then proposed. In this decoding method, a combination of the Chase algorithm and hard-decision decoding is used to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method to calculate symbol-level reliability from the product of bit reliabilities is derived, which reduces the number of test vectors in the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experimental results. In the MATLAB simulation, a model of the biological channel's attenuation properties is added to the ECC circuit. The data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm proposed in this paper saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experiments show that when demodulation errors occur with the wireless coils 3 cm apart, the system can correct them; the greater the distance, the higher the BER. A bit error rate analyzer was then used to measure the BER of the demodulation circuit and the RS ECC circuit at different coil separations. The experimental results show that the RS ECC circuit achieves about an order of magnitude lower BER than the demodulation circuit alone at the same coil separation, and therefore provides more reliable communication in the system. The improved phase soft demodulation algorithm and soft decoding algorithm proposed in this paper enable more reliable data communication than other demodulation systems and provide a useful reference for further study of visual prosthesis systems.
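The Chase-plus-hard-decoding idea can be sketched independently of the DAPSK front end: the least reliable bit positions are identified from the soft demodulator output, every flip pattern over those positions produces a candidate hard word, and each candidate is handed to an ordinary hard-decision RS decoder. The snippet below is a generic Chase-2 style test-vector generator in Python; the paper's symbol-level reliability (a product of bit reliabilities) and its reduced test-vector set are not reproduced.

```python
import itertools
import numpy as np

def chase_test_patterns(llrs, n_flip=2):
    """Chase-2 style test vectors: flip every subset of the n_flip least-reliable bits."""
    hard = (np.asarray(llrs) < 0).astype(int)
    weakest = np.argsort(np.abs(llrs))[:n_flip]        # least-reliable positions
    patterns = []
    for r in range(n_flip + 1):
        for subset in itertools.combinations(weakest, r):
            cand = hard.copy()
            cand[list(subset)] ^= 1                    # hypothesise these bits were wrong
            patterns.append(cand)
    return patterns

llrs = [3.1, -0.2, 2.4, 0.4, -1.8, 2.9, -2.2, 1.5]     # illustrative soft outputs
for p in chase_test_patterns(llrs):
    print(p)        # each candidate would be passed to the hard-decision RS decoder
```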
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error forward is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
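The "flow the error to the loop exit" insight can be illustrated with a toy example, which is not the PRESAGE compiler transformation itself: when an index is recomputed from scratch each iteration, a corrupted offset damages one access and then vanishes, whereas an induction-style offset carries the corruption forward to a single cheap check at loop exit.

```python
N, STRIDE, FLIP_AT, FLIP_MASK = 1000, 8, 500, 1 << 12

def final_offset_recomputed():
    """Offset recomputed from i each iteration: the corruption does not survive."""
    last = 0
    for i in range(N):
        off = i * STRIDE
        if i == FLIP_AT:
            off ^= FLIP_MASK          # simulated soft error in one address computation
        last = off                    # the next iteration recomputes from scratch
    return last

def final_offset_propagated():
    """Induction-style offset: the same corruption flows forward to the loop exit."""
    off = 0
    for i in range(N):
        if i == FLIP_AT:
            off ^= FLIP_MASK          # same soft error, now carried forward
        off += STRIDE
    return off - STRIDE

expected = (N - 1) * STRIDE
print(final_offset_recomputed() == expected)   # True: an exit check misses the corruption
print(final_offset_propagated() == expected)   # False: a single detector at loop exit fires
```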
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
Popoviç, M; Biessels, G J; Isaacson, R L; Gispen, W H
2001-08-01
Diabetes mellitus is associated with disturbances of cognitive functioning. The aim of this study was to examine cognitive functioning in diabetic rats using the 'Can test', a novel spatial/object learning and memory task, without the use of aversive stimuli. Rats were trained to select a single rewarded can from seven cans. Mild water deprivation provided the motivation to obtain the reward (0.3 ml of water). After 5 days of baseline training, in which the rewarded can was marked by its surface and position in an open field, the animals were divided into two groups. Diabetes was induced in one group, by an intravenous injection of streptozotocin. Retention of baseline training was tested at 2-weekly intervals for 10 weeks. Next, two adapted versions of the task were used, with 4 days of training in each version. The rewarded can was a soft-drink can with coloured print. In a 'simple visual task' the soft-drink can was placed among six white cans, whereas in a 'complex visual task' it was placed among six soft-drink cans from different brands with distinct prints. In diabetic rats the number of correct responses was lower and number of reference and working memory errors higher than in controls in the various versions of the test. Switches between tasks and increases in task complexity accentuated the performance deficits, which may reflect an inability of diabetic rats to adapt behavioural strategies to the demands of the tasks.
Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei
2018-04-20
This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process to obtain uniform optical power in the central optical zone of soft axially symmetric multifocal contact lenses (CL). The Z-shrinkage error along the Z (axial) axis of the anterior SM, corresponding to the anterior surface of a dry contact lens in the IM process, can be minimized by optimizing IM process parameters and then by compensating for additional (Add) powers in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters, including mold temperature, injection velocity, packing pressure, and cooling time, in 18 IM simulations based on an orthogonal array L18 (2^1 × 3^4). Then, based on the Z-shrinkage error from the IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results obtained from the IM process simulations and the optical simulations show that the new CL design with a 0.1 D increase in Add power has the shrinkage profile closest to the original anterior SM profile, with a 55% reduction in absolute Z-shrinkage error and more uniform power in the central zone than the other two cases. Moreover, actual IM experiments of SMs for casting soft multifocal CLs have been performed. The final product of wet CLs has been completed for the original design and the new design. Results of the optical performance have verified the improvement of the compensated CL design. The feasibility of this compensating method has been proven based on the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of soft multifocal CLs.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Shu, L.; Kasami, T.
1985-01-01
A cascade coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are evaluated. They seem to be quite suitable for satellite down-link error control.
Testing a Novel 3D Printed Radiographic Imaging Device for Use in Forensic Odontology.
Newcomb, Tara L; Bruhn, Ann M; Giles, Bridget; Garcia, Hector M; Diawara, Norou
2017-01-01
There are specific challenges related to forensic dental radiology and difficulties in aligning X-ray equipment to teeth of interest. Researchers used 3D printing to create a new device, the combined holding and aiming device (CHAD), to address the positioning limitations of current dental X-ray devices. Participants (N = 24) used the CHAD, soft dental wax, and a modified external aiming device (MEAD) to determine device preference, radiographer's efficiency, and technique errors. Each participant exposed six X-rays per device for a total of 432 X-rays scored. A significant difference was found at the 0.05 level between the three devices (p = 0.0015), with the MEAD having the least amount of total errors and soft dental wax taking the least amount of time. Total errors were highest when participants used soft dental wax-both the MEAD and the CHAD performed best overall. Further research in forensic dental radiology and use of holding devices is needed. © 2016 American Academy of Forensic Sciences.
Soft tissue deformation for surgical simulation: a position-based dynamics approach.
Camara, Mafalda; Mayer, Erik; Darzi, Ara; Pratt, Philip
2016-06-01
To assist the rehearsal and planning of robot-assisted partial nephrectomy, a real-time simulation platform is presented that allows surgeons to visualise and interact with rapidly constructed patient-specific biomechanical models of the anatomical regions of interest. Coupled to a framework for volumetric deformation, the platform furthermore simulates intracorporeal 2D ultrasound image acquisition, using preoperative imaging as the data source. This not only facilitates the planning of optimal transducer trajectories and viewpoints, but can also act as a validation context for manually operated freehand 3D acquisitions and reconstructions. The simulation platform was implemented within the GPU-accelerated NVIDIA FleX position-based dynamics framework. In order to validate the model and determine material properties and other simulation parameter values, a porcine kidney with embedded fiducial beads was CT-scanned and segmented. Acquisitions for the rest position and three different levels of probe-induced deformation were collected. Optimal values of the cluster stiffness coefficients were determined for a range of different particle radii, where the objective function comprised the mean distance error between real and simulated fiducial positions over the sequence of deformations. The mean fiducial error at each deformation stage was found to be compatible with the level of ultrasound probe calibration error typically observed in clinical practice. Furthermore, the simulation exhibited unconditional stability on account of its use of clustered shape-matching constraints. A novel position-based dynamics implementation of soft tissue deformation has been shown to facilitate several desirable simulation characteristics: real-time performance, unconditional stability, rapid model construction enabling patient-specific behaviour and accuracy with respect to reference CT images.
NASA Technical Reports Server (NTRS)
Simons, M.
1978-01-01
Radiation effects in MOS devices and circuits are considered along with radiation effects in materials, space radiation effects and spacecraft charging, SGEMP, IEMP, EMP, fabrication of radiation-hardened devices, radiation effects in bipolar devices and circuits, simulation, energy deposition, and dosimetry. Attention is given to the rapid anneal of radiation-induced silicon-sapphire interface charge trapping, cosmic ray induced errors in MOS memory cells, a simple model for predicting radiation effects in MOS devices, the response of MNOS capacitors to ionizing radiation at 80 K, trapping effects in irradiated and avalanche-injected MOS capacitors, inelastic interactions of electrons with polystyrene, the photoelectron spectral yields generated by monochromatic soft X radiation, and electron transport in reactor materials.
Gutiérrez, J. J.; Russell, James K.
2016-01-01
Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin's back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding acceleration every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm in the foam and 1.7 mm in the sprung mattress (p < 0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without backboard for foam and sprung, respectively (p < 0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces. PMID:27999808
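A schematic reconstruction of the two-accelerometer idea is shown below: each acceleration signal is double-integrated in the frequency domain over a 2-second window, and the mattress (back) displacement is subtracted from the chest displacement to obtain the sternal-spinal compression depth. The sinusoidal signal model and all parameter values are invented for illustration and do not reproduce the authors' algorithm.

```python
import numpy as np

def displacement_from_acceleration(acc, fs):
    """Frequency-domain double integration: D(f) = -A(f) / (2*pi*f)^2, DC bin dropped."""
    spec = np.fft.rfft(acc)
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / fs)
    disp_spec = np.zeros_like(spec)
    disp_spec[1:] = -spec[1:] / (2.0 * np.pi * freqs[1:]) ** 2
    return np.fft.irfft(disp_spec, len(acc))

fs, f_cc = 250.0, 2.0                        # sampling rate (Hz); 120 compressions/min
t = np.arange(0.0, 2.0, 1.0 / fs)            # 2-second analysis window
w = 2.0 * np.pi * f_cc
d_chest, d_mattress = 0.025, 0.0075          # sine amplitudes: 50 mm and 15 mm peak-to-peak travel
a_chest = -(w ** 2) * d_chest * np.sin(w * t)       # synthetic chest accelerometer signal
a_back = -(w ** 2) * d_mattress * np.sin(w * t)     # synthetic back accelerometer signal

sternal_spinal = (displacement_from_acceleration(a_chest, fs)
                  - displacement_from_acceleration(a_back, fs))
print(f"effective compression depth ~ {1000.0 * np.ptp(sternal_spinal):.1f} mm")   # ~35 mm, not 50 mm
```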
Resnick, C M; Dang, R R; Glick, S J; Padwa, B L
2017-03-01
Three-dimensional (3D) soft tissue prediction is replacing two-dimensional analysis in planning for orthognathic surgery. The accuracy of different computational models to predict soft tissue changes in 3D, however, is unclear. A retrospective pilot study was implemented to assess the accuracy of Dolphin 3D software in making these predictions. Seven patients who had a single-segment Le Fort I osteotomy and had preoperative (T0) and >6-month postoperative (T1) cone beam computed tomography (CBCT) scans and 3D photographs were included. The actual skeletal change was determined by subtracting the T0 from the T1 CBCT. 3D photographs were overlaid onto the T0 CBCT and virtual skeletal movements equivalent to the achieved repositioning were applied using the Dolphin 3D planner. A 3D soft tissue prediction (TP) was generated and differences between the TP and T1 images (error) were measured at 14 points and at the nasolabial angle. A mean linear prediction error of 2.91±2.16 mm was found. The mean error at the nasolabial angle was 8.1±5.6°. In conclusion, the ability to accurately predict 3D soft tissue changes after Le Fort I osteotomy using Dolphin 3D software is limited. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hashii, Haruko, E-mail: haruko@pmrc.tsukuba.ac.jp; Hashimoto, Takayuki; Okawa, Ayako
2013-03-01
Purpose: Radiation therapy for cancer may be required for patients with implantable cardiac devices. However, the influence of secondary neutrons or scattered irradiation from high-energy photons (≥10 MV) on implantable cardioverter-defibrillators (ICDs) is unclear. This study was performed to examine this issue in 2 ICD models. Methods and Materials: ICDs were positioned around a water phantom under conditions simulating clinical radiation therapy. The ICDs were not irradiated directly. A control ICD was positioned 140 cm from the irradiation isocenter. Fractional irradiation was performed with 18-MV and 10-MV photon beams to give cumulative in-field doses of 600 Gy and 1600 Gy, respectively. Errors were checked after each fraction. Soft errors were defined as severe (change to safety back-up mode), moderate (memory interference, no changes in device parameters), and minor (slight memory change, undetectable by computer). Results: Hard errors were not observed. For the older ICD model, the incidences of severe, moderate, and minor soft errors at 18 MV were 0.75, 0.5, and 0.83/50 Gy at the isocenter. The corresponding data for 10 MV were 0.094, 0.063, and 0/50 Gy. For the newer ICD model at 18 MV, these data were 0.083, 2.3, and 5.8/50 Gy. Moderate and minor errors occurred at 18 MV in control ICDs placed 140 cm from the isocenter. The error incidences were 0, 1, and 0/600 Gy at the isocenter for the newer model, and 0, 1, and 6/600 Gy for the older model. At 10 MV, no errors occurred in control ICDs. Conclusions: ICD errors occurred more frequently at 18 MV irradiation, which suggests that the errors were mainly caused by secondary neutrons. Soft errors of ICDs were observed with high energy photon beams, but most were not critical in the newer model. These errors may occur even when the device is far from the irradiation field.
Ghorbani, Mahdi; Salahshour, Fateme; Haghparast, Abbas; Knaup, Courtney
2014-01-01
Purpose The aim of this study is to compare the dose in various soft tissues in brachytherapy with photon emitting sources. Material and methods 103Pd, 125I, 169Yb, 192Ir brachytherapy sources were simulated with MCNPX Monte Carlo code, and their dose rate constant and radial dose function were compared with the published data. A spherical phantom with 50 cm radius was simulated and the dose at various radial distances in adipose tissue, breast tissue, 4-component soft tissue, brain (grey/white matter), muscle (skeletal), lung tissue, blood (whole), 9-component soft tissue, and water were calculated. The absolute dose and relative dose difference with respect to 9-component soft tissue was obtained for various materials, sources, and distances. Results There was good agreement between the dosimetric parameters of the sources and the published data. Adipose tissue, breast tissue, 4-component soft tissue, and water showed the greatest difference in dose relative to the dose to the 9-component soft tissue. The other soft tissues showed lower dose differences. The dose difference was also higher for 103Pd source than for 125I, 169Yb, and 192Ir sources. Furthermore, greater distances from the source had higher relative dose differences and the effect can be justified due to the change in photon spectrum (softening or hardening) as photons traverse the phantom material. Conclusions The ignorance of soft tissue characteristics (density, composition, etc.) by treatment planning systems incorporates a significant error in dose delivery to the patient in brachytherapy with photon sources. The error depends on the type of soft tissue, brachytherapy source, as well as the distance from the source. PMID:24790623
Accuracy of three-dimensional facial soft tissue simulation in post-traumatic zygoma reconstruction.
Li, P; Zhou, Z W; Ren, J Y; Zhang, Y; Tian, W D; Tang, W
2016-12-01
The aim of this study was to evaluate the accuracy of novel software (CMF-preCADS) for the prediction of soft tissue changes following repositioning surgery for zygomatic fractures. Twenty patients who had sustained an isolated zygomatic fracture accompanied by facial deformity and who were treated with repositioning surgery participated in this study. Cone beam computed tomography (CBCT) scans and three-dimensional (3D) stereophotographs were acquired preoperatively and postoperatively. The 3D skeletal model from the preoperative CBCT data was matched with the postoperative one, and the fractured zygomatic fragments were segmented and aligned to the postoperative position for prediction. Then, the predicted model was matched with the postoperative 3D stereophotograph for quantification of the simulation error. The mean absolute error in the zygomatic soft tissue region between the predicted model and the real one was 1.42±1.56 mm for all cases. The accuracy of the prediction (mean absolute error ≤2 mm) was 87%. In the subjective assessment it was found that the majority of evaluators considered the predicted model and the postoperative model to be 'very similar'. CMF-preCADS software can provide a realistic, accurate prediction of the facial soft tissue appearance after repositioning surgery for zygomatic fractures. The reliability of this software for other types of repositioning surgery for maxillofacial fractures should be validated in the future. Copyright © 2016. Published by Elsevier Ltd.
Grazing Incidence Wavefront Sensing and Verification of X-Ray Optics Performance
NASA Technical Reports Server (NTRS)
Saha, Timo T.; Rohrbach, Scott; Zhang, William W.
2011-01-01
Evaluation of interferometrically measured mirror metrology data and characterization of a telescope wavefront can be powerful tools in understanding of image characteristics of an x-ray optical system. In the development of soft x-ray telescope for the International X-Ray Observatory (IXO), we have developed new approaches to support the telescope development process. Interferometrically measuring the optical components over all relevant spatial frequencies can be used to evaluate and predict the performance of an x-ray telescope. Typically, the mirrors are measured using a mount that minimizes the mount and gravity induced errors. In the assembly and mounting process the shape of the mirror segments can dramatically change. We have developed wavefront sensing techniques suitable for the x-ray optical components to aid us in the characterization and evaluation of these changes. Hartmann sensing of a telescope and its components is a simple method that can be used to evaluate low order mirror surface errors and alignment errors. Phase retrieval techniques can also be used to assess and estimate the low order axial errors of the primary and secondary mirror segments. In this paper we describe the mathematical foundation of our Hartmann and phase retrieval sensing techniques. We show how these techniques can be used in the evaluation and performance prediction process of x-ray telescopes.
A device for characterising the mechanical properties of the plantar soft tissue of the foot.
Parker, D; Cooper, G; Pearson, S; Crofts, G; Howard, D; Busby, P; Nester, C
2015-11-01
The plantar soft tissue is a highly functional viscoelastic structure involved in transferring load to the human body during walking. A Soft Tissue Response Imaging Device was developed to apply a vertical compression to the plantar soft tissue whilst measuring the mechanical response via a combined load cell and ultrasound imaging arrangement. The device was assessed in terms of: accuracy of motion compared to input profiles; validation of the response measured for standard materials in compression; variability of force and displacement measures for consecutive compressive cycles; and implementation in vivo with five healthy participants. Static displacement displayed an average error of 0.04 mm (range of 15 mm), and static load displayed an average error of 0.15 N (range of 250 N). Validation tests showed acceptable agreement compared to a Hounsfield tensometer for both displacement (CMC > 0.99, RMSE > 0.18 mm) and load (CMC > 0.95, RMSE < 4.86 N). Device motion was highly repeatable for bench-top tests (ICC = 0.99) and participant trials (CMC = 1.00). Soft tissue response was found repeatable for intra (CMC > 0.98) and inter trials (CMC > 0.70). The device has been shown to be capable of implementing complex loading patterns similar to gait, and of capturing the compressive response of the plantar soft tissue for a range of loading conditions in vivo. Copyright © 2015. Published by Elsevier Ltd.
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that pixel ratios of low- to high-energy attenuation (R values) are a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.
ERIC Educational Resources Information Center
Kraemer, Helena Chmura; Thiemann, Sue
1989-01-01
Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…
Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR
NASA Astrophysics Data System (ADS)
Ma, Yongjun
The fermentation process is highly complex and non-linear, and many parameters cannot easily be measured directly on-line, so soft sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft sensor modeling of the fed-batch fermentation process. v-SVR is a novel type of learning machine; it can control the fitting accuracy and prediction error by adjusting the parameter v. An on-line training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR achieves a low error rate and good generalization when v is chosen appropriately.
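A minimal ν-SVR soft-sensor sketch, using scikit-learn's NuSVR on synthetic fermentation-like data, shows how the ν parameter and kernel regression are applied to estimate a hard-to-measure variable from easily measured ones. The input variables, target relationship, and hyperparameter values are illustrative assumptions, and the paper's on-line training algorithm is not reproduced.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

# Hypothetical soft-sensor data: easily measured process variables (temperature,
# pH, dissolved O2, feed rate) predicting a hard-to-measure product concentration.
rng = np.random.default_rng(42)
X = rng.uniform([30.0, 6.5, 20.0, 0.1], [38.0, 7.2, 60.0, 1.5], size=(200, 4))
y = (0.8 * X[:, 2] + 15.0 * X[:, 3] - 0.5 * (X[:, 0] - 34.0) ** 2
     + rng.normal(0.0, 1.0, 200))                       # synthetic target plus noise

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X[:150], y[:150])                             # offline training batches
pred = model.predict(X[150:])                           # "on-line" prediction phase
print(f"MAE on held-out batches: {mean_absolute_error(y[150:], pred):.2f}")
```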
NASA Astrophysics Data System (ADS)
Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.
2017-02-01
The increasing system complexity of FPGA-based hardware designs and shortening of time-to-market have motivated the adoption of new designing methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs, which can serve the needs of safety-critical applications that require both high performance and high reliability levels. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture with a small impact in the cross section (increasing up to 8 times), and still increasing the Mean Workload Between Failures (MWBF) of the system.
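A back-of-the-envelope calculation shows why MWBF can improve even though the cross section grows: MWBF scales with throughput divided by the failure rate (cross section times particle flux). Reusing the abstract's factors of 5,000x performance and 8x cross section with an arbitrary flux gives the ratio below; the absolute numbers are placeholders.

```python
# MWBF ~ throughput / (cross_section * flux); only the ratio between designs matters here.
flux = 1.0                                   # particles / (cm^2 s), arbitrary normalisation
sigma_sw, perf_sw = 1.0e-9, 1.0              # baseline processor-based design (placeholder units)
sigma_hls, perf_hls = 8.0 * sigma_sw, 5000.0 * perf_sw   # 8x cross section, 5000x throughput

mwbf_sw = perf_sw / (sigma_sw * flux)
mwbf_hls = perf_hls / (sigma_hls * flux)
print(f"MWBF gain of the HLS-based design: {mwbf_hls / mwbf_sw:.0f}x")   # 625x
```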
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
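Channels with memory of the kind CLEAN models are commonly represented by a two-state (Gilbert-Elliott) Markov chain: a "good" state with a low error probability and a "bad" state with a high one, with random transitions producing error bursts. The sketch below is a generic version with invented parameters, not CLEAN's Markov-chain programs.

```python
import numpy as np

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.2, ber_good=1e-5, ber_bad=0.05, seed=0):
    """Two-state Markov (Gilbert-Elliott) channel: returns a 0/1 error pattern."""
    rng = np.random.default_rng(seed)
    errors = np.empty(n_bits, dtype=np.uint8)
    bad = False
    for i in range(n_bits):
        bad = rng.random() < (1.0 - p_bg if bad else p_gb)   # state transition
        errors[i] = rng.random() < (ber_bad if bad else ber_good)
    return errors

pattern = gilbert_elliott(200_000)
print(f"average BER = {pattern.mean():.2e}")   # errors arrive in bursts, not uniformly
```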
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattarivand, Mike; Summers, Clare; Robar, James
Purpose: To evaluate the validity of using spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment with a 6 degrees of freedom (6D) couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to the treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around the tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients based on CBCT were a 3 mm shift or 3° rotation (in any of the 6D). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from spine and individual residual errors were calculated. Results: Residual errors were up to 0.5±2.4 mm. Using 3 mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in the sup-inf, ant-post, and roll directions, which had larger standard deviations. No correlation was found between tumor distance from spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers a quick pre-treatment patient alignment. However, bone matching based on spine is not reliable for aligning lung SBRT patients, who require soft tissue image registration from CBCT. Spine can be a poor surrogate for lung SBRT patient alignment even for proximal tumor volumes.
Alternative stitching method for massively parallel e-beam lithography
NASA Astrophysics Data System (ADS)
Brandt, Pieter; Tranquillin, Céline; Wieland, Marco; Bayle, Sébastien; Milléquant, Matthieu; Renault, Guillaume
2015-03-01
In this study a novel stitching method other than Soft Edge (SE) and Smart Boundary (SB) is introduced and benchmarked against SE. The method is based on locally enhanced Exposure Latitude without cost of throughput, making use of the fact that the two beams that pass through the stitching region can deposit up to 2x the nominal dose. The method requires a complex Proximity Effect Correction that takes a preset stitching dose profile into account. On a Metal clip at minimum half-pitch of 32 nm for MAPPER FLX 1200 tool specifications, the novel stitching method effectively mitigates Beam to Beam (B2B) position errors such that they do not induce increase in CD Uniformity (CDU). In other words, the same CDU can be realized inside the stitching region as outside the stitching region. For the SE method, the CDU inside is 0.3 nm higher than outside the stitching region. 5 nm direct overlay impact from B2B position errors cannot be reduced by a stitching strategy.
Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.
Felt, Wyatt; Chin, Khai Yi; Remy, C David
2017-09-01
This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.
Adaptive and Resilient Soft Tensegrity Robots.
Rieffel, John; Mouret, Jean-Baptiste
2018-04-17
Living organisms intertwine soft (e.g., muscle) and hard (e.g., bones) materials, giving them an intrinsic flexibility and resiliency often lacking in conventional rigid robots. The emerging field of soft robotics seeks to harness these same properties to create resilient machines. The nature of soft materials, however, presents considerable challenges to aspects of design, construction, and control-and up until now, the vast majority of gaits for soft robots have been hand-designed through empirical trial-and-error. This article describes an easy-to-assemble tensegrity-based soft robot capable of highly dynamic locomotive gaits and demonstrating structural and behavioral resilience in the face of physical damage. Enabling this is the use of a machine learning algorithm able to discover effective gaits with a minimal number of physical trials. These results lend further credence to soft-robotic approaches that seek to harness the interaction of complex material dynamics to generate a wealth of dynamical behaviors.
Pauné, J; Queiros, A; Quevedo, L; Neves, H; Lopes-Ferreira, D; González-Méijome, J M
2014-12-01
To evaluate the performance of two experimental contact lenses (CL) designed to induce relative peripheral myopic defocus in myopic eyes. Ten right eyes of 10 subjects were fitted with three different CL: a soft experimental lens (ExpSCL), a rigid gas permeable experimental lens (ExpRGP) and a standard RGP lens made of the same material (StdRGP). Central and peripheral refraction was measured using a Grand Seiko open-field autorefractometer across the central 60° of the horizontal visual field. Ocular aberrations were measured with a Hartmann-Shack aberrometer, and monocular contrast sensitivity function (CSF) was measured with a VCTS6500 without and with the three contact lenses. Both experimental lenses were able to increase significantly the relative peripheral myopic defocus up to -0.50 D in the nasal field and -1.00 D in the temporal field (p<0.05). The ExpRGP induced a significantly higher myopic defocus in the temporal field compared to the ExpSCL. ExpSCL induced significantly lower levels of spherical-like HOA than ExpRGP for the 5 mm pupil size (p<0.05). Both experimental lenses kept CSF within normal limits without any statistically significant change from baseline (p>0.05). The RGP lens design seems to be more effective at inducing a significant myopic change in the relative peripheral refractive error. Both lenses preserve good visual performance. The worsened optical quality observed in ExpRGP was due to increased coma-like and spherical-like HOA. However, no impact on the visual quality as measured by CSF was observed. Copyright © 2014 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement
Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian
2013-01-01
Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in phantom study was 2.5 mm (STD=1.1mm). The average robotic system error in super soft phantom was 1.3 mm (STD=0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm thus having larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD=0.21mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
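The orthogonal-components assumption used above amounts to combining the error sources in quadrature, so the needle-tissue interaction term is recovered by subtracting variances:

$$ e_{\text{due-to-insertion}} \approx \sqrt{e_{\text{overall}}^{2} - e_{\text{before-insertion}}^{2}} = \sqrt{2.5^{2} - 1.3^{2}} = \sqrt{4.56} \approx 2.1\ \text{mm}, $$

which matches the quoted 2.13 mm once the unrounded inputs are used.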
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
Khurana, Harpreet Kaur; Cho, Il Kyu; Shim, Jae Yong; Li, Qing X; Jun, Soojin
2008-02-13
Aspartame is a low-calorie sweetener commonly used in soft drinks; however, the maximum usage dose is limited by the U.S. Food and Drug Administration. Fourier transform infrared (FTIR) spectroscopy with an attenuated total reflectance sampling accessory and partial least-squares regression (PLS) was used for rapid determination of aspartame in soft drinks. On the basis of spectral characterization, the highest R² value, and the lowest PRESS value, the spectral region between 1600 and 1900 cm⁻¹ was selected for quantitative estimation of aspartame. The potential of FTIR spectroscopy for aspartame quantification was examined and validated against the conventional HPLC method. Using the FTIR method, the average aspartame contents in four selected carbonated diet soft drinks ranged from 0.43 to 0.50 mg/mL, with prediction errors ranging from 2.4 to 5.7% when compared with HPLC measurements. The developed method also showed a high degree of accuracy because real samples were used for calibration, thus minimizing potential interference errors. The FTIR method developed can be suitably used for routine quality control analysis of aspartame in the beverage-manufacturing sector.
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10⁻⁷ and 10⁻¹² for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC), in conjunction with equalization schemes. At a bit rate of 15 Gb/s, RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes used with a 6-tap finite impulse response (FIR) equalizer result in optical power savings of 3, 5, 7, 9.5, and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
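As a rough sketch of what a 2-bit soft-decision front-end produces compared with a hard-decision slicer, the snippet below quantizes noisy bipolar samples into four soft levels (a sign bit plus one reliability bit), the same resolution as the 2-bit flash ADC mentioned above. The reliability threshold and noise level are arbitrary assumptions for the example, not parameters from the paper.

```python
import numpy as np

def soft_quantize_2bit(samples, reliability_threshold=0.5):
    """Map noisy bipolar samples to 2-bit soft decisions.

    0 = strong 0, 1 = weak 0, 2 = weak 1, 3 = strong 1.
    """
    hard = (samples > 0).astype(int)                          # sign bit (hard decision)
    strong = (np.abs(samples) > reliability_threshold).astype(int)
    return np.where(hard == 1, 2 + strong, 1 - strong)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10)
rx = (2 * bits - 1) + rng.normal(0, 0.4, bits.size)           # BPSK-like samples plus noise
print(soft_quantize_2bit(rx))
```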
Estimating patient-specific soft-tissue properties in a TKA knee.
Ewing, Joseph A; Kaufman, Michelle K; Hutter, Erin E; Granger, Jeffrey F; Beal, Matthew D; Piazza, Stephen J; Siston, Robert A
2016-03-01
Surgical technique is one factor that has been identified as critical to success of total knee arthroplasty. Researchers have shown that computer simulations can aid in determining how decisions in the operating room generally affect post-operative outcomes. However, to use simulations to make clinically relevant predictions about knee forces and motions for a specific total knee patient, patient-specific models are needed. This study introduces a methodology for estimating knee soft-tissue properties of an individual total knee patient. A custom surgical navigation system and stability device were used to measure the force-displacement relationship of the knee. Soft-tissue properties were estimated using a parameter optimization that matched simulated tibiofemoral kinematics with experimental tibiofemoral kinematics. Simulations using optimized ligament properties had an average root mean square error of 3.5° across all tests while simulations using generic ligament properties taken from literature had an average root mean square error of 8.4°. Specimens showed large variability among ligament properties regardless of similarities in prosthetic component alignment and measured knee laxity. These results demonstrate the importance of soft-tissue properties in determining knee stability, and suggest that to make clinically relevant predictions of post-operative knee motions and forces using computer simulations, patient-specific soft-tissue properties are needed. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu
2008-10-01
Aiming at the influence of the round-grating dividing error and of the rolling wheel's eccentricity and surface shape errors, this paper presents a rolling-wheel-based amendment method that builds a composite error model including all of the above influence factors and then corrects the non-circular measurement angle error of the rolling wheel. Software simulation and experiments were carried out; the results indicate that the composite error amendment method can improve the diameter measurement accuracy of the rolling-wheel approach. It has wide application prospects for measurement accuracy requirements higher than 5 μm/m.
Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Bryant, Larry
2014-01-01
Command File Errors (CFEs) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for a more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System, composed of the hardware, software, and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.
Cutti, Andrea Giovanni; Cappello, Angelo; Davalli, Angelo
2006-01-01
Soft tissue artefact is the dominant error source for upper extremity motion analyses that use skin-mounted markers, especially in humeral axial rotation. A new in vivo technique is presented that is based on the definition of a humerus bone-embedded frame that is almost "artefact free" but influenced by the elbow orientation in the measurement of humeral axial rotation, and on an algorithm designed to resolve this kinematic coupling. The technique was validated in vivo in a study of six healthy subjects who performed five arm-movement tasks. For each task, the similarity between a gold standard pattern and the axial rotation pattern before and after the application of the compensation algorithm was evaluated in terms of explained variance, gain, phase and offset. In addition, the root mean square error between the patterns was used as a global similarity estimator. After the application, for four out of five tasks, the patterns were highly correlated, in phase, with almost equal gain and limited offset; the root mean square error decreased from the original 9 degrees to 3 degrees. The proposed technique appears to help compensate for the soft tissue artefact affecting axial rotation. A further development is also proposed to make the technique effective for the pure prono-supination task.
In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function
NASA Astrophysics Data System (ADS)
Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir
2018-03-01
We present results of in-flight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model with a half-power diameter of 1.2 arcmin is consistent with the image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.
Eisner, Brian H; Kambadakone, Avinash; Monga, Manoj; Anderson, James K; Thoreson, Andrew A; Lee, Hang; Dretler, Stephen P; Sahani, Dushyant V
2009-04-01
We determined the most accurate method of measuring urinary stones on computerized tomography. For the in vitro portion of the study 24 calculi, including 12 calcium oxalate monohydrate and 12 uric acid stones, that had been previously collected at our clinic were measured manually with hand calipers as the gold standard measurement. The calculi were then embedded into human kidney-sized potatoes and scanned using 64-slice multidetector computerized tomography. Computerized tomography measurements were performed at 4 window settings, including standard soft tissue windows (window width-320 and window length-50), standard bone windows (window width-1120 and window length-300), 5.13x magnified soft tissue windows and 5.13x magnified bone windows. Maximum stone dimensions were recorded. For the in vivo portion of the study 41 patients with distal ureteral stones who underwent noncontrast computerized tomography and subsequently spontaneously passed the stones were analyzed. All analyzed stones were 100% calcium oxalate monohydrate or mixed, calcium based stones. Stones were prospectively collected at the clinic and the largest diameter was measured with digital calipers as the gold standard. This was compared to computerized tomography measurements using 4.0x magnified soft tissue windows and 4.0x magnified bone windows. Statistical comparisons were performed using Pearson's correlation and paired t test. In the in vitro portion of the study the most accurate measurements were obtained using 5.13x magnified bone windows with a mean 0.13 mm difference from caliper measurement (p = 0.6). Measurements performed in the soft tissue window with and without magnification, and in the bone window without magnification were significantly different from hand caliper measurements (mean difference 1.2, 1.9 and 1.4 mm, p = 0.003, <0.001 and 0.0002, respectively). When comparing measurement errors between stones of different composition in vitro, the error for calcium oxalate calculi was significantly different from the gold standard for all methods except bone window settings with magnification. For uric acid calculi the measurement error was observed only in standard soft tissue window settings. In vivo 4.0x magnified bone windows was superior to 4.0x magnified soft tissue windows in measurement accuracy. Magnified bone window measurements were not statistically different from digital caliper measurements (mean underestimation vs digital caliper 0.3 mm, p = 0.4), while magnified soft tissue windows were statistically distinct (mean underestimation 1.4 mm, p = 0.001). In this study magnified bone windows were the most accurate method of stone measurements in vitro and in vivo. Therefore, we recommend the routine use of magnified bone windows for computerized tomography measurement of stones. In vitro the measurement error in calcium oxalate stones was greater than that in uric acid stones, suggesting that stone composition may be responsible for measurement inaccuracies.
Scaled CMOS Technology Reliability Users Guide
NASA Technical Reports Server (NTRS)
White, Mark
2010-01-01
The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm² for several scaled SDRAM generations is presented revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.
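The study's acceleration models and test data are not reproduced here. As a minimal sketch of one step in that kind of analysis — estimating the Weibull slope of a retention-time failure population — the snippet below fits a two-parameter Weibull distribution with SciPy to a synthetic placeholder sample (exponential data, which corresponds to a slope of β = 1 like the early-breakdown population described above).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder "failure times": exponential data corresponds to a Weibull slope of 1.
failure_times = rng.exponential(scale=100.0, size=500)

# Fit a two-parameter Weibull (location fixed at 0); c is the shape, i.e. the slope beta.
beta, loc, eta = stats.weibull_min.fit(failure_times, floc=0)
print(f"estimated Weibull slope beta = {beta:.2f}, characteristic life eta = {eta:.1f}")
```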
NASA Astrophysics Data System (ADS)
Xia, Yi
Fractures and the associated bone fragility induced by osteoporosis and osteopenia are a widespread health threat to current society. Early detection of fracture risk associated with bone quantity and quality is important for both the prevention and treatment of osteoporosis and its consequent complications. Quantitative ultrasound (QUS) is an engineering technology for monitoring the bone quantity and quality of humans on earth and of astronauts subjected to long duration microgravity. Factors currently limiting the acceptance of QUS technology involve precision, accuracy, single index and standardization. The objective of this study was to improve the accuracy and precision of an image-based QUS technique for non-invasive evaluation of trabecular bone quantity and quality by developing new techniques and understanding ultrasound/tissue interaction. Several new techniques have been developed in this dissertation study, including the automatic identification of an irregular region of interest (iROI) in bone, surface topology mapping (STM) and mean scattering spacing (MSS) estimation for evaluating trabecular bone structure. In vitro results have shown that (1) the inter- and intra-observer errors in QUS measurement were reduced two- to five-fold by iROI compared to previous results; (2) the accuracy of QUS parameters, e.g., ultrasound velocity (UV) through bone, was improved by 16% by STM; and (3) the average trabecular spacing can be estimated by the MSS technique (r² = 0.72, p < 0.01). The measurement errors of BUA and UV introduced by the soft tissue and cortical shells in vivo can be quantified by the developed foot model and a simplified cortical-trabecular-cortical sandwich model, which were verified by the experimental results. The mechanisms of the errors induced by the cortical and soft tissues were revealed by the model. With the developed new techniques and understanding of sound-tissue interaction, an in vivo clinical trial and a bed rest study were performed to evaluate the performance of QUS in clinical applications. It has been demonstrated that QUS has similar performance for in vivo bone density measurement compared to the current gold-standard method, i.e., DXA, while additional information is obtained by QUS for predicting fracture risk by monitoring bone quality. The developed QUS imaging technique can be used to assess bone quantity and quality with improved accuracy and precision.
Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.
Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C
2013-12-30
We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between the SD-FEC and the NLEs over various nonlinear transmission conditions was demonstrated by optimization of the NLE parameters.
2013-01-01
Objectives Health information technology (HIT) research findings suggest that new healthcare technologies could reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, on models and frameworks used to understand these new types of errors, on monitoring of such errors, and on methods that can be used to prevent them. More research will be needed to better understand and mitigate these types of errors. PMID:23882411
Moore, Stephanie N; Hawley, Gregory D; Smith, Emily N; Mignemi, Nicholas A; Ihejirika, Rivka C; Yuasa, Masato; Cates, Justin M M; Liu, Xulei; Schoenecker, Jonathan G
2016-01-01
Soft tissue calcification, including both dystrophic calcification and heterotopic ossification, may occur following injury. These lesions have variable fates as they are either resorbed or persist. Persistent soft tissue calcification may result in chronic inflammation and/or loss of function of that soft tissue. The molecular mechanisms that result in the development and maturation of calcifications are uncertain. As a result, directed therapies that prevent or resorb soft tissue calcifications remain largely unsuccessful. Animal models of post-traumatic soft tissue calcification that allow for cost-effective, serial analysis of an individual animal over time are necessary to derive and test novel therapies. We have determined that a cardiotoxin-induced injury of the muscles in the posterior compartment of the lower extremity represents a useful model in which soft tissue calcification develops remote from adjacent bones, thereby allowing for serial analysis by plain radiography. The purpose of the study was to design and validate a method for quantifying soft tissue calcifications in mice longitudinally using plain radiographic techniques and an ordinal scoring system. Muscle injury was induced by injecting cardiotoxin into the posterior compartment of the lower extremity in mice susceptible to developing soft tissue calcification. Seven days following injury, radiographs were obtained under anesthesia. Multiple researchers applied methods designed to standardize post-image processing of digital radiographs (N = 4) and quantify soft tissue calcification (N = 6) in these images using an ordinal scoring system. Inter- and intra-observer agreement for both post-image processing and the scoring system used was assessed using weighted kappa statistics. Soft tissue calcification quantifications by the ordinal scale were compared to mineral volume measurements (threshold 450.7mgHA/cm3) determined by μCT. Finally, sample-size calculations necessary to discriminate between a 25%, 50%, 75%, and 100% difference in STiCSS score 7 days following burn/CTX induced muscle injury were determined. Precision analysis demonstrated substantial to good agreement for both post-image processing (κ = 0.73 to 0.90) and scoring (κ = 0.88 to 0.93), with low inter- and intra-observer variability. Additionally, there was a strong correlation in quantification of soft tissue calcification between the ordinal system and by mineral volume quantification by μCT (Spearman r = 0.83 to 0.89). The ordinal scoring system reliably quantified soft tissue calcification in a burn/CTX-induced soft tissue calcification model compared to non-injured controls (Mann-Whitney rank test: P = 0.0002, ***). Sample size calculations revealed that 6 mice per group would be required to detect a 50% difference in STiCSS score with a power of 0.8. Finally, the STiCSS was demonstrated to reliably quantify soft tissue calcification [dystrophic calcification and heterotopic ossification] by radiographic analysis, independent of the histopathological state of the mineralization. Radiographic analysis can discriminate muscle injury-induced soft tissue calcification from adjacent bone and follow its clinical course over time without requiring the sacrifice of the animal. While the STiCSS cannot identify the specific type of soft tissue calcification present, it is still a useful and valid method by which to quantify the degree of soft tissue calcification. 
This methodology allows for longitudinal measurements of soft tissue calcification in a single animal, which is relatively less expensive, less time-consuming, and exposes the animal to less radiation than in vivo μCT. Therefore, this high-throughput, longitudinal analytic method for quantifying soft tissue calcification is a viable alternative for the study of soft tissue calcification.
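The weighted kappa agreement statistics reported above can be reproduced, assuming two observers' ordinal scores are available as equal-length sequences, with scikit-learn's linearly weighted Cohen's kappa; the sketch below uses placeholder scores, not data from the study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal STiCSS-style scores from two observers for the same radiographs.
rater_a = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
rater_b = [0, 1, 2, 2, 2, 1, 1, 3, 2, 1]

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"linearly weighted kappa = {kappa:.2f}")
```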
Moore, Stephanie N.; Hawley, Gregory D.; Smith, Emily N.; Mignemi, Nicholas A.; Ihejirika, Rivka C.; Yuasa, Masato; Cates, Justin M. M.; Liu, Xulei; Schoenecker, Jonathan G.
2016-01-01
Introduction Soft tissue calcification, including both dystrophic calcification and heterotopic ossification, may occur following injury. These lesions have variable fates as they are either resorbed or persist. Persistent soft tissue calcification may result in chronic inflammation and/or loss of function of that soft tissue. The molecular mechanisms that result in the development and maturation of calcifications are uncertain. As a result, directed therapies that prevent or resorb soft tissue calcifications remain largely unsuccessful. Animal models of post-traumatic soft tissue calcification that allow for cost-effective, serial analysis of an individual animal over time are necessary to derive and test novel therapies. We have determined that a cardiotoxin-induced injury of the muscles in the posterior compartment of the lower extremity represents a useful model in which soft tissue calcification develops remote from adjacent bones, thereby allowing for serial analysis by plain radiography. The purpose of the study was to design and validate a method for quantifying soft tissue calcifications in mice longitudinally using plain radiographic techniques and an ordinal scoring system. Methods Muscle injury was induced by injecting cardiotoxin into the posterior compartment of the lower extremity in mice susceptible to developing soft tissue calcification. Seven days following injury, radiographs were obtained under anesthesia. Multiple researchers applied methods designed to standardize post-image processing of digital radiographs (N = 4) and quantify soft tissue calcification (N = 6) in these images using an ordinal scoring system. Inter- and intra-observer agreement for both post-image processing and the scoring system used was assessed using weighted kappa statistics. Soft tissue calcification quantifications by the ordinal scale were compared to mineral volume measurements (threshold 450.7mgHA/cm3) determined by μCT. Finally, sample-size calculations necessary to discriminate between a 25%, 50%, 75%, and 100% difference in STiCSS score 7 days following burn/CTX induced muscle injury were determined. Results Precision analysis demonstrated substantial to good agreement for both post-image processing (κ = 0.73 to 0.90) and scoring (κ = 0.88 to 0.93), with low inter- and intra-observer variability. Additionally, there was a strong correlation in quantification of soft tissue calcification between the ordinal system and by mineral volume quantification by μCT (Spearman r = 0.83 to 0.89). The ordinal scoring system reliably quantified soft tissue calcification in a burn/CTX-induced soft tissue calcification model compared to non-injured controls (Mann-Whitney rank test: P = 0.0002, ***). Sample size calculations revealed that 6 mice per group would be required to detect a 50% difference in STiCSS score with a power of 0.8. Finally, the STiCSS was demonstrated to reliably quantify soft tissue calcification [dystrophic calcification and heterotopic ossification] by radiographic analysis, independent of the histopathological state of the mineralization. Conclusions Radiographic analysis can discriminate muscle injury-induced soft tissue calcification from adjacent bone and follow its clinical course over time without requiring the sacrifice of the animal. While the STiCSS cannot identify the specific type of soft tissue calcification present, it is still a useful and valid method by which to quantify the degree of soft tissue calcification. 
This methodology allows for longitudinal measurements of soft tissue calcification in a single animal, which is relatively less expensive, less time-consuming, and exposes the animal to less radiation than in vivo μCT. Therefore, this high-throughput, longitudinal analytic method for quantifying soft tissue calcification is a viable alternative for the study of soft tissue calcification. PMID:27438007
Dynamic soft tissue deformation estimation based on energy analysis
NASA Astrophysics Data System (ADS)
Gao, Dedong; Lei, Yong; Yao, Bin
2016-10-01
A needle placement accuracy of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions. It is therefore necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics, and a geometric model is presented to quantitatively approximate the volume of tissue deformation. An energy-based method is applied to the dynamic process of needle insertion into soft tissue, and the volume of a cone is used to quantitatively approximate the deformation on the surface of the soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interaction process. A needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, was constructed, and an image-based method for measuring the depth and radius of the soft tissue surface deformation is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is established based on the law of conservation of energy, with the volume of tissue deformation obtained from the image-based measurements. The experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the energy-based analytical fitted model can predict the volume of soft tissue deformation, with root mean squared errors between the fitted model and the experimental data of 0.61 and 0.25 at velocities of 2.50 mm/s and 5.00 mm/s, respectively. The estimated parameters of the soft tissue surface deformation are shown to be useful for compensating the needle-targeting error in the rigid needle insertion procedure, especially for percutaneous needle insertion into organs.
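Because the surface deformation is approximated by a cone whose depth and radius are measured from images, the deformation volume itself is simple geometry. The sketch below computes that cone volume and, under a purely illustrative linear force-ramp assumption, the external work done by the needle; the numerical values and the work formula are assumptions for the example, not quantities identified in the paper.

```python
import math

def cone_volume(radius_mm, depth_mm):
    """Volume of the cone-shaped surface deformation, V = (1/3) * pi * r^2 * h."""
    return math.pi * radius_mm**2 * depth_mm / 3.0

def external_work(peak_force_n, displacement_mm):
    """Work done by the needle assuming a linear force ramp: W = 0.5 * F * d (joules)."""
    return 0.5 * peak_force_n * displacement_mm / 1000.0   # mm -> m

v = cone_volume(radius_mm=6.0, depth_mm=2.5)      # hypothetical measured deformation
w = external_work(peak_force_n=1.2, displacement_mm=2.5)
print(f"deformation volume ~ {v:.1f} mm^3, external work ~ {w * 1000:.2f} mJ")
```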
Irreversible metal-insulator transition in thin film VO2 induced by soft X-ray irradiation
NASA Astrophysics Data System (ADS)
Singh, V. R.; Jovic, V.; Valmianski, I.; Ramirez, J. G.; Lamoureux, B.; Schuller, Ivan K.; Smith, K. E.
2017-12-01
In this study, we show the ability of soft x-ray irradiation to induce room temperature metal-insulator transitions (MITs) in VO2 thin films grown on R-plane sapphire. The ability of soft x-rays to induce the MIT in VO2 thin films is confirmed by photoemission spectroscopy and soft x-ray spectroscopy measurements. When irradiation is discontinued, the system does not return to the insulating phase. Analysis of valence band photoemission spectra revealed that the density of states (DOS) of the V 3d band increased with irradiation time, while the DOS of the O 2p band decreased. We use these results to propose a model in which the MIT is driven by oxygen desorption from the thin films during irradiation.
Microcircuit radiation effects databank
NASA Technical Reports Server (NTRS)
1983-01-01
Radiation test data submitted by many testers are collated to serve as a reference for engineers who are concerned with and have some knowledge of the effects of the natural radiation environment on microcircuits. Total dose damage information and single event upset cross sections, i.e., the probability of a soft error (bit flip) or of a hard error (latchup), are presented.
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression, and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were produced. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error, and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
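As a minimal sketch of the best-performing model type named above — a feed-forward neural network predicting log-permeability — the snippet below uses scikit-learn's MLPRegressor on synthetic placeholder features standing in for air porosity, grain density, and two Thomeer parameters; the network size and the synthetic relationship are arbitrary assumptions, not the configuration or data of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 400
# Placeholder features: [air porosity, grain density, Thomeer Pd, Thomeer G] (synthetic).
X = rng.uniform([0.05, 2.6, 10.0, 0.1], [0.30, 2.9, 1000.0, 3.0], size=(n, 4))
log_k = 2.0 + 8.0 * X[:, 0] - np.log10(X[:, 2]) + rng.normal(0, 0.1, n)   # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, log_k, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"root mean square error on log-permeability: {rmse:.3f}")
```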
Evidence for explosive chromospheric evaporation in a solar flare observed with SMM
NASA Technical Reports Server (NTRS)
Zarro, D. M.; Saba, J. L. R.; Strong, K. T.; Canfield, R. C.; Metcalf, T.
1986-01-01
SMM soft X-ray data and Sacramento Peak Observatory H-alpha observations are combined in a study of the impulsive phase of a solar flare. A blue asymmetry, indicative of upflow motions, was observed in the coronal Ca XIX line during the soft X-ray rise phase. H-alpha redshifts, indicative of downward motions, were observed simultaneously in bright flare kernels during the period of hard X-ray emission. It is shown that, to within observational errors, the impulsive phase momentum transported by the upflowing soft X-ray plasma is equivalent to that of the downward moving chromospheric material.
Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bautista-Gomez, Leonardo; Cappello, Franck
Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% coverage against data corruption.
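The detectors proposed in this work are not reproduced here; as a generic illustration of the underlying idea — exploiting the smooth temporal evolution of application datasets — the sketch below linearly extrapolates each grid point from its two previous time steps and flags values that deviate beyond a fixed tolerance. The field, the injected corruption, and the tolerance are assumptions for the example.

```python
import numpy as np

def detect_silent_corruption(prev2, prev1, current, tolerance):
    """Flag points whose value jumps far from a linear extrapolation of the last two steps."""
    predicted = 2.0 * prev1 - prev2          # per-point linear extrapolation
    return np.abs(current - predicted) > tolerance

# Placeholder "simulation field" evolving smoothly over three time steps.
x = np.linspace(0.0, 1.0, 1000)
t0, t1 = np.sin(x), np.sin(x + 0.01)
t2 = np.sin(x + 0.02)
t2[500] += 1.0                               # inject a single silent, bit-flip-like corruption

flags = detect_silent_corruption(t0, t1, t2, tolerance=1e-3)
print("corrupted indices:", np.where(flags)[0])
```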
Proton irradiation effects on advanced digital and microwave III-V components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hash, G.L.; Schwank, J.R.; Shaneyfelt, M.R.
1994-09-01
A wide range of advanced III-V components suitable for use in high-speed satellite communication systems were evaluated for displacement damage and single-event effects in high-energy, high-fluence proton environments. Transistors and integrated circuits (both digital and MMIC) were irradiated with protons at energies from 41 to 197 MeV and at fluences from 10¹⁰ to 2 × 10¹⁴ protons/cm². Large soft-error rates were measured for digital GaAs MESFET (3 × 10⁻⁵ errors/bit-day) and heterojunction bipolar circuits (10⁻⁵ errors/bit-day). No transient signals were detected from MMIC circuits. The largest degradation in transistor response caused by displacement damage was observed for 1.0-μm depletion- and enhancement-mode MESFET transistors. Shorter gate length MESFET transistors and HEMT transistors exhibited less displacement-induced damage. These results show that memory-intensive GaAs digital circuits may result in significant system degradation due to single-event upset in natural and man-made space environments. However, displacement damage effects should not be a limiting factor for fluence levels up to 10¹⁴ protons/cm² [equivalent to total doses in excess of 10 Mrad(GaAs)].
Proton irradiation effects on advanced digital and microwave III-V components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hash, G.L.; Schwank, J.R.; Shaneyfelt, M.R.
1994-12-01
A wide range of advanced III-V components suitable for use in high-speed satellite communication systems were evaluated for displacement damage and single-event effects in high-energy, high-fluence proton environments. Transistors and integrated circuits (both digital and MMIC) were irradiated with protons at energies from 41 to 197 MeV and at fluences from 10¹⁰ to 2 × 10¹⁴ protons/cm². Large soft-error rates were measured for digital GaAs MESFET (3 × 10⁻⁵ errors/bit-day) and heterojunction bipolar circuits (10⁻⁵ errors/bit-day). No transient signals were detected from MMIC circuits. The largest degradation in transistor response caused by displacement damage was observed for 1.0-μm depletion- and enhancement-mode MESFET transistors. Shorter gate length MESFET transistors and HEMT transistors exhibited less displacement-induced damage. These results show that memory-intensive GaAs digital circuits may result in significant system degradation due to single-event upset in natural and man-made space environments. However, displacement damage effects should not be a limiting factor for fluence levels up to 10¹⁴ protons/cm² [equivalent to total doses in excess of 10 Mrad(GaAs)].
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.
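The patent's coding circuit is not reproduced here. A classic software illustration of protecting against the unidirectional (1-to-0-dominant) errors described above is the Berger code, which appends a count of the zero bits to each word; any error pattern consisting only of 1-to-0 flips is then detectable, as sketched below.

```python
def berger_encode(data_bits):
    """Append the count of zero bits (as binary check bits) to the data word."""
    check_len = max(1, len(bin(len(data_bits))) - 2)   # enough bits to count all zeros
    zeros = data_bits.count(0)
    check = [int(b) for b in format(zeros, f"0{check_len}b")]
    return data_bits + check, check_len

def berger_check(word, check_len):
    """Return True if the word is consistent; any 1->0-only error pattern is detected."""
    data, check = word[:-check_len], word[-check_len:]
    return data.count(0) == int("".join(map(str, check)), 2)

word, k = berger_encode([1, 0, 1, 1, 0, 1, 1, 1])
print(berger_check(word, k))        # True: stored word is intact
word[0] = 0                         # a radiation-induced 1 -> 0 flip in a data bit
print(berger_check(word, k))        # False: unidirectional error detected
```

Because 1-to-0 flips can only increase the zero count of the data and only decrease the stored check value, the two can never compensate for each other, which is why such codes suit cells that fail predominantly in one direction.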
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performance. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Management Aspects of Software Maintenance.
1984-09-01
educated in the complex nature of software maintenance to be able to properly evaluate and manage the software maintenance effort. In this ... maintenance and improvement may be called "software evolution". The software manager must be educated in the complex nature of software maintenance to be ... complaint of error or request for modification is also studied in order to determine what action needs to be taken. 2. Define Objective and Approach:
Stephan, Carl N; Simpson, Ellie K
2008-11-01
With the ever increasing production of average soft tissue depth studies, data are becoming increasingly complex, less standardized, and more unwieldy. So far, no overarching review has been attempted to determine: the validity of continued data collection; the usefulness of the existing data subcategorizations; or if a synthesis is possible to produce a manageable soft tissue depth library. While a principal components analysis would provide the best foundation for such an assessment, this type of investigation is not currently possible because of a lack of easily accessible raw data (first, many studies are narrow; second, raw data are infrequently published and/or stored and are not always shared by some authors). This paper provides an alternate means of investigation using an hierarchical approach to review and compare the effects of single variables on published mean values for adults whilst acknowledging measurement errors and within-group variation. The results revealed: (i) no clear secular trends at frequently investigated landmarks; (ii) wide variation in soft tissue depth measures between different measurement techniques irrespective of whether living persons or cadavers were considered; (iii) no clear clustering of non-Caucasoid data far from the Caucasoid means; and (iv) minor differences between males and females. Consequently, the data were pooled across studies using weighted means and standard deviations to cancel out random and opposing study-specific errors, and to produce a single soft tissue depth table with increased sample sizes (e.g., 6786 individuals at pogonion).
NASA Astrophysics Data System (ADS)
Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar
2012-12-01
Mobile ad hoc network (MANET) refers to an arrangement of wireless mobile nodes that dynamically and freely self-organize into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer due to its inherent property of high data rate transmission, which corresponds to its high spectrum efficiency. The downside of OFDM is its sensitivity to synchronization errors (frequency offsets and symbol timing). Most present-day techniques employing OFDM for data transmission support mobility as one of their primary features. This mobility causes small frequency offsets due to the production of Doppler frequencies, resulting in intercarrier interference (ICI), which degrades the signal quality through crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article, which nullifies the effect of channel frequency offsets on the received OFDM symbols. The second problem addressed in this article is the noise induced into the received symbol by different sources, which increases its bit error rate and makes it unsuitable for many applications. Forward-error-correcting turbo codes are employed in the proposed model, adding redundant bits that are later used for error detection and correction. At the receiver end, maximum a posteriori (MAP) decoding is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information with each other in the form of log-likelihood ratios, improving the previous estimate of each decoded bit in each iteration.
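The article's block-type pilot scheme and turbo decoder are not reproduced here. As a simplified illustration of pilot-assisted correction, the sketch below assumes the dominant residual effect of a small frequency offset is a common phase rotation across the subcarriers of a pilot symbol, estimates that rotation from the known pilots, and removes it from the following data symbol; the pilot pattern, noise level, and offset value are assumptions for the example, and the inter-subcarrier crosstalk part of ICI is not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sc = 64
pilots = np.ones(n_sc)                                   # known block-type pilot symbol (all ones)
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_sc) / np.sqrt(2)   # QPSK data symbol

phase_err = 0.3                                          # offset-induced common rotation (radians)
noise = lambda: 0.02 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
rx_pilot = pilots * np.exp(1j * phase_err) + noise()
rx_data = data * np.exp(1j * phase_err) + noise()

# Estimate the common rotation from the pilot block and de-rotate the data block.
est = np.angle(np.sum(rx_pilot * np.conj(pilots)))
corrected = rx_data * np.exp(-1j * est)
print(f"estimated rotation {est:.3f} rad, residual error {phase_err - est:.4f} rad")
```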
Chang, Shih-Tsun; Liu, Yen-Hsiu; Lee, Jiahn-Shing; See, Lai-Chu
2015-09-01
The effect of correcting static vision on sports vision is still not clear. The aim was to examine whether sports vision measures (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) differ among soft tennis adolescent athletes with normal vision (Group A), with refractive error corrected with eyeglasses (Group B), and with uncorrected refractive error (Group C). A cross-sectional study was conducted. Soft tennis athletes aged 10-13 who had played soft tennis for 2-5 years, had no ocular diseases, and had had no visual training for the past 3 months were recruited. DP was measured as the absolute deviation (mm) between a moving rod and a fixed rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester; a smaller deviation represents better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. The chi-square test and the Kruskal-Wallis test were used to compare the data among the three study groups. A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV also displayed a significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, or MV among the three study groups. Significantly better DP and PV were seen among soft tennis adolescent athletes with normal vision than among those with refractive error, regardless of whether it was corrected with eyeglasses. On the other hand, DVA, EM, and MV were similar among the three study groups.
Ultrasound Imaging in Radiation Therapy: From Interfractional to Intrafractional Guidance
Western, Craig; Hristov, Dimitre
2015-01-01
External beam radiation therapy (EBRT) is included in the treatment regimen of the majority of cancer patients. With the proliferation of hypofractionated radiotherapy treatment regimens, such as stereotactic body radiation therapy (SBRT), interfractional and intrafractional imaging technologies are becoming increasingly critical to ensure safe and effective treatment delivery. Ultrasound (US)-based image guidance systems offer real-time, markerless, volumetric imaging with excellent soft tissue contrast, overcoming the limitations of traditional X-ray or computed tomography (CT)-based guidance for abdominal and pelvic cancer sites, such as the liver and prostate. Interfractional US guidance systems have been commercially adopted for patient positioning but suffer from systematic positioning errors induced by probe pressure. More recently, several research groups have introduced concepts for intrafractional US guidance systems leveraging robotic probe placement technology and real-time soft tissue tracking software. This paper reviews various commercial and research-level US guidance systems used in radiation therapy, with an emphasis on hardware and software technologies that enable the deployment of US imaging within the radiotherapy environment and workflow. Previously unpublished material on tissue tracking systems and robotic probe manipulators under development by our group is also included. PMID:26180704
Balasuriya, Lilanthi; Vyles, David; Bakerman, Paul; Holton, Vanessa; Vaidya, Vinay; Garcia-Filion, Pamela; Westdorp, Joan; Sanchez, Christine; Kurz, Rhonda
2017-09-01
An enhanced dose range checking (DRC) system was developed to evaluate prescription error rates in the pediatric intensive care unit and the pediatric cardiovascular intensive care unit. An enhanced DRC system incorporating "soft" and "hard" alerts was designed and implemented. Practitioner responses to alerts for patients admitted to the pediatric intensive care unit and the pediatric cardiovascular intensive care unit were retrospectively reviewed. Alert rates increased from 0.3% to 3.4% after "go-live" (P < 0.001). Before go-live, all alerts were soft alerts. In the period after go-live, 68% of alerts were soft alerts and 32% were hard alerts. Before go-live, providers reduced doses only 1 time for every 10 dose alerts. After implementation of the enhanced computerized physician order entry system, the practitioners responded to soft alerts by reducing doses to more appropriate levels in 24.7% of orders (70/283), compared with 10% (3/30) before go-live (P = 0.0701). The practitioners deleted orders in 9.5% of cases (27/283) after implementation of the enhanced DRC system, as compared with no cancelled orders before go-live (P = 0.0774). Medication orders that triggered a soft alert were submitted unmodified in 65.7% (186/283) as compared with 90% (27/30) of orders before go-live (P = 0.0067). After go-live, 28.7% of hard alerts resulted in a reduced dose, 64% resulted in a cancelled order, and 7.4% were submitted as written. Before go-live, alerts were often clinically irrelevant. After go-live, there was a statistically significant decrease in orders that were submitted unmodified and an increase in the number of orders that were reduced or cancelled.
Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares
Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai
2013-01-01
Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error across different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923
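A minimal sketch of a DPLS-style soft sensor, assuming the available operating measurements are arranged as rows of a matrix and humidity is the target: past samples up to a chosen lag are appended as extra regressors and a PLS model is fit with scikit-learn. The synthetic signals and the lag of 5 are placeholders; the paper selects a lag of 30 from its RMSE comparison.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def add_lags(X, lag):
    """Stack current and past samples [x_t, x_{t-1}, ..., x_{t-lag}] into one regressor row."""
    rows = [X[lag - k : len(X) - k] for k in range(lag + 1)]
    return np.hstack(rows)

rng = np.random.default_rng(7)
n, lag = 500, 5
u = rng.standard_normal((n, 3))                     # placeholder measured inputs
y = 0.5 * u[:, 0] + 0.3 * np.roll(u[:, 1], 2) + 0.05 * rng.standard_normal(n)  # synthetic humidity

X_lagged, y_lagged = add_lags(u, lag), y[lag:]
pls = PLSRegression(n_components=4).fit(X_lagged, y_lagged)
rmse = np.sqrt(np.mean((pls.predict(X_lagged).ravel() - y_lagged) ** 2))
print(f"training RMSE: {rmse:.3f}")
```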
Auxiliary variables for numerically solving nonlinear equations with softly broken symmetries.
Olum, Ken D; Masoumi, Ali
2017-06-01
General methods for solving simultaneous nonlinear equations work by generating a sequence of approximate solutions that successively improve a measure of the total error. However, if the total error function has a narrow curved valley, the available techniques tend to find the solution only after a very large number of steps, if ever. The solver first converges rapidly to the valley, but once there it converges extremely slowly to the solution. In this paper we show that in the specific, physically important case where these valleys are the result of a softly broken symmetry, the solution can often be found much more quickly by adding the generators of the softly broken symmetry as auxiliary variables. This makes the number of variables greater than the number of equations, and hence there is a family of solutions, any one of which would be acceptable. We present a procedure for finding solutions in this case and apply it to several simple examples and to an important problem in the physics of false vacuum decay. We also provide a Mathematica package that implements Powell's hybrid method with the generalization to allow more variables than equations.
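As a toy illustration of the idea (not the paper's Mathematica package), suppose the residuals below depend on the variables almost only through r = sqrt(x² + y²), so rotations about the origin form a softly broken symmetry and the error surface has a narrow circular valley. Adding the rotation angle as an auxiliary variable and minimizing the total squared error over all three variables lets the solver slide along the valley; any member of the resulting solution family maps back to a solution of the original equations.

```python
import numpy as np
from scipy.optimize import minimize

def residuals(p):
    """Two equations whose error surface has a narrow valley along the circle r = 1."""
    x, y = p
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return np.array([r - 1.0, 0.01 * theta])     # rotation symmetry, softly broken

def residuals_aux(q):
    """Same equations evaluated at a rotated point; the rotation angle is auxiliary."""
    x, y, phi = q
    c, s = np.cos(phi), np.sin(phi)
    return residuals([c * x - s * y, s * x + c * y])

cost = lambda q: 0.5 * np.sum(residuals_aux(q) ** 2)
result = minimize(cost, x0=[2.0, 2.0, 0.0], method="BFGS")
x, y, phi = result.x
# Rotate back to recover a solution of the original two equations in (x, y).
print(np.cos(phi) * x - np.sin(phi) * y, np.sin(phi) * x + np.cos(phi) * y)
```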
In-flight performance of pulse-processing system of the ASTRO-H/Hitomi soft x-ray spectrometer
NASA Astrophysics Data System (ADS)
Ishisaki, Yoshitaka; Yamada, Shinya; Seta, Hiromi; Tashiro, Makoto S.; Takeda, Sawako; Terada, Yukikatsu; Kato, Yuka; Tsujimoto, Masahiro; Koyama, Shu; Mitsuda, Kazuhisa; Sawada, Makoto; Boyce, Kevin R.; Chiao, Meng P.; Watanabe, Tomomi; Leutenegger, Maurice A.; Eckart, Megan E.; Porter, Frederick Scott; Kilbourne, Caroline Anne
2018-01-01
We summarize results on the initial in-orbit performance of the pulse shape processor (PSP) of the soft x-ray spectrometer instrument onboard ASTRO-H (Hitomi). Event formats, kinds of telemetry, and the pulse-processing parameters are described, and the parameter settings used in orbit are listed. The PSP was powered on 2 days after launch, and the event threshold was lowered in orbit. The PSP worked well in orbit, and there was neither a memory error nor a SpaceWire communication error until the break-up of the spacecraft. Time assignment, electrical crosstalk, and the event screening criteria are studied. It is confirmed that the event processing rate at 100% central processing unit load is ~200 counts/s/array, compliant with the requirement on the PSP.
Bronson, N R
1984-05-01
A new A-mode biometry system for determining axial length measurements of the eye has been developed that incorporates a soft-membrane transducer. The soft transducer decreases the risk of indenting the cornea with the probe, which can result in inaccurate measurements. A microprocessor evaluates echo patterns and determines whether or not axial alignment has been obtained, eliminating possible user error. The new A-scan requires minimal user skill and can be used successfully by both physicians and technicians.
Awareness of technology-induced errors and processes for identifying and preventing such errors.
Bellwood, Paule; Borycki, Elizabeth M; Kushniruk, Andre W
2015-01-01
There is a need to determine if organizations working with health information technology are aware of technology-induced errors and how they are addressing and preventing them. The purpose of this study was to: a) determine the degree of technology-induced error awareness in various Canadian healthcare organizations, and b) identify those processes and procedures that are currently in place to help address, manage, and prevent technology-induced errors. We identified a lack of technology-induced error awareness among participants. Participants identified there was a lack of well-defined procedures in place for reporting technology-induced errors, addressing them when they arise, and preventing them.
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlensinger, Adam M.
2012-01-01
Modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more overhead through noisier channels, and software-defined radios use error-correction techniques that approach Shannon's theoretical limit of performance. The authors describe the benefit of closed-loop measurements for a receiver when paired with a counterpart transmitter and representative channel conditions. We also describe a real-time Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios.
De Rosario, Helios; Page, Álvaro; Besa, Antonio
2017-09-06
The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA only account for a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40mm. Such location errors could be reduced to less than 17mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques for pulsed radio frequency interference (RFI) on a Low-Density Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle that might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), and it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code in the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER = 1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
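The general idea of reshuffling soft-decision data around a known RFI pulse can be illustrated with a minimal sketch: log-likelihood ratios (LLRs) that coincide with the pulse window are neutralized (set to zero, i.e. treated as erasures) before being handed to the LDPC decoder. This is only a hedged illustration of the concept described in the abstract; the pulse timing, LLR statistics, and the idea of zeroing rather than reordering are assumptions, not the scheme actually studied.

```python
import numpy as np

def neutralize_rfi_llrs(llrs, pulse_start, pulse_len):
    """Zero out soft decisions (LLRs) that fall inside a known pulsed-RFI window.

    A zero LLR marks the symbol as carrying no reliability information
    (an erasure), which is gentler on the decoder than a corrupted value.
    """
    cleaned = llrs.copy()
    cleaned[pulse_start:pulse_start + pulse_len] = 0.0
    return cleaned

# Hypothetical 2048-symbol codeword with an RFI pulse corrupting 100 symbols.
rng = np.random.default_rng(0)
llrs = rng.normal(loc=4.0, scale=2.0, size=2048)        # nominal soft decisions
llrs[500:600] += rng.normal(0.0, 40.0, size=100)        # pulse-corrupted span
llrs_clean = neutralize_rfi_llrs(llrs, pulse_start=500, pulse_len=100)
# llrs_clean, not llrs, would then be passed to the LDPC decoder.
```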
Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.
Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B
2018-02-01
Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out for utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with current MR imaging-based AC approaches. © RSNA, 2017 Online supplemental material is available for this article.
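As a side note, the per-tissue Dice coefficients quoted above can be computed directly from two label volumes; the sketch below uses invented label values (0 = air, 1 = soft tissue, 2 = bone) and random data rather than anything from the study.

```python
import numpy as np

def dice(pred, truth, label):
    """Dice coefficient for one tissue label between two segmentation volumes."""
    p = (pred == label)
    t = (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Hypothetical label volumes: 0 = air, 1 = soft tissue, 2 = bone.
rng = np.random.default_rng(1)
truth = rng.integers(0, 3, size=(64, 64, 64))
pred = truth.copy()
pred[rng.random(pred.shape) < 0.05] = 0      # perturb 5% of voxels
print({label: round(dice(pred, truth, label), 3) for label in (0, 1, 2)})
```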
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
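Once a hypothesis about the impulse-noise positions has been selected, the amplitude-estimation step described above reduces to an ordinary least-squares solve of the syndrome equations. The sketch below is schematic only: the syndrome matrix and hypothesized error positions are made-up toy values, not the OFB structure used in the paper.

```python
import numpy as np

def estimate_error_amplitudes(H, syndrome, positions):
    """Least-squares estimate of impulse-error amplitudes at hypothesized positions.

    H         : syndrome-forming (parity) matrix of the frame expansion
    syndrome  : observed syndrome vector
    positions : indices where impulse errors are hypothesized to sit
    """
    H_sub = H[:, positions]                       # columns for the assumed error locations
    amps, *_ = np.linalg.lstsq(H_sub, syndrome, rcond=None)
    return amps

# Toy numbers only: a 4x10 syndrome matrix and two hypothesized error positions.
rng = np.random.default_rng(2)
H = rng.normal(size=(4, 10))
true_errors = np.zeros(10)
true_errors[[3, 7]] = [1.5, -0.8]
syndrome = H @ true_errors
print(estimate_error_amplitudes(H, syndrome, [3, 7]))   # close to [1.5, -0.8]
```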
Real-time optimal guidance for orbital maneuvering.
NASA Technical Reports Server (NTRS)
Cohen, A. O.; Brown, K. R.
1973-01-01
A new formulation for soft-constraint trajectory optimization is presented as a real-time optimal feedback guidance method for multiburn orbital maneuvers. Control is always chosen to minimize burn time plus a quadratic penalty for end condition errors, weighted so that early in the mission (when controllability is greatest) terminal errors are held negligible. Eventually, as controllability diminishes, the method partially relaxes but effectively still compensates perturbations in whatever subspace remains controllable. Although the soft-constraint concept is well-known in optimal control, the present formulation is novel in addressing the loss of controllability inherent in multiple burn orbital maneuvers. Moreover the necessary conditions usually obtained from a Bolza formulation are modified in this case so that the fully hard constraint formulation is a numerically well behaved subcase. As a result convergence properties have been greatly improved.
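The soft-constraint idea can be written as a single scalar cost: burn time plus a weighted quadratic penalty on the end-condition errors, with heavy weights early in the mission and relaxed weights as controllability shrinks. The sketch below is a minimal illustration of that cost form; the weighting schedule and numbers are invented, not the guidance law of the paper.

```python
import numpy as np

def soft_constraint_cost(burn_time, end_errors, weights):
    """Burn time plus a weighted quadratic penalty on terminal-condition errors."""
    end_errors = np.asarray(end_errors, dtype=float)
    return float(burn_time + end_errors @ (np.asarray(weights, dtype=float) * end_errors))

# Early in the mission, heavy weights keep terminal errors effectively negligible...
print(soft_constraint_cost(120.0, [0.01, -0.02], [1e6, 1e6]))
# ...later, relaxed weights trade terminal accuracy for whatever remains controllable.
print(soft_constraint_cost(120.0, [0.01, -0.02], [1e2, 1e2]))
```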
Lu, Minhua; Huang, Shuai; Yang, Xianglong; Yang, Lei; Mao, Rui
2017-01-01
Fluid-jet-based indentation is used as a noncontact excitation technique by systems measuring the mechanical properties of soft tissues. However, the application of these devices has been hindered by the lack of theoretical solutions. This study developed a mathematical model for testing the indentation induced by a fluid jet and determined a semianalytical solution. The soft tissue was modeled as an elastic layer bonded to a rigid base. The pressure of the fluid jet impinging on the soft tissue was assumed to have a power-form function. The semianalytical solution was verified in detail using finite-element modeling, with excellent agreement being achieved. The effects of several parameters on the solution behaviors are reported, and a method for applying the solution to determine the mechanical properties of soft tissues is suggested.
Magnetically Assisted Bilayer Composites for Soft Bending Actuators.
Jang, Sung-Hwan; Na, Seon-Hong; Park, Yong-Lae
2017-06-12
This article presents a soft pneumatic bending actuator using a magnetically assisted bilayer composite composed of silicone polymer and ferromagnetic particles. Bilayer composites were fabricated by mixing ferromagnetic particles into a prepolymer state of silicone in a mold and distributing them asymmetrically by applying a strong non-uniform magnetic field to one side of the mold during the curing process. The biased magnetic field induces sedimentation of the ferromagnetic particles toward one side of the structure. The nonhomogeneous distribution of the particles induces bending of the structure when inflated, as a result of the asymmetric stiffness of the composite. The bilayer composites were then characterized with scanning electron microscopy and thermogravimetric analysis. The bending performance and the axial expansion of the actuator were discussed for manipulation applications in soft robotics and bioengineering. The magnetically assisted manufacturing process for the soft bending actuator is a promising technique for various applications in soft robotics.
Magnetically Assisted Bilayer Composites for Soft Bending Actuators
Jang, Sung-Hwan; Na, Seon-Hong; Park, Yong-Lae
2017-01-01
This article presents a soft pneumatic bending actuator using a magnetically assisted bilayer composite composed of silicone polymer and ferromagnetic particles. Bilayer composites were fabricated by mixing ferromagnetic particles into a prepolymer state of silicone in a mold and distributing them asymmetrically by applying a strong non-uniform magnetic field to one side of the mold during the curing process. The biased magnetic field induces sedimentation of the ferromagnetic particles toward one side of the structure. The nonhomogeneous distribution of the particles induces bending of the structure when inflated, as a result of the asymmetric stiffness of the composite. The bilayer composites were then characterized with scanning electron microscopy and thermogravimetric analysis. The bending performance and the axial expansion of the actuator were discussed for manipulation applications in soft robotics and bioengineering. The magnetically assisted manufacturing process for the soft bending actuator is a promising technique for various applications in soft robotics. PMID:28773007
TID and SEE Response of an Advanced Samsung 4G NAND Flash Memory
NASA Technical Reports Server (NTRS)
Oldham, Timothy R.; Friendlich, M.; Howard, J. W.; Berg, M. D.; Kim, H. S.; Irwin, T. L.; LaBel, K. A.
2007-01-01
Initial total ionizing dose (TID) and single event heavy ion test results are presented for an unhardened commercial flash memory, fabricated with 63 nm technology. The results show that the parts survive to a TID of nearly 200 krad (SiO2), with a tractable soft error rate of about 10(exp -12) errors/bit-day for the Adams Ten Percent Worst Case Environment.
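For orientation, an event rate of that order follows directly from a per-bit upset cross section multiplied by a particle flux; the numbers below are illustrative placeholders, not the measured values from this test campaign.

```python
# Per-bit soft error rate ~ cross_section (cm^2/bit) x flux (particles/cm^2/s).
cross_section_per_bit = 1e-16      # cm^2/bit, hypothetical value
flux = 1e-1                        # particles/cm^2/s, hypothetical environment
seconds_per_day = 86400.0
ser = cross_section_per_bit * flux * seconds_per_day   # errors/bit-day
print(f"{ser:.1e} errors/bit-day")                     # ~8.6e-13, i.e. order 10^-12
```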
An alternative data filling approach for prediction of missing data in soft sets (ADFIS).
Sadiq Khan, Muhammad; Al-Garadi, Mohammed Ali; Wahab, Ainuddin Wahid Abdul; Herawan, Tutut
2016-01-01
Soft set theory is a mathematical approach that provides a solution for dealing with uncertain data. As a standard soft set, it can be represented as a Boolean-valued information system, and hence it has been used in hundreds of useful applications. Meanwhile, these applications become worthless if the Boolean information system contains missing data due to error, security or mishandling. Few studies have focused on handling partially incomplete soft sets, and none of them achieves a high accuracy rate in predicting missing data. It has been shown that the data filling approach for incomplete soft sets (DFIS) has the best performance among all previous approaches. However, in reviewing DFIS, accuracy is still its main problem. In this paper, we propose an alternative data filling approach for prediction of missing data in soft sets, namely ADFIS. The novelty of ADFIS is that, unlike the previous approach that used probability, we focus more on the reliability of association among parameters in the soft set. Experimental results on small, 04 UCI benchmark, and causality workbench lung cancer (LUCAP2) datasets show that ADFIS achieves better accuracy than DFIS.
Fu, Weijie; Wang, Xi; Liu, Yu
2015-01-01
Previous studies have not used neurophysiological methodology to explore the damping effects on induced soft-tissue vibrations and muscle responses. This study aimed to investigate the changes in activation of the musculoskeletal system in response to soft-tissue vibrations with different applied compression conditions in a drop-jump landing task. Twelve trained male participants were instructed to perform drop-jump landings in compression shorts (CS) and regular shorts without compression (control condition, CC). Soft-tissue vibrations and EMG amplitudes of the leg within 50 ms before and after touchdown were collected synchronously. Peak acceleration of the thigh muscles was significantly lower in CS than in CC during landings from 45 or 60 cm and 30 cm heights (p < 0.05), respectively. However, the damping coefficient was higher in CS than in CC at the thigh muscles during landings from 60 cm height (p < 0.05). Significant decrease in EMG amplitude of the rectus femoris and biceps femoris muscles was also observed in CS (p < 0.05). Externally induced soft-tissue vibration damping was associated with a decrease in muscular activity of the rectus femoris and biceps femoris muscles during drop-jump landings from different heights.
Improved Rubin-Bodner Model for the Prediction of Soft Tissue Deformations
Zhang, Guangming; Xia, James J.; Liebschner, Michael; Zhang, Xiaoyan; Kim, Daeseung; Zhou, Xiaobo
2016-01-01
In craniomaxillofacial (CMF) surgery, a reliable way of simulating the soft tissue deformation resulting from skeletal reconstruction is vitally important for preventing the risk of facial distortion postoperatively. However, it is difficult to simulate the soft tissue behaviors affected by different types of CMF surgery. This study presents an integrated biomechanical and statistical learning model to improve the accuracy and reliability of predictions of soft facial tissue behavior. The Rubin-Bodner (RB) model is initially used to describe the biomechanical behavior of the soft facial tissue. Subsequently, a finite element model (FEM) computes the stress at each node of the soft facial tissue mesh resulting from bone displacement. Next, the Generalized Regression Neural Network (GRNN) method is implemented to obtain the relationship between the facial soft tissue deformation and the stress distribution corresponding to different CMF surgical types, and to improve the evaluation of the elastic parameters included in the RB model. Therefore, the soft facial tissue deformation can be predicted from the biomechanical properties and the statistical model. Leave-one-out cross-validation is used on eleven patients. As a result, the average prediction error of our model (0.7035 mm) is lower than those resulting from other approaches. It also demonstrates that the more accurate biomechanical information the model has, the better prediction performance it can achieve. PMID:27717593
Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment
NASA Technical Reports Server (NTRS)
Hancock, Thomas M., III
1999-01-01
This paper details the scientific objectives, experiment design, data collection method, and post-flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "bitflip". In its mildest form, a soft-event upset can cause software exceptions or unexpected events, trigger spacecraft safing (ending data collection), or corrupt fault protection and error recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets in 25% of the experiment detectors. Post-flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.
A survey on dielectric elastomer actuators for soft robots.
Gu, Guo-Ying; Zhu, Jian; Zhu, Li-Min; Zhu, Xiangyang
2017-01-23
Conventional industrial robots with rigid actuation technology have made great progress for humans in the fields of automated assembly and manufacturing. With an increasing number of robots needing to interact with humans and unstructured environments, there is a need for soft robots capable of sustaining large deformation while inducing little pressure or damage when maneuvering through confined spaces. The emergence of soft robotics offers the prospect of applying soft actuators as artificial muscles in robots, replacing traditional rigid actuators. Dielectric elastomer actuators (DEAs) are recognized as one of the most promising soft actuation technologies because: i) dielectric elastomers are soft, motion-generating materials that resemble natural human muscle in terms of force, strain (displacement per unit length or area) and actuation pressure/density; and ii) dielectric elastomers can produce large voltage-induced deformation. In this survey, we first introduce DEAs, emphasizing the key points of working principle, key components and electromechanical modeling approaches. Then, different DEA-driven soft robots, including wearable/humanoid robots, walking/serpentine robots, flying robots and swimming robots, are reviewed. Lastly, we summarize the challenges and opportunities for further studies in terms of mechanism design, dynamics modeling and autonomous control.
An undulator based soft x-ray source for microscopy on the Duke electron storage ring
NASA Astrophysics Data System (ADS)
Johnson, Lewis Elgin
1998-09-01
This dissertation describes the design, development, and installation of an undulator-based soft x-ray source on the Duke Free Electron Laser laboratory electron storage ring. Insertion device and soft x-ray beamline physics and technology are all discussed in detail. The Duke/NIST undulator is a 3.64-m long hybrid design constructed by the Brobeck Division of Maxwell Laboratories. Originally built for an FEL project at the National Institute of Standards and Technology, the undulator was acquired by Duke in 1992 for use as a soft x-ray source for the FEL laboratory. Initial Hall probe measurements on the magnetic field distribution of the undulator revealed field errors of more than 0.80%. Initial phase errors for the device were more than 11 degrees. Through a series of in situ and off-line measurements and modifications we have re-tuned the magnetic field structure of the device to produce strong spectral characteristics through the 5th harmonic. A low operating K has served to reduce the effects of magnetic field errors on the harmonic spectral content. Although rms field errors remained at 0.75%, we succeeded in reducing phase errors to less than 5 degrees. Using trajectory simulations from magnetic field data, we have computed the spectral output given the interaction of the Duke storage ring electron beam and the NIST undulator. Driven by a series of concerns and constraints over maximum utility, personnel safety and funding, we have also constructed a unique front end beamline for the undulator. The front end has been designed for maximum throughput of the 1st harmonic around 40 Å in its standard mode of operation. The front end has an alternative mode of operation which transmits the 3rd and 5th harmonics. This compact system also allows for the extraction of some of the bend-magnet-produced synchrotron and transition radiation from the storage ring. As with any well designed front end system, it also provides excellent protection to personnel and to the storage ring. A diagnostic beamline consisting of a transmission grating spectrometer and a scanning wire beam profile monitor was constructed to measure the spatial and spectral characteristics of the undulator radiation. Tests of the system with a circulating electron beam have confirmed the magnetic and focusing properties of the undulator, and verified that it can be used without perturbing the orbit of the beam.
Chang, Shih-Tsun; Liu, Yen-Hsiu; Lee, Jiahn-Shing; See, Lai-Chu
2015-01-01
Background: The effect of correcting static vision on sports vision is still not clear. Aim: To examine whether sports vision (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) differed among soft tennis adolescent athletes with normal vision (Group A), and those with refractive error corrected with (Group B) or without eyeglasses (Group C). Setting and Design: A cross-sectional study was conducted. Soft tennis athletes aged 10–13 who had played soft tennis for 2–5 years, and who were without any ocular diseases and without visual training for the past 3 months, were recruited. Materials and Methods: DPs were measured as the absolute deviation (mm) between a moving rod and a fixed rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester. A smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. Statistical Analysis: The chi-square test and Kruskal–Wallis test were used to compare the data among the three study groups. Results: A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV also displayed a significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, and MV among the three study groups. Conclusions: Significantly better DP and PV were seen among soft tennis adolescent athletes with normal vision than among those with refractive error, regardless of whether it was corrected with eyeglasses. On the other hand, DVA, EM, and MV were similar among the three study groups. PMID:26632127
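For reference, the Kruskal–Wallis comparison used in this design maps directly onto scipy; the depth-perception deviations below are invented placeholders, not the study data.

```python
from scipy import stats

# Invented depth-perception deviations (mm) for groups A, B and C; smaller is better.
group_a = [4.1, 3.8, 5.0, 4.4, 3.9]
group_b = [6.2, 7.1, 6.8, 6.5]
group_c = [7.0, 6.4, 7.7, 6.9, 7.2]
h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```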
Alternative stitching method for massively parallel e-beam lithography
NASA Astrophysics Data System (ADS)
Brandt, Pieter; Tranquillin, Céline; Wieland, Marco; Bayle, Sébastien; Milléquant, Matthieu; Renault, Guillaume
2015-07-01
In this study, a stitching method other than soft edge (SE) and smart boundary (SB) is introduced and benchmarked against SE. The method is based on locally enhanced exposure latitude without throughput cost, making use of the fact that the two beams that pass through the stitching region can deposit up to 2× the nominal dose. The method requires a complex proximity effect correction that takes a preset stitching dose profile into account. Although the principle of the presented stitching method applies to multibeam (lithography) systems in general, in this study the MAPPER FLX 1200 tool is specifically considered. For the latter tool, at a metal clip at a minimum half-pitch of 32 nm, the stitching method effectively mitigates beam-to-beam (B2B) position errors such that they do not induce an increase in critical dimension uniformity (CDU). In other words, the same CDU can be realized inside the stitching region as outside it. For the SE method, the CDU inside is 0.3 nm higher than outside the stitching region. A 5-nm direct overlay impact from the B2B position errors cannot be reduced by a stitching strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isom, H.C.; Mummaw, J.; Kreider, J.W.
1983-04-30
Guinea pig cells were malignantly transformed in vitro by ultraviolet (uv)-irradiated guinea pig cytomegalovirus (GPCMV). When guinea pig hepatocyte monolayers were infected with uv-irradiated GPCMV, three continuous epithelioid cell lines which grew in soft agarose were established. Two independently derived GPCMV-transformed liver cells and a cell line derived from a soft agarose clone of one of these lines induced invasive tumors when inoculated subcutaneously or intraperitoneally into nude mice. The tumors were sarcomas possibly derived from hepatic stroma or sinusoid. Transformed cell lines were also established after infection of guinea pig hepatocyte monolayers with human cytomegalovirus (HCMV) or simian virus 40 (SV40). These cell lines also formed colonies in soft agarose and induced sarcomas in nude mice. It is concluded that (i) GPCMV can malignantly transform guinea pig cells; (ii) cloning of GPCMV-transformed cells in soft agarose produced cells that induced tumors with a shorter latency period but with no alteration in growth rate or final tumor size; and (iii) the tumors produced by GPCMV- and HCMV-transformed guinea pig cells were more similar to each other in growth rate than to those induced by SV40-transformed guinea pig cells.
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
Laser as a Tool to Study Radiation Effects in CMOS
NASA Astrophysics Data System (ADS)
Ajdari, Bahar
Energetic particles from cosmic-ray or terrestrial sources can strike sensitive areas of CMOS devices and cause soft errors. Understanding the effects of such interactions is crucial as device technology advances, and chip reliability has become more important than ever. Particle accelerator testing has been the standard method to characterize the sensitivity of chips to single event upsets (SEUs). However, because of its cost and limited availability, other techniques have been explored. The pulsed laser has been a successful tool for characterization of SEU behavior, but to this day the laser has not been recognized as a method comparable to beam testing. In this thesis, I propose a methodology for correlating laser soft error rate (SER) with particle-beam data. Additionally, results are presented showing a temperature dependence of SER and the "neighbor effect" phenomenon, in which, due to the close proximity of devices, a "weakening effect" in the ON state can be observed.
Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori
2006-06-12
The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce the link availability and may introduce burst errors, thus degrading the performance of the system. We investigate the suitability of utilizing soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communications systems. The SC based tools are used for the prediction of key parameters of a FSO communications system. Measured data collected from an experimental FSO communication system is used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy in tracking data beams. This is particularly important for periods considered to be of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with original measurements.
Distributed phased array architecture study
NASA Technical Reports Server (NTRS)
Bourgeois, Brian
1987-01-01
Variations in amplifiers and phase shifters can cause degraded antenna performance, depending also on the environmental conditions and antenna array architecture. The implementation of distributed phased array hardware was studied with the aid of the DISTAR computer program as a simulation tool. This simulation provides guidance for hardware implementation. Both hard and soft failures of the amplifiers in the T/R modules are modeled. Hard failures are catastrophic: no power is transmitted to the antenna elements. Noncatastrophic or soft failures are modeled as a modified Gaussian distribution. The resulting amplitude characteristics then determine the array excitation coefficients. The phase characteristics take on a uniform distribution. Pattern characteristics such as antenna gain, half-power beamwidth, mainbeam phase errors, sidelobe levels, and beam pointing errors were studied as functions of amplifier and phase shifter variations. General specifications for amplifier and phase shifter tolerances in various architecture configurations for C band and S band were determined.
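The failure model sketched above (hard failures silencing elements, soft failures drawing amplitudes from a Gaussian-like spread, phases from a uniform distribution) lends itself to a compact Monte Carlo illustration of the resulting array factor. This is only a hedged sketch of the general idea; all distribution parameters, the failure rate, and the array size are assumptions, not the DISTAR values.

```python
import numpy as np

def array_factor(amps, phases, theta=0.0, spacing=0.5):
    """Array factor magnitude of a uniform linear array (element spacing in wavelengths)."""
    n = len(amps)
    pos = np.arange(n) * spacing
    steering = np.exp(1j * (2 * np.pi * pos * np.sin(theta) + phases))
    return np.abs(np.sum(amps * steering))

rng = np.random.default_rng(3)
n = 64
amps = np.clip(rng.normal(1.0, 0.1, n), 0.0, None)   # soft failures: Gaussian amplitude spread
amps[rng.random(n) < 0.02] = 0.0                      # hard failures: ~2% dead modules
phases = rng.uniform(-np.pi / 18, np.pi / 18, n)      # +/-10 degree phase-shifter errors
loss_db = 20 * np.log10(array_factor(amps, phases) / n)
print(f"mainbeam gain change: {loss_db:.2f} dB")
```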
Effects of Stopping Ions and LET Fluctuations on Soft Error Rate Prediction.
Weeden-Wright, S. L.; King, Michael Patrick; Hooten, N. C.; ...
2015-02-01
Variability in energy deposition from stopping ions and LET fluctuations is quantified for specific radiation environments. When compared to predictions using average LET via CREME96, LET fluctuations lead to an order-of-magnitude difference in effective flux and a nearly 4x decrease in predicted soft error rate (SER) in an example calculation performed on a commercial 65 nm SRAM. The large LET fluctuations reported here will be even greater for the smaller sensitive volumes that are characteristic of highly scaled technologies. End-of-range effects of stopping ions do not lead to significant inaccuracies in radiation environments with low solar activity unless the sensitive-volume thickness is 100 μm or greater. In contrast, end-of-range effects for stopping ions lead to significant inaccuracies for sensitive-volume thicknesses less than 10 μm in radiation environments with high solar activity.
NASA Astrophysics Data System (ADS)
Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng
2018-02-01
A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Q²cv) (0.9998) and coefficient of determination of the test set (Q²test) (0.9989), which demonstrates that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential for conducting such a component spectral analysis.
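The figures of merit quoted here (RMSEP and Q²) are standard and easy to reproduce from predicted and reference concentrations; the values below are placeholders, not the study's data.

```python
import numpy as np

def rmsep(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def q_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Placeholder reference vs. predicted concentrations for a test set.
y_true = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
y_pred = np.array([0.12, 0.24, 0.41, 0.53, 0.71])
print(rmsep(y_true, y_pred), q_squared(y_true, y_pred))
```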
Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs
Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo; ...
2015-12-17
Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.
Damage level prediction of non-reshaped berm breakwater using ANN, SVM and ANFIS models
NASA Astrophysics Data System (ADS)
Mandal, Sukomal; Rao, Subba; N., Harish; Lokesha
2012-06-01
The damage analysis of coastal structures is very important as it involves many design parameters that must be considered for a better and safer design of the structure. In the present study, experimental data for a non-reshaped berm breakwater are collected from the Marine Structures Laboratory, Department of Applied Mechanics and Hydraulics, NITK, Surathkal, India. Soft computing techniques like Artificial Neural Network (ANN), Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) models are constructed using experimental data sets to predict the damage level of the non-reshaped berm breakwater. The experimental data are used to train the ANN, SVM and ANFIS models, and the results are evaluated in terms of statistical measures like mean square error, root mean square error, correlation coefficient and scatter index. The results show that soft computing techniques, i.e., ANN, SVM and ANFIS, can be efficient tools in predicting damage levels of non-reshaped berm breakwaters.
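The statistical measures named here (mean square error, root mean square error, correlation coefficient, scatter index) can be computed as follows; the damage-level values are placeholders, not the experimental data.

```python
import numpy as np

def error_stats(observed, predicted):
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    mse = np.mean((obs - pred) ** 2)
    rmse = np.sqrt(mse)
    cc = np.corrcoef(obs, pred)[0, 1]
    si = rmse / np.mean(obs)          # scatter index: RMSE normalized by the observed mean
    return {"MSE": mse, "RMSE": rmse, "CC": cc, "SI": si}

observed = [0.8, 1.2, 1.9, 2.4, 3.1]    # measured damage levels (placeholder)
predicted = [0.9, 1.1, 2.0, 2.2, 3.3]   # model output (placeholder)
print(error_stats(observed, predicted))
```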
Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo
Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.
NASA Astrophysics Data System (ADS)
Schramm, G.; Maus, J.; Hofheinz, F.; Petr, J.; Lougovski, A.; Beuthien-Baumann, B.; Platzek, I.; van den Hoff, J.
2014-06-01
The aim of this paper is to describe a new automatic method for compensation of metal-implant-induced segmentation errors in MR-based attenuation maps (MRMaps) and to evaluate the quantitative influence of those artifacts on the reconstructed PET activity concentration. The developed method uses a PET-based delineation of the patient contour to compensate metal-implant-caused signal voids in the MR scan that is segmented for PET attenuation correction. PET emission data of 13 patients with metal implants examined in a Philips Ingenuity PET/MR were reconstructed with the vendor-provided method for attenuation correction (MRMap_orig, PET_orig) and additionally with a method for attenuation correction (MRMap_cor, PET_cor) developed by our group. MRMaps produced by both methods were visually inspected for segmentation errors. The segmentation errors in MRMap_orig were classified into four classes (L1 and L2 artifacts inside the lung and B1 and B2 artifacts inside the remaining body, depending on the assigned attenuation coefficients). The average relative SUV differences (ε_rel^av) between PET_orig and PET_cor of all regions showing wrong attenuation coefficients in MRMap_orig were calculated. Additionally, relative SUV_mean differences (ε_rel) of tracer accumulations in hot focal structures inside or in the vicinity of these regions were evaluated. MRMap_orig showed erroneous attenuation coefficients inside the regions affected by metal artifacts and inside the patients' lung in all 13 cases. In MRMap_cor, all regions with metal artifacts, except for the sternum, were filled with the soft-tissue attenuation coefficient and the lung was correctly segmented in all patients. MRMap_cor only showed small residual segmentation errors in eight patients. ε_rel^av (mean ± standard deviation) were: (-56 ± 3)% for B1, (-43 ± 4)% for B2, (21 ± 18)% for L1, (120 ± 47)% for L2 regions. ε_rel (mean ± standard deviation) of hot focal structures were: (-52 ± 12)% in B1, (-45 ± 13)% in B2, (19 ± 19)% in L1, (51 ± 31)% in L2 regions. Consequently, metal-implant-induced artifacts severely disturb MR-based attenuation correction and SUV quantification in PET/MR. The developed algorithm is able to compensate for these artifacts and improves SUV quantification accuracy distinctly.
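The relative SUV difference ε_rel used above is simply a normalized percentage difference between the two reconstructions; a minimal sketch with invented region means follows.

```python
import numpy as np

def relative_suv_difference(suv_orig, suv_cor):
    """Percent SUV difference of the vendor reconstruction relative to the corrected one."""
    suv_orig = np.asarray(suv_orig, dtype=float)
    suv_cor = np.asarray(suv_cor, dtype=float)
    return 100.0 * (suv_orig - suv_cor) / suv_cor

# Invented mean SUVs for a few regions affected by metal artifacts.
print(relative_suv_difference([1.2, 0.9, 2.0], [2.6, 1.7, 2.1]))
```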
Jia, Rui; Monk, Paul; Murray, David; Noble, J Alison; Mellon, Stephen
2017-09-06
Optoelectronic motion capture systems are widely employed to measure the movement of human joints. However, there can be a significant discrepancy between the data obtained by a motion capture system (MCS) and the actual movement of underlying bony structures, which is attributed to soft tissue artefact. In this paper, a computer-aided tracking and motion analysis with ultrasound (CAT & MAUS) system with an augmented globally optimal registration algorithm is presented to dynamically track the underlying bony structure during movement. The augmented registration part of CAT & MAUS was validated with a high system accuracy of 80%. The Euclidean distance between the marker-based bony landmark and the bony landmark tracked by CAT & MAUS was calculated to quantify the measurement error of an MCS caused by soft tissue artefact during movement. The average Euclidean distance between the target bony landmark measured by each of the CAT & MAUS system and the MCS alone varied from 8.32mm to 16.87mm in gait. This indicates the discrepancy between the MCS measured bony landmark and the actual underlying bony landmark. Moreover, Procrustes analysis was applied to demonstrate that CAT & MAUS reduces the deformation of the body segment shape modeled by markers during motion. The augmented CAT & MAUS system shows its potential to dynamically detect and locate actual underlying bony landmarks, which reduces the MCS measurement error caused by soft tissue artefact during movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
Color reproduction for advanced manufacture of soft tissue prostheses.
Xiao, Kaida; Zardawi, Faraedon; van Noort, Richard; Yates, Julian M
2013-11-01
The objectives of this study were to develop a color reproduction system in advanced manufacture technology for accurate and automatic processing of soft tissue prostheses. The manufacturing protocol was defined to effectively and consistently produce soft tissue prostheses using a 3D printing system. Within this protocol printer color profiles were developed using a number of mathematical models for the proposed 3D color printing system based on 240 training colors. On this basis, the color reproduction system was established and their system errors including accuracy of color reproduction, performance of color repeatability and color gamut were evaluated using 14 known human skin shades. The printer color profile developed using the third-order polynomial regression based on least-square fitting provided the best model performance. The results demonstrated that by using the proposed color reproduction system, 14 different skin colors could be reproduced and excellent color reproduction performance achieved. Evaluation of the system's color repeatability revealed a demonstrable system error and this highlighted the need for regular evaluation. The color gamut for the proposed 3D printing system was simulated and it was demonstrated that the vast majority of skin colors can be reproduced with the exception of extreme dark or light skin color shades. This study demonstrated that the proposed color reproduction system can be effectively used to reproduce a range of human skin colors for application in advanced manufacture of soft tissue prostheses. Copyright © 2013 Elsevier Ltd. All rights reserved.
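A printer color profile of the kind described can be fitted by expanding the target colors into third-order polynomial terms and solving a least-squares problem. The sketch below is a hedged illustration of that modeling step only; the random stand-in data, the specific 14-term expansion, and the mapping direction are assumptions, not the study's calibration data or software.

```python
import numpy as np

def poly3_features(rgb):
    """Third-order polynomial expansion of normalized RGB triplets (shape n x 3)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    cols = [np.ones_like(r), r, g, b,
            r * r, g * g, b * b, r * g, r * b, g * b,
            r ** 3, g ** 3, b ** 3, r * g * b]
    return np.stack(cols, axis=1)

rng = np.random.default_rng(4)
target = rng.random((240, 3))      # stand-in for the 240 measured training colors
device = rng.random((240, 3))      # stand-in for the corresponding printer drive values
coeffs, *_ = np.linalg.lstsq(poly3_features(target), device, rcond=None)
fitted = poly3_features(target) @ coeffs     # profile applied back to the training colors
print(np.abs(fitted - device).mean())        # mean fitting residual
```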
Peterman, Robert J; Jiang, Shuying; Johe, Rene; Mukherjee, Padma M
2016-12-01
Dolphin® visual treatment objective (VTO) prediction software is routinely utilized by orthodontists during the treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered to be a vital tool, its accuracy is not well understood in two-jaw surgical procedures. The objective of this study was to quantify the accuracy of Dolphin Imaging's VTO soft tissue prediction software on class III patients treated with maxillary advancement and mandibular setback and to validate the efficacy of the software in such complex cases. This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue patient profile prediction. Prediction images were then compared to the actual post-treatment profile photos to determine differences. Dolphin Imaging's software was determined to be accurate within an error range of +/- 2 mm in the X-axis at most landmarks. The lower lip predictions were the most inaccurate. Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner. However, Dolphin should not be relied on for precise treatment planning of surgical movements. This program should be used with caution to prevent unrealistic patient expectations and dissatisfaction.
Spilker, R L; de Almeida, E S; Donzelli, P S
1992-01-01
This chapter addresses computationally demanding numerical formulations in the biomechanics of soft tissues. The theory of mixtures can be used to represent soft hydrated tissues in the human musculoskeletal system as a two-phase continuum consisting of an incompressible solid phase (collagen and proteoglycan) and an incompressible fluid phase (interstitial water). We first consider the finite deformation of soft hydrated tissues in which the solid phase is represented as hyperelastic. A finite element formulation of the governing nonlinear biphasic equations is presented based on a mixed-penalty approach and derived using the weighted residual method. Fluid and solid phase deformation, velocity, and pressure are interpolated within each element, and the pressure variables within each element are eliminated at the element level. A system of nonlinear, first-order differential equations in the fluid and solid phase deformation and velocity is obtained. In order to solve these equations, the contributions of the hyperelastic solid phase are incrementally linearized, a finite difference rule is introduced for temporal discretization, and an iterative scheme is adopted to achieve equilibrium at the end of each time increment. We demonstrate the accuracy and adequacy of the procedure using a six-node, isoparametric axisymmetric element, and we present an example problem for which independent numerical solution is available. Next, we present an automated, adaptive environment for the simulation of soft tissue continua in which the finite element analysis is coupled with automatic mesh generation, error indicators, and projection methods. Mesh generation and updating, including both refinement and coarsening, for the two-dimensional examples examined in this study are performed using the finite quadtree approach. The adaptive analysis is based on an error indicator which is the L2 norm of the difference between the finite element solution and a projected finite element solution. Total stress, calculated as the sum of the solid and fluid phase stresses, is used in the error indicator. To allow the finite difference algorithm to proceed in time using an updated mesh, solution values must be transferred to the new nodal locations. This rezoning is accomplished using a projected field for the primary variables. The accuracy and effectiveness of this adaptive finite element analysis is demonstrated using a linear, two-dimensional, axisymmetric problem corresponding to the indentation of a thin sheet of soft tissue. The method is shown to effectively capture the steep gradients and to produce solutions in good agreement with independent, converged, numerical solutions.
NASA Astrophysics Data System (ADS)
Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping
2016-11-01
A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.
Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.
Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph
2017-01-01
In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task for physiologically monitoring and controlling the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Therefore, soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of the biomass formation, are smart alternatives. For the reconciliation procedure, mass and energy balances are used together with accuracy estimations of the measured conversion rates, which so far have been chosen arbitrarily and kept static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate the benefits: for industrially relevant conditions, the error of the resulting estimated biomass formation rate and specific substrate consumption rate could be decreased by 43% and 64%, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the raw signal accuracy required to obtain predefined accuracies of soft-sensor estimations. Thereby, appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be additionally and substantially increased.
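First-order (Gaussian) propagation of raw-signal uncertainties into a derived conversion rate can be sketched as below. This is only an illustrative stand-in for the idea of propagating measurement accuracy: the toy rate function, signal values, and standard deviations are assumptions, not the balances used by the adaptive soft-sensor.

```python
import numpy as np

def propagate_uncertainty(rate_fn, x, sigma, eps=1e-6):
    """First-order propagation: var(rate) = sum_i (d rate / d x_i)^2 * sigma_i^2."""
    x = np.asarray(x, dtype=float)
    base = rate_fn(x)
    grads = np.array([(rate_fn(x + eps * np.eye(len(x))[i]) - base) / eps
                      for i in range(len(x))])
    return float(np.sqrt(np.sum((grads * np.asarray(sigma, dtype=float)) ** 2)))

# Toy conversion rate built from two raw signals (e.g. an off-gas reading and a feed rate).
rate = lambda x: 0.8 * x[0] - 0.5 * x[1]
print(propagate_uncertainty(rate, x=[10.0, 4.0], sigma=[0.2, 0.1]))
```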
Table-top soft x-ray microscope using laser-induced plasma from a pulsed gas jet.
Müller, Matthias; Mey, Tobias; Niemeyer, Jürgen; Mann, Klaus
2014-09-22
An extremely compact soft x-ray microscope operating in the "water window" region at the wavelength λ = 2.88 nm is presented, making use of a long-term stable and nearly debris-free laser-induced plasma from a pulsed nitrogen gas jet target. The well characterized soft x-ray radiation is focused by an ellipsoidal grazing incidence condenser mirror. Imaging of a sample onto a CCD camera is achieved with a Fresnel zone plate using magnifications up to 500x. The spatial resolution of the recorded microscopic images is about 100 nm as demonstrated for a Siemens star test pattern.
Examining the Angular Resolution of the Astro-H's Soft X-Ray Telescopes
NASA Technical Reports Server (NTRS)
Sato, Toshiki; Iizuka, Ryo; Ishida, Manabu; Kikuchi, Naomichi; Maeda, Yoshitomo; Kurashima, Sho; Nakaniwa, Nozomi; Tomikawa, Kazuki; Hayashi, Takayuki; Mori, Hideyuki;
2016-01-01
The international x-ray observatory ASTRO-H was renamed Hitomi after launch. It covers a wide energy range from a few hundred eV to 600 keV. It is equipped with two soft x-ray telescopes (SXTs: SXT-I and SXT-S) for imaging the soft x-ray sky up to 12 keV, which focus images onto their respective focal-plane detectors: a CCD camera (SXI) and a calorimeter (SXS). The SXTs are fabricated in quadrant units. The angular resolution in half-power diameter (HPD) of each quadrant of the SXTs ranges between 1.1 and 1.4 arc min at 4.51 keV. It was also found that one quadrant shows an energy dependence of the HPD. We examine the angular resolution with spot-scan measurements. In order to understand the cause of the imaging-capability deterioration and to feed the findings back into future telescope development, we carried out spot-scan measurements in which we illuminate the entire aperture of each quadrant with a square beam 8 mm on a side. Based on the scan results, we made maps of the image blurring and the focus position. The former and the latter reflect the figure error and the positioning error, respectively, of the foils that are within the incident 8 mm x 8 mm beam. As a result, we estimated those errors in a quadrant to be approx. 0.9 to 1.0 and approx. 0.6 to 0.9 arc min, respectively. We found that the larger the positioning error in a quadrant, the larger its HPD. The HPD map, which shows the local image blurring, is very similar from quadrant to quadrant, but the map of the focus position differs from location to location in each telescope. It is also found that the difference in local performance causes the energy dependence of the HPD.
Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya
2003-01-01
The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error. This was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. Performance of the hybrid method was intermediate in comparison to the individual approximators. Error in the response variable is smaller than that shown in the figure because of a distortion scale factor. The overall performance of the approximators was considered to be satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bhat, P. N.;
2012-01-01
Due to an error at the publisher, the times given for the major tick marks in the X-axis in Figure 1 of the published article are incorrect. The correctly labeled times should be 00:52:00, 00:54:00, ..., and 01:04:00. The correct version of Figure 1 and its caption is shown below. IOP Publishing sincerely regrets this error.
NASA Astrophysics Data System (ADS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bhat, P. N.; Blandford, R. D.; Bonamente, E.; Borgland, A. W.; Bregeon, J.; Briggs, M. S.; Brigida, M.; Bruel, P.; Buehler, R.; Burgess, J. M.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Casandjian, J. M.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Connaughton, V.; Conrad, J.; Cutini, S.; Dennis, B. R.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Dubois, R.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Fortin, P.; Fukazawa, Y.; Fusco, P.; Gargano, F.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grillo, L.; Grove, J. E.; Gruber, D.; Guiriec, S.; Hadasch, D.; Hayashida, M.; Hays, E.; Horan, D.; Iafrate, G.; Jóhannesson, G.; Johnson, A. S.; Johnson, W. N.; Kamae, T.; Kippen, R. M.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Meegan, C.; Mehault, J.; Michelson, P. F.; Mitthumsiri, W.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Murphy, R.; Naumann-Godo, M.; Nuss, E.; Nymark, T.; Ohno, M.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Paciesas, W. S.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Petrosian, V.; Pierbattista, M.; Piron, F.; Pivato, G.; Poon, H.; Porter, T. A.; Preece, R.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Ritz, S.; Sbarra, C.; Schwartz, R. A.; Sgrò, C.; Share, G. H.; Siskind, E. J.; Spinelli, P.; Takahashi, H.; Tanaka, T.; Tanaka, Y.; Thayer, J. B.; Tibaldo, L.; Tinivella, M.; Tolbert, A. K.; Tosti, G.; Troja, E.; Uchiyama, Y.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; von Kienlin, A.; Waite, A. P.; Wilson-Hodge, C.; Wood, D. L.; Wood, K. S.; Yang, Z.
2012-04-01
Due to an error at the publisher, the times given for the major tick marks in the X-axis in Figure 1 of the published article are incorrect. The correctly labeled times should be "00:52:00," "00:54:00," ... , and "01:04:00." The correct version of Figure 1 and its caption is shown below. IOP Publishing sincerely regrets this error.
Investigating the impact of spatial priors on the performance of model-based IVUS elastography
Richards, M S; Doyley, M M
2012-01-01
This paper describes methods that provide prerequisite information for computing circumferential stress in modulus elastograms recovered from vascular tissue—information that could help cardiologists detect life-threatening plaques and predict their propensity to rupture. The modulus recovery process is an ill-posed problem; therefore additional information is needed to provide useful elastograms. In this work, prior geometrical information was used to impose hard or soft constraints on the reconstruction process. We conducted simulation and phantom studies to evaluate and compare modulus elastograms computed with soft and hard constraints versus those computed without any prior information. The results revealed that (1) the contrast-to-noise ratio of modulus elastograms achieved using the soft prior and hard prior reconstruction methods exceeded that of elastograms computed without any prior information; (2) the soft prior and hard prior reconstruction methods could tolerate up to 8% measurement noise; and (3) the performance of soft and hard prior modulus elastograms degraded when incomplete spatial priors were employed. This work demonstrates that including spatial priors in the reconstruction process should improve the performance of model-based elastography, and the soft prior approach should enhance the robustness of the reconstruction process to errors in the geometrical information. PMID:22037648
NASA Astrophysics Data System (ADS)
Onorato, M. Romina; Perucca, Laura; Coronato, Andrea; Rabassa, Jorge; López, Ramiro
2016-10-01
In this paper, evidence of paleoearthquake-induced soft-sediment deformation structures associated with the Magallanes-Fagnano Fault System in the Isla Grande de Tierra del Fuego, southern Argentina, is identified. Well-preserved soft-sediment deformation structures were found in a Holocene sequence of the Udaeta pond. These structures were analyzed in terms of their geometrical characteristics, deformation mechanisms, driving force system, and possible trigger agent. They were also grouped into different morphological types: sand dykes, convolute lamination, load structures, and faulted soft-sediment deformation features. Udaeta, a small pond in Argentine Tierra del Fuego, is considered a Quaternary pull-apart basin related to the Magallanes-Fagnano Fault System. The recognition of these seismically induced features is an essential tool for paleoseismic studies. Since the three main urban centers in the Tierra del Fuego province of Argentina (Ushuaia, Río Grande and Tolhuin) have undergone explosive growth in recent years, the results of this study will hopefully contribute to future analyses of the seismic risk of the region.
Stress fields in soft material induced by injection of highly-focused microjets
NASA Astrophysics Data System (ADS)
Miyazaki, Yuta; Endo, Nanami; Kawamoto, Sennosuke; Kiyama, Akihito; Tagawa, Yoshiyuki
2017-11-01
Needle-free drug injection systems using high-speed microjets are of great importance for medical innovations since they can solve problems of the conventional needle injection systems. However, the mechanical stress acting on the skin/muscle of patients during the penetration of liquid-drug microjets had not been clarified. In this study we investigate the stress caused by the penetration of microjets into soft materials, which is compared with the stress induced by the penetration of needles. In order to capture high-speed temporal evolution of the stress field inside the material, we utilized a high-speed polarized camera and gelatin that resembles human skin. Remarkably we find clear differences in the stress fields induced by microjets and needles. On one hand, high shear stress induced by the microjets is attenuated immediately after the injection, even though the liquid stays inside the soft material. On the other hand, high-shear stress induced by the needles stays and never decays unless the needles are entirely removed from the material. JSPS KAKENHI Grant Numbers 26709007 and 17H01246.
The Forced Soft Spring Equation
ERIC Educational Resources Information Center
Fay, T. H.
2006-01-01
Through numerical investigations, this paper studies examples of the forced Duffing type spring equation with [epsilon] negative. By performing trial-and-error numerical experiments, the existence is demonstrated of stability boundaries in the phase plane indicating initial conditions yielding bounded solutions. Subharmonic boundaries are…
Al-Obaidi, Saud; Al-Sayegh, Nowall; Nadar, Mohammed
2014-07-01
Grip strength assessment reflects the overall health of the musculoskeletal system and is a predictor of functional prognosis and mortality. The purposes of this study were to examine whether grip strength and fatigue resistance are impaired in smokers, and to determine whether smoking-related impairments (fatigue index) can be predicted by demographic data, duration of smoking, packets smoked per day, and physical activity. Maximum isometric grip strength (MIGS) of male smokers (n = 111) and nonsmokers (n = 66) was measured before and after induced fatigue using a Jamar dynamometer at 5 handle positions. The fatigue index was calculated from the percentage change in MIGS before and after induced fatigue. The number of repetitions needed to squeeze a soft rubber ball to induce fatigue was significantly lower in smokers than in nonsmokers (t = 10.6, P < .001 dominant hand; t = 13.9, P < .001 nondominant), demonstrating a significantly higher fatigue index for smokers than nonsmokers (t = -8.7, P < .001 dominant hand; t = -6.0, P < .001 nondominant). The effect of smoking status on MIGS scores differed significantly between smokers and nonsmokers after induced fatigue (β = -3.98, standard error = 0.59, P < .001), with smokers experiencing on average a reduction of nearly 4 MIGS relative to nonsmokers. Smoking status was the strongest significant independent predictor of the fatigue index. Smokers demonstrated reduced grip strength and faster fatigability in comparison with nonsmokers.
Foot Modeling and Smart Plantar Pressure Reconstruction from Three Sensors
Ghaida, Hussein Abou; Mottet, Serge; Goujon, Jean-Marc
2014-01-01
In order to monitor pressure under feet, this study presents a biomechanical model of the human foot. The main elements of the foot that induce the plantar pressure distribution are described. Then the link between the forces applied at the ankle and the distribution of the plantar pressure is established. Assumptions are made by defining the concepts of a 3D internal foot shape, which can be extracted from the plantar pressure measurements, and a uniform elastic medium, which describes the soft tissues' behaviour. In the second part, we show that just 3 discrete pressure sensors per foot are enough to generate real-time plantar pressure cartographies in the standing position or during walking. Finally, the generated cartographies are compared with pressure cartographies issued from the F-SCAN system. The results show 0.01 daN (2% of full scale) average error in the standing position. PMID:25400713
Foot modeling and smart plantar pressure reconstruction from three sensors.
Ghaida, Hussein Abou; Mottet, Serge; Goujon, Jean-Marc
2014-01-01
In order to monitor pressure under feet, this study presents a biomechanical model of the human foot. The main elements of the foot that induce the plantar pressure distribution are described. Then the link between the forces applied at the ankle and the distribution of the plantar pressure is established. Assumptions are made by defining the concepts of a 3D internal foot shape, which can be extracted from the plantar pressure measurements, and a uniform elastic medium, which describes the soft tissues' behaviour. In the second part, we show that just 3 discrete pressure sensors per foot are enough to generate real-time plantar pressure cartographies in the standing position or during walking. Finally, the generated cartographies are compared with pressure cartographies issued from the F-SCAN system. The results show 0.01 daN (2% of full scale) average error in the standing position.
NASA Technical Reports Server (NTRS)
Marshall, Cheryl J.; Marshall, Paul W.
1999-01-01
This portion of the Short Course is divided into two segments to separately address the two major proton-related effects confronting satellite designers: ionization effects and displacement damage effects. While both of these topics are deeply rooted in "traditional" descriptions of space radiation effects, there are several factors at play to cause renewed concern for satellite systems being designed today. For example, emphasis on Commercial Off-The-Shelf (COTS) technologies in both commercial and government systems increases both Total Ionizing Dose (TID) and Single Event Effect (SEE) concerns. Scaling trends exacerbate the problems, especially with regard to SEEs where protons can dominate soft error rates and even cause destructive failure. In addition, proton-induced displacement damage at fluences encountered in natural space environments can cause degradation in modern bipolar circuitry as well as in many emerging electronic and opto-electronic technologies.
Equivalent dynamic model of DEMES rotary joint
NASA Astrophysics Data System (ADS)
Zhao, Jianwen; Wang, Shu; Xing, Zhiguang; McCoul, David; Niu, Junyang; Huang, Bo; Liu, Liwu; Leng, Jinsong
2016-07-01
The dielectric elastomer minimum energy structure (DEMES) can realize large angular deformations through a small voltage-induced strain of the dielectric elastomer (DE), so it is a suitable candidate for a rotary joint in a soft robot. Dynamic analysis is necessary for some applications, but the dynamic response of DEMESs is difficult to model because of the complicated morphology and viscoelasticity of the DE film. In this paper, a method composed of theoretical analysis and experimental measurement is presented to model the dynamic response of a DEMES rotary joint under an alternating voltage. Based on measurements of the equivalent driving force and damping of the DEMES, the model can be derived. Experiments were carried out to validate the equivalent dynamic model. The maximum angle error between the model and experiment exceeds ten degrees, but the model predicts the angular velocity of the DEMES acceptably; therefore, it can be applied in feedforward-feedback compound control.
Development of Biological Acoustic Impedance Microscope and its Error Estimation
NASA Astrophysics Data System (ADS)
Hozumi, Naohiro; Nakano, Aiko; Terauchi, Satoshi; Nagao, Masayuki; Yoshida, Sachiko; Kobayashi, Kazuto; Yamamoto, Seiji; Saijo, Yoshifumi
This report deals with a scanning acoustic microscope for imaging the cross-sectional acoustic impedance of biological soft tissues. A focused acoustic beam was transmitted to the tissue object mounted on the "rear surface" of a plastic substrate. A cerebellum tissue of rat and a reference material were observed at the same time under the same conditions. As the incidence is not vertical, not only a longitudinal wave but also a transversal wave is generated in the substrate. The error in acoustic impedance when assuming vertical incidence was estimated. It was proved that the error can be precisely compensated if the beam pattern and the acoustic parameters of the coupling medium and substrate are known.
Microcircuit radiation effects databank
NASA Technical Reports Server (NTRS)
1983-01-01
This databank is a collation of radiation test data submitted by many testers and serves as a reference for engineers who are concerned with, and have some knowledge of, the effects of the natural radiation environment on microcircuits. It contains radiation sensitivity results from ground tests and is divided into two sections. Section A lists total dose damage information, and Section B lists single event upset cross sections, i.e., the probability of a soft error (bit flip) or of a hard error (latchup).
Hewitt, Kelly; Lin, Hsin; Faraklas, Iris; Morris, Stephen; Cochran, Amalia; Saffle, Jeffrey
2014-01-01
The routine use of high-dose opioids for analgesia in patients with acute burns and soft-tissue injuries often leads to the development of opioid-induced constipation. The opioid antagonist methylnaltrexone (MLTX) reverses narcotic-related ileus without affecting systemic pain treatment. The authors' burn center developed a bowel protocol that included administration of MLTX for relief of opioid-induced constipation after other methods failed. The authors performed a retrospective review of patients with acute burns or necrotizing soft-tissue infections, who had been given subcutaneous MLTX to induce laxation. All patients who received MLTX were included and all administrations of the drug were included in the analysis. The primary outcome examined was time to laxation from drug administration. Forty-eight patients received MLTX a total of 112 times. Six patients were admitted with soft-tissue injuries and the rest suffered burns with an average TBSA of 17%. The median patient age was 41 years and the majority (75%) were men. Administration of a single dose of MLTX resulted in laxation within 4 hours in 38% of cases, and within 24 hours in 68%. Patients given MLTX received an average of 174 mg morphine equivalents daily for pain control. MLTX was given after an average of 52 hours since the last bowel movement. As this experience has evolved, it has been incorporated into an organized bowel protocol, which includes MLTX administration after other laxatives have failed. MLTX is an effective laxation agent in patients with burn and soft-tissue injuries, who have failed conventional agents.
An Avoidance Model for Short-Range Order Induced by Soft Repulsions in Systems of Rigid Rods
NASA Astrophysics Data System (ADS)
Han, Jining; Herzfeld, Judith
1996-03-01
The effects of soft repulsions on hard particle systems are calculated using an avoidance model which improves upon the simple mean field approximation. Avoidance reduces, but does not eliminate, the energy due to soft repulsions. On the other hand, it also reduces the configurational entropy. Under suitable conditions, this simple trade-off yields a free energy that is lower than the mean field value. In these cases, the variationally determined avoidance gives an estimate for the short-range positional order induced by soft repulsions. The results indicate little short-range order for isotropically oriented rods. However, for parallel rods, short-range order increases to significant levels as the particle axial ratio increases. The implications for long-range positional ordering are also discussed. In particular, avoidance may explain the smectic ordering of tobacco mosaic virus at volume fractions lower than those necessary for smectic ordering of hard particles.
Patient-specific polyetheretherketone facial implants in a computer-aided planning workflow.
Guevara-Rojas, Godoberto; Figl, Michael; Schicho, Kurt; Seemann, Rudolf; Traxler, Hannes; Vacariu, Apostolos; Carbon, Claus-Christian; Ewers, Rolf; Watzinger, Franz
2014-09-01
In the present study, we report an innovative workflow using polyetheretherketone (PEEK) patient-specific implants for esthetic corrections in the facial region through onlay grafting. The planning includes implant design according to virtual osteotomy and generation of a subtraction volume. The implant design was refined by stepwise changing the implant geometry according to soft tissue simulations. One patient was scanned using computed tomography. PEEK implants were interactively designed and manufactured using rapid prototyping techniques. Positioning intraoperatively was assisted by computer-aided navigation. Two months after surgery, a 3-dimensional surface model of the patient's face was generated using photogrammetry. Finally, the Hausdorff distance calculation was used to quantify the overall error, encompassing the failures in soft tissue simulation and implantation. The implant positioning process during surgery was satisfactory. The simulated soft tissue surface and the photogrammetry scan of the patient showed a high correspondence, especially where the skin covered the implants. The mean total error (Hausdorff distance) was 0.81 ± 1.00 mm (median 0.48, interquartile range 1.11). The spatial deviation remained less than 0.7 mm for the vast majority of points. The proposed workflow provides a complete computer-aided design, computer-aided manufacturing, and computer-aided surgery chain for implant design, allowing for soft tissue simulation, fabrication of patient-specific implants, and image-guided surgery to position the implants. Much of the surgical complexity resulting from osteotomies of the zygoma, chin, or mandibular angle might be transferred into the planning phase of patient-specific implants. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Cheng, Tzu-Han; Tsai, Chen-Gia
2016-01-01
Although music and the emotion it conveys unfold over time, little is known about how listeners respond to shifts in musical emotions. A special technique in heavy metal music utilizes dramatic shifts between loud and soft passages. Loud passages are penetrated by distorted sounds conveying aggression, whereas soft passages are often characterized by a clean, calm singing voice and light accompaniment. The present study used heavy metal songs and soft sea sounds to examine how female listeners' respiration rates and heart rates responded to the arousal changes associated with auditory stimuli. The high-frequency power of heart rate variability (HF-HRV) was used to assess cardiac parasympathetic activity. The results showed that the soft passages of heavy metal songs and soft sea sounds expressed lower arousal and induced significantly higher HF-HRVs than the loud passages of heavy metal songs. Listeners' respiration rate was determined by the arousal level of the present music passage, whereas the heart rate was dependent on both the present and preceding passages. Compared with soft sea sounds, the loud music passage led to greater deceleration of the heart rate at the beginning of the following soft music passage. The sea sounds delayed the heart rate acceleration evoked by the following loud music passage. The data provide evidence that sound-induced parasympathetic activity affects listeners' heart rate in response to the following music passage. These findings have potential implications for future research on the temporal dynamics of musical emotions.
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2017-11-01
Precipitation plays an important role in determining the climate of a region. Precise estimation of precipitation is required to manage and plan water resources, as well as other related applications such as hydrology, climatology, meteorology and agriculture. Time series of hydrologic variables such as precipitation are composed of deterministic and stochastic parts. Despite this fact, the stochastic part of the precipitation data is not usually considered in modeling of precipitation process. As an innovation, the present study introduces three new hybrid models by integrating soft computing methods including multivariate adaptive regression splines (MARS), Bayesian networks (BN) and gene expression programming (GEP) with a time series model, namely generalized autoregressive conditional heteroscedasticity (GARCH) for modeling of the monthly precipitation. For this purpose, the deterministic (obtained by soft computing methods) and stochastic (obtained by GARCH time series model) parts are combined with each other. To carry out this research, monthly precipitation data of Babolsar, Bandar Anzali, Gorgan, Ramsar, Tehran and Urmia stations with different climates in Iran were used during the period of 1965-2014. Root mean square error (RMSE), relative root mean square error (RRMSE), mean absolute error (MAE) and determination coefficient (R2) were employed to evaluate the performance of conventional/single MARS, BN and GEP, as well as the proposed MARS-GARCH, BN-GARCH and GEP-GARCH hybrid models. It was found that the proposed novel models are more precise than single MARS, BN and GEP models. Overall, MARS-GARCH and BN-GARCH models yielded better accuracy than GEP-GARCH. The results of the present study confirmed the suitability of proposed methodology for precise modeling of precipitation.
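A minimal sketch of the four evaluation metrics named above (RMSE, RRMSE, MAE, and R2). The RRMSE definition used here (RMSE normalised by the mean of the observations) is one common convention and is an assumption, as are the example numbers.

```python
import numpy as np

def metrics(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))
    rrmse = rmse / np.mean(obs)                       # assumed normalisation
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"RMSE": rmse, "RRMSE": rrmse, "MAE": mae, "R2": r2}

# Illustrative monthly precipitation observations vs. model output (mm).
print(metrics([10.0, 25.0, 40.0, 5.0], [12.0, 23.0, 38.0, 7.0]))
```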
Ji, Qiuzhi; Yoo, Young-Sik; Alam, Hira; Yoon, Geunyoung
2018-05-01
To characterise the impact of monofocal soft contact lens (SCL) and bifocal SCLs on refractive error, depth of focus (DoF) and orientation of blur in the peripheral visual field. Monofocal and two bifocal SCLs, Acuvue Bifocal (AVB, Johnson & Johnson) and Misight Dual Focus (DF, CooperVision) with +2.0 D add power were modelled using a ray tracing program (ZEMAX) based on their power maps. These SCLs were placed onto the anterior corneal surface of the simulated Atchison myopic eye model to correct for -3.0 D spherical refractive error at the fovea. To quantify through-focus retinal image quality, defocus from -3.5 D to 1.5 D in 0.5 D steps was induced at each horizontal eccentricity from 0 to 40° in 10° steps. Wavefront aberrations were computed for each visual eccentricity and defocus. The retinal images were simulated using a custom software program developed in Matlab (The MathWorks) by convolving the point spread function calculated from the aberration with a reference image. The convolved images were spatially filtered to match the spatial resolution limit of each peripheral eccentricity. Retinal image quality was then quantified by the 2-D cross-correlation between the filtered convolved retinal images and the reference image. Peripheral defocus, DoF and orientation of blur were also estimated. In comparison with the monofocal SCL, the bifocal SCLs degraded retinal image quality while DoF was increased at fovea. From 10 to 20°, a relatively small amount of myopic shift (less than 0.3 D) was induced by bifocal SCLs compared with monofocal. DoF was also increased with bifocal SCLs at peripheral vision of 10 and 20°. The trend of myopic shift became less consistent at larger eccentricity, where at 30° DF showed a 0.75 D myopic shift while AVB showed a 0.2 D hyperopic shift and both AVB and DF exhibited large relative hyperopic defocus at 40°. The anisotropy in orientation of blur was found to increase and change its direction through focus beyond central vision. This trend was found to be less dominant with bifocal SCLs compared to monofocal SCL. Bifocal SCLs have a relatively small impact on myopic shift in peripheral refractive error while DoF is increased significantly. We hypothetically suggest that a mechanism underlying myopia control with these bifocal or multifocal contact lenses is an increase in DoF and a decrease in anisotropy of peripheral optical blur. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.
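A minimal sketch (assumed implementation detail, not the authors' Matlab code) of a zero-lag normalised 2-D cross-correlation of the kind used above to score a simulated blurred retinal image against the reference image: values near 1 indicate high image quality.

```python
import numpy as np

def image_quality(blurred, reference):
    # Normalised cross-correlation coefficient at zero lag (Pearson over pixels).
    a = blurred - blurred.mean()
    b = reference - reference.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
print(image_quality(ref + 0.1 * rng.random((64, 64)), ref))   # mild degradation -> close to 1
```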
Haptic communication between humans is tuned by the hard or soft mechanics of interaction
Usai, Francesco; Ganesh, Gowrishankar; Sanguineti, Vittorio; Burdet, Etienne
2018-01-01
To move a hard table together, humans may coordinate by following the dominant partner’s motion [1–4], but this strategy is unsuitable for a soft mattress where the perceived forces are small. How do partners readily coordinate in such differing interaction dynamics? To address this, we investigated how pairs tracked a target using flexion-extension of their wrists, which were coupled by a hard, medium or soft virtual elastic band. Tracking performance monotonically increased with a stiffer band for the worse partner, who had higher tracking error, at the cost of the skilled partner’s muscular effort. This suggests that the worse partner followed the skilled one’s lead, but simulations show that the results are better explained by a model where partners share movement goals through the forces, whilst the coupling dynamics determine the capacity of communicable information. This model elucidates the versatile mechanism by which humans can coordinate during both hard and soft physical interactions to ensure maximum performance with minimal effort. PMID:29565966
Lu, Min-Hua; Mao, Rui; Lu, Yin; Liu, Zheng; Wang, Tian-Fu; Chen, Si-Ping
2012-01-01
Indentation testing is a widely used approach to quantitatively evaluate the mechanical characteristics of soft tissues. The Young's modulus of soft tissue can be calculated from force-deformation data with known tissue thickness and Poisson's ratio using Hayes' equation. Our group previously developed a noncontact indentation system using a water jet as a soft indenter as well as the coupling medium for the propagation of high-frequency ultrasound. The novel system has shown its ability to detect the early degeneration of articular cartilage. However, there is still a lack of a quantitative method to extract the intrinsic mechanical properties of soft tissue from water jet indentation. The purpose of this study is to investigate the relationship between the loading-unloading curves and the mechanical properties of soft tissues to provide an imaging technique for tissue mechanical properties. A 3D finite element model of water jet indentation was developed with consideration of the finite deformation effect. An improved Hayes' equation has been derived by introducing a new scaling factor which is dependent on the Poisson's ratio v, aspect ratio a/h (the radius of the indenter/the thickness of the test tissue), and deformation ratio d/h. With this model, the Young's modulus of soft tissue can be quantitatively evaluated and imaged with an error of no more than 2%. PMID:22927890
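A minimal sketch of the classical Hayes relation that the abstract uses as its starting point, E = F(1 - v^2) / (2 a kappa d), where kappa depends on Poisson's ratio v and the aspect ratio a/h and is usually taken from tables or finite element fits. The kappa value and all numbers below are placeholder assumptions, not figures from the paper.

```python
def youngs_modulus_hayes(force_N, indent_depth_m, indenter_radius_m,
                         poisson_ratio, kappa):
    # Hayes' equation rearranged for the Young's modulus E (Pa).
    return (force_N * (1.0 - poisson_ratio ** 2)
            / (2.0 * indenter_radius_m * kappa * indent_depth_m))

# Example with illustrative numbers only.
E = youngs_modulus_hayes(force_N=0.05, indent_depth_m=0.2e-3,
                         indenter_radius_m=1.0e-3, poisson_ratio=0.45,
                         kappa=1.5)
print(f"Estimated Young's modulus: {E / 1e3:.1f} kPa")
```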
Neurological soft signs in children with attention deficit hyperactivity disorder.
Patankar, V C; Sangle, J P; Shah, Henal R; Dave, M; Kamath, R M
2012-04-01
Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental disorder with wide repercussions. Since it is etiologically related to delayed maturation, neurological soft signs (NSS) could be a tool to assess this. Further, the correlation of NSS with severity and type of ADHD and with the presence of Specific Learning Disability (SLD) would give further insight into it. To study neurological soft signs and risk factors (type, mode of delivery, and milestones) in children with ADHD and to correlate NSS with type and severity of ADHD and with co-morbid Specific Learning Disability. The study was carried out in the child care services of a tertiary teaching urban hospital. It was a cross-sectional single-interview study. Fifty-two consecutive children diagnosed with ADHD were assessed for the presence of neurological soft signs using the Revised Physical and Neurological Examination Soft Signs (PANESS) scale. ADHD was rated by parents using the ADHD parent rating scale. The data were analyzed using the chi-squared test and Pearson's correlational analysis. Neurological soft signs are present in 84% of children. They are equally present in both the inattentive-hyperactive and impulsive-hyperactive types of ADHD. The presence of neurological soft signs in ADHD is independent of the presence of co-morbid SLD. Dysrhythmias and overflow with gait were typically seen in the impulsive-hyperactive type, and higher severity of ADHD is related to more errors.
Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software
NASA Astrophysics Data System (ADS)
Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg
2017-09-01
100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to the utmost. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires rethinking the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We will present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra-high data rates of 100 Gbit/s and beyond. Furthermore, we will present an ultra-low-power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.
Novel intelligent real-time position tracking system using FPGA and fuzzy logic.
Soares dos Santos, Marco P; Ferreira, J A F
2014-03-01
The main aim of this paper is to test if FPGAs are able to achieve better position tracking performance than software-based soft real-time platforms. For comparison purposes, the same controller design was implemented in these architectures. A Multi-state Fuzzy Logic controller (FLC) was implemented both in a Xilinx® Virtex-II FPGA (XC2v1000) and in a soft real-time platform NI CompactRIO®-9002. The same sampling time was used. The comparative tests were conducted using a servo-pneumatic actuation system. Steady-state errors lower than 4 μm were reached for an arbitrary vertical positioning of a 6.2 kg mass when the controller was embedded into the FPGA platform. Performance gains up to 16 times in the steady-state error, up to 27 times in the overshoot and up to 19.5 times in the settling time were achieved by using the FPGA-based controller over the software-based FLC controller. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests
Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10−3 (error/particle/cm2), while the MTTF is approximately 110.7 h. PMID:27583533
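A minimal sketch of the relationship between an error cross-section-like rate such as the SFER and an MTTF estimate, under the simple assumption that functional errors arrive as a Poisson process. The flux value is purely illustrative and is not taken from the experiment.

```python
# SFER order of magnitude from the abstract; flux is an assumed value.
sfer = 1.0e-3          # errors per (particle / cm^2)
flux = 9.0             # particles / cm^2 / h, illustrative only

error_rate = sfer * flux          # expected errors per hour
mttf_hours = 1.0 / error_rate     # mean time to failure under a Poisson model
print(f"MTTF ~ {mttf_hours:.1f} h")
```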
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.
He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10-3 (error/particle/cm2), while the MTTF is approximately 110.7 h.
Is there a link between soft drinks and erectile dysfunction?
Adamowicz, Jan; Drewa, Tomasz
2011-01-01
This review focuses on the potential role of soft drinks, particularly their sugar component, in the pathogenesis of erectile dysfunction (ED). We analyzed the hypothetical link between metabolic disorders induced by overconsumption of sweetened soft drinks and ED. High caloric intake, high refined-carbohydrate and high-fructose corn syrup (HFCS) content, and low satiety are the main factors responsible for the metabolic disorders contributing to ED development. Regular dietary mistakes among men, such as soft drink consumption, may lead to slow and asymptomatic progression of ED, finally resulting in fully manifest ED.
NASA Astrophysics Data System (ADS)
ÁLvarez, A.; Orfila, A.; Tintoré, J.
2004-03-01
Satellites are the only systems able to provide continuous information on the spatiotemporal variability of vast areas of the ocean. Relatively long-term time series of satellite data are nowadays available. These spatiotemporal time series of satellite observations can be employed to build empirical models, called satellite-based ocean forecasting (SOFT) systems, to forecast certain aspects of future ocean states. SOFT systems can predict satellite-observed fields at different timescales. The forecast skill of SOFT systems forecasting the sea surface temperature (SST) at monthly timescales has been extensively explored in previous works. In this work we study the performance of two SOFT systems forecasting, respectively, the SST and sea level anomaly (SLA) at weekly timescales, that is, providing forecasts of the weekly averaged SST and SLA fields one week in advance. The SOFT systems were implemented in the Ligurian Sea (Western Mediterranean Sea). Predictions from the SOFT systems are compared with observations and with the predictions obtained from persistence models. Results indicate that the SOFT system forecasting the SST field is always superior in terms of predictability to persistence. Minimum prediction errors in the SST are obtained during the winter and spring seasons. On the other hand, the biggest differences between the performance of the SOFT and persistence models are found during summer and autumn. These changes in predictability are explained on the basis of the particular variability of the SST field in the Ligurian Sea. Concerning the SLA field, no improvement with respect to persistence was found for the SOFT system forecasting the SLA field.
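A minimal sketch of the kind of comparison described above: a forecast of next week's field is scored against a persistence baseline (this week's field repeated) using RMSE. The field values and the forecast are illustrative assumptions, not data from the study.

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

obs_this_week = np.array([18.2, 18.5, 18.9, 19.1])   # illustrative weekly-mean SST (deg C)
obs_next_week = np.array([18.6, 19.0, 19.3, 19.6])
soft_forecast = np.array([18.5, 18.9, 19.4, 19.5])   # hypothetical SOFT-system output

rmse_soft = rmse(soft_forecast, obs_next_week)
rmse_persistence = rmse(obs_this_week, obs_next_week)  # persistence baseline
print("SOFT RMSE:", rmse_soft, "persistence RMSE:", rmse_persistence)
print("SOFT beats persistence:", rmse_soft < rmse_persistence)
```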
Latest trends in parts SEP susceptibility from heavy ions
NASA Technical Reports Server (NTRS)
Nichols, Donald K.; Smith, L. S.; Soli, George A.; Koga, R.; Kolasinski, W. A.
1989-01-01
JPL and Aerospace have collected a third set of heavy-ion single-event phenomena (SEP) test data since their last joint IEEE publications in December 1985 and December 1987. Trends in SEP susceptibility (e.g., soft errors and latchup) for state-of-the-art parts are presented. Results of the study indicate that hard technologies and unacceptably soft technologies can be flagged. In some instances, specific tested parts can be taken as candidates for key microprocessors or memories. As always with radiation test data, specific test data for qualified flight parts is recommended for critical applications.
Methods for Addressing Technology-induced Errors: The Current State.
Borycki, E; Dexheimer, J W; Hullin Lucay Cossio, C; Gong, Y; Jensen, S; Kaipio, J; Kennebeck, S; Kirkendall, E; Kushniruk, A W; Kuziemsky, C; Marcilly, R; Röhrig, R; Saranto, K; Senathirajah, Y; Weber, J; Takeda, H
2016-11-10
The objectives of this paper are to review and discuss the methods that are being used internationally to report on, mitigate, and eliminate technology-induced errors. The IMIA Working Group for Health Informatics for Patient Safety worked together to review and synthesize some of the main methods and approaches associated with technology-induced error reporting, reduction, and mitigation. The work involved a review of the evidence-based literature as well as guideline publications specific to health informatics. The paper presents a rich overview of current approaches, issues, and methods associated with: (1) safe HIT design, (2) safe HIT implementation, (3) reporting on technology-induced errors, (4) technology-induced error analysis, and (5) health information technology (HIT) risk management. The work is based on research from around the world. Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although there remain issues and challenges associated with the methodologies, they have been shown to improve the quality and safety of HIT. Since the first publications documenting technology-induced errors in healthcare in 2005, researchers have, in a short 10 years, developed ways of identifying and addressing these types of errors, and organizations have begun to use these approaches. Knowledge has been translated into practice within ten years, whereas the norm for other research areas is 20 years.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
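A minimal sketch of symbol-by-symbol MAP (APP) decoding for a tiny binary linear block code over a binary symmetric channel: all codewords are enumerated, each is weighted by its channel likelihood, and per-bit posterior probabilities are accumulated. The generator matrix, crossover probability, and received word are illustrative, and this brute-force enumeration is only practical for very small codes; it is not the chapter's notation or algorithm derivation.

```python
import itertools
import numpy as np

G = np.array([[1, 0, 1, 1],     # generator of a small (4, 2) code, example only
              [0, 1, 0, 1]])
p = 0.1                          # BSC crossover probability
received = np.array([1, 1, 1, 1])

n = G.shape[1]
post1 = np.zeros(n)              # accumulated probability that each code bit equals 1
total = 0.0
for info in itertools.product([0, 1], repeat=G.shape[0]):
    cw = np.mod(np.array(info) @ G, 2)
    d = int(np.sum(cw != received))          # Hamming distance to the received word
    likelihood = (p ** d) * ((1 - p) ** (n - d))
    total += likelihood
    post1 += likelihood * cw

post1 /= total
llr = np.log((1 - post1) / post1)            # soft output: per-bit log-likelihood ratios
hard = (post1 > 0.5).astype(int)             # bit-wise MAP decisions
print("P(bit = 1):", np.round(post1, 3))
print("LLR:", np.round(llr, 2), "decision:", hard)
```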
NASA Astrophysics Data System (ADS)
Knoefel, Patrick; Loew, Fabian; Conrad, Christopher
2015-04-01
Crop maps based on the classification of remotely sensed data are receiving increased attention in agricultural management. This calls for more detailed knowledge about the reliability of such spatial information. However, classification of agricultural land use is often limited by the high spectral similarity of the studied crop types. Moreover, spatially and temporally varying agro-ecological conditions can introduce confusion in crop mapping. Classification errors in crop maps in turn may influence model outputs, such as agricultural production monitoring. One major goal of the PhenoS project ("Phenological structuring to determine optimal acquisition dates for Sentinel-2 data for field crop classification") is the detection of optimal phenological time windows for land cover classification purposes. Since many crop species are spectrally highly similar, accurate classification requires the right selection of satellite images for a certain classification task. In the course of one growing season, phenological phases exist in which crops are separable with higher accuracy. For this purpose, coupling multi-temporal spectral characteristics with phenological events is promising. The focus of this study is the separation of spectrally similar cereal crops such as winter wheat, barley, and rye at two test sites in Germany, "Harz/Central German Lowland" and "Demmin". The study uses object-based random forest (RF) classification to investigate the impact of image acquisition frequency and timing on crop classification uncertainty by permuting all possible combinations of the available RapidEye time series recorded at the test sites between 2010 and 2014. The permutations were applied to different segmentation parameters. Classification uncertainty was then assessed and analysed based on the probabilistic soft output from the RF algorithm on a per-field basis. From this soft output, entropy was calculated as a spatial measure of classification uncertainty. The results indicate that uncertainty estimates provide a valuable addition to traditional accuracy assessments and help the user to allocate error in crop maps.
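A minimal sketch of the per-field uncertainty measure described above: the Shannon entropy of a random-forest class-probability vector. Low entropy indicates a confident field label; high entropy flags likely confusion between spectrally similar cereals. The probability vectors are illustrative assumptions.

```python
import numpy as np

def entropy(probs):
    # Shannon entropy (bits) of a class-probability vector.
    p = np.asarray(probs, float)
    p = p[p > 0]                       # ignore zero-probability classes
    return float(-np.sum(p * np.log2(p)))

print(entropy([0.90, 0.05, 0.05]))     # confident field -> low entropy
print(entropy([0.40, 0.35, 0.25]))     # wheat/barley/rye confusion -> high entropy
```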
A Randomized Trial of Soft Multifocal Contact Lenses for Myopia Control: Baseline Data and Methods.
Walline, Jeffrey J; Gaume Giannoni, Amber; Sinnott, Loraine T; Chandler, Moriah A; Huang, Juan; Mutti, Donald O; Jones-Jordan, Lisa A; Berntsen, David A
2017-09-01
The Bifocal Lenses In Nearsighted Kids (BLINK) study is the first soft multifocal contact lens myopia control study to compare add powers and measure peripheral refractive error in the vertical meridian, so it will provide important information about the potential mechanism of myopia control. The BLINK study is a National Eye Institute-sponsored, double-masked, randomized clinical trial to investigate the effects of soft multifocal contact lenses on myopia progression. This article describes the subjects' baseline characteristics and study methods. Subjects were 7 to 11 years old, had -0.75 to -5.00 spherical component and less than 1.00 diopter (D) astigmatism, and had 20/25 or better logMAR distance visual acuity with manifest refraction in each eye and with +2.50-D add soft bifocal contact lenses on both eyes. Children were randomly assigned to wear Biofinity single-vision, Biofinity Multifocal "D" with a +1.50-D add power, or Biofinity Multifocal "D" with a +2.50-D add power contact lenses. We examined 443 subjects at the baseline visits, and 294 (66.4%) subjects were enrolled. Of the enrolled subjects, 177 (60.2%) were female, and 200 (68%) were white. The mean (± SD) age was 10.3 ± 1.2 years, and 117 (39.8%) of the eligible subjects were younger than 10 years. The mean spherical equivalent refractive error, measured by cycloplegic autorefraction was -2.39 ± 1.00 D. The best-corrected binocular logMAR visual acuity with glasses was +0.01 ± 0.06 (20/21) at distance and -0.03 ± 0.08 (20/18) at near. The BLINK study subjects are similar to patients who would routinely be eligible for myopia control in practice, so the results will provide clinical information about soft bifocal contact lens myopia control as well as information about the mechanism of the treatment effect, if one occurs.
Bite force measurements with hard and soft bite surfaces.
Serra, C M; Manns, A E
2013-08-01
Bite force has been measured by different methods and with a wide variety of instrument designs. In several instruments, the fact that the bite surface is manufactured from stiff materials might interfere with obtaining reliable data, through a more prompt activation of inhibitory reflex mechanisms. The purpose of this study was to compare the maximum voluntary bite force measured by a digital occlusal force gauge (GM10 Nagano Keiki, Japan) between different opponent teeth, employing semi-hard or soft bite surfaces. A sample of 34 young adults with complete natural dentition was studied. The original semi-hard bite surface was exchanged for a soft one, made of leather and rubber. Maximum voluntary bite force recordings were made for each tooth group and for both bite surfaces. Statistical analyses (Student's t-test) revealed significant differences, with higher scores while using the soft surface across sexes and tooth groups (P < 0.05). Differential activation of the periodontal mechanoreceptors of a specific tooth group is mainly conditioned by the hardness of the bite surface; a soft surface induces greater activation of the elevator musculature, while a hard one induces inhibition more promptly. Thus, soft bite surfaces are recommended for higher reliability in maximum voluntary bite force recordings. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Kang, Wonmo; Chen, YungChia; Bagchi, Amit; O'Shaughnessy, Thomas J.
2017-12-01
The material response of biologically relevant soft materials, e.g., extracellular matrix or cell cytoplasm, under high-rate loading conditions is becoming increasingly important for emerging medical implications, including the potential for cavitation-induced brain injury or cavitation created by medical devices, whether intentional or not. However, accurately probing soft samples remains challenging because of their delicate nature, which often excludes the use of conventional techniques requiring direct contact with a sample-loading frame. We present a drop-tower-based method, integrated with a unique sample holder and a series of effective springs and dampers, for testing soft samples with an emphasis on high-rate loading conditions. Our theoretical studies of the transient dynamics of the system show that well-controlled impacts between a movable mass and the sample holder can be used as a means to rapidly load soft samples. To demonstrate the integrated system, we experimentally quantify the critical acceleration that corresponds to the onset of cavitation nucleation for pure water and for 7.5% gelatin samples. This study reveals that 7.5% gelatin has a significantly higher critical acceleration, approximately double that of pure water. Finally, we also demonstrate a non-optical method of detecting cavitation in soft materials by correlating cavitation collapse with the structural resonance of the sample container.
Giant voltage-induced deformation of a dielectric elastomer under a constant pressure
NASA Astrophysics Data System (ADS)
Godaba, Hareesh; Foo, Choon Chiang; Zhang, Zhi Qian; Khoo, Boo Cheong; Zhu, Jian
2014-09-01
Dielectric elastomer actuators coupled with liquid have recently been developed as soft pumps, soft lenses, Braille displays, etc. In this paper, we investigate the performance of a dielectric elastomer actuator, which is coupled with water. The experiments demonstrate that the membrane of a dielectric elastomer can achieve a giant voltage-induced area strain of 1165%, when subject to a constant pressure. Both theory and experiment show that the pressure plays an important role in determining the electromechanical behaviour. The experiments also suggest that the dielectric elastomer actuators, when coupled with liquid, may suffer mechanical instability and collapse after a large amount of liquid is enclosed by the membrane. This failure mode needs to be taken into account in designing soft actuators.
Effect of single vision soft contact lenses on peripheral refraction.
Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen
2012-07-01
To investigate changes in peripheral refraction with under-, full, and over-correction of central refraction with commercially available single vision soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere SCLs to under-correct (+0.75 DS), fully correct, and over-correct (-0.75 DS) their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with different levels of SCL central refractive error correction. The uncorrected refractive error was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared to the center at 30 and 35° in the temporal visual field (VF) in low myopes, and at 30 and 35° in the temporal VF and 10, 30, and 35° in the nasal VF in moderate myopes. All levels of SCL correction caused a hyperopic shift in refraction at all locations in the horizontal VF. The smallest hyperopic shift was demonstrated with under-correction, followed by full correction and then by over-correction of central refractive error. An increase in relative peripheral hyperopia was measured with full-correction SCLs compared with no correction in both low and moderate myopes. However, no difference in relative peripheral refraction profiles was found between under-, full, and over-correction. Under-, full, and over-correction of central refractive error with single vision SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. All levels of SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction, to experience absolute hyperopic defocus. This peripheral hyperopia may be a possible cause of the myopia progression reported with different types and levels of myopia correction.
NASA Astrophysics Data System (ADS)
Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.
2017-02-01
The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water, 1540 m/s [1], despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s [2]. Images acquired with 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy [3]. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue [4]. Since the radiation dose diminishes with distance from the pellet as 1/r², imaging uncertainty of a few millimeters can result in significant erroneous dose delivery [5, 6]. Likewise, modeling of power deposition and thermal dose accumulation from ablative sources is also prone to errors due to placement offsets from SOS errors [7]. This work presents a method of mitigating needle placement error due to SOS variances without the need for ionizing radiation [2, 8]. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding the 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
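A minimal sketch of the axial rescaling behind the SOS displacement error described above: B-mode depth is computed from echo time with an assumed speed of sound, so the true depth is the displayed depth scaled by c_true / c_assumed. The depth and SOS values are illustrative, not measurements from the paper.

```python
def true_depth_mm(displayed_depth_mm, c_true=1613.0, c_assumed=1540.0):
    # Depth = c * t / 2, so rescaling by the SOS ratio recovers the true depth.
    return displayed_depth_mm * c_true / c_assumed

depth = 50.0                                   # displayed needle-tip depth, mm
corrected = true_depth_mm(depth)
print(f"corrected depth {corrected:.2f} mm, offset {corrected - depth:.2f} mm")
```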
NASA Astrophysics Data System (ADS)
Chertok, I. M.; Belov, A. V.
2018-03-01
Correction to: Solar Phys https://doi.org/10.1007/s11207-017-1169-1 We found an important error in the text of our article. On page 6, the second sentence of Section 3.2 "We studied the variations in soft X-ray flare characteristics in more detail by averaging them within the running windows of ± one Carrington rotation with a step of two rotations." should instead read "We studied the variations in soft X-ray flare characteristics in more detail by averaging them within the running windows of ± 2.5 Carrington rotations with a step of two rotations." We regret the inconvenience. The online version of the original article can be found at https://doi.org/10.1007/s11207-017-1169-1
Cheng, Tzu-Han; Tsai, Chen-Gia
2016-01-01
Although music and the emotion it conveys unfold over time, little is known about how listeners respond to shifts in musical emotions. A special technique in heavy metal music utilizes dramatic shifts between loud and soft passages. Loud passages are penetrated by distorted sounds conveying aggression, whereas soft passages are often characterized by a clean, calm singing voice and light accompaniment. The present study used heavy metal songs and soft sea sounds to examine how female listeners’ respiration rates and heart rates responded to the arousal changes associated with auditory stimuli. The high-frequency power of heart rate variability (HF-HRV) was used to assess cardiac parasympathetic activity. The results showed that the soft passages of heavy metal songs and soft sea sounds expressed lower arousal and induced significantly higher HF-HRVs than the loud passages of heavy metal songs. Listeners’ respiration rate was determined by the arousal level of the present music passage, whereas the heart rate was dependent on both the present and preceding passages. Compared with soft sea sounds, the loud music passage led to greater deceleration of the heart rate at the beginning of the following soft music passage. The sea sounds delayed the heart rate acceleration evoked by the following loud music passage. The data provide evidence that sound-induced parasympathetic activity affects listeners’ heart rate in response to the following music passage. These findings have potential implications for future research on the temporal dynamics of musical emotions. PMID:26925009
Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.
Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun
2018-01-01
Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It has been known that while the traditional Finite Element Method (FEM) promises the accurate modeling of soft tissue deformation, it still suffers from a slow computational process. This paper presents a Kalman filter finite element method to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than the traditional FEM and it is still as accurate as the traditional FEM. The normalized root-mean-square error of the proposed KF-FEM in reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noises involved in system state and measurement data.
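A minimal sketch of the generic linear Kalman filter predict/update cycle that the KF-FEM idea builds on. The state here is a single nodal displacement and all matrices are illustrative scalars; this is not the paper's Newmark-discretised FEM system, and the measurement values are assumed.

```python
A, H = 1.0, 1.0            # state transition and measurement models (assumed)
Q, R = 1e-4, 1e-2          # process and measurement noise variances (assumed)
x, P = 0.0, 1.0            # initial displacement estimate and its variance

measurements = [0.12, 0.18, 0.21, 0.22]      # illustrative noisy displacement data
for z in measurements:
    # Predict step.
    x, P = A * x, A * P * A + Q
    # Update step with the new measurement.
    K = P * H / (H * P * H + R)
    x, P = x + K * (z - H * x), (1 - K * H) * P
    print(f"estimate {x:.3f}, variance {P:.4f}")
```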
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
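A minimal sketch of one way a Bezier approximation of the laser-line-position-to-depth mapping could look, with a least-squares fit of the control points standing in for the genetic algorithm described above. The degree, calibration pairs, and units are illustrative assumptions, not the paper's network.

```python
import numpy as np
from scipy.special import comb

def bernstein_matrix(t, degree):
    # Bernstein basis evaluated at normalised positions t in [0, 1].
    t = np.asarray(t, float)[:, None]
    k = np.arange(degree + 1)[None, :]
    return comb(degree, k) * (t ** k) * ((1 - t) ** (degree - k))

# Calibration data: normalised laser-line pixel position -> known depth (um), assumed.
t_cal = np.linspace(0.0, 1.0, 9)
depth_cal = np.array([0., 4., 9., 15., 22., 30., 39., 49., 60.])

B = bernstein_matrix(t_cal, degree=3)
control_points, *_ = np.linalg.lstsq(B, depth_cal, rcond=None)   # fit control points

t_new = np.array([0.37])
print("predicted depth (um):", bernstein_matrix(t_new, 3) @ control_points)
```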
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at a BER of 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
A Computing Method to Determine the Performance of an Ionic Liquid Gel Soft Actuator
Zhang, Chenghong; Zhou, Yanmin; Wang, Zhipeng
2018-01-01
A new type of soft actuator material—an ionic liquid gel (ILG) that consists of BMIMBF4, HEMA, DEAP, and ZrO2—is polymerized into a gel state under ultraviolet (UV) light irradiation. In this paper, we first propose that the ILG conforms to the assumptions of hyperelastic theory and that the Mooney-Rivlin model can be used to study the properties of the ILG. Under the five-parameter and nine-parameter Mooney-Rivlin models, the formulas for the calculation of the uniaxial tensile stress, plane uniform tensile stress, and 3D directional stress are deduced. The five-parameter and nine-parameter Mooney-Rivlin models of the ILG with a ZrO2 content of 3 wt% were obtained by uniaxial tensile testing, and the parameters are denoted as c10, c01, c20, c11, and c02 and c10, c01, c20, c11, c02, c30, c21, c12, and c03, respectively. Through the analysis and comparison of the uniaxial tensile stress between the calculated and experimental data, the error between the stress data calculated from the five-parameter Mooney-Rivlin model and the experimental data is less than 0.51%, and the error between the stress data calculated from the nine-parameter Mooney-Rivlin model and the experimental data is no more than 8.87%. Hence, our work presents a feasible and credible formula for the calculation of the stress of the ILG. This work opens a new path to assess the performance of a soft actuator composed of an ILG and will contribute to the optimized design of soft robots. PMID:29853999
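For reference, the standard five-parameter Mooney-Rivlin strain-energy density and the corresponding incompressible uniaxial nominal stress take the form below; the paper's exact parameterization may differ, and the nine-parameter model follows the same pattern with higher-order terms.

```latex
W_5 = c_{10}(I_1-3) + c_{01}(I_2-3) + c_{20}(I_1-3)^2
      + c_{11}(I_1-3)(I_2-3) + c_{02}(I_2-3)^2,
\qquad
P_{\text{uniaxial}} = 2\left(\lambda - \lambda^{-2}\right)
      \left(\frac{\partial W}{\partial I_1}
      + \frac{1}{\lambda}\frac{\partial W}{\partial I_2}\right),
\quad I_1 = \lambda^2 + 2\lambda^{-1},\; I_2 = 2\lambda + \lambda^{-2}.
```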
A Computing Method to Determine the Performance of an Ionic Liquid Gel Soft Actuator.
He, Bin; Zhang, Chenghong; Zhou, Yanmin; Wang, Zhipeng
2018-01-01
A new type of soft actuator material—an ionic liquid gel (ILG) that consists of BMIMBF4, HEMA, DEAP, and ZrO2—is polymerized into a gel state under ultraviolet (UV) light irradiation. In this paper, we first propose that the ILG conforms to the assumptions of hyperelastic theory and that the Mooney-Rivlin model can be used to study the properties of the ILG. Under the five-parameter and nine-parameter Mooney-Rivlin models, the formulas for the calculation of the uniaxial tensile stress, plane uniform tensile stress, and 3D directional stress are deduced. The five-parameter and nine-parameter Mooney-Rivlin models of the ILG with a ZrO2 content of 3 wt% were obtained by uniaxial tensile testing, and the parameters are denoted as c10, c01, c20, c11, and c02 and c10, c01, c20, c11, c02, c30, c21, c12, and c03, respectively. Through the analysis and comparison of the uniaxial tensile stress between the calculated and experimental data, the error between the stress data calculated from the five-parameter Mooney-Rivlin model and the experimental data is less than 0.51%, and the error between the stress data calculated from the nine-parameter Mooney-Rivlin model and the experimental data is no more than 8.87%. Hence, our work presents a feasible and credible formula for the calculation of the stress of the ILG. This work opens a new path to assess the performance of a soft actuator composed of an ILG and will contribute to the optimized design of soft robots.
Methods for Addressing Technology-Induced Errors: The Current State
Dexheimer, J. W.; Hullin Lucay Cossio, C.; Gong, Y.; Jensen, S.; Kaipio, J.; Kennebeck, S.; Kirkendall, E.; Kushniruk, A. W.; Kuziemsky, C.; Marcilly, R.; Röhrig, R.; Saranto, K.; Senathirajah, Y.; Weber, J.; Takeda, H.
2016-01-01
Objectives: The objectives of this paper are to review and discuss the methods that are being used internationally to report on, mitigate, and eliminate technology-induced errors. Methods: The IMIA Working Group for Health Informatics for Patient Safety worked together to review and synthesize some of the main methods and approaches associated with technology-induced error reporting, reduction, and mitigation. The work involved a review of the evidence-based literature as well as guideline publications specific to health informatics. Results: The paper presents a rich overview of current approaches, issues, and methods associated with: (1) safe health information technology (HIT) design, (2) safe HIT implementation, (3) reporting on technology-induced errors, (4) technology-induced error analysis, and (5) HIT risk management. The work is based on research from around the world. Conclusions: Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although issues and challenges remain with these methodologies, they have been shown to improve the quality and safety of HIT. Since the first publications documenting technology-induced errors in healthcare in 2005, researchers have in a short 10 years developed ways of identifying and addressing these types of errors, and organizations have begun to use these approaches. Knowledge has been translated into practice within ten years, whereas the norm for other research areas is 20 years. PMID:27830228
NASA Technical Reports Server (NTRS)
Miller, J. M.
1980-01-01
ATMOS is a Fourier transform spectrometer to measure atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of the system performance of ATMOS includes evaluation of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (using operational thermal and vibration environments and computer models) are subsequently compared to the error budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that predictions are within the error budgets.
NASA Astrophysics Data System (ADS)
Wang, Xu; Zeng, Wei; Hong, Liang; Xu, Wenwen; Yang, Haokai; Wang, Fan; Duan, Huigao; Tang, Ming; Jiang, Hanqing
2018-03-01
Problems related to dendrite growth on lithium-metal anodes such as capacity loss and short circuit present major barriers to next-generation high-energy-density batteries. The development of successful lithium dendrite mitigation strategies is impeded by an incomplete understanding of the Li dendrite growth mechanisms, and in particular, Li-plating-induced internal stress in Li metal and its effect on Li growth morphology are not well addressed. Here, we reveal the enabling role of plating residual stress in dendrite formation through depositing Li on soft substrates and a stress-driven dendrite growth model. We show that dendrite growth is mitigated on such soft substrates through surface-wrinkling-induced stress relaxation in the deposited Li film. We demonstrate that this dendrite mitigation mechanism can be utilized synergistically with other existing approaches in the form of three-dimensional soft scaffolds for Li plating, which achieves higher coulombic efficiency and better capacity retention than that for conventional copper substrates.
Programmable Numerical Function Generators: Architectures and Synthesis Method
2005-08-01
generates HDL (Hardware Description Language) code from the design specification described by Scilab [14], a MATLAB-like numerical calculation soft...cad.com/Error-NFG/. [14] Scilab 3.0, INRIA-ENPC, France, http://scilabsoft.inria.fr/ [15] M. J. Schulte and J. E. Stine, “Approximating elementary functions
Overview of Device SEE Susceptibility from Heavy Ions
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Coss, J. R.; McCarthy, K. P.; Schwartz, H. R.; Smith, L. S.
1998-01-01
A fifth set of heavy ion single event effects (SEE) test data have been collected since the last IEEE publications (1,2,3,4) in December issues for 1985, 1987, 1989, and 1991. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, W.T.; Siebers, J.V.; Bzdusek, K.
Purpose: To introduce methods to analyze Deformable Image Registration (DIR) and identify regions of potential DIR errors. Methods: DIR Deformable Vector Fields (DVFs) quantifying patient anatomic changes were evaluated using the Jacobian determinant and the magnitude of DVF curl as functions of tissue density and tissue type. These quantities represent local relative deformation and rotation, respectively. Large values in dense tissues can potentially identify non-physical DVF errors. For multiple DVFs per patient, histograms and visualization of DVF differences were also considered. To demonstrate the capabilities of the methods, we computed multiple DVFs for each of five Head and Neck (H&N) patients (P1–P5) via a Fast-symmetric Demons (FSD) algorithm and via a Diffeomorphic Demons (DFD) algorithm, and show the potential to identify DVF errors. Results: Quantitative comparisons of the FSD and DFD registrations revealed <0.3 cm DVF differences in >99% of all voxels for P1, >96% for P2, and >90% of voxels for P3. While the FSD and DFD registrations were very similar for these patients, the Jacobian determinant was >50% in 9–15% of soft tissue and in 3–17% of bony tissue in each of these cases. The volumes of large soft tissue deformation were consistent for all five patients using the FSD algorithm (mean 15%±4% volume), whereas DFD reduced regions of large deformation by 10% volume (785 cm³) for P4 and by 14% volume (1775 cm³) for P5. The DFD registrations resulted in fewer regions of large DVF-curl; 50% rotations in FSD registrations averaged 209±136 cm³ in soft tissue and 10±11 cm³ in bony tissue, but using DFD these values were reduced to 42±53 cm³ and 1.1±1.5 cm³, respectively. Conclusion: Analysis of Jacobian determinant and curl as functions of tissue density can identify regions of potential DVF errors by identifying non-physical deformations and rotations. Collaboration with Phillips Healthcare, as indicated in authorship.
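A minimal numpy sketch of the two DVF quality metrics named above is given below; it assumes a displacement field sampled on a regular voxel grid and uses the common convention that the Jacobian is taken of the full transform x → x + u(x), which may differ in detail from the authors' implementation.

```python
import numpy as np

def dvf_jacobian_and_curl(dvf, spacing=(1.0, 1.0, 1.0)):
    """dvf: array of shape (3, nz, ny, nx) holding the x-, y-, z-components of a
    displacement vector field; spacing: voxel size along the (z, y, x) axes."""
    ux, uy, uz = dvf
    # np.gradient returns derivatives along the (z, y, x) axes in that order
    dux_dz, dux_dy, dux_dx = np.gradient(ux, *spacing)
    duy_dz, duy_dy, duy_dx = np.gradient(uy, *spacing)
    duz_dz, duz_dy, duz_dx = np.gradient(uz, *spacing)
    # Jacobian determinant of the transform x -> x + u(x): det(I + grad u)
    jac = ((1 + dux_dx) * ((1 + duy_dy) * (1 + duz_dz) - duy_dz * duz_dy)
           - dux_dy * (duy_dx * (1 + duz_dz) - duy_dz * duz_dx)
           + dux_dz * (duy_dx * duz_dy - (1 + duy_dy) * duz_dx))
    # Magnitude of the curl of the displacement field (local rotation)
    curl_x = duz_dy - duy_dz
    curl_y = dux_dz - duz_dx
    curl_z = duy_dx - dux_dy
    curl_mag = np.sqrt(curl_x**2 + curl_y**2 + curl_z**2)
    return jac, curl_mag
```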
Reduction of Orifice-Induced Pressure Errors
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.
1987-01-01
Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.
Importance of core electrostatic properties on the electrophoresis of a soft particle
NASA Astrophysics Data System (ADS)
De, Simanta; Bhattacharyya, Somnath; Gopmandal, Partha P.
2016-08-01
The impact of the volumetric charge density of the dielectric rigid core on the electrophoresis of a soft particle is analyzed numerically. The volume charge density of the inner core of a soft particle can arise for a dendrimer structure or bacteriophage MS2. We consider the electrokinetic model based on the conservation principles, so no conditions on the Debye length or applied electric field are imposed. The fluid flow equations are coupled with the ion transport equations and the equation for the electric field. The occurrence of the induced nonuniform surface charge density on the outer surface of the inner core leads to a situation different from existing analyses of soft particle electrophoresis. The impact of this induced surface charge density together with the double-layer polarization and relaxation due to ion convection and electromigration is analyzed. The dielectric permittivity and the charge density of the core have a significant impact on the particle electrophoresis when the Debye length is on the order of the particle size. We find that by varying the ionic concentration of the electrolyte, the particle can exhibit reversal in its electrophoretic velocity. The role of the polymer layer softness parameter is addressed in the present analysis.
Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.
Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2014-08-01
The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by means of compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods. These methods are given by the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed a RMS error of 0.89 mm. It turned out that the remaining triangulation errors only cause small inaccuracies for the tracking of a rigid object. Here, the tracking accuracy was given by a RMS translational error of 0.33 mm and a RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
A dielectric elastomer actuator coupled with water: snap-through instability and giant deformation
NASA Astrophysics Data System (ADS)
Godaba, Hareesh; Foo, Choon Chiang; Zhang, Zhi Qian; Khoo, Boo Cheong; Zhu, Jian
2015-04-01
A dielectric elastomer actuator is one class of soft actuators which can deform in response to voltage. Dielectric elastomer actuators coupled with liquid have recently been developed as soft pumps, soft lenses, Braille displays, etc. In this paper, we conduct experiments to investigate the performance of a dielectric elastomer actuator which is coupled with water. The membrane is subject to a constant water pressure, which is found to significantly affect the electromechanical behaviour of the membrane. When the pressure is small, the membrane suffers electrical breakdown before snap-through instability, and achieves a small voltage-induced deformation. When the pressure is higher to make the membrane near the verge of the instability, the membrane can achieve a giant voltage-induced deformation, with an area strain of 1165%. When the pressure is large, the membrane suffers pressure-induced snap-through instability and may collapse due to a large amount of liquid enclosed by the membrane. Theoretical analyses are conducted to interpret these experimental observations.
Decomposition of Fuzzy Soft Sets with Finite Value Spaces
Jun, Young Bae
2014-01-01
The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342
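As a toy illustration of the t-level construction that underlies these decomposition results (with notation simplified relative to the paper), a fuzzy soft set can be stored as a nested mapping and thresholded parameter-wise:

```python
def t_level_soft_set(fuzzy_soft_set, t):
    """fuzzy_soft_set: dict mapping each parameter to a dict {object: membership in [0, 1]}.
    Returns the crisp t-level soft set: parameter -> set of objects with membership >= t."""
    return {param: {u for u, mu in fset.items() if mu >= t}
            for param, fset in fuzzy_soft_set.items()}

# toy example: rating two phones against two parameters
F = {"camera": {"phone_A": 0.8, "phone_B": 0.4},
     "battery": {"phone_A": 0.5, "phone_B": 0.9}}
print(t_level_soft_set(F, 0.5))
# {'camera': {'phone_A'}, 'battery': {'phone_A', 'phone_B'}}  (set order may vary)
```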
Decomposition of fuzzy soft sets with finite value spaces.
Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad
2014-01-01
The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.
SEU induced errors observed in microprocessor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asenek, V.; Underwood, C.; Oldfield, M.
In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.
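The flavour of such fault-injection tools can be conveyed with a toy Monte Carlo sketch (not the authors' simulator): flip a single random register bit per trial in a stand-in workload and count how often the output is actually corrupted, i.e. how often the SEU becomes an observable error.

```python
import random

def toy_program(regs):
    """Stand-in for a simulated workload: only some registers affect the output."""
    return (regs[0] + regs[1]) & 0xFFFFFFFF    # regs[2:] are dead state

def observable_seu_rate(n_trials=20000, n_regs=8, seed=0):
    """Monte Carlo fault injection: flip one random register bit per trial and
    count how often the program output actually changes (an observable error)."""
    rng = random.Random(seed)
    observable = 0
    for _ in range(n_trials):
        regs = [rng.getrandbits(32) for _ in range(n_regs)]
        golden = toy_program(regs)
        reg, bit = rng.randrange(n_regs), rng.randrange(32)
        regs[reg] ^= 1 << bit                  # inject the single-event upset
        if toy_program(regs) != golden:
            observable += 1
    return observable / n_trials

print(observable_seu_rate())   # roughly 0.25, since only 2 of the 8 registers are live
```

In this toy example only two of the eight registers feed the output, so roughly a quarter of the injected upsets become observable errors; real tools replace the stand-in workload with a full microprocessor simulation.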
Chen, Wan-Chun; Lin, Hsi-Hui; Tang, Ming-Jer
2014-09-15
To explore whether matrix stiffness affects cell differentiation, proliferation, and transforming growth factor (TGF)-β1-induced epithelial-mesenchymal transition (EMT) in primary cultures of mouse proximal tubular epithelial cells (mPTECs), we used a soft matrix made from monomeric collagen type I-coated polyacrylamide gel or matrigel (MG). Both kinds of soft matrix benefited primary mPTECs to retain tubular-like morphology with differentiation and growth arrest and to evade TGF-β1-induced EMT. However, the potent effect of MG on mPTEC differentiation was suppressed by glutaraldehyde-induced cross-linking and subsequently stiffening MG or by an increasing ratio of collagen in the soft mixed gel. Culture media supplemented with MG also helped mPTECs to retain tubular-like morphology and a differentiated phenotype on stiff culture dishes as soft MG did. We further found that the protein level and activity of ERK were scaled with the matrix stiffness. U-0126, a MEK inhibitor, abolished the stiff matrix-induced dedifferentiation and proliferation. These data suggest that the ERK signaling pathway plays a vital role in matrix stiffness-regulated cell growth and differentiation. Taken together, both compliant property and specific MG signals from the matrix are required for the regulation of epithelial differentiation and proliferation. This study provides a basic understanding of how physical and chemical cues derived from the extracellular matrix regulate the physiological function of proximal tubules and the pathological development of renal fibrosis. Copyright © 2014 the American Physiological Society.
Prediction error induced motor contagions in human behaviors.
Ikegami, Tsuyoshi; Ganesh, Gowrishankar; Takeuchi, Tatsuya; Nakamoto, Hiroki
2018-05-29
Motor contagions refer to implicit effects on one's actions induced by observed actions. Motor contagions are believed to be induced simply by action observation and cause an observer's action to become similar to the action observed. In contrast, here we report a new motor contagion that is induced only when the observation is accompanied by prediction errors - differences between actions one observes and those he/she predicts or expects. In two experiments, one on whole-body baseball pitching and another on simple arm reaching, we show that the observation of the same action induces distinct motor contagions, depending on whether prediction errors are present or not. In the absence of prediction errors, as in previous reports, participants' actions changed to become similar to the observed action, while in the presence of prediction errors, their actions changed to diverge away from it, suggesting distinct effects of action observation and action prediction on human actions. © 2018, Ikegami et al.
Liu, Xiang; Effenberger, Frank; Chand, Naresh
2015-03-09
We demonstrate a flexible modulation and detection scheme for upstream transmission in passive optical networks using pulse position modulation at the optical network unit, facilitating burst-mode detection with automatic decision threshold tracking, and DSP-enabled soft-combining at the optical line terminal. Adaptive receiver sensitivities of -33.1 dBm, -36.6 dBm and -38.3 dBm at a bit error ratio of 10^-4 are respectively achieved for 2.5 Gb/s, 1.25 Gb/s and 625 Mb/s after transmission over a 20-km standard single-mode fiber without any optical amplification.
Multi-stage decoding of multi-level modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.
1991-01-01
Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. Particularly, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small, only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10(exp -6).
Soft thermal contributions to 3-loop gauge coupling
NASA Astrophysics Data System (ADS)
Laine, M.; Schicho, P.; Schröder, Y.
2018-05-01
We analyze 3-loop contributions to the gauge coupling felt by ultrasoft ("magnetostatic") modes in hot Yang-Mills theory. So-called soft/hard terms, originating from dimension-six operators within the soft effective theory, are shown to cancel 1097/1098 of the IR divergence found in a recent determination of the hard 3-loop contribution to the soft gauge coupling. The remaining 1/1098 originates from ultrasoft/hard contributions, induced by dimension-six operators in the ultrasoft effective theory. Soft 3-loop contributions are likewise computed, and are found to be IR divergent, rendering the ultrasoft gauge coupling non-perturbative at relative order O(α_s^{3/2}). We elaborate on the implications of these findings for effective theory studies of physical observables in thermal QCD.
Effect of the mandible on mouthguard measurements of head kinematics.
Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B
2016-06-14
Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
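The spring-mass disturbance model mentioned above can be sketched as follows; the mass, stiffness and damping values are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def mandible_artifact(skull_accel, dt=1e-4, m=0.06, k=3e4, c=5.0):
    """Toy spring-mass-damper model of the mandible-induced sensor disturbance.
    skull_accel : samples of the true skull acceleration [m/s^2]
    m, k, c     : assumed mandible mass [kg], mouthguard stiffness [N/m],
                  damping [N s/m]
    Returns the relative displacement of the mass; its oscillation is what
    contaminates the kinematics measured near the sensors."""
    x, v = 0.0, 0.0
    rel = np.zeros_like(skull_accel)
    for i, a in enumerate(skull_accel):
        # relative coordinate: m*x'' + c*x' + k*x = -m*a(t), semi-implicit Euler
        acc = (-m * a - c * v - k * x) / m
        v += acc * dt
        x += v * dt
        rel[i] = x
    return rel

# usage: a half-sine head impact pulse of 10 ms duration, ~50 g peak
t = np.arange(0.0, 0.05, 1e-4)
pulse = 500.0 * np.sin(np.pi * t / 0.01) * (t < 0.01)
artifact_disp = mandible_artifact(pulse)
```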
Large poroelastic deformation of a soft material
NASA Astrophysics Data System (ADS)
MacMinn, Christopher W.; Dufresne, Eric R.; Wettlaufer, John S.
2014-11-01
Flow through a porous material will drive mechanical deformation when the fluid pressure becomes comparable to the stiffness of the solid skeleton. This has applications ranging from hydraulic fracture for recovery of shale gas, where fluid is injected at high pressure, to the mechanics of biological cells and tissues, where the solid skeleton is very soft. The traditional linear theory of poroelasticity captures this fluid-solid coupling by combining Darcy's law with linear elasticity. However, linear elasticity is only volume-conservative to first order in the strain, which can become problematic when damage, plasticity, or extreme softness lead to large deformations. Here, we compare the predictions of linear poroelasticity with those of a large-deformation framework in the context of two model problems. We show that errors in volume conservation are compounded and amplified by coupling with the fluid flow, and can become important even when the deformation is small. We also illustrate these results with a laboratory experiment.
NASA Astrophysics Data System (ADS)
Elfgen, S.; Franck, D.; Hameyer, K.
2018-04-01
Magnetic measurements are indispensable for the characterization of soft magnetic material used e.g. in electrical machines. Characteristic values are used as quality control during production and for the parametrization of material models. Uncertainties and errors in the measurements are reflected directly in the parameters of the material models. This can result in over-dimensioning and inaccuracies in simulations for the design of electrical machines. Therefore, the influencing factors in the characterization of soft magnetic materials are identified and their resulting uncertainty contributions studied. The analysis of these uncertainty contributions can serve the operator as an additional selection criterion for different measuring sensors. The investigation is performed for measurements within and outside the currently prescribed standard using a single sheet tester, and its impact on the identification of iron loss parameters is studied.
Multipath induced errors in meteorological Doppler/interferometer location systems
NASA Technical Reports Server (NTRS)
Wallace, R. G.
1984-01-01
One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.
FPGA-Based, Self-Checking, Fault-Tolerant Computers
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2004-01-01
A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant-computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache local memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
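The memory-side protection described above relies on standard single-error-correcting, double-error-detecting (SECDED) codes. A minimal Hamming(7,4)-plus-overall-parity sketch of that logic is shown below; it is purely illustrative, since the article does not specify the exact code used.

```python
def hamming_secded_encode(d):
    """Encode 4 data bits (list of 0/1) into an 8-bit SECDED word."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]       # Hamming(7,4), positions 1..7
    p0 = 0
    for b in word:
        p0 ^= b
    return word + [p0]                        # overall parity bit in position 8

def hamming_secded_decode(w):
    """Return (data_bits, status); status is 'ok', 'corrected' or 'double_error'."""
    w = list(w)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]            # parity check over positions 1,3,5,7
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]            # parity check over positions 2,3,6,7
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]            # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3           # points at the erroneous position
    overall = 0
    for b in w:
        overall ^= b
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:                        # odd parity mismatch -> single-bit error
        if syndrome:
            w[syndrome - 1] ^= 1              # flip the erroneous bit back
        else:
            w[7] ^= 1                         # error was in the overall parity bit
        status = 'corrected'
    else:                                     # syndrome set but parity even -> two errors
        status = 'double_error'
    return [w[2], w[4], w[5], w[6]], status
```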
Towards a Framework for Managing Risk Associated with Technology-Induced Error.
Borycki, Elizabeth M; Kushniruk, Andre W
2017-01-01
Health information technologies (HIT) promised to streamline and modernize healthcare processes. However, a growing body of research has indicated that if such technologies are not designed, implemented or maintained properly, this may lead to an increased incidence of new types of errors, which the authors have referred to as "technology-induced errors". In this paper, a framework is presented that can be used to manage HIT risk. The framework considers the reduction of technology-induced errors at different stages by managing risks associated with the implementation of HIT. Frameworks that allow health information technology managers to employ proactive and preventative approaches that can be used to manage the risks associated with technology-induced errors are critical to improving HIT safety and managing risk associated with implementing new technologies.
Ozone Profile Retrievals from the OMPS on Suomi NPP
NASA Astrophysics Data System (ADS)
Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.
2017-12-01
We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal estimation based inversion method in the 302.5-340 nm fitting. The evaluation of available slit functions demonstrates that preflight-measured slit functions well represent OMPS measurements compared to derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength and cross-track dependent biases, and thereby serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply "soft calibration" to OMPS radiances. With the soft calibration the amplitude of fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes where the amplitude of fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from standard deviations of the fitting residuals. The derived error in the Huggins band (0.1%) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals for maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1% in the low/mid latitudes, with 1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals. To assess if the quality of OMPS ozone retrievals could be acceptable for scientific use, we will characterize OMPS ozone profile retrievals, present error analysis, and validate retrievals using a reference dataset. The useful information on the vertical distribution of ozone from OMPS NM measurements alone is limited to below 40 km due to the absence of the Hartley ozone wavelengths. This shortcoming will be improved with the joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.
El-Terras, Adel; Soliman, Mohamed Mohamed; Alkhedaide, Adel; Attia, Hossam Fouad; Alharthy, Abdullah; Banaja, Abdel Elah
2016-04-01
In Saudi Arabia, the consumption of carbonated soft drinks is common and often occurs with each meal. Carbonated soft drink consumption has been shown to exhibit effects on the liver, kidney and bone. However, the effects of these soft drinks on brain activity have not been widely examined, particularly at the gene level. Therefore, the current study was conducted with the aim of evaluating the effects of chronic carbonated soft drink consumption on oxidative stress, brain gene biomarkers associated with aggression and brain histology. In total, 40 male Wistar rats were divided into four groups: Group 1 served as a control and was provided access to food and water ad libitum; groups 2‑4 were given free access to food and carbonated soft drinks only (Cola for group 2, Pepsi for group 3 and 7‑UP for group 4). Animals were maintained on these diets for 3 consecutive months. Upon completion of the experimental period, animals were sacrificed and serological and histopathological analyses were performed on blood and tissue samples. Reverse transcription‑polymerase chain reaction was used to analyze alterations in gene expression levels. Results revealed that carbonated soft drinks increased the serum levels of malondialdehyde (MDA). Carbonated soft drinks were also observed to downregulate the expression of antioxidants glutathione reductase (GR), catalase and glutathione peroxidase (GPx) in the brain when compared with that in the control rats. Rats administered carbonated soft drinks also exhibited decreased monoamine oxidase A (MAO‑A) and acetylcholine esterase (AChE) serum and mRNA levels in the brain. In addition, soft drink consumption upregulated mRNA expression of dopamine D2 receptor (DD2R), while 5-hydroxytryptamine transporter (5‑HTT) expression was decreased. However, following histological examination, all rats had a normal brain structure. The results of this study demonstrated that carbonated soft drinks induced oxidative stress and altered the expression of certain genes that are associated with brain activity and thus should be consumed with caution.
Photothermal lesions in soft tissue induced by optical fiber microheaters.
Pimentel-Domínguez, Reinher; Moreno-Álvarez, Paola; Hautefeuille, Mathieu; Chavarría, Anahí; Hernández-Cordero, Juan
2016-04-01
Photothermal therapy has shown to be a promising technique for local treatment of tumors. However, the main challenge for this technique is the availability of localized heat sources to minimize thermal damage in the surrounding healthy tissue. In this work, we demonstrate the use of optical fiber microheaters for inducing thermal lesions in soft tissue. The proposed devices incorporate carbon nanotubes or gold nanolayers on the tips of optical fibers for enhanced photothermal effects and heating of ex vivo biological tissues. We report preliminary results of small size photothermal lesions induced on mice liver tissues. The morphology of the resulting lesions shows that optical fiber microheaters may render useful for delivering highly localized heat for photothermal therapy.
Trends in Device SEE Susceptibility from Heavy Ions
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Coss, J. R.; McCarty, K. P.; Schwartz, H. R.; Swift, G. M.; Watson, R. K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.
1995-01-01
The sixth set of heavy ion single event effects (SEE) test data have been collected since the last IEEE publications in December issues of the IEEE Transactions on Nuclear Science for 1985, 1987, 1989, 1991, and the IEEE Workshop Record, 1993. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are evaluated.
Estimating soft tissue thickness from light-tissue interactions––a simulation study
Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2013-01-01
Immobilization and marker-based motion tracking in radiation therapy often cause decreased patient comfort. However, the more comfortable alternative of optical surface tracking is highly inaccurate due to missing point-to-point correspondences between subsequent point clouds as well as elastic deformation of soft tissue. In this study, we present a proof of concept for measuring subcutaneous features with a laser scanner setup focusing on the skin thickness as additional input for high accuracy optical surface tracking. Using Monte-Carlo simulations for multi-layered tissue, we show that informative features can be extracted from the simulated tissue reflection by integrating intensities within concentric ROIs around the laser spot center. Training a regression model with a simulated data set identifies patterns that allow for predicting skin thickness with a root mean square error as low as 18 µm. Different approaches to compensate for varying observation angles were shown to yield errors still below 90 µm. Finally, this initial study provides a very promising proof of concept and encourages research towards a practical prototype. PMID:23847741
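A minimal version of the feature extraction described above, integrating reflected intensity over concentric regions around the laser spot, could look like the sketch below; the radii and image conventions are assumptions for illustration only.

```python
import numpy as np

def concentric_roi_features(img, center, radii):
    """Integrate pixel intensities inside concentric rings around the laser spot
    centre; the resulting vector is the feature input for a regression model.
    img    : 2D array of the backscattered-light image
    center : (row, col) of the detected spot centre
    radii  : increasing ring radii in pixels, e.g. [2, 4, 8, 16]"""
    rows, cols = np.indices(img.shape)
    r = np.hypot(rows - center[0], cols - center[1])
    feats, inner = [], 0.0
    for radius in radii:
        disk_sum = img[r <= radius].sum()
        feats.append(disk_sum - inner)   # intensity collected within the current ring
        inner = disk_sum
    return np.array(feats)
```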
Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein
2016-01-01
Two new soft computing models, namely genetic programming (GP) and genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods) were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program determined as the best model and five equations obtained in prior research. The GP model with the lowest error values (root mean square error (RMSE) of 0.0515) performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
NASA Astrophysics Data System (ADS)
Fulkerson, David E.
2010-02-01
This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.
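Ion strikes are commonly injected into SPICE netlists as a double-exponential current pulse; the sketch below shows that standard pulse shape (the paper's specific charge-deposition model may differ), with illustrative charge and time-constant values.

```python
import numpy as np

def see_current_pulse(t, q_coll=150e-15, tau_r=10e-12, tau_f=200e-12):
    """Classic double-exponential single-event current pulse often used for
    SPICE-level SEE analysis (parameter values here are illustrative only).
    q_coll : collected charge [C]; tau_r / tau_f : rise / fall time constants [s]."""
    i0 = q_coll / (tau_f - tau_r)              # normalises the time integral to q_coll
    return i0 * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

t = np.linspace(0.0, 2e-9, 2001)
i = see_current_pulse(t)
print(np.trapz(i, t))                          # ~1.5e-13 C of collected charge
```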
NASA Technical Reports Server (NTRS)
Nitta, Nariaki
1988-01-01
Hard X-ray spectra in solar flares obtained by the broadband spectrometers aboard Hinotori and SMM are compared. Within the uncertainty brought about by assuming the typical energy of the background X-rays, spectra by the Hinotori spectrometer are usually consistent with those by the SMM spectrometer for flares in 1981. In contrast, flares in 1982 persistently show 20-50-percent higher flux by Hinotori than by SMM. If this discrepancy is entirely attributable to errors in the calibration of energy ranges, the errors would be about 10 percent. Despite such a discrepancy in absolute flux, in the decay phase of one flare, spectra revealed a hard X-ray component (probably a 'superhot' component) that could be explained neither by emission from a plasma at about 2 x 10 to the 7th K nor by a nonthermal power-law component. Imaging observations during this period show hard X-ray emission nearly cospatial with soft X-ray emission, in contrast with earlier times at which hard and soft X-rays come from different places.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR P(sub b) is approximately equal to (d(sub H)/N)P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Also systematic encoding provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a generator matrix randomly generated, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Detection of anatomical landmarks and bone surfaces is needed both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the invasiveness of the intervention and can be influenced by the interposed soft tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing and require less invasive equipment. The system consists of an optically tracked single-transducer US probe, a pulser/receiver, an FPGA-based board responsible for logic control command generation and real-time signal processing, and three custom-made boards (signal acquisition, blanking and synchronization). We propose a new calibration method for the US system. Experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Bone surface point acquisition with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit by bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.
Soft Vibrational Modes Predict Breaking Events during Force-Induced Protein Unfolding.
Habibi, Mona; Plotkin, Steven S; Rottler, Jörg
2018-02-06
We investigate the correlation between soft vibrational modes and unfolding events in simulated force spectroscopy of proteins. Unfolding trajectories are obtained from molecular dynamics simulations of a Gō model of a monomer of a mutant of superoxide dismutase 1 protein containing all heavy atoms in the protein, and a normal mode analysis is performed based on the anisotropic network model. We show that a softness map constructed from the superposition of the amplitudes of localized soft modes correlates with unfolding events at different stages of the unfolding process. Soft residues are up to eight times more likely to undergo disruption of native structure than the average amino acid. The memory of the softness map is retained for extensions of up to several nanometers, but decorrelates more rapidly during force drops. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Asha, S.; Ananth, A. Nimrodh; Jose, Sujin P.; Rajan, M. A. Jothi
2018-05-01
Reduced Graphene Oxide aerogels (A-RGO), functionalized with chitosan, were found to induce and/or accelerate the mineralization of hydroxyapatite. The functionalized chitosan acts as a soft interfacial template on the surface of A-RGO assisting the growth of hydroxyapatite particles. The mineralization on these soft aerogel networks was performed by soaking the aerogels in simulated body fluid for varying lengths of time. Polymer-induced mineralization exhibited an ordered arrangement of hydroxyapatite particles on reduced graphene oxide aerogel networks with a higher crystalline index (IC) of 1.7, which mimics natural bone formation, indicating the importance of the polymeric interfacial template. These mineralized aerogels, which mimic the structure and composition of natural bone, exhibit a relatively higher rate of cell proliferation, osteogenic differentiation and osteoid matrix formation, proving them to be a potential scaffold for bone tissue regeneration.
Benham, Kevin; Hodyss, Robert; Fernández, Facundo M; Orlando, Thomas M
2016-11-01
We demonstrate the first application of laser-induced acoustic desorption (LIAD) and atmospheric pressure photoionization (APPI) as a mass spectrometric method for detecting low-polarity organics. This was accomplished using a Lyman-α (10.2 eV) photon-generating microhollow cathode discharge (MHCD) microplasma photon source in conjunction with the addition of a gas-phase molecular dopant. This combination provided a soft desorption and a relatively soft ionization technique. Selected compounds analyzed include α-tocopherol, perylene, cholesterol, phenanthrene, phylloquinone, and squalene. Detectable surface concentrations as low as a few pmol per spot sampled were achievable using test molecules. The combination of LIAD and APPI provided a soft desorption and ionization technique that can allow detection of labile, low-polarity, structurally complex molecules over a wide mass range with minimal fragmentation.
Surgical management of the radiated chest wall and its complications
Clancy, Sharon L.; Erhunmwunsee, Loretta J.
2017-01-01
Synopsis: Radiation to the chest wall is common before resection of tumors. History of radiation does not necessarily change the surgical approach of soft tissue coverage needed for reconstruction. Osteoradionecrosis can occur after radiation treatment, particularly after high dose radiation treatment. Radical resection and reconstruction are feasible and can be life-saving. Soft tissue coverage using myocutaneous flap or omental flap is determined by the quality of soft tissue available and the status of the vascular pedicle supplying available myocutaneous flaps. Radiation-induced sarcomas of the chest wall occur most commonly after radiation therapy for breast cancer. While angiosarcomas are the most common histology of radiation-induced sarcoma, osteosarcoma, myosarcomas, rhabdomyosarcoma, and undifferentiated sarcomas also occur. The most effective treatment is surgical resection. Tumors not amenable to surgical resection are treated with chemotherapy with low response rates. PMID:28363372
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
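The first objective, collecting error-burst and good-gap statistics from a per-byte error-flag stream, amounts to run-length bookkeeping; a small sketch (with a made-up flag stream) follows.

```python
from itertools import groupby

def burst_gap_stats(error_flags):
    """Collect run-length statistics from a per-byte error-flag stream
    (1 = byte in error, 0 = good byte), as a read-channel analyser might."""
    bursts, gaps = [], []
    for flag, run in groupby(error_flags):
        length = sum(1 for _ in run)
        (bursts if flag else gaps).append(length)
    return bursts, gaps

flags = [0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0]
print(burst_gap_stats(flags))   # ([3, 1], [2, 4, 1])
```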
Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.
2010-01-01
An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
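The non-uniformly distributed nodal points mentioned above are the Gauss-Lobatto-Legendre (GLL) points; a short numpy sketch for generating them on the reference interval is given below (the element order and usage here are illustrative).

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(p):
    """Gauss-Lobatto-Legendre nodes on [-1, 1] for a degree-p spectral element:
    the endpoints plus the roots of P'_p, the derivative of the Legendre polynomial."""
    coeffs = np.zeros(p + 1)
    coeffs[-1] = 1.0                                   # Legendre series for P_p
    interior = legendre.legroots(legendre.legder(coeffs))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(gll_nodes(4))   # 5 nodes: approximately [-1, -0.6547, 0, 0.6547, 1]
```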
Walden, Steven J; Evans, Sam L; Mulville, Jacqui
2017-01-01
The purpose of this study was to determine how the Vickers hardness (HV) of bone varies during soft tissue putrefaction. This has possible forensic applications, notably for determining the postmortem interval. Experimental porcine bone samples were decomposed in surface and burial deposition scenarios over a period of 6 months. Although the Vickers hardness varied widely, it was found that when transverse axial hardness was subtracted from longitudinal axial hardness, the difference showed correlations with three distinct phases of soft tissue putrefaction. The ratio of transverse axial hardness to longitudinal axial hardness showed a similar correlation. A difference of 10 or greater in HV, with soft tissue present and signs of minimal decomposition, was associated with a decomposition period of 250 cumulative cooling degree days or less. A difference of 10 (+/- standard error of mean at a 95% confidence interval) or greater in HV associated with marked decomposition indicated a decomposition period of 1450 cumulative cooling degree days or more. A difference of -7 to +8 (+/- standard error of mean at a 95% confidence interval) was thus associated with 250 to 1450 cumulative cooling degree days' decomposition. The ratio of transverse axial HV to longitudinal HV, ranging from 2.42 to 1.54, is a more reliable indicator in this context and is preferable to using negative integers. These differences may have potential as an indicator of postmortem interval and thus the time of body deposition in the forensic context. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
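The reported hardness-difference thresholds can be read as a simple lookup from the measured difference and the gross decomposition state to a cumulative cooling degree day (CDD) range. The sketch below encodes only the figures quoted in the abstract; the function name, interface and the open-ended upper bound are our own illustrative choices.

    def cdd_range_from_hardness(dHV, marked_decomposition):
        """Map longitudinal-minus-transverse HV difference to an approximate CDD interval."""
        if dHV >= 10 and not marked_decomposition:
            return (0, 250)        # soft tissue present, minimal decomposition
        if dHV >= 10 and marked_decomposition:
            return (1450, None)    # 1450 CDD or more (no upper bound quoted)
        if -7 <= dHV <= 8:
            return (250, 1450)
        return None                # outside the ranges reported in the study

    print(cdd_range_from_hardness(12, marked_decomposition=False))  # (0, 250)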
Pointing error analysis of Risley-prism-based beam steering system.
Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng
2014-09-01
Based on the vector form of Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and to evaluate the allowances of the error sources for a given pointing accuracy. It is found that assembly errors of the second prism result in larger pointing errors than those of the first prism. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is required. These conclusions provide a theoretical foundation for practical work.
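The core operation of such a ray trace is refraction at each prism face via the vector form of Snell's law. The sketch below is a generic textbook implementation under our own naming, not the authors' code; eta is the index ratio n1/n2 and the surface normal is assumed to point against the incident ray.

    import numpy as np

    def refract(d, n, eta):
        """Refract unit direction d at a surface with unit normal n (eta = n1/n2)."""
        d = d / np.linalg.norm(d)
        n = n / np.linalg.norm(n)
        cos_i = -np.dot(n, d)
        k = 1.0 - eta**2 * (1.0 - cos_i**2)
        if k < 0.0:
            return None                                   # total internal reflection
        return eta * d + (eta * cos_i - np.sqrt(k)) * n   # refracted unit direction

    # Example: ray hitting the first prism face at 15 deg incidence (air -> glass, n = 1.517).
    d_in = np.array([0.0, np.sin(np.radians(15.0)), -np.cos(np.radians(15.0))])
    print(refract(d_in, np.array([0.0, 0.0, 1.0]), 1.0 / 1.517))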
NASA Astrophysics Data System (ADS)
Li, Lei; Hu, Jianhao
2010-12-01
Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M. U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction" by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in the datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance of the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected with the redundant relationship of the residues at the correction module, which is allocated at the end of the datapath. In the back-end implementation, the module isolation technique is used to improve the soft error rate performance of RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease in the soft error rate (SER) compared to the non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10^-12 to 10^-17 when the number of processing steps in the datapath is 10^6. The proposed scheme can even achieve lower area and latency overheads than the design without radiation hardening, since RRNS reduces the operational complexity of the datapath.
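To make the RRNS idea concrete, the sketch below builds a toy redundant residue number system: two redundant moduli are appended to the information moduli, and a word whose Chinese-remainder reconstruction falls outside the legitimate dynamic range is flagged as containing a soft error. The moduli, the test value and the function names are illustrative assumptions, not the hardened datapath described in the paper.

    from math import prod

    INFO = [7, 9, 11]            # pairwise-coprime information moduli
    REDUNDANT = [13, 16]         # redundant moduli used only for error detection
    MODULI = INFO + REDUNDANT
    M_INFO = prod(INFO)          # legitimate dynamic range: 0 .. 692

    def encode(x):
        return [x % m for m in MODULI]

    def crt(residues, moduli):
        """Chinese remainder reconstruction over the given moduli."""
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)     # modular inverse (Python 3.8+)
        return x % M

    def has_error(residues):
        return crt(residues, MODULI) >= M_INFO

    word = encode(500)
    print(has_error(word))           # False: residues are consistent
    word[1] = (word[1] + 3) % 9      # single-channel upset in the residue mod 9
    print(has_error(word))           # True: reconstruction lands outside the legal range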
Wiesinger, Florian; Bylund, Mikael; Yang, Jaewon; Kaushik, Sandeep; Shanbhag, Dattesh; Ahn, Sangtae; Jonsson, Joakim H; Lundman, Josef A; Hope, Thomas; Nyholm, Tufve; Larson, Peder; Cozzini, Cristina
2018-02-18
To describe a method for converting Zero TE (ZTE) MR images into X-ray attenuation information in the form of pseudo-CT images and demonstrate its performance for (1) attenuation correction (AC) in PET/MR and (2) dose planning in MR-guided radiation therapy planning (RTP). Proton density-weighted ZTE images were acquired as input for MR-based pseudo-CT conversion, providing (1) efficient capture of short-lived bone signals, (2) flat soft-tissue contrast, and (3) fast and robust 3D MR imaging. After bias correction and normalization, the images were segmented into bone, soft-tissue, and air by means of thresholding and morphological refinements. Fixed Hounsfield replacement values were assigned for air (-1000 HU) and soft-tissue (+42 HU), whereas continuous linear mapping was used for bone. The obtained ZTE-derived pseudo-CT images accurately resembled the true CT images (i.e., Dice coefficient for bone overlap of 0.73 ± 0.08 and mean absolute error of 123 ± 25 HU evaluated over the whole head, including errors from residual registration mismatches in the neck and mouth regions). The linear bone mapping accounted for bone density variations. Averaged across five patients, ZTE-based AC demonstrated a PET error of -0.04 ± 1.68% relative to CT-based AC. Similarly, for RTP assessed in eight patients, the absolute dose difference over the target volume was found to be 0.23 ± 0.42%. The described method enables MR to pseudo-CT image conversion for the head in an accurate, robust, and fast manner without relying on anatomical prior knowledge. Potential applications include PET/MR-AC, and MR-guided RTP. © 2018 International Society for Magnetic Resonance in Medicine.
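The segmentation-plus-mapping step described above can be sketched as a few array operations on a bias-corrected, intensity-normalized ZTE volume (soft tissue near 1, bone lower, air near 0). The thresholds and the slope/offset of the linear bone mapping below are illustrative assumptions, not the study's calibrated values.

    import numpy as np

    def zte_to_pseudo_ct(zte, air_thr=0.2, bone_thr=0.75, bone_slope=-2000.0, bone_offset=2000.0):
        """Convert a normalized ZTE volume into a pseudo-CT volume in Hounsfield units."""
        pct = np.full(zte.shape, 42.0, dtype=np.float32)   # default: soft tissue (+42 HU)
        pct[zte < air_thr] = -1000.0                       # air
        bone = (zte >= air_thr) & (zte < bone_thr)         # low-signal, non-air voxels -> bone
        pct[bone] = bone_offset + bone_slope * zte[bone]   # continuous linear HU mapping
        return pct

    volume = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for a normalized ZTE head volume
    pct = zte_to_pseudo_ct(volume)
    print(pct.min(), pct.max())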
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Perry B.; Geyer, Amy; Borrego, David
Purpose: To investigate the benefits and limitations of patient-phantom matching for determining organ dose during fluoroscopy guided interventions. Methods: In this study, 27 CT datasets representing patients of different sizes and genders were contoured and converted into patient-specific computational models. Each model was matched, based on height and weight, to computational phantoms selected from the UF hybrid patient-dependent series. In order to investigate the influence of phantom type on patient organ dose, Monte Carlo methods were used to simulate two cardiac projections (PA/left lateral) and two abdominal projections (RAO/LPO). Organ dose conversion coefficients were then calculated for each patient-specific and patient-dependent phantom and also for a reference stylized and reference hybrid phantom. The coefficients were subsequently analyzed for any correlation between patient-specificity and the accuracy of the dose estimate. Accuracy was quantified by calculating an absolute percent difference using the patient-specific dose conversion coefficients as the reference. Results: Patient-phantom matching was shown most beneficial for estimating the dose to heavy patients. In these cases, the improvement over using a reference stylized phantom ranged from approximately 50% to 120% for abdominal projections and for a reference hybrid phantom from 20% to 60% for all projections. For lighter individuals, patient-phantom matching was clearly superior to using a reference stylized phantom, but not significantly better than using a reference hybrid phantom for certain fields and projections. Conclusions: The results indicate two sources of error when patients are matched with phantoms: Anatomical error, which is inherent due to differences in organ size and location, and error attributed to differences in the total soft tissue attenuation. For small patients, differences in soft tissue attenuation are minimal and are exceeded by inherent anatomical differences. For large patients, difference in soft tissue attenuation can be large. In these cases, patient-phantom matching proves most effective as differences in soft tissue attenuation are mitigated. With increasing obesity rates, overweight patients will continue to make up a growing fraction of all patients undergoing medical imaging. Thus, having phantoms that better represent this population represents a considerable improvement over previous methods. In response to this study, additional phantoms representing heavier weight percentiles will be added to the UFHADM and UFHADF patient-dependent series.
Wall fluidization in two acts: from stiff to soft roughness.
Derzsi, Ladislav; Filippi, Daniele; Lulli, Matteo; Mistura, Giampaolo; Bernaschi, Massimo; Garstecki, Piotr; Sbragaglia, Mauro; Pierno, Matteo
2018-02-14
Fluidization of soft glassy materials (SGMs) in microfluidic channels is affected by the wall roughness in the form of microtexturing. When SGMs flow across microgrooves, their constituents are likely trapped within the grooves' gap, and the way they are released locally modifies the fluidization close to the walls. By leveraging a suitable combination of experiments and numerical simulations on concentrated emulsions (a model SGM), we quantitatively report the existence of two physically different scenarios. When the gap is large compared to the droplets in the emulsion, the droplets hit the solid obstacles and easily escape scrambling with their neighbors. Conversely, as the gap spacing is reduced, droplets get trapped inside, creating a "soft roughness" layer, i.e. a complementary series of deformable posts from which overlying droplets are in turn released. In both cases, the induced fluidization scales with the grooves' density, although with a reduced prefactor for narrow gaps, accounting for the softness of the roughness. Both scenarios are also well distinguished via the statistics of the droplets displacement field close to the walls, with large deviations induced by the surface roughness, depending on its stiffness.
Quality assurance of the international computerised 24 h dietary recall method (EPIC-Soft).
Crispim, Sandra P; Nicolas, Genevieve; Casagrande, Corinne; Knaze, Viktoria; Illner, Anne-Kathrin; Huybrechts, Inge; Slimani, Nadia
2014-02-01
The interview-administered 24 h dietary recall (24-HDR) EPIC-Soft® has a series of controls to guarantee the quality of dietary data across countries. These comprise all steps that are part of fieldwork preparation, data collection and data management; however, a complete characterisation of these quality controls is still lacking. The present paper describes in detail the quality controls applied in EPIC-Soft, which are, to a large extent, built on the basis of the EPIC-Soft error model and are present in three phases: (1) before, (2) during and (3) after the 24-HDR interviews. Quality controls for consistency and harmonisation are implemented before the interviews while preparing the seventy databases constituting an EPIC-Soft version (e.g. pre-defined and coded foods and recipes). During the interviews, EPIC-Soft uses a cognitive approach by helping the respondent to recall the dietary intake information in a stepwise manner and includes controls for consistency (e.g. probing questions) as well as for completeness of the collected data (e.g. system calculation for some unknown amounts). After the interviews, a series of controls can be applied by dietitians and data managers to further guarantee data quality. For example, the interview-specific 'note files' that were created to track any problems or missing information during the interviews can be checked to clarify the information initially provided. Overall, the quality controls employed in the EPIC-Soft methodology are not always perceivable, but prove to be of assistance for its overall standardisation and possibly for the accuracy of the collected data.
Differences in Train-induced Vibration between Hard Soil and Soft Soil
NASA Astrophysics Data System (ADS)
Noyori, M.; Yokoyama, H.
2017-12-01
Vibration and noise caused by running trains sometimes raise environmental issues. Train-induced vibration is caused by moving static and dynamic axle loads. To reduce the vibration, it is important to clarify the conditions under which the train-induced vibration increases. In this study, we clarified the differences in train-induced vibration between hard soil and soft soil using a numerical simulation method. The numerical simulation method we used is a combination of two analyses. The first is a coupled vibration analysis of a running train, a track and a supporting structure, in which the excitation force applied to the viaduct slabs by a running train is computed. The second is a three-dimensional vibration analysis of the supporting structure and the ground, into which the excitation force computed by the first analysis is input. As a result of the numerical simulation, the ground vibration in the area not more than 25 m from the center of the viaduct is larger under the soft soil condition than under the hard soil condition in almost all frequency ranges. On the other hand, the ground vibration at 40 and 50 Hz at a point 50 m from the center of the viaduct is larger under the hard soil condition than under the soft soil condition. These results are consistent with those of a two-dimensional FEM based on a ground model alone. Thus, we concluded that they arise not from the effects of the running train but from the vibration characteristics of the ground.
Temperature Dependence of Faraday Effect-Induced Bias Error in a Fiber Optic Gyroscope
Li, Xuyou; Guang, Xingxing; Xu, Zhenlong; Li, Guangchun
2017-01-01
Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environments, such as magnetic field and temperature field variation, is necessary for its practical applications. This paper presents an investigation of Faraday effect-induced bias error of IFOG under varying temperature. Jones matrix method is utilized to formulize the temperature dependence of Faraday effect-induced bias error. Theoretical results show that the Faraday effect-induced bias error changes with the temperature in the non-skeleton polarization maintaining (PM) fiber coil. This phenomenon is caused by the temperature dependence of linear birefringence and Verdet constant of PM fiber. Particularly, Faraday effect-induced bias errors of two polarizations always have opposite signs that can be compensated optically regardless of the changes of the temperature. Two experiments with a 1000 m non-skeleton PM fiber coil are performed, and the experimental results support these theoretical predictions. This study is promising for improving the bias stability of IFOG. PMID:28880203
Temperature Dependence of Faraday Effect-Induced Bias Error in a Fiber Optic Gyroscope.
Li, Xuyou; Liu, Pan; Guang, Xingxing; Xu, Zhenlong; Guan, Lianwu; Li, Guangchun
2017-09-07
Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environments, such as magnetic field and temperature field variation, is necessary for its practical applications. This paper presents an investigation of Faraday effect-induced bias error of IFOG under varying temperature. Jones matrix method is utilized to formulize the temperature dependence of Faraday effect-induced bias error. Theoretical results show that the Faraday effect-induced bias error changes with the temperature in the non-skeleton polarization maintaining (PM) fiber coil. This phenomenon is caused by the temperature dependence of linear birefringence and Verdet constant of PM fiber. Particularly, Faraday effect-induced bias errors of two polarizations always have opposite signs that can be compensated optically regardless of the changes of the temperature. Two experiments with a 1000 m non-skeleton PM fiber coil are performed, and the experimental results support these theoretical predictions. This study is promising for improving the bias stability of IFOG.
The Columbia University proton-induced soft x-ray microbeam.
Harken, Andrew D; Randers-Pehrson, Gerhard; Johnson, Gary W; Brenner, David J
2011-09-15
A soft x-ray microbeam using proton-induced x-ray emission (PIXE) of the characteristic titanium Kα line (4.5 keV) as the x-ray source has been developed at the Radiological Research Accelerator Facility (RARAF) at Columbia University. The proton beam is focused to a 120 μm × 50 μm spot on the titanium target using an electrostatic quadrupole quadruplet previously used for the charged particle microbeam studies at RARAF. Viewed along the vertical direction, the proton-induced x-rays from this spot present a 50 μm round x-ray generation spot. The x-rays are focused to a spot size of 5 μm in diameter using a Fresnel zone plate. The x-rays have an attenuation length (1/e) of ~145 μm, allowing more consistent dose delivery across the depth of a single cell layer and deeper penetration into tissue samples than previous ultrasoft x-ray systems. The irradiation end station is based on our previous design to allow quick comparison with charged particle experiments and for mixed irradiation experiments.
Biswas, D P; Tran, P A; Tallon, C; O'Connor, A J
2017-02-01
In this paper, a novel foaming methodology consisting of turbulent mixing and thermally induced phase separation (TIPS) was used to generate scaffolds for tissue engineering. Air bubbles were mechanically introduced into a chitosan solution which forms the continuous polymer/liquid phase in the foam created. The air bubbles entrained in the foam act as a template for the macroporous architecture of the final scaffolds. Wet foams were crosslinked via glutaraldehyde and frozen at -20 °C to induce TIPS in order to limit film drainage, bubble coalescence and Ostwald ripening. The effects of production parameters, including mixing speed, surfactant concentration and chitosan concentration, on foaming are explored. Using this method, hydrogel scaffolds were successfully produced with up to 80% porosity, average pore sizes of 120 μm and readily tuneable compressive modulus in the range of 2.6 to 25 kPa relevant to soft tissue engineering applications. These scaffolds supported 3T3 fibroblast cell proliferation and penetration and therefore show significant potential for application in soft tissue engineering.
Laloš, Jernej; Gregorčič, Peter; Jezeršek, Matija
2018-01-01
We present an optical study of elastic wave propagation inside skin phantoms consisting of agar gel as induced by an Er:YAG (wavelength of 2.94 μm) laser pulse. A laser-beam-deflection probe is used to measure ultrasonic propagation and a high-speed camera is used to record displacements in ablation-induced elastic transients. These measurements are further analyzed with a custom developed image recognition algorithm utilizing the methods of particle image velocimetry and spline interpolation to determine point trajectories, material displacement and strain during the passing of the transients. The results indicate that the ablation-induced elastic waves propagate with a velocity of 1 m/s and amplitudes of 0.1 mm. Compared to them, the measured velocities of ultrasonic waves are much higher, within the range of 1.42–1.51 km/s, while their amplitudes are three orders of magnitude smaller. This proves that the agar gel may be used as a rudimental skin and soft tissue substitute in biomedical research, since its polymeric structure reproduces adequate soft-solid properties and its transparency for visible light makes it convenient to study with optical instruments. The results presented provide an insight into the distribution of laser-induced elastic transients in soft tissue phantoms, while the experimental approach serves as a foundation for further research of laser-induced mechanical effects deeper in the tissue. PMID:29675327
Laloš, Jernej; Gregorčič, Peter; Jezeršek, Matija
2018-04-01
We present an optical study of elastic wave propagation inside skin phantoms consisting of agar gel as induced by an Er:YAG (wavelength of 2.94 μm) laser pulse. A laser-beam-deflection probe is used to measure ultrasonic propagation and a high-speed camera is used to record displacements in ablation-induced elastic transients. These measurements are further analyzed with a custom developed image recognition algorithm utilizing the methods of particle image velocimetry and spline interpolation to determine point trajectories, material displacement and strain during the passing of the transients. The results indicate that the ablation-induced elastic waves propagate with a velocity of 1 m/s and amplitudes of 0.1 mm. Compared to them, the measured velocities of ultrasonic waves are much higher, within the range of 1.42-1.51 km/s, while their amplitudes are three orders of magnitude smaller. This proves that the agar gel may be used as a rudimental skin and soft tissue substitute in biomedical research, since its polymeric structure reproduces adequate soft-solid properties and its transparency for visible light makes it convenient to study with optical instruments. The results presented provide an insight into the distribution of laser-induced elastic transients in soft tissue phantoms, while the experimental approach serves as a foundation for further research of laser-induced mechanical effects deeper in the tissue.
Rossi X-Ray Timing Explorer All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, Donald A.; Bradt, Hale V.; Levine, Alan M.
1999-07-01
The fourth unambiguously identified soft gamma repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15. Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6° segment of a narrow (19") annulus. We present two bursts from this source observed by the All-Sky Monitor (ASM) on the Rossi X-Ray Timing Explorer. We use the ASM data to further constrain the source location to a 5' long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3 arcmin of the supernova remnant G337.0-0.1. The probability that a supernova remnant would fall so close to the error box purely by chance is ~5%.
RXTE All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, D. A.; Bradt, H. V.; Levine, A. M.
1999-09-01
The fourth unambiguously identified Soft Gamma Repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15 (Kouveliotou et al. 1998). Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6° segment of a narrow (19″) annulus (Hurley et al. 1999; Woods et al. 1998). We report on two bursts from this source observed by the All-Sky Monitor (ASM) on RXTE. We use the ASM data to further constrain the source location to a 5′ long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3′ of the supernova remnant (SNR) G337.0-0.1. The probability that a SNR would fall so close to the error box purely by chance is ~5%.
Kalyani, Ajay Kumar; V, Lalitha K; James, Ajit R; Fitch, Andy; Ranjan, Rajeev
2015-02-25
A 'powder-poling' technique was developed to study electric field induced structural transformations in ferroelectrics exhibiting a morphotropic phase boundary (MPB). The technique was employed on soft PZT exhibiting a large longitudinal piezoelectric response (d33 ∼ 650 pC/N). It was found that electric poling brings about a considerable degree of irreversible tetragonal to monoclinic transformation. The same transformation was achieved after subjecting the specimen to mechanical stress, which suggests an equivalence of stress and electric field with regard to the structural mechanism in MPB compositions. The electric field induced structural transformation was also found to be accompanied by a decrease in the spatial coherence of polarization.
Propulsion of rotationally actuated soft magnetic microswimmers
NASA Astrophysics Data System (ADS)
Samsami, Kiarash; Mirbagheri, Seyed Amir; Meshkati, Farshad; Fu, Henry
2017-11-01
Microrobotic swimmers have been the subject of many studies recently because of their possible biomedical applications such as drug delivery and micromanipulation. We examine rigid magnetic microrobots that are propelled by rotation induced by a rotating magnetic field, which are thought to be the most promising class of microrobots. Previous studies have considered ferromagnetic swimmers with permanent magnetizations and paramagnetic swimmers, but many experimental realizations are in fact soft magnets. Here we investigate how soft magnetic swimmers differ from ferromagnetic and paramagnetic swimmers. We specifically investigate the behavior of step-out frequencies, velocity-frequency response, and the stability and multiplicity of stable swimming modes for microrobots with nonmagnetic helical tails and ellipsoidal soft magnetic heads.
Anomalous annealing of floating gate errors due to heavy ion irradiation
NASA Astrophysics Data System (ADS)
Yin, Yanan; Liu, Jie; Sun, Youmei; Hou, Mingdong; Liu, Tianqi; Ye, Bing; Ji, Qinggang; Luo, Jie; Zhao, Peixiong
2018-03-01
Using the heavy ions provided by the Heavy Ion Research Facility in Lanzhou (HIRFL), the annealing of heavy-ion-induced floating gate (FG) errors in 34 nm and 25 nm NAND Flash memories has been studied. The single event upset (SEU) cross section of the FG and the evolution of the errors after irradiation, depending on the ion linear energy transfer (LET) value, data pattern and feature size of the device, are presented. Different annealing rates for different ion LETs and different data patterns are observed in the 34 nm and 25 nm memories. The variation of the percentage of different error patterns in the 34 nm and 25 nm memories with annealing time shows that the annealing of heavy-ion-induced FG errors mainly takes place in the cells directly hit under low-LET ion exposure, and also in other cells affected by the ions when the ion LET is higher. The influence of Multiple Cell Upsets (MCUs) on the annealing of FG errors is analyzed. MCUs with high error multiplicity, which account for the majority of the errors, can induce a large percentage of annealed errors.
Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.
2004-01-01
Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.
Single Event Effect Testing of the Micron MT46V128M8
NASA Technical Reports Server (NTRS)
Stansberry, Scott; Campola, Michael; Wilcox, Ted; Seidleck, Christina; Phan, Anthony
2017-01-01
The Micron MT46V128M8 was tested for single event effects (SEE) at the Texas A&M University Cyclotron Facility (TAMU) in June of 2017. Testing revealed a sensitivity to device hang-ups classified as single event functional interrupts (SEFIs) and possible soft data errors classified as single event upsets (SEUs).
Micromagnetic Study of Perpendicular Magnetic Recording Media
NASA Astrophysics Data System (ADS)
Dong, Yan
With increasing areal density in magnetic recording systems, perpendicular recording has successfully replaced longitudinal recording to mitigate the superparamagnetic limit. The extensive theoretical and experimental research associated with perpendicular magnetic recording media has contributed significantly to improving magnetic recording performance. Micromagnetic studies on perpendicular recording media, including aspects of the design of hybrid soft underlayers, media noise properties, inter-grain exchange characterization and ultra-high density bit patterned media recording, are presented in this dissertation. To improve the writability of recording media, one needs to reduce the head-to-keeper spacing while maintaining a good texture growth for the recording layer. A hybrid soft underlayer, consisting of a thin crystalline soft underlayer stacked above a non-magnetic seed layer and a conventional amorphous soft underlayer, provides an alternative approach for reducing the effective head-to-keeper spacing in perpendicular recording. Micromagnetic simulations indicate that the media using a hybrid soft underlayer helps enhance the effective field and the field gradient in comparison with conventional media that uses only an amorphous soft underlayer. The hybrid soft underlayer can support a thicker non-magnetic seed layer yet achieve an equivalent or better effective field and field gradient. A noise plateau for intermediate recording densities is observed for a recording layer of typical magnetization. Medium noise characteristics and transition jitter in perpendicular magnetic recording are explored using micromagnetic simulation. The plateau is replaced by a normal linear dependence of noise on recording density for a low magnetization recording layer. We show analytically that a source of the plateau is similar to that producing the Non-Linear-Transition-Shift of signal. In particular, magnetostatic effects are predicted to produce positive correlation of jitter and thus negative correlation of noise at the densities associated with the plateau. One focus for developing perpendicular recording media is on how to extract intergranular exchange coupling and intrinsic anisotropy field dispersion. A micromagnetic numerical technique is developed to effectively separate the effects of intergranular exchange coupling and anisotropy dispersion by finding their correlation to differentiated M-H curves with different initial magnetization states, even in the presence of thermal fluctuation. The validity of this method is investigated with a series of intergranular exchange couplings and anisotropy dispersions for different media thickness. This characterization method allows for an experimental measurement employing a vibrating sample magnetometer (VSM). Bit patterned media have been suggested to extend areal density beyond 1 Tbit/in^2. The feasibility of 4 Tbit/in^2 bit patterned recording is determined by aspects of write head design and media fabrication, and is estimated by the bit error rate. Micromagnetic specifications including 2.3:1 BAR bit patterned exchange coupled composite media, trailing shield, and side shields are proposed to meet the requirement of 3x10^-4 bit error rate, 4 nm fly height, 5% switching field distribution, 5% timing and 5% jitter errors for 4 Tbit/in^2 bit-patterned recording. Demagnetizing field distribution is examined by studying the shielding effect of the side shields on the stray field from the neighboring dots.
For recording self-assembled bit-patterned media, the head design writes two staggered tracks in a single pass and has maximum perpendicular field gradients of 580 Oe/nm along the down-track direction and 476 Oe/nm along the cross-track direction. The geometry demanded by self-assembly reduces recording density to 2.9 Tbit/in^2.
Estimation of Fetal Weight during Labor: Still a Challenge.
Barros, Joana Goulão; Reis, Inês; Pereira, Isabel; Clode, Nuno; Graça, Luís M
2016-01-01
To evaluate the accuracy of fetal weight prediction by ultrasonography in labor, employing a formula including the linear measurements of femur length (FL) and mid-thigh soft-tissue thickness (STT). We conducted a prospective study involving singleton uncomplicated term pregnancies within 48 hours of delivery. Only pregnancies with a cephalic fetus admitted to the labor ward for elective cesarean section, induction of labor or spontaneous labor were included. We excluded all non-Caucasian women, those previously diagnosed with gestational diabetes and those with evidence of ruptured membranes. Fetal weight estimates were calculated using a previously proposed formula [estimated fetal weight = 1687.47 + (54.1 x FL) + (76.68 x STT)]. The relationship between actual birth weight and estimated fetal weight was analyzed using Pearson's correlation. The formula's performance was assessed by calculating the signed and absolute errors. Mean weight difference and signed percentage error were calculated for birth weight divided into three subgroups: < 3000 g; 3000-4000 g; and > 4000 g. We included 145 cases for analysis and found a significant, yet low, linear relationship between birth weight and estimated fetal weight (p < 0.001; R2 = 0.197), with an absolute mean error of 10.6%. The lowest mean percentage error (0.3%) corresponded to the subgroup with birth weight between 3000 g and 4000 g. This study demonstrates a poor correlation between actual birth weight and the estimated fetal weight using a formula based on femur length and mid-thigh soft-tissue thickness, both linear parameters. Although avoidance of circumferential ultrasound measurements might prove to be beneficial, a fetal weight estimation formula that is both accurate and simple to perform has yet to be found.
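The estimation formula evaluated in the study is simple enough to state directly in code. The sketch below encodes the coefficients quoted in the abstract; the abstract does not restate the measurement units, so the units in the example call are an assumption flagged here, and the helper name is illustrative.

    def estimated_fetal_weight(femur_length, mid_thigh_stt):
        """EFW = 1687.47 + 54.1*FL + 76.68*STT, as evaluated in the study."""
        return 1687.47 + 54.1 * femur_length + 76.68 * mid_thigh_stt

    # Illustrative call only: with FL in cm and STT in mm the result lands in a plausible gram range.
    print(round(estimated_fetal_weight(7.2, 12.0)))  # ~2997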
Soft-type trap-induced degradation of MoS2 field effect transistors.
Cho, Young-Hoon; Ryu, Min-Yeul; Lee, Kook Jin; Park, So Jeong; Choi, Jun Hee; Lee, Byung-Chul; Kim, Wungyeon; Kim, Gyu-Tae
2018-06-01
The practical applicability of electronic devices is largely determined by the reliability of field effect transistors (FETs), necessitating constant searches for new and better-performing semiconductors. We investigated the stress-induced degradation of MoS2 multilayer FETs, revealing a steady decrease of drain current by 56% from the initial value after 30 min. The drain current recovers to the initial state when the transistor is completely turned off, indicating the roles of soft-traps in the apparent degradation. The noise current power spectrum follows the model of carrier number fluctuation-correlated mobility fluctuation (CNF-CMF) regardless of stress time. However, the reduction of the drain current was well fitted to the increase of the trap density based on the CNF-CMF model, attributing the presence of the soft-type traps of dielectric oxides to the degradation of the MoS2 FETs.
Soft-type trap-induced degradation of MoS2 field effect transistors
NASA Astrophysics Data System (ADS)
Cho, Young-Hoon; Ryu, Min-Yeul; Lee, Kook Jin; Park, So Jeong; Choi, Jun Hee; Lee, Byung-Chul; Kim, Wungyeon; Kim, Gyu-Tae
2018-06-01
The practical applicability of electronic devices is largely determined by the reliability of field effect transistors (FETs), necessitating constant searches for new and better-performing semiconductors. We investigated the stress-induced degradation of MoS2 multilayer FETs, revealing a steady decrease of drain current by 56% from the initial value after 30 min. The drain current recovers to the initial state when the transistor is completely turned off, indicating the roles of soft-traps in the apparent degradation. The noise current power spectrum follows the model of carrier number fluctuation–correlated mobility fluctuation (CNF–CMF) regardless of stress time. However, the reduction of the drain current was well fitted to the increase of the trap density based on the CNF–CMF model, attributing the presence of the soft-type traps of dielectric oxides to the degradation of the MoS2 FETs.
Elastohydrodynamic Lift at a Soft Wall
NASA Astrophysics Data System (ADS)
Davies, Heather S.; Débarre, Delphine; El Amri, Nouha; Verdier, Claude; Richter, Ralf P.; Bureau, Lionel
2018-05-01
We study experimentally the motion of nondeformable microbeads in a linear shear flow close to a wall bearing a thin and soft polymer layer. Combining microfluidics and 3D optical tracking, we demonstrate that the steady-state bead-to-surface distance increases with the flow strength. Moreover, such lift is shown to result from flow-induced deformations of the layer, in quantitative agreement with theoretical predictions from elastohydrodynamics. This study thus provides the first experimental evidence of "soft lubrication" at play at small scale, in a system relevant, for example, to the physics of blood microcirculation.
ROSAT X-Ray Observation of the Second Error Box for SGR 1900+14
NASA Technical Reports Server (NTRS)
Li, P.; Hurley, K.; Vrba, F.; Kouveliotou, C.; Meegan, C. A.; Fishman, G. J.; Kulkarni, S.; Frail, D.
1997-01-01
The positions of the two error boxes for the soft gamma repeater (SGR) 1900+14 were determined by the "network synthesis" method, which employs observations by the Ulysses gamma-ray burst and CGRO BATSE instruments. The location of the first error box has been observed at optical, infrared, and X-ray wavelengths, resulting in the discovery of a ROSAT X-ray point source and a curious double infrared source. We have recently used the ROSAT HRI to observe the second error box to complete the counterpart search. A total of six X-ray sources were identified within the field of view. None of them falls within the network synthesis error box, and a 3 sigma upper limit to any X-ray counterpart was estimated to be 6.35 x 10^-14 ergs/sq cm/s. The closest source is approximately 3 arcmin away, and has an estimated unabsorbed flux of 1.5 x 10^-12 ergs/sq cm/s. Unlike the first error box, there is no supernova remnant near the second error box. The closest one, G43.9+1.6, lies approximately 2.6 deg away. For these reasons, we believe that the first error box is more likely to be the correct one.
Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging
Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C.
2017-01-01
Background: To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Methods: Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Results: Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation on the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. Conclusions: A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory in removing the phase-error-induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed, the inconsistent signal phase modulation across shots has to be resolved before shot combination is performed. PMID:28516049
Radiation-Hardened Solid-State Drive
NASA Technical Reports Server (NTRS)
Sheldon, Douglas J.
2010-01-01
A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application specific integrated circuit) or an FPGA (field programmable gate array). Specific error handling and data management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, and much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
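The triplication and CRC bookkeeping can be pictured with a miniature scrub routine: a bitwise 2-of-3 vote repairs a page, and a per-page CRC held in a (notionally rad-hard) table confirms the vote before corrupted copies are rewritten. The page size, the CRC choice and all names below are illustrative assumptions, not the flight protocol described above.

    import zlib

    def vote(copies):
        """Bitwise 2-of-3 majority vote over three byte-string copies of a page."""
        a, b, c = copies
        return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

    def scrub(copies, crc_table, page_id):
        """Return the repaired page and rewrite any copy that disagrees with it."""
        good = vote(copies)
        assert zlib.crc32(good) == crc_table[page_id], "uncorrectable: multiple copies upset in the same bits"
        for i, copy in enumerate(copies):
            if copy != good:
                copies[i] = good               # rewrite the corrupted COTS copy
        return good

    page = bytes(range(64))
    crc_table = {0: zlib.crc32(page)}          # CRC kept in the rad-hard memory
    copies = [bytearray(page) for _ in range(3)]
    copies[1][5] ^= 0x10                       # radiation-induced bit flip in one copy
    print(scrub([bytes(c) for c in copies], crc_table, 0) == page)  # True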
NASA Technical Reports Server (NTRS)
Spector, E.; LeBlanc, A.; Shackelford, L.
1995-01-01
This study reports on the short-term in vivo precision and absolute measurements of three combinations of whole-body scan modes and analysis software using a Hologic QDR 2000 dual-energy X-ray densitometer. A group of 21 normal, healthy volunteers (11 male and 10 female) were scanned six times, receiving one pencil-beam and one array whole-body scan on three occasions approximately 1 week apart. The following combinations of scan modes and analysis software were used: pencil-beam scans analyzed with Hologic's standard whole-body software (PB scans); the same pencil-beam analyzed with Hologic's newer "enhanced" software (EPB scans); and array scans analyzed with the enhanced software (EA scans). Precision values (% coefficient of variation, %CV) were calculated for whole-body and regional bone mineral content (BMC), bone mineral density (BMD), fat mass, lean mass, %fat and total mass. In general, there was no significant difference among the three scan types with respect to short-term precision of BMD and only slight differences in the precision of BMC. Precision of BMC and BMD for all three scan types was excellent: < 1% CV for whole-body values, with most regional values in the 1%-2% range. Pencil-beam scans demonstrated significantly better soft tissue precision than did array scans. Precision errors for whole-body lean mass were: 0.9% (PB), 1.1% (EPB) and 1.9% (EA). Precision errors for whole-body fat mass were: 1.7% (PB), 2.4% (EPB) and 5.6% (EA). EPB precision errors were slightly higher than PB precision errors for lean, fat and %fat measurements of all regions except the head, although these differences were significant only for the fat and % fat of the arms and legs. In addition EPB precision values exhibited greater individual variability than PB precision values. Finally, absolute values of bone and soft tissue were compared among the three combinations of scan and analysis modes. BMC, BMD, fat mass, %fat and lean mass were significantly different between PB scans and either of the EPB or EA scans. Differences were as large as 20%-25% for certain regional fat and BMD measurements. Additional work may be needed to examine the relative accuracy of the scan mode/software combinations and to identify reasons for the differences in soft tissue precision with the array whole-body scan mode.
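Short-term precision figures like those above are conventionally expressed as a percent coefficient of variation computed per subject over the repeat scans and then pooled across the group. The simple-average pooling and the made-up numbers in the sketch below are assumptions, since the abstract does not restate the exact formula used.

    import numpy as np

    def group_percent_cv(measurements):
        """measurements: (n_subjects, n_repeats) array of repeat-scan values."""
        m = np.asarray(measurements, dtype=float)
        cv = 100.0 * m.std(axis=1, ddof=1) / m.mean(axis=1)   # per-subject %CV
        return cv.mean()                                      # group mean %CV

    lean_mass_kg = [[54.1, 54.5, 53.9],   # three repeat scans per subject (illustrative values)
                    [61.0, 60.4, 60.8],
                    [48.2, 48.6, 48.1]]
    print(round(group_percent_cv(lean_mass_kg), 2))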
Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors.
Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter
2016-08-24
Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design.
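The moving least squares decoupling step can be sketched as a locally weighted linear fit from the 3-axis field reading to the 3-axis force. The Gaussian kernel width and the synthetic calibration data below are assumptions for illustration, not MagOne's actual calibration.

    import numpy as np

    def mls_predict(B_query, B_train, F_train, h=0.1):
        """Moving least squares: locally weighted affine fit around B_query (shape (3,))."""
        X = np.hstack([np.ones((len(B_train), 1)), B_train])            # affine design matrix
        w = np.exp(-np.sum((B_train - B_query) ** 2, axis=1) / h**2)    # Gaussian weights
        sw = np.sqrt(w)[:, None]
        beta, *_ = np.linalg.lstsq(sw * X, sw * F_train, rcond=None)    # weighted least squares
        return np.concatenate(([1.0], B_query)) @ beta

    rng = np.random.default_rng(0)
    B_train = rng.uniform(-1, 1, size=(200, 3))                         # stand-in field samples
    gain = np.array([[2.0, 0.1, 0.0], [0.1, 1.5, 0.0], [0.0, 0.0, 3.0]])
    F_train = B_train @ gain + 0.05 * B_train**2                        # stand-in mildly nonlinear map
    print(mls_predict(np.array([0.2, -0.1, 0.4]), B_train, F_train))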
Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors
Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter
2016-01-01
Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design. PMID:27563908
Li, Lingyun; Zhang, Fuming; Hu, Min; Ren, Fuji; Chi, Lianli; Linhardt, Robert J.
2016-01-01
Low molecular weight heparins are complex polycomponent drugs that have recently become amenable to top-down analysis using liquid chromatography-mass spectrometry. Even using open source deconvolution software, DeconTools, and automatic structural assignment software, GlycReSoft, the comparison of two or more low molecular weight heparins is extremely time-consuming, taking about a week for an expert analyst and providing no guarantee of accuracy. Efficient data processing tools are required to improve analysis. This study uses the programming language of Microsoft Excel™ Visual Basic for Applications to extend its standard functionality with macro functions and specific mathematical modules for mass spectrometric data processing. The program developed enables the comparison of top-down analytical glycomics data on two or more low molecular weight heparins. The current study describes a new program, GlycCompSoft, which has a low error rate and good time efficiency in the automatic processing of large data sets. The experimental results based on three lots of Lovenox®, Clexane® and three generic enoxaparin samples show that the run time of GlycCompSoft decreases from 11 to 2 seconds when the data processed decreases from 18000 to 1500 rows. PMID:27942011
Akbarzadeh, A; Ay, M R; Ahmadian, A; Alam, N Riahi; Zaidi, H
2013-02-01
Hybrid PET/MRI presents many advantages in comparison with its counterpart PET/CT in terms of improved soft-tissue contrast, decrease in radiation exposure, and truly simultaneous and multi-parametric imaging capabilities. However, the lack of well-established methodology for MR-based attenuation correction is hampering further development and wider acceptance of this technology. We assess the impact of ignoring bone attenuation and using different tissue classes for generation of the attenuation map on the accuracy of attenuation correction of PET data. This work was performed using simulation studies based on the XCAT phantom and clinical input data. For the latter, PET and CT images of patients were used as input for the analytic simulation model using realistic activity distributions where CT-based attenuation correction was utilized as reference for comparison. For both phantom and clinical studies, the reference attenuation map was classified into various numbers of tissue classes to produce three (air, soft tissue and lung), four (air, lungs, soft tissue and cortical bones) and five (air, lungs, soft tissue, cortical bones and spongeous bones) class attenuation maps. The phantom studies demonstrated that ignoring bone increases the relative error by up to 6.8% in the body and up to 31.0% for bony regions. Likewise, the simulated clinical studies showed that the mean relative error reached 15% for lesions located in the body and 30.7% for lesions located in bones, when neglecting bones. These results demonstrate an underestimation of about 30% of tracer uptake when neglecting bone, which in turn imposes substantial loss of quantitative accuracy for PET images produced by hybrid PET/MRI systems. Considering bones in the attenuation map will considerably improve the accuracy of MR-guided attenuation correction in hybrid PET/MR to enable quantitative PET imaging on hybrid PET/MR technologies.
Lee, Dae-Hee; Park, Sung-Chul; Park, Hyung-Joon; Han, Seung-Beom
2016-12-01
Open-wedge high tibial osteotomy (HTO) cannot always accurately correct limb alignment, resulting in under- or over-correction. This study assessed the relationship between soft tissue laxity of the knee joint and alignment correction in open-wedge HTO. This prospective study involved 85 patients (86 knees) undergoing open-wedge HTO for primary medial osteoarthritis. The mechanical axis (MA), weight-bearing line (WBL) ratio, and joint line convergence angle (JLCA) were measured on radiographs preoperatively and after 6 months, and the differences between the pre- and post-surgery values were calculated. Post-operative WBL ratios of 57-67% were classified as acceptable correction. WBL ratios <57% and >67% were classified as under- and over-corrections, respectively. Preoperative JLCA correlated positively with differences in MA (r = 0.358, P = 0.001) and WBL ratio (P = 0.003). Difference in JLCA showed a stronger correlation than preoperative JLCA with differences in MA (P < 0.001) and WBL ratio (P < 0.001). Difference in JLCA was the only predictor of both difference in MA (P < 0.001) and difference in WBL ratio (P < 0.001). The difference between pre- and post-operative JLCA differed significantly between the under-correction, acceptable-correction, and over-correction groups (P = 0.033). Preoperative JLCA, however, did not differ significantly between the three groups. Neither preoperative JLCA nor difference in JLCA correlated with change in posterior slope. Preoperative degree of soft tissue laxity in the knee joint was related to the degree of alignment correction, but not to alignment correction error, in open-wedge HTO. Change in soft tissue laxity around the knee from before to after open-wedge HTO correlated with both correction amount and correction error. Therefore, an excessively large change in JLCA from before to after open-wedge osteotomy may be due to an overly large reduction in JLCA following osteotomy, suggesting alignment over-correction during surgery. Level of evidence: II.
3D microwave tomography of the breast using prior anatomical information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golnabi, Amir H., E-mail: golnabia@montclair.edu; Meaney, Paul M.; Paulsen, Keith D.
2016-04-15
Purpose: The authors have developed a new 3D breast image reconstruction technique that utilizes the soft tissue spatial resolution of magnetic resonance imaging (MRI) and integrates the dielectric property differentiation from microwave imaging to produce a dual modality approach with the goal of augmenting the specificity of MR imaging, possibly without the need for nonspecific contrast agents. The integration is performed through the application of a soft prior regularization which imports segmented geometric meshes generated from MR exams and uses it to constrain the microwave tomography algorithm to recover nearly uniform property distributions within segmented regions with sharp delineation between these internal subzones. Methods: Previous investigations have demonstrated that this approach is effective in 2D simulation and phantom experiments and also in clinical exams. The current study extends the algorithm to 3D and provides a thorough analysis of the sensitivity and robustness to misalignment errors in size and location between the spatial prior information and the actual data. Results: Image results in 3D were not strongly dependent on reconstruction mesh density, and the changes of less than 30% in recovered property values arose from variations of more than 125% in target region size - an outcome which was more robust than in 2D. Similarly, changes of less than 13% occurred in the 3D image results from variations in target location of nearly 90% of the inclusion size. Permittivity and conductivity errors were about 5 times and 2 times smaller, respectively, with the 3D spatial prior algorithm in actual phantom experiments than those which occurred without priors. Conclusions: The presented study confirms that the incorporation of structural information in the form of a soft constraint can considerably improve the accuracy of the property estimates in predefined regions of interest. These findings are encouraging and establish a strong foundation for using the soft prior technique in clinical studies, where their microwave imaging system and MRI can simultaneously collect breast exam data in patients.
NASA Astrophysics Data System (ADS)
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.
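As a rough illustration of how a measured field map can be propagated into a gravity bias, the sketch below integrates the second-order Zeeman shift of the 87Rb clock transition along a free-fall trajectory with a simple two-level sensitivity weighting. The field map, pulse separation time, trajectory parameters, and sign convention are assumptions made for the example, and k-reversal and finite pulse durations are ignored, so this is not a reproduction of the paper's evaluation.

    import numpy as np

    K_B2 = 575.15                  # commonly quoted 87Rb clock-transition quadratic Zeeman coefficient [Hz/G^2]
    k_eff = 4 * np.pi / 780e-9     # effective Raman wave number [rad/m] (~780 nm, counter-propagating)
    T = 0.1                        # pulse separation time [s] (hypothetical)
    z0, v0, g0 = 0.9, 0.0, 9.81    # hypothetical start position [m], launch velocity [m/s], gravity

    def B_of_z(z):
        # stand-in for the fitted field map B(z) in gauss; replace with the measured map
        return 0.10 + 0.02 * z

    t = np.linspace(0.0, 2 * T, 4001)
    z = z0 + v0 * t - 0.5 * g0 * t**2                 # free-fall trajectory between Raman pulses
    delta_nu = K_B2 * B_of_z(z)**2                    # time-dependent clock-transition shift [Hz]
    gs = np.where(t < T, -1.0, 1.0)                   # crude three-pulse sensitivity weighting (sign assumed)
    dphi = 2 * np.pi * np.sum(gs * delta_nu) * (t[1] - t[0])   # phase bias [rad]
    dg = dphi / (k_eff * T**2)                        # gravity bias [m/s^2]
    print("gravity bias: %.3e m/s^2 (%.2f uGal)" % (dg, dg / 1e-8))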
Soft-tissue reactions following irradiation of primary brain and pituitary tumors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baglan, R.J.; Marks, J.E.
1981-04-01
One hundred and ninety-nine patients who received radiation therapy for a primary brain or pituitary tumor were studied for radiation-induced soft-tissue reactions of the cranium, scalp, ears and jaw. The frequency of these reactions was studied as a function of: the radiation dose 5 mm below the skin surface, dose distribution, field size and fraction size. Forty percent of patients had complete and permanent epilation, while 21% had some other soft-tissue complication, including: scalp swelling (6%), external otitis (6%), otitis media (5%), ear swelling (4%), etc. The frequency of soft-tissue reactions correlates directly with the radiation dose at 5 mm below the skin surface. Patients treated with small portals (<70 cm^2) had few soft-tissue reactions. The dose to superficial tissues, and hence the frequency of soft-tissue reactions can be reduced by: (1) using high-energy megavoltage beams; (2) using equal loading of beams; and (3) possibly avoiding the use of electron beams.
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for earth remote sensors, while vibration of the remote sensing platform is a major factor restricting high resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of image stabilization for simple scenes, this paper proposes using soft-sensor technology for image-motion prediction and focuses on optimizing the prediction algorithm. Simulation results indicate that the improved lucky image-motion stabilization algorithm, which combines a Back Propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computation speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
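The abstract does not spell out how the BP NN and SVM are combined, so the sketch below is only one plausible reading: two regressors trained on the same scene features, their displacement predictions averaged, and the relative prediction error reported. The data, feature dimension, and combination rule are invented for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    # synthetic stand-in data: scene feature vectors -> image-motion displacement [pixels]
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    y = X @ rng.normal(size=8) + 0.05 * rng.normal(size=500)

    bp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X[:400], y[:400])
    sv = SVR(kernel="rbf", C=10.0).fit(X[:400], y[:400])

    pred = 0.5 * (bp.predict(X[400:]) + sv.predict(X[400:]))   # simple BP-NN / SVM combination
    rel_err = np.abs(pred - y[400:]) / (np.abs(y[400:]) + 1e-9)
    print("mean relative error: %.3f" % rel_err.mean())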
Self-sensing of dielectric elastomer actuator enhanced by artificial neural network
NASA Astrophysics Data System (ADS)
Ye, Zhihang; Chen, Zheng
2017-09-01
Dielectric elastomer (DE) is a type of soft actuating material, the shape of which can be changed under electrical voltage stimuli. DE materials have promising usage in future soft actuators and sensors, such as soft robotics, energy harvesters, and wearable sensors. In this paper, a stripe DE actuator with integrated sensing capability is designed, fabricated, and characterized. Since the stripe actuator can be approximated as a compliant capacitor, it is possible to detect the actuator’s displacement by analyzing the actuator’s impedance change. An integrated sensing scheme that adds a high frequency probing signal into the actuation signal is developed. Electrical impedance changes in the probing signal are extracted by a fast Fourier transform algorithm, and nonlinear data fitting methods involving an artificial neural network are implemented to detect the actuator’s displacement. A series of experiments show that by improving data processing and analyzing methods, the integrated sensing method can achieve an error level below 1%.
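A minimal sketch of the probing-signal idea follows: a small high-frequency tone rides on the actuation signal, and the amplitude of that tone in the measured current, which scales with the actuator's capacitance and hence with displacement, is recovered from an FFT bin. The sampling rate, frequencies, and amplitudes are assumed values, and the neural-network mapping from probe amplitude to displacement reported in the paper is not reproduced here.

    import numpy as np

    fs = 20480.0                        # sampling rate [Hz] (chosen so the probe falls on an FFT bin)
    t = np.arange(2048) / fs
    f_act, f_probe = 1.0, 1000.0        # actuation and probing frequencies [Hz] (assumed)

    # synthetic measured current: actuation component plus a small probe component whose
    # amplitude depends on the (displacement-dependent) capacitance
    true_probe_amp = 0.042
    rng = np.random.default_rng(0)
    i = np.sin(2 * np.pi * f_act * t) + true_probe_amp * np.sin(2 * np.pi * f_probe * t) \
        + 0.002 * rng.normal(size=t.size)

    win = np.hanning(t.size)
    spec = np.fft.rfft(i * win)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    k = np.argmin(np.abs(freqs - f_probe))          # FFT bin closest to the probe frequency
    probe_amp = 2 * np.abs(spec[k]) / win.sum()     # amplitude estimate (Hann-window scaling)
    print(true_probe_amp, probe_amp)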
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-08-06
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-01-01
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495
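For orientation, the sketch below shows the conventional range-based baseline that such soft computing models refine: RSSI is inverted to distance with a log-distance path-loss model and a position is obtained by least-squares trilateration from three anchors. The path-loss constants, anchor layout, and noise level are hypothetical; in the paper, a learned ANFIS or GSA-ANN mapping would replace the analytic rssi_to_distance step.

    import numpy as np

    # log-distance path-loss model: RSSI(d) = RSSI_0 - 10 * n * log10(d / d_0)
    RSSI_0, d_0, n_exp = -45.0, 1.0, 2.2            # hypothetical calibration constants

    def rssi_to_distance(rssi):
        return d_0 * 10 ** ((RSSI_0 - rssi) / (10 * n_exp))

    def trilaterate(anchors, dists):
        # linearized least-squares position fix from three or more anchors
        (x1, y1), d1 = anchors[0], dists[0]
        A, b = [], []
        for (xi, yi), di in zip(anchors[1:], dists[1:]):
            A.append([2 * (xi - x1), 2 * (yi - y1)])
            b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
        return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]

    rng = np.random.default_rng(1)
    anchors = [(0.0, 0.0), (30.0, 0.0), (15.0, 20.0)]   # hypothetical ZigBee anchor layout [m]
    true_pos = np.array([12.0, 8.0])
    true_d = [float(np.hypot(*(true_pos - a))) for a in anchors]
    rssi = [RSSI_0 - 10 * n_exp * np.log10(d / d_0) + rng.normal(0, 1.0) for d in true_d]
    print(trilaterate(anchors, [rssi_to_distance(r) for r in rssi]))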
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Tsai, Yi-Ju; Powers, Christopher M
2013-01-01
Theoretically, a shoe that provides less friction could result in a greater slip distance and foot slipping velocity, thereby increasing the likelihood of falling. The purpose of this study was to investigate the effects of sole hardness on the probability of slip-induced falls. Forty young adults were randomized into a hard or a soft sole shoe group, and tested under both nonslippery and slippery floor conditions using a motion analysis system. The proportions of fall events in the hard- and soft-soled shoe groups were not statistically different. No differences were observed between shoe groups for average slip distance, peak and average heel velocity, and center of mass slipping velocity. A strong association was found between slip distance and the fall probability. Our results demonstrate that the probability of a slip-induced fall was not influenced by shoe hardness. Once a slip is induced, slip distance was the primary predictor of a slip-induced fall. © 2012 American Academy of Forensic Sciences.
Responsive and Adaptive Micro Wrinkles on Organic-inorganic Hybrid Materials.
Takahashi, Masahide
2018-04-24
Buckling-induced wrinkling is a common phenomenon in daily life, arising from mechanical instability at the interface of multi-layered systems. A variety of applications have been proposed for wrinkles with nanometer- to micrometer-scale periodicity on the surface of soft materials. In recent decades, researchers have been exploring wrinkles for a variety of sophisticated applications such as micro pattern fabrication, control of wettability, templating/directing substrates for elongated nano materials or viruses, size-selective adsorption/desorption of functional objects, cells or microorganisms, delamination-induced material fabrication such as micro-rolls, substrates for stretchable electronics, valves for microfluidic devices and soft actuators. Herein, recent advances in the fabrication and application of micro-wrinkles are reviewed. © 2018 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Groomed jets in heavy-ion collisions: sensitivity to medium-induced bremsstrahlung
NASA Astrophysics Data System (ADS)
Mehtar-Tani, Yacine; Tywoniuk, Konrad
2017-04-01
We argue that contemporary jet substructure techniques might facilitate a more direct measurement of hard medium-induced gluon bremsstrahlung in heavy-ion collisions, and focus specifically on the "soft drop declustering" procedure that singles out the two leading jet substructures. Assuming coherent jet energy loss, we find an enhancement of the distribution of the energy fractions shared by the two substructures at small subjet energy caused by hard medium-induced gluon radiation. Departures from this approximation are discussed, in particular, the effects of colour decoherence and the contamination of the grooming procedure by soft background. Finally, we propose a complementary observable, that is the ratio of the two-pronged probability in Pb-Pb to proton-proton collisions and discuss its sensitivity to various energy loss mechanisms.
Wind tunnel tests of main girder with Π-shaped cross section
NASA Astrophysics Data System (ADS)
Guo, Junfeng; Hong, Chengjing; Zheng, Shixiong; Zhu, Jinbo
2017-10-01
The wind-resistant performance of a cable-stayed bridge with a Π-shaped girder was investigated by means of wind tunnel tests. Aerodynamic coefficient experiments and wind-induced vibration experiments were conducted with a sectional model at a geometric scale of 1:60. The results show that this kind of girder satisfies the necessary conditions for aerodynamic stability. Soft flutter of the main girder is a coupled two-degree-of-freedom torsional-bending vibration with a single frequency. The amplitude of soft flutter follows a normal distribution, and the amplitude range varies with wind speed and angle of attack. The bridge deck auxiliary facilities can not only improve the critical soft flutter velocity, but also reduce the soft flutter amplitude and the amplitude growth rate.
Soft tissue tumors induced by monomeric ²³⁹Pu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, R.D.; Angus, W.; Taylor, G.N.
1995-10-01
Individual records of soft tissue tumor occurrence (lifetime incidence) among 236 beagles injected with ²³⁹Pu citrate as young adults and 131 comparable control beagles given no radioactivity enabled us to analyze the possible effects on soft tissue tumor induction resulting from internal exposure to ²³⁹Pu. A significant trend was identified in the proportion of animals having malignant liver tumors with increasing radiation dose from ²³⁹Pu. There was also a significant difference in the relative numbers of both malignant liver tumors (18.1 expected, 66 observed). Malignant tumors of the mouth, pancreas, and skin were more frequent among controls than among the dogs given ²³⁹Pu, as were tumors (malignant plus benign) of the mouth, pancreas, testis, and vagina. For all other tumor sites or types, there was no significant difference for both malignant and all (malignant plus benign) tumors. Mammary tumor occurrence appeared not to be associated with ²³⁹Pu incorporation. We conclude that the only soft-tissue neoplasia induced by the intake of ²³⁹Pu directly into blood is probably a liver tumor. 20 refs., 6 tabs.
Pals, Justin A; Wagner, Elizabeth D; Plewa, Michael J; Xia, Menghang; Attene-Ramos, Matias S
2017-08-01
Haloacetamides (HAMs) are cytotoxic, genotoxic, and mutagenic byproducts of drinking water disinfection. They are soft electrophilic compounds that form covalent bonds with the free thiol/thiolate in cysteine residues through an SN2 reaction mechanism. Toxicity of the monohalogenated HAMs (iodoacetamide, IAM; bromoacetamide, BAM; or chloroacetamide, CAM) varied depending on the halogen substituent. The aim of this research was to investigate how the halogen atom affects the reactivity and toxicological properties of HAMs, measured as induction of oxidative/electrophilic stress response and genotoxicity. Additionally, we wanted to determine how well in silico estimates of electrophilic softness matched thiol/thiolate reactivity and in vitro toxicological endpoints. Each of the HAMs significantly induced nuclear Rad51 accumulation and ARE signaling activity compared to a negative control. The rank order of effect was IAM>BAM>CAM for Rad51, and BAM≈IAM>CAM for ARE. In general, electrophilic softness and in chemico thiol/thiolate reactivity provided a qualitative indicator of toxicity, as the softer electrophiles IAM and BAM were more thiol/thiolate reactive and were more toxic than CAM. Copyright © 2017. Published by Elsevier B.V.
Lebda, Mohamed A; Tohamy, Hossam G; El-Sayed, Yasser S
2017-05-01
Dietary intake of fructose corn syrup in sweetened beverages is associated with the development of metabolic syndrome and obesity. We hypothesized that inflammatory cytokines play a role in lipid storage and induction of liver injury. Therefore, this study intended to explore the expression of adipocytokines and its link to hepatic damage. Rats were assigned to drink water, cola soft drink (free access) and aspartame (240 mg/kg body weight/day orally) for 2 months. The lipid profiles, liver antioxidants and pathology, and mRNA expression of adipogenic cytokines were evaluated. Subchronic intake of soft drink or aspartame substantially induced hyperglycemia and hypertriacylglycerolemia, as represented by increased serum glucose, triacylglycerol, low-density lipoprotein and very low-density lipoprotein cholesterol, with obvious visceral fatty deposition. These metabolic syndromes were associated with the up-regulation of leptin and down-regulation of adiponectin and peroxisome proliferator activated receptor-γ (PPAR-γ) expression. Moreover, alterations in serum transaminases accompanied by hepatic oxidative stress involving induction of malondialdehyde and reduction of superoxide dismutase, catalase, and glutathione peroxidase and glutathione levels are indicative of oxidative hepatic damage. Several cytoarchitecture alterations were detected in the liver, including degeneration, infiltration, necrosis, and fibrosis, predominantly with aspartame. These data suggest that long-term intake of soft drink or aspartame-induced hepatic damage may be mediated by the induction of hyperglycemia, lipid accumulation, and oxidative stress with the involvement of adipocytokines. Copyright © 2017 Elsevier Inc. All rights reserved.
A protocol for monitoring soft tissue motion under compression garments during drop landings.
Mills, Chris; Scurr, Joanna; Wood, Louise
2011-06-03
This study used a single-subject design to establish a valid and reliable protocol for monitoring soft tissue motion under compression garments during drop landings. One male participant performed six 40 cm drop landings onto a force platform, in three compression conditions (none, medium, high). Five reflective markers placed on the thigh under the compression garment and five over the garment were filmed using two cameras (1000 Hz). Following manual digitisation, marker coordinates were reconstructed and their resultant displacements and maximum change in separation distance between skin and garment markers were calculated. To determine reliability of marker application, 35 markers were attached to the thigh over the high compression garment and filmed. Markers were then removed and re-applied on three occasions; marker separation and distance to thigh centre of gravity were calculated. Results showed similar ground reaction forces during landing trials. Significant reductions in the maximum change in separation distance between markers from no compression to high compression landings were reported. Typical errors in marker movement under and over the garment were 0.1 mm in medium and high compression landings. Re-application of markers showed mean typical errors of 1 mm in marker separation and <3 mm relative to thigh centre of gravity. This paper presents a novel protocol that demonstrates sufficient sensitivity to detect reductions in soft tissue motion during landings in high compression garments compared to no compression. Additionally, markers placed under or over the garment demonstrate low variance in movement, and the protocol reports good reliability in marker re-application. Copyright © 2011 Elsevier Ltd. All rights reserved.
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-09-01
To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
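The in-house localization algorithm is not described in the abstract, so the following sketch only illustrates the general class of markerless, template-based tracking on 2D images that such systems belong to: a tumor template taken from a reference frame is matched against a search window in each new frame with normalized cross-correlation. The frame sizes, template placement, and synthetic blob are invented for the demo, and the authors' actual algorithm may differ substantially.

    import numpy as np

    def track_ncc(frame, template, search_center, search_radius):
        # exhaustive normalized cross-correlation search in a window around the last position
        th, tw = template.shape
        tz = (template - template.mean()) / (template.std() + 1e-12)
        best, best_pos = -2.0, search_center
        cy, cx = search_center
        for y in range(cy - search_radius, cy + search_radius + 1):
            for x in range(cx - search_radius, cx + search_radius + 1):
                patch = frame[y:y + th, x:x + tw]
                if patch.shape != template.shape:
                    continue
                pz = (patch - patch.mean()) / (patch.std() + 1e-12)
                score = float((tz * pz).mean())
                if score > best:
                    best, best_pos = score, (y, x)
        return best_pos, best

    # synthetic demo: a bright blob shifted by (+3, -2) pixels between two noisy frames
    rng = np.random.default_rng(0)
    frame0 = rng.normal(0, 0.05, (128, 128))
    frame1 = rng.normal(0, 0.05, (128, 128))
    yy, xx = np.mgrid[0:21, 0:21]
    blob = np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 30.0)
    frame0[50:71, 60:81] += blob
    frame1[53:74, 58:79] += blob
    template = frame0[50:71, 60:81]
    print(track_ncc(frame1, template, (50, 60), 8))   # expect a position near (53, 58)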
Soft mechanical stimulation induces a defense response against Botrytis cinerea in strawberry.
Tomas-Grau, Rodrigo Hernán; Requena-Serra, Fernando José; Hael-Conrad, Verónica; Martínez-Zamora, Martín Gustavo; Guerrero-Molina, María Fernanda; Díaz-Ricci, Juan Carlos
2018-02-01
Genes associated with plant mechanical stimulation were found in the strawberry genome. A soft mechanical stimulation (SMS) induces molecular and biochemical changes in strawberry plants, conferring protection against Botrytis cinerea. Plants have the capacity to induce a defense response after exposure to abiotic stresses, acquiring resistance towards pathogens. It was reported that when leaves of Arabidopsis thaliana were wounded or treated with a soft mechanical stimulation (SMS), they could resist much better the attack of the fungal pathogen Botrytis cinerea, and this effect was accompanied by an oxidative burst and the expression of touch-inducible (TCH) genes. However, no further work was carried out to better characterize the induced defense response. In this paper, we report that TCH genes were identified for the first time in the genomes of the strawberry species Fragaria ananassa (e.g. FaTCH2, FaTCH3, FaTCH4 and FaCML39) and Fragaria vesca (e.g. FvTCH2, FvTCH3, FvTCH4 and FvCML39). Phylogenetic studies revealed that the F. ananassa TCH genes exhibited high similarity with the orthologues of F. vesca and lower similarity with those of A. thaliana. We also present evidence that after SMS treatment of strawberry leaves, plants activate a rapid oxidative burst, callose deposition, and the up-regulation of TCH genes as well as plant defense genes such as FaPR1, FaCHI2-2, FaCAT, FaACS1 and FaOGBG-5. The latter represents the first report showing that TCH- and defense-induced genes participate in SMS-induced resistance in plants, providing a rational explanation for why plants exposed to an SMS treatment acquire enhanced resistance toward B. cinerea.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Soft-food diet induces oxidative stress in the rat brain.
Yoshino, Fumihiko; Yoshida, Ayaka; Hori, Norio; Ono, Yumie; Kimoto, Katsuhiko; Onozuka, Minoru; Lee, Masaichi Chang-il
2012-02-02
Decreased dopamine (DA) release in the hippocampus may be caused by dysfunctional mastication, although the mechanisms involved remain unclear. The present study examined the effects of soft- and hard-food diets on oxidative stress in the brain, and the relationship between these effects and hippocampal DA levels. The present study showed that DA release in the hippocampus was decreased in rats fed a soft-food diet. Electron spin resonance studies using the nitroxyl spin probe 3-methoxycarbonyl-2,2,5,5-tetramethylpyrrolidine-1-oxyl directly demonstrated a high level of oxidative stress in the rat brain due to soft-food diet feeding. In addition, we confirmed that DA reacts directly with reactive oxygen species such as the hydroxyl radical and superoxide. These observations suggest that soft-food diet feeding enhances oxidative stress, which leads to oxidation and a decrease in the release of DA in the hippocampus of rats. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Electro-Active Polymer Based Soft Tactile Interface for Wearable Devices.
Mun, Seongcheol; Yun, Sungryul; Nam, Saekwang; Park, Seung Koo; Park, Suntak; Park, Bong Je; Lim, Jeong Mook; Kyung, Ki-Uk
2018-01-01
This paper reports soft actuator based tactile stimulation interfaces applicable to wearable devices. The soft actuator is prepared by multi-layered accumulation of thin electro-active polymer (EAP) films. The multi-layered actuator is designed to produce electrically-induced convex protrusive deformation, which can be dynamically programmed for a wide range of tactile stimuli. The maximum vertical protrusion is and the output force is up to 255 mN. The soft actuators are embedded into the fingertip part of a glove and the front part of a forearm band, respectively. We have conducted two kinds of experiments with 15 subjects. Perceived magnitudes of the actuator's protrusion and vibrotactile intensity were measured at frequencies of 1 Hz and 191 Hz, respectively. Analysis of the user tests shows participants perceive variation of protrusion height at the finger pad and modulation of vibration intensity through the proposed soft actuator based tactile interface.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
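To make the stopping rule concrete, here is a small Python sketch of a Picard-type correction iteration on collocation nodes (a simplified stand-in for a full SDC sweep) in which sweeps continue until the change in the solution is small relative to that of the first sweep. The node choice, tolerance, and use of the sweep-to-sweep change as a residual proxy are illustrative assumptions; in a resilience setting, a bit flip corrupting the node values would simply trigger additional sweeps until the criterion is met again.

    import numpy as np

    def lagrange_int_weights(nodes, a):
        # W[m, j] = integral from a to nodes[m] of the j-th Lagrange basis polynomial
        n = len(nodes)
        W = np.zeros((n, n))
        for j in range(n):
            others = np.delete(nodes, j)
            c = np.poly(others) / np.prod(nodes[j] - others)   # coefficients of basis poly l_j
            C = np.polyint(c)
            for m in range(n):
                W[m, j] = np.polyval(C, nodes[m]) - np.polyval(C, a)
        return W

    def corrected_solve(f, y0, t0, t1, n_nodes=5, rel_tol=1e-10, max_sweeps=50):
        # Picard-type correction iteration with a residual-based stopping criterion
        nodes = t0 + (t1 - t0) * 0.5 * (1 - np.cos(np.linspace(0, np.pi, n_nodes)))
        W = lagrange_int_weights(nodes, t0)
        y = np.full(n_nodes, y0, dtype=float)        # provisional (constant) solution
        first_res = None
        for _ in range(max_sweeps):
            F = np.array([f(t, yk) for t, yk in zip(nodes, y)])
            y_new = y0 + W @ F                        # update from the integral form of the ODE
            res = np.max(np.abs(y_new - y))           # proxy for the collocation residual
            y = y_new
            if first_res is None:
                first_res = res
            elif res <= rel_tol * first_res:          # small relative to the first sweep: stop
                break
        return nodes, y

    # toy problem y' = -y on [0, 0.5]; exact solution exp(-t)
    nodes, y = corrected_solve(lambda t, y: -y, 1.0, 0.0, 0.5)
    print(abs(y[-1] - np.exp(-0.5)))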
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
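A compact version of the iteration described above fits in a few lines of numpy. The sketch below omits the warm starts over a grid of regularization values and the low-rank SVD tricks discussed in the paper, and the demo data and regularization value are arbitrary.

    import numpy as np

    def soft_impute(X, mask, lam, max_iter=200, tol=1e-5):
        # X: matrix with arbitrary values at unobserved entries; mask: True where observed
        Z = np.zeros_like(X)
        for _ in range(max_iter):
            filled = np.where(mask, X, Z)                 # keep observed entries, fill the rest
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            s_thr = np.maximum(s - lam, 0.0)              # soft-threshold the singular values
            Z_new = (U * s_thr) @ Vt
            if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1.0):
                return Z_new
            Z = Z_new
        return Z

    # small demo: recover a rank-2 matrix from roughly 60% of its entries
    rng = np.random.default_rng(0)
    M = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 40))
    mask = rng.random(M.shape) < 0.6
    M_hat = soft_impute(np.where(mask, M, 0.0), mask, lam=1.0)
    print(np.linalg.norm((M_hat - M)[~mask]) / np.linalg.norm(M[~mask]))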
Spectral Regularization Algorithms for Learning Large Incomplete Matrices
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-01-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
Fais, Paolo; Viero, Alessia; Viel, Guido; Giordano, Renzo; Raniero, Dario; Kusstatscher, Stefano; Giraudo, Chiara; Cecchetto, Giovanni; Montisci, Massimo
2018-04-07
Necrotizing fasciitis (NF) is a life-threatening infection of soft tissues spreading along the fasciae to the surrounding musculature, subcutaneous fat and overlying skin areas that can rapidly lead to septic shock and death. Due to the pandemic increase of medical malpractice lawsuits, above all in Western countries, the forensic pathologist is frequently asked to investigate post-mortem cases of NF in order to determine the cause of death and to identify any related negligence and/or medical error. Herein, we review the medical literature dealing with cases of NF in a post-mortem setting, present a case series of seven NF fatalities and discuss the main ante-mortem and post-mortem diagnostic challenges of both clinical and forensic interests. In particular, we address the following issues: (1) origin of soft tissue infections, (2) micro-organisms involved, (3) time of progression of the infection to NF, (4) clinical and histological staging of NF and (5) pros and cons of clinical and laboratory scores, specific forensic issues related to the reconstruction of the ideal medical conduct and the evaluation of the causal value/link of any eventual medical error.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray; Kolla, Hemanth; Minion, Michael
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
Soft-light overhead illumination systems improve laparoscopic task performance.
Takai, Akihiro; Takada, Yasutsugu; Motomura, Hideki; Teramukai, Satoshi
2014-02-01
The aim of this study was to evaluate the impact of attached shadow cues on laparoscopic task performance. We developed a soft-light overhead illumination system (SOIS) that produced attached shadows on objects. We compared results using the SOIS with those using a conventional illumination system with regard to laparoscopic experience and laparoscope-to-target distances (LTDs). Forty-two medical students and 23 surgeons participated in the study. A peg transfer task (LTD, 120 mm) for students and surgeons, and a suture removal task (LTD, 30 mm) for students were performed. Illumination systems were randomly assigned to each task. Endpoints were: total number of peg transfers; percentage of peg-dropping errors; and total execution time for suture removal. After the task, participants filled out a questionnaire on their preference for a particular illumination system. Total number of peg transfers was greater with the SOIS for both students and surgeons. Percentage of peg-dropping errors for surgeons was lower with the SOIS. Total execution time for suture removal was shorter with the SOIS. Forty-five participants (69% of the total) judged task performance to be easier with the SOIS. The present results confirm that the SOIS improves laparoscopic task performance, regardless of previous laparoscopic experience or LTD.
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban; Jones, Christopher
2008-01-01
This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.
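For the BPSK case, one simple way to use extrinsic soft information for carrier-phase estimation is to strip the modulation with the decoder's soft symbol estimates, E[s_k] = tanh(LLR_k/2), and take the angle of the resulting correlation. The snippet below sketches this feed-forward version only; the closed-loop structure of the paper and its particular LLR sign conventions are not reproduced, and the mapping and noise level are assumed.

    import numpy as np

    def soft_decision_phase_estimate(r, llr):
        # r: complex received BPSK samples, r_k ~ s_k * exp(j*theta) + noise
        # llr: extrinsic log-likelihood ratios from the iterative decoder (positive favors s = +1)
        s_soft = np.tanh(llr / 2.0)            # soft symbol estimates E[s_k]
        return np.angle(np.sum(r * s_soft))    # modulation-stripped phase estimate

    # toy check with a known phase offset and strong extrinsic information
    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, 1000)
    s = 1.0 - 2.0 * bits                       # BPSK mapping: 0 -> +1, 1 -> -1
    theta = 0.3
    noise = 0.35 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
    r = s * np.exp(1j * theta) + noise
    llr = 8.0 * s                              # stand-in for confident decoder output
    print(soft_decision_phase_estimate(r, llr))   # expect a value close to 0.3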
Non-Fourier based thermal-mechanical tissue damage prediction for thermal ablation.
Li, Xin; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2017-01-02
Prediction of tissue damage under thermal loads plays an important role in thermal ablation planning. A new methodology is presented in this paper by combining non-Fourier bio-heat transfer, constitutive elastic mechanics as well as non-rigid motion dynamics to predict and analyze thermal distribution, thermal-induced mechanical deformation and thermal-mechanical damage of soft tissues under thermal loads. Simulations and comparison analysis demonstrate that the proposed methodology based on the non-Fourier bio-heat transfer can account for the thermal-induced mechanical behaviors of soft tissues and predict tissue thermal damage more accurately than the classical Fourier bio-heat transfer based model.
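The following is a minimal 1D sketch of the two ingredients named above: a Cattaneo-Vernotte (non-Fourier) heat equation, tau*T_tt + T_t = alpha*T_xx, solved with explicit finite differences, and an Arrhenius damage integral accumulated from the resulting temperature history. The tissue properties, relaxation time, Arrhenius constants, and boundary temperatures are typical literature values used here as assumptions rather than the parameters of the paper, and blood perfusion and the mechanical coupling are omitted.

    import numpy as np

    alpha = 1.4e-7       # thermal diffusivity of soft tissue [m^2/s] (typical value)
    tau   = 15.0         # thermal relaxation time for the non-Fourier model [s] (assumed)
    A_arr = 7.39e39      # Arrhenius frequency factor [1/s] (classic soft-tissue value)
    E_arr = 2.577e5      # Arrhenius activation energy [J/mol]
    R_gas = 8.314        # gas constant [J/(mol K)]

    nx, dx = 81, 0.5e-3              # 4 cm of tissue
    dt, nt = 0.01, 30000             # 300 s of heating
    x = np.arange(nx) * dx

    T_old = np.full(nx, 37.0)        # previous time level [deg C]
    T = T_old.copy()                 # current time level
    omega = np.zeros(nx)             # Arrhenius damage integral

    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        # explicit update of tau*T_tt + T_t = alpha*T_xx (Cattaneo-Vernotte equation)
        T_new = (alpha * lap + tau * (2 * T - T_old) / dt**2 + T_old / (2 * dt)) \
                / (tau / dt**2 + 1 / (2 * dt))
        T_new[0] = 80.0              # ablation probe held at 80 deg C (assumed)
        T_new[-1] = 37.0             # far boundary at body temperature
        T_old, T = T, T_new
        omega += dt * A_arr * np.exp(-E_arr / (R_gas * (T + 273.15)))

    damaged = x[omega >= 1.0]
    print("damage depth (omega >= 1):", damaged.max() if damaged.size else 0.0)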
Non-Fourier based thermal-mechanical tissue damage prediction for thermal ablation
Li, Xin; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2017-01-01
Prediction of tissue damage under thermal loads plays an important role in thermal ablation planning. A new methodology is presented in this paper by combining non-Fourier bio-heat transfer, constitutive elastic mechanics as well as non-rigid motion dynamics to predict and analyze thermal distribution, thermal-induced mechanical deformation and thermal-mechanical damage of soft tissues under thermal loads. Simulations and comparison analysis demonstrate that the proposed methodology based on the non-Fourier bio-heat transfer can account for the thermal-induced mechanical behaviors of soft tissues and predict tissue thermal damage more accurately than the classical Fourier bio-heat transfer based model. PMID:27690290
NASA Astrophysics Data System (ADS)
Nakashima, Yoshito; Komatsubara, Junko
Unconsolidated soft sediments deform and mix complexly by seismically induced fluidization. Such geological soft-sediment deformation structures (SSDSs) recorded in boring cores were imaged by X-ray computed tomography (CT), which enables visualization of the inhomogeneous spatial distribution of iron-bearing mineral grains as strong X-ray absorbers in the deformed strata. Multifractal analysis was applied to the two-dimensional (2D) CT images with various degrees of deformation and mixing. The results show that the distribution of the iron-bearing mineral grains is multifractal for less deformed/mixed strata and almost monofractal for fully mixed (i.e. almost homogenized) strata. Computer simulations of deformation of real and synthetic digital images were performed using the egg-beater flow model. The simulations successfully reproduced the transformation from the multifractal spectra into almost monofractal spectra (i.e. almost convergence on a single point) with an increase in deformation/mixing intensity. The present study demonstrates that multifractal analysis coupled with X-ray CT and the mixing flow model is useful to quantify the complexity of seismically induced SSDSs, standing as a novel method for the evaluation of cores for seismic risk assessment.
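A standard way to quantify the mono- versus multifractal character of such images is through the generalized box-counting dimensions D_q: the image intensity is treated as a measure, coarse-grained over boxes of increasing size, and the q-th moments are regressed against box size. The sketch below follows that textbook recipe on a synthetic image and is not the specific multifractal formalism or preprocessing used by the authors; a strongly q-dependent D_q indicates multifractality, while a nearly constant D_q is consistent with the monofractal behavior reported for fully mixed strata.

    import numpy as np

    def generalized_dimensions(img, qs, box_sizes):
        # img: 2D non-negative array (e.g. an X-ray attenuation map), treated as a measure
        mu = img / img.sum()
        n = img.shape[0]
        Dq = {}
        for q in qs:
            log_eps, log_chi = [], []
            for s in box_sizes:
                m = (n // s) * s
                p = mu[:m, :m].reshape(m // s, s, m // s, s).sum(axis=(1, 3))
                p = p[p > 0]
                if abs(q - 1.0) < 1e-12:
                    log_chi.append(np.sum(p * np.log(p)))    # information-dimension limit
                else:
                    log_chi.append(np.log(np.sum(p ** q)))
                log_eps.append(np.log(s / n))
            slope = np.polyfit(log_eps, log_chi, 1)[0]
            Dq[q] = slope if abs(q - 1.0) < 1e-12 else slope / (q - 1.0)
        return Dq

    # demo on a synthetic heterogeneous image (replace with a segmented CT slice)
    rng = np.random.default_rng(0)
    img = rng.random((256, 256)) ** 3
    print(generalized_dimensions(img, qs=[0, 1, 2], box_sizes=[2, 4, 8, 16, 32]))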
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
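As a rough sketch of the locally weighted idea (simplified here to a nearest-neighbour local PLS rather than the continuously weighted LW-PLS of the paper), each new RVA viscogram is predicted from a PLS model fitted only to the most similar calibration curves. The synthetic data, neighbourhood size, and number of latent variables are placeholders; real use would substitute measured viscograms and reference hardness values.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def local_pls_predict(X_cal, y_cal, x_new, n_local=30, n_components=5):
        # select the calibration viscograms closest to the query and fit a local PLS model
        d = np.linalg.norm(X_cal - x_new, axis=1)
        idx = np.argsort(d)[:n_local]
        pls = PLSRegression(n_components=min(n_components, n_local - 1))
        pls.fit(X_cal[idx], y_cal[idx])
        return float(np.ravel(pls.predict(x_new.reshape(1, -1)))[0])

    # demo with synthetic "viscograms" standing in for real RVA curves and hardness values
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 200))                 # 120 samples x 200 viscogram points
    y = X[:, :50].mean(axis=1) + 0.1 * rng.normal(size=120)
    preds = [local_pls_predict(X[:100], y[:100], x) for x in X[100:]]
    rmsep = np.sqrt(np.mean((np.array(preds) - y[100:]) ** 2))
    print("RMSEP:", rmsep)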
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and the planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with the planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with the planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
NASA Astrophysics Data System (ADS)
Janek Strååt, Sara; Andreassen, Björn; Jonsson, Cathrine; Noz, Marilyn E.; Maguire, Gerald Q., Jr.; Näfstadius, Peder; Näslund, Ingemar; Schoenahl, Frederic; Brahme, Anders
2013-08-01
The purpose of this study was to investigate in vivo verification of radiation treatment with high energy photon beams using PET/CT to image the induced positron activity. The measurements of the positron activation induced in a preoperative rectal cancer patient and a prostate cancer patient following 50 MV photon treatments are presented. Total doses of 5 and 8 Gy, respectively, were delivered to the tumors. Imaging was performed with a 64-slice PET/CT scanner for 30 min, starting 7 min after the end of the treatment. The CT volume from the PET/CT and the treatment planning CT were coregistered by matching anatomical reference points in the patient. The treatment delivery was imaged in vivo based on the distribution of the induced positron emitters produced by photonuclear reactions in tissue mapped on to the associated dose distribution of the treatment plan. The results showed that the spatial distribution of induced activity in both patients agreed well with the delivered beam portals of the treatment plans in the entrance subcutaneous fat regions but less so in blood and oxygen rich soft tissues. For the preoperative rectal cancer patient, however, a 2 ± 0.5 cm misalignment was observed in the cranial-caudal direction of the patient between the induced activity distribution and the treatment plan, indicating a patient setup error relative to the beam. No misalignment of this kind was seen in the prostate cancer patient. However, due to an error in the rapid patient setup in the PET/CT scanner, a slight mis-positioning of the patient in the PET/CT was observed in all three planes, resulting in a deformed activity distribution compared to the treatment plan. The present study indicates that the positron emitters induced by high energy photon beams can be measured quite accurately using PET imaging of subcutaneous fat to allow portal verification of the delivered treatment beams. Measurement of the induced activity in the patient 7 min after receiving 5 Gy involved count rates which were about 20 times lower than those of a patient undergoing a standard 18F-FDG examination. When using a combination of short lived nuclides such as 15O (half-life: 2 min) and 11C (half-life: 20 min) with low activity it is not optimal to use clinical reconstruction protocols. Thus, it might be desirable to further optimize reconstruction parameters as well as to address hardware improvements in realizing in vivo treatment verification with PET/CT in the future. A significant improvement with regard to 15O imaging could also be expected by having the PET/CT unit located close to the radiation treatment room.
[Relations between health information systems and patient safety].
Nøhr, Christian
2012-11-05
Health information systems have the potential to reduce medical errors, and indeed many studies have shown a significant reduction. However, if the systems are not designed and implemented properly, there is evidence to suggest that new types of errors will arise, i.e., technology-induced errors. Health information systems will need to undergo a more rigorous evaluation. Usability evaluation and simulation tests with humans in the loop can help to detect and prevent technology-induced errors before the systems are deployed in real health-care settings.
Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L
2010-08-23
We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.
Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.
2010-01-01
We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237
Shear wave propagation in anisotropic soft tissues and gels
Namani, Ravi; Bayly, Philip V.
2013-01-01
The propagation of shear waves in soft tissue can be visualized by magnetic resonance elastography (MRE) [1] to characterize tissue mechanical properties. Dynamic deformation of brain tissue arising from shear wave propagation may underlie the pathology of blast-induced traumatic brain injury. White matter in the brain, like other biological materials, exhibits a transversely isotropic structure, due to the arrangement of parallel fibers. Appropriate mathematical models and well-characterized experimental systems are needed to understand wave propagation in these structures. In this paper we review the theory behind waves in anisotropic, soft materials, including small-amplitude waves superimposed on finite deformation of a nonlinear hyperelastic material. Some predictions of this theory are confirmed in experimental studies of a soft material with controlled anisotropy: magnetically-aligned fibrin gel. PMID:19963987
Soft drinks and body weight development in childhood: is there a relationship?
Libuda, Lars; Kersting, Mathilde
2009-11-01
The high sugar content of regular soft drinks has brought up discussions on their influence on energy balance and body weight, especially in childhood and adolescence. This review examines the evidence for a causal relationship between soft drink consumption and excess weight gain in childhood and identifies potential underlying mechanisms. Although results from cohort studies, in contrast to those from intervention studies, are not unequivocal, there is evidence for a detrimental effect of soft drink consumption on body weight in childhood. This impact seems to be induced by an inadequate energy compensation after the consumption of sugar-containing beverages. Because of the similar composition of high fructose corn syrup (HFCS) and sucrose, it is implausible that these types of sugar in soft drinks can cause substantially different effects on body weight. The replacement of soft drinks and other sugar-containing beverages such as fruit juices by noncaloric alternatives seems to be a promising approach for the prevention of overweight in childhood and adolescence. However, as the cause of overweight and obesity is multifactorial, the limitation of soft drink consumption needs to be incorporated into a complex strategy for obesity prevention.
Soft lubrication: The elastohydrodynamics of nonconforming and conforming contacts
NASA Astrophysics Data System (ADS)
Skotheim, J. M.; Mahadevan, L.
2005-09-01
We study the lubrication of fluid-immersed soft interfaces and show that elastic deformation couples tangential and normal forces and thus generates lift. We consider materials that deform easily, due to either geometry (e.g., a shell) or constitutive properties (e.g., a gel or a rubber), so that the effects of pressure and temperature on the fluid properties may be neglected. Four different system geometries are considered: a rigid cylinder moving parallel to a soft layer coating a rigid substrate; a soft cylinder moving parallel to a rigid substrate; a cylindrical shell moving parallel to a rigid substrate; and finally a cylindrical conforming journal bearing coated with a thin soft layer. In addition, for the particular case of a soft layer coating a rigid substrate, we consider both elastic and poroelastic material responses. For all these cases, we find the same generic behavior: there is an optimal combination of geometric and material parameters that maximizes the dimensionless normal force as a function of the softness parameter η = (hydrodynamic pressure)/(elastic stiffness) = (surface deflection)/(gap thickness), which characterizes the fluid-induced deformation of the interface. The corresponding cases for a spherical slider are treated using scaling concepts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuratomi, Y.; Ono, M.; Yasutake, C.
1987-01-01
A mutant clone (MO-5) was originally isolated as a clone resistant to the Na⁺/K⁺ ionophoric antibiotic monensin from mouse Balb/c3T3 cells. MO-5 was found to show low receptor-endocytosis activity for epidermal growth factor (EGF): binding activity for EGF in MO-5 was less than one tenth of that in Balb/c3T3. Anchorage-independent growth of MO-5 was compared to that of Balb/c3T3 when assayed by colony formation capacity in soft agar. Coadministration of EGF and TGF-β efficiently enhanced anchorage-independent growth of normal rat kidney (NRK) cells, but neither factor alone was competent to promote the anchorage-independent growth. The frequency of colonies appearing in soft agar of MO-5 or Balb/c3T3 was significantly enhanced by TGF-β while EGF did not further enhance that of MO-5 or Balb/c3T3. Colonies of Balb/c3T3 formed in soft agar in the presence of TGF-β showed low colony formation capacity in soft agar in the absence of TGF-β. Colonies of MO-5 formed by TGF-β in soft agar, however, showed high colony formation capacity in soft agar in the absence of TGF-β. Pretreatment of MO-5 with TGF-β induced secretion of TGF-β-like activity from the cells, while the treatment of Balb/c3T3 did not induce the secretion of a significant amount of TGF-β-like activity. The loss of EGF-receptor activity in the stable expression and maintenance of the transformed phenotype in MO-5 is discussed.
NASA Astrophysics Data System (ADS)
Wang, Yu; Sun, Qingyang; Xiao, Jianliang
2018-02-01
Highly organized hierarchical surface morphologies possess various intriguing properties that could find important potential applications. In this paper, we demonstrate a facile approach to simultaneously form multiscale hierarchical surface morphologies through sequential wrinkling. This method combines surface wrinkling induced by thermal expansion and mechanical strain on a three-layer structure composed of an aluminum film, a hard Polydimethylsiloxane (PDMS) film, and a soft PDMS substrate. Deposition of the aluminum film on hard PDMS induces biaxial wrinkling due to thermal expansion mismatch, and recovering the prestrain in the soft PDMS substrate leads to wrinkling of the hard PDMS film. In total, three orders of wrinkling patterns form in this process, with wavelength and amplitude spanning 3 orders of magnitude in length scale. By increasing the prestrain in the soft PDMS substrate, a hierarchical wrinkling-folding structure was also obtained. This approach can be easily extended to other thin films for fabrication of multiscale hierarchical surface morphologies with potential applications in different areas.
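The hierarchy of length scales described above is consistent with the classical wrinkling result that the wavelength scales with film thickness and the film-to-substrate stiffness contrast, lambda ≈ 2*pi*h*(E_f_bar/(3*E_s_bar))^(1/3) with E_bar = E/(1 - nu^2). The snippet below evaluates this estimate for two illustrative film/substrate pairs; the moduli and thicknesses are hypothetical and are not taken from the paper.

    import numpy as np

    def plane_strain_modulus(E, nu):
        return E / (1.0 - nu ** 2)

    def wrinkle_wavelength(h_film, E_film, nu_film, E_sub, nu_sub):
        # classical small-strain wavelength of a stiff film wrinkling on a compliant substrate
        ratio = plane_strain_modulus(E_film, nu_film) / (3.0 * plane_strain_modulus(E_sub, nu_sub))
        return 2.0 * np.pi * h_film * ratio ** (1.0 / 3.0)

    # hypothetical values: a ~100 nm aluminum film on hard PDMS, and a ~10 um hard-PDMS skin on soft PDMS
    print(wrinkle_wavelength(100e-9, 70e9, 0.35, 3e6, 0.49))   # first-order (short) wrinkles [m]
    print(wrinkle_wavelength(10e-6, 3e6, 0.49, 0.1e6, 0.49))   # second-order (long) wrinkles [m]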
Buckligami: Actuation of soft structures through mechanical instabilities
NASA Astrophysics Data System (ADS)
Lazarus, Arnaud; Reis, Pedro
2013-03-01
We present a novel mechanism for actuating soft structures that is triggered through buckling. Our elastomeric samples are rapid-prototyped using digital fabrication and comprise a cylindrical shell patterned with an array of voids, each of which is covered by a thin membrane. Decreasing the internal pressure of the structure induces local buckling of the ligaments of the pattern, resulting in controllable folding of the global structure. Using rigid inclusions to plug the voids in specific geometric arrangements allows us to excite a variety of different fundamental motions of the cylindrical shell, including flexure and twist. We refer to this new mechanism of buckling-induced folding as ``buckligami.'' Given that geometry, elasticity and buckling are the underlying ingredients of this local folding mechanism, the global actuation is scalable, reversible and repeatable. Characterization and rationalization of our experiments provide crucial fundamental understanding to aid the design of new scale-independent actuators, with potential implications in the field of soft robotics.
Groomed jets in heavy-ion collisions: sensitivity to medium-induced bremsstrahlung
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehtar-Tani, Yacine; Tywoniuk, Konrad
Here, we argue that contemporary jet substructure techniques might facilitate a more direct measurement of hard medium-induced gluon bremsstrahlung in heavy-ion collisions, and focus specifically on the “soft drop declustering” procedure that singles out the two leading jet substructures. Assuming coherent jet energy loss, we find an enhancement of the distribution of the energy fractions shared by the two substructures at small subjet energy caused by hard medium-induced gluon radiation. Departures from this approximation are discussed, in particular, the effects of colour decoherence and the contamination of the grooming procedure by soft background. Finally, we propose a complementary observable, that is the ratio of the two-pronged probability in Pb-Pb to proton-proton collisions and discuss its sensitivity to various energy loss mechanisms.
Groomed jets in heavy-ion collisions: sensitivity to medium-induced bremsstrahlung
Mehtar-Tani, Yacine; Tywoniuk, Konrad
2017-04-21
Here, we argue that contemporary jet substructure techniques might facilitate a more direct measurement of hard medium-induced gluon bremsstrahlung in heavy-ion collisions, and focus specifically on the “soft drop declustering” procedure that singles out the two leading jet substructures. Assuming coherent jet energy loss, we find an enhancement of the distribution of the energy fractions shared by the two substructures at small subjet energy caused by hard medium-induced gluon radiation. Departures from this approximation are discussed, in particular, the effects of colour decoherence and the contamination of the grooming procedure by soft background. Finally, we propose a complementary observable, that is, the ratio of the two-pronged probability in Pb-Pb to proton-proton collisions, and discuss its sensitivity to various energy loss mechanisms.
Schrider, Daniel R.; Mendes, Fábio K.; Hahn, Matthew W.; Kern, Andrew D.
2015-01-01
Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps. This general problem of “soft shoulders” underscores the difficulty in differentiating soft and partial sweeps from hard-sweep scenarios in molecular population genomics data. The soft-shoulder effect also implies that the more common hard sweeps have been in recent evolutionary history, the more prevalent spurious signatures of soft or partial sweeps may appear in some genome-wide scans. PMID:25716978
Schrider, Daniel R; Mendes, Fábio K; Hahn, Matthew W; Kern, Andrew D
2015-05-01
Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps. This general problem of "soft shoulders" underscores the difficulty in differentiating soft and partial sweeps from hard-sweep scenarios in molecular population genomics data. The soft-shoulder effect also implies that the more common hard sweeps have been in recent evolutionary history, the more prevalent spurious signatures of soft or partial sweeps may appear in some genome-wide scans. Copyright © 2015 by the Genetics Society of America.
Soft tissue wound healing around teeth and dental implants.
Sculean, Anton; Gruber, Reinhard; Bosshardt, Dieter D
2014-04-01
To provide an overview on the biology and soft tissue wound healing around teeth and dental implants. This narrative review focuses on cell biology and histology of soft tissue wounds around natural teeth and dental implants. The available data indicate that: (a) Oral wounds follow a similar pattern. (b) The tissue specificities of the gingival, alveolar and palatal mucosa appear to be innately and not necessarily functionally determined. (c) The granulation tissue originating from the periodontal ligament or from connective tissue originally covered by keratinized epithelium has the potential to induce keratinization. However, it also appears that deep palatal connective tissue may not have the same potential to induce keratinization as the palatal connective tissue originating from an immediately subepithelial area. (d) Epithelial healing following non-surgical and surgical periodontal therapy appears to be completed after a period of 7–14 days. Structural integrity of a maturing wound between a denuded root surface and a soft tissue flap is achieved at approximately 14 days post-surgery. (e) The formation of the biological width and maturation of the barrier function around transmucosal implants requires 6–8 weeks of healing. (f) The established peri-implant soft connective tissue resembles a scar tissue in composition, fibre orientation, and vasculature. (g) The peri-implant junctional epithelium may reach a greater final length under certain conditions such as implants placed into fresh extraction sockets versus conventional implant procedures in healed sites. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Bezrukov, Ilja; Schmidt, Holger; Mantlik, Frédéric; Schwenzer, Nina; Brendle, Cornelia; Schölkopf, Bernhard; Pichler, Bernd J
2013-10-01
Hybrid PET/MR systems have recently entered clinical practice. Thus, the accuracy of MR-based attenuation correction in simultaneously acquired data can now be investigated. We assessed the accuracy of 4 methods of MR-based attenuation correction in lesions within soft tissue, bone, and MR susceptibility artifacts: 2 segmentation-based methods (SEG1, provided by the manufacturer, and SEG2, a method with atlas-based susceptibility artifact correction); an atlas- and pattern recognition-based method (AT&PR), which also used artifact correction; and a new method combining AT&PR and SEG2 (SEG2wBONE). Attenuation maps were calculated for the PET/MR datasets of 10 patients acquired on a whole-body PET/MR system, allowing for simultaneous acquisition of PET and MR data. Eighty percent iso-contour volumes of interest were placed on lesions in soft tissue (n = 21), in bone (n = 20), near bone (n = 19), and within or near MR susceptibility artifacts (n = 9). Relative mean volume-of-interest differences were calculated with CT-based attenuation correction as a reference. For soft-tissue lesions, none of the methods revealed a significant difference in PET standardized uptake value relative to CT-based attenuation correction (SEG1, -2.6% ± 5.8%; SEG2, -1.6% ± 4.9%; AT&PR, -4.7% ± 6.5%; SEG2wBONE, 0.2% ± 5.3%). For bone lesions, underestimation of PET standardized uptake values was found for all methods, with minimized error for the atlas-based approaches (SEG1, -16.1% ± 9.7%; SEG2, -11.0% ± 6.7%; AT&PR, -6.6% ± 5.0%; SEG2wBONE, -4.7% ± 4.4%). For lesions near bone, underestimations of lower magnitude were observed (SEG1, -12.0% ± 7.4%; SEG2, -9.2% ± 6.5%; AT&PR, -4.6% ± 7.8%; SEG2wBONE, -4.2% ± 6.2%). For lesions affected by MR susceptibility artifacts, quantification errors could be reduced using the atlas-based artifact correction (SEG1, -54.0% ± 38.4%; SEG2, -15.0% ± 12.2%; AT&PR, -4.1% ± 11.2%; SEG2wBONE, 0.6% ± 11.1%). For soft-tissue lesions, none of the evaluated methods showed statistically significant errors. For bone lesions, significant underestimations of -16% and -11% occurred for methods in which bone tissue was ignored (SEG1 and SEG2). In the present attenuation correction schemes, uncorrected MR susceptibility artifacts typically result in reduced attenuation values, potentially leading to highly reduced PET standardized uptake values, rendering lesions indistinguishable from background. While AT&PR and SEG2wBONE show accurate results in both soft tissue and bone, SEG2wBONE uses a two-step approach for tissue classification, which increases the robustness of prediction and can be applied retrospectively if more precision in bone areas is needed.
Chemical Analysis of the Moon at the Surveyor VI Landing Site: Preliminary Results.
Turkevich, A L; Patterson, J H; Franzgrote, E J
1968-06-07
The alpha-scattering experiment aboard soft-landing Surveyor VI has provided a chemical analysis of the surface of the moon in Sinus Medii. The preliminary results indicate that, within experimental errors, the composition is the same as that found by Surveyor V in Mare Tranquillitatis. This finding suggests that large portions of the lunar maria resemble basalt in composition.
Single event upset in avionics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taber, A.; Normand, E.
1993-04-01
Data from military/experimental flights and laboratory testing indicate that typical non-radiation-hardened 64K and 256K static random access memories (SRAMs) can experience a significant soft upset rate at aircraft altitudes due to energetic neutrons created by cosmic ray interactions in the atmosphere. It is suggested that error detection and correction (EDAC) circuitry be considered for all avionics designs containing large amounts of semiconductor memory.
Sarrafpour, Babak; Swain, Michael; Li, Qing; Zoellner, Hans
2013-01-01
Intermittent tongue, lip and cheek forces influence precise tooth position, so we here examine the possibility that tissue remodelling driven by functional bite-force-induced jaw-strain accounts for tooth eruption. Notably, although a separate true ‘eruptive force’ is widely assumed, there is little direct evidence for such a force. We constructed a three dimensional finite element model from axial computerized tomography of an 8 year old child mandible containing 12 erupted and 8 unerupted teeth. Tissues modelled included: cortical bone, cancellous bone, soft tissue dental follicle, periodontal ligament, enamel, dentine, pulp and articular cartilage. Strain and hydrostatic stress during incisive and unilateral molar bite force were modelled, with force applied via medial and lateral pterygoid, temporalis, masseter and digastric muscles. Strain was maximal in the soft tissue follicle as opposed to surrounding bone, consistent with follicle as an effective mechanosensor. Initial numerical analysis of dental follicle soft tissue overlying crowns and beneath the roots of unerupted teeth was of volume and hydrostatic stress. To numerically evaluate biological significance of differing hydrostatic stress levels normalized for variable finite element volume, ‘biological response units’ in Nmm were defined and calculated by multiplication of hydrostatic stress and volume for each finite element. Graphical representations revealed similar overall responses for individual teeth regardless if incisive or right molar bite force was studied. There was general compression in the soft tissues over crowns of most unerupted teeth, and general tension in the soft tissues beneath roots. Not conforming to this pattern were the unerupted second molars, which do not erupt at this developmental stage. Data support a new hypothesis for tooth eruption, in which the follicular soft tissues detect bite-force-induced bone-strain, and direct bone remodelling at the inner surface of the surrounding bony crypt, with the effect of enabling tooth eruption into the mouth. PMID:23554928
Sarrafpour, Babak; Swain, Michael; Li, Qing; Zoellner, Hans
2013-01-01
Intermittent tongue, lip and cheek forces influence precise tooth position, so we here examine the possibility that tissue remodelling driven by functional bite-force-induced jaw-strain accounts for tooth eruption. Notably, although a separate true 'eruptive force' is widely assumed, there is little direct evidence for such a force. We constructed a three dimensional finite element model from axial computerized tomography of an 8 year old child mandible containing 12 erupted and 8 unerupted teeth. Tissues modelled included: cortical bone, cancellous bone, soft tissue dental follicle, periodontal ligament, enamel, dentine, pulp and articular cartilage. Strain and hydrostatic stress during incisive and unilateral molar bite force were modelled, with force applied via medial and lateral pterygoid, temporalis, masseter and digastric muscles. Strain was maximal in the soft tissue follicle as opposed to surrounding bone, consistent with follicle as an effective mechanosensor. Initial numerical analysis of dental follicle soft tissue overlying crowns and beneath the roots of unerupted teeth was of volume and hydrostatic stress. To numerically evaluate biological significance of differing hydrostatic stress levels normalized for variable finite element volume, 'biological response units' in Nmm were defined and calculated by multiplication of hydrostatic stress and volume for each finite element. Graphical representations revealed similar overall responses for individual teeth regardless if incisive or right molar bite force was studied. There was general compression in the soft tissues over crowns of most unerupted teeth, and general tension in the soft tissues beneath roots. Not conforming to this pattern were the unerupted second molars, which do not erupt at this developmental stage. Data support a new hypothesis for tooth eruption, in which the follicular soft tissues detect bite-force-induced bone-strain, and direct bone remodelling at the inner surface of the surrounding bony crypt, with the effect of enabling tooth eruption into the mouth.
Unreliable numbers: error and harm induced by bad design can be reduced by better design
Thimbleby, Harold; Oladimeji, Patrick; Cairns, Paul
2015-01-01
Number entry is a ubiquitous activity and is often performed in safety- and mission-critical procedures, such as healthcare, science, finance, aviation and in many other areas. We show that Monte Carlo methods can quickly and easily compare the reliability of different number entry systems. A surprising finding is that many common, widely used systems are defective, and induce unnecessary human error. We show that Monte Carlo methods enable designers to explore the implications of normal and unexpected operator behaviour, and to design systems to be more resilient to use error. We demonstrate novel designs with improved resilience, implying that the common problems identified and the errors they induce are avoidable. PMID:26354830
Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.
Van, Anh T; Hernando, Diego; Sutton, Bradley P
2011-11-01
A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shendruk, Tyler N., E-mail: tyler.shendruk@physics.ox.ac.uk; Bertrand, Martin; Harden, James L.
2014-12-28
Given the ubiquity of depletion effects in biological and other soft matter systems, it is desirable to have coarse-grained Molecular Dynamics (MD) simulation approaches appropriate for the study of complex systems. This paper examines the use of two common truncated Lennard-Jones (Weeks-Chandler-Andersen (WCA)) potentials to describe a pair of colloidal particles in a thermal bath of depletants. The shifted-WCA model is the steeper of the two repulsive potentials considered, while the combinatorial-WCA model is the softer. It is found that the depletion-induced well depth for the combinatorial-WCA model is significantly deeper than the shifted-WCA model because the resulting overlap of the colloids yields extra accessible volume for depletants. For both shifted- and combinatorial-WCA simulations, the second virial coefficients and pair potentials between colloids are demonstrated to be well approximated by the Morphometric Thermodynamics (MT) model. This agreement suggests that the presence of depletants can be accurately modelled in MD simulations by implicitly including them through simple, analytical MT forms for depletion-induced interactions. Although both WCA potentials are found to be effective generic coarse-grained simulation approaches for studying depletion effects in complicated soft matter systems, combinatorial-WCA is the more efficient approach as depletion effects are enhanced at lower depletant densities. The findings indicate that for soft matter systems that are better modelled by potentials with some compressibility, predictions from hard-sphere systems could greatly underestimate the magnitude of depletion effects at a given depletant density.
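For orientation, the WCA form referred to above is the Lennard-Jones potential truncated at its minimum and shifted up so that it is purely repulsive, and a second virial coefficient can be computed from any pair potential by a one-dimensional integral. The sketch below shows both steps for a single generic WCA potential in reduced units; it is an illustration only and does not reproduce the paper's shifted- or combinatorial-WCA colloid-depletant mixtures or the MT comparison.

```python
import numpy as np

def wca(r, epsilon=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen potential: LJ truncated at r_min = 2^(1/6) sigma
    and shifted by +epsilon, so it is purely repulsive."""
    r = np.asarray(r, float)
    rcut = 2.0 ** (1.0 / 6.0) * sigma
    sr6 = (sigma / r) ** 6
    u = 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon
    return np.where(r < rcut, u, 0.0)

def second_virial(potential, kT=1.0, r_max=5.0, n=20000):
    """B2 = -2*pi * integral of (exp(-U(r)/kT) - 1) r^2 dr, by the trapezoid rule."""
    r = np.linspace(1e-6, r_max, n)
    integrand = (np.exp(-potential(r) / kT) - 1.0) * r ** 2
    return -2.0 * np.pi * np.trapz(integrand, r)

# Example in reduced units: B2 of the bare WCA potential at kT = 1.
print(f"B2(WCA, kT=1) ~ {second_virial(wca):.3f}")
```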
The high accuracy data processing system of laser interferometry signals based on MSP430
NASA Astrophysics Data System (ADS)
Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong
2009-07-01
Generally speaking, two orthogonal signals are used in a single-frequency laser interferometer for direction discrimination and electronic subdivision. However, the interference signals usually contain three errors: zero-offset error, unequal-amplitude error, and quadrature phase-shift error. These three errors have a serious impact on subdivision precision. Compensation of the three errors is achieved on the basis of the Heydemann error-compensation algorithm. Because the Heydemann model is computationally demanding, an improved algorithm is proposed that effectively decreases the computation time by exploiting the fact that only one data item changes in each fitting iteration. A real-time, dynamic compensation circuit is then designed. With an MSP430 microcontroller as the core of the hardware system, the two error-laden input signals are digitized by an AD7862. After processing with the improved algorithm, two ideal, error-free signals are output by an AD7225. At the same time, the two original signals are converted into square waves and fed to the direction-discrimination circuit. The pulses from the direction-discrimination circuit are counted by the microcontroller's timer. From the pulse count and the software subdivision, the final result is shown on an LED display. The algorithm and circuit were used to test a laser interferometer with an eightfold optical path difference, and a measurement accuracy of 12-14 nm was achieved.
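As a brief illustration of the correction described above, the sketch below applies a Heydemann-style compensation to a pair of simulated quadrature signals. It is a minimal sketch, not the MSP430 implementation from the paper: the signal model, the parameter-estimation shortcuts (offsets and amplitudes from the signal extrema, phase-shift error from the mean product of the normalized signals), and all numerical values are assumptions, and the phase-shift estimate is only valid when the data cover whole fringes approximately uniformly.

```python
import numpy as np

# Simulated quadrature signals carrying the three error types (assumed values):
# zero offsets (d1, d2), unequal amplitudes (a1, a2), quadrature phase-shift psi.
theta = np.linspace(0, 6 * np.pi, 3000)            # three full fringes
d1, d2, a1, a2, psi = 0.12, -0.08, 1.00, 0.85, np.deg2rad(4.0)
u1 = a1 * np.cos(theta) + d1
u2 = a2 * np.sin(theta + psi) + d2

# Estimate offsets and amplitudes from the signal extrema.
d1_est, d2_est = (u1.max() + u1.min()) / 2, (u2.max() + u2.min()) / 2
a1_est, a2_est = (u1.max() - u1.min()) / 2, (u2.max() - u2.min()) / 2

# Normalize, then estimate the phase-shift error:
# over whole fringes, mean(x * y) = 0.5 * sin(psi).
x = (u1 - d1_est) / a1_est
y = (u2 - d2_est) / a2_est
psi_est = np.arcsin(np.clip(2 * np.mean(x * y), -1, 1))

# Heydemann-style correction: recover an orthogonal pair and the fringe phase.
y_corr = (y - x * np.sin(psi_est)) / np.cos(psi_est)
phase = np.unwrap(np.arctan2(y_corr, x))

print(f"estimated psi: {np.rad2deg(psi_est):.2f} deg")
print(f"max residual phase error: {np.max(np.abs(phase - theta)):.2e} rad")
```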
An engineer's view on genetic information and biological evolution.
Battail, Gérard
2004-01-01
We develop ideas on genome replication introduced in Battail [Europhys. Lett. 40 (1997) 343]. Starting with the hypothesis that the genome replication process uses error-correcting means, and the auxiliary one that nested codes are used to this end, we first review the concepts of redundancy and error-correcting codes. Then we show that these hypotheses imply that: distinct species exist with a hierarchical taxonomy, there is a trend of evolution towards complexity, and evolution proceeds by discrete jumps. At least the first two features above may be considered as biological facts so, in the absence of direct evidence, they provide an indirect proof in favour of the hypothesized error-correction system. The very high redundancy of genomes makes it possible. In order to explain how it is implemented, we suggest that soft codes and replication decoding, to be briefly described, are plausible candidates. Experimentally proven properties of long-range correlation of the DNA message substantiate this claim.
Computing in the presence of soft bit errors. [caused by single event upset on spacecraft
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.
1984-01-01
It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference made to the results of a study of the environmental effects on computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
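Two of the software-hardening techniques listed above, redundant execution of procedures and encoding of state variables to detect single-bit changes, can be sketched in a few lines. The sketch below is a hedged illustration in Python rather than flight code: the three-way vote, the complement-copy encoding, and all names are assumptions, not the Galileo implementation.

```python
# Minimal sketches (assumed, not from the report) of two software SEU defenses:
# (1) redundant execution of a procedure with majority voting, and
# (2) storing a state variable together with its bitwise complement so that
#     any single-bit flip in either copy is detected before use.

MASK32 = 0xFFFFFFFF

def redundant_call(fn, *args, copies=3):
    """Run fn several times and majority-vote the results."""
    results = [fn(*args) for _ in range(copies)]
    for r in results:
        if results.count(r) > copies // 2:
            return r
    raise RuntimeError("no majority among redundant executions (possible upset)")

class EncodedState:
    """32-bit state variable stored together with its complement."""
    def __init__(self, value):
        self.write(value)

    def write(self, value):
        self.value = value & MASK32
        self.check = ~value & MASK32          # complement copy

    def read(self):
        # A single-bit flip in either copy breaks the complement relation.
        if (self.value ^ self.check) != MASK32:
            raise RuntimeError("single-event upset detected in state variable")
        return self.value

# Example usage with a hypothetical computation step:
state = EncodedState(0x1A2B3C4D)
result = redundant_call(lambda x: (x * 3) & MASK32, state.read())
```

In a real system the redundant copies would typically live in separate memory regions, and detection would trigger a recovery routine rather than an exception.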
Random access to mobile networks with advanced error correction
NASA Technical Reports Server (NTRS)
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is made implicitly by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft-decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and an image processing technique are described for measuring the motion of a lathe tool from two-dimensional sequential images captured using a charge coupled device camera with a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to the machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely, artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
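For readers unfamiliar with the second of the two soft-computing techniques mentioned above, the sketch below is a minimal global-best particle swarm optimizer minimizing a generic scalar error function. It is an illustrative sketch only: the toy objective, the bounds, and the inertia and acceleration coefficients are assumptions, not the error model or PSO settings used in the paper.

```python
import numpy as np

def pso_minimize(f, lower, upper, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO minimizing f over the box [lower, upper]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    g_val = pbest_val.min()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # stay inside bounds
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        if vals.min() < g_val:
            g_val, g = vals.min(), x[vals.argmin()].copy()
    return g, g_val

# Hypothetical example: fit a scale and offset so that a vision-based distance
# matches a reference distance (toy squared-error objective).
observed, reference = 12.35, 12.50
err = lambda p: (p[0] * observed + p[1] - reference) ** 2
params, residual = pso_minimize(err, lower=[0.9, -1.0], upper=[1.1, 1.0])
```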
NASA Astrophysics Data System (ADS)
Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.
2013-09-01
This paper presents a soft computing based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against the experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results. In comparison with the experimental data, the proposed ANFIS model has MRE% < 1.53% and 2.85% for the training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR in the IR-IECF device.
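The four performance measures named above are standard and easy to reproduce. The sketch below computes them for an arbitrary pair of measured and predicted arrays; it is a generic illustration with made-up example values, not the paper's data or code.

```python
import numpy as np

def performance_measures(measured, predicted):
    """Correlation coefficient, MAE, MRE% and RMSE between two 1-D arrays."""
    y, p = np.asarray(measured, float), np.asarray(predicted, float)
    e = p - y
    return {
        "R":    np.corrcoef(y, p)[0, 1],                 # correlation coefficient
        "MAE":  np.mean(np.abs(e)),                      # mean absolute error
        "MRE%": 100.0 * np.mean(np.abs(e) / np.abs(y)),  # mean relative error (%)
        "RMSE": np.sqrt(np.mean(e ** 2)),                # root mean square error
    }

# Example with made-up neutron production rates (arbitrary units):
measured  = [1.0e6, 2.1e6, 3.9e6, 8.2e6]
predicted = [1.02e6, 2.05e6, 4.00e6, 8.05e6]
print(performance_measures(measured, predicted))
```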
Dell'Angela, M.; Anniyev, T.; Beye, M.; ...
2015-03-01
Vacuum space charge-induced kinetic energy shifts of O 1s and Ru 3d core levels in femtosecond soft X-ray photoemission spectra (PES) have been studied at a free electron laser (FEL) for an oxygen layer on Ru(0001). We fully reproduced the measurements by simulating the in-vacuum expansion of the photoelectrons and demonstrate the space charge contribution of the high-order harmonics in the FEL beam. Employing the same analysis for 400 nm pump-X-ray probe PES, we can disentangle the delay dependent Ru 3d energy shifts into effects induced by space charge and by lattice heating from the femtosecond pump pulse.
NASA Astrophysics Data System (ADS)
Gladkov, A. S.; Lobova, E. U.; Deev, E. V.; Korzhenkov, A. M.; Mazeika, J. V.; Abdieva, S. V.; Rogozhin, E. A.; Rodkin, M. V.; Fortuna, A. B.; Charimov, T. A.; Yudakhin, A. S.
2016-10-01
This paper discusses the composition and distribution of soft-sediment deformation structures induced by liquefaction in Late Pleistocene lacustrine terrace deposits on the southern shore of Issyk-Kul Lake in the northern Tien Shan mountains of Kyrgyzstan. The section contains seven deformed beds grouped in two intervals. Five deformed beds in the upper interval contain load structures (load casts and flame structures), convolute lamination, ball-and-pillow structures, folds and slumps. Deformation patterns indicate that a seismic trigger generated a multiple slump on a gentle slope. The dating of overlying subaerial deposits suggests correlation between the deformation features and strong earthquakes in the Late Pleistocene.
Dell'Angela, M; Anniyev, T; Beye, M; Coffee, R; Föhlisch, A; Gladh, J; Kaya, S; Katayama, T; Krupin, O; Nilsson, A; Nordlund, D; Schlotter, W F; Sellberg, J A; Sorgenfrei, F; Turner, J J; Öström, H; Ogasawara, H; Wolf, M; Wurth, W
2015-03-01
Vacuum space charge induced kinetic energy shifts of O 1s and Ru 3d core levels in femtosecond soft X-ray photoemission spectra (PES) have been studied at a free electron laser (FEL) for an oxygen layer on Ru(0001). We fully reproduced the measurements by simulating the in-vacuum expansion of the photoelectrons and demonstrate the space charge contribution of the high-order harmonics in the FEL beam. Employing the same analysis for 400 nm pump-X-ray probe PES, we can disentangle the delay dependent Ru 3d energy shifts into effects induced by space charge and by lattice heating from the femtosecond pump pulse.
Effects of jet-induced medium excitation in γ-hadron correlation in A+A collisions
NASA Astrophysics Data System (ADS)
Chen, Wei; Cao, Shanshan; Luo, Tan; Pang, Long-Gang; Wang, Xin-Nian
2018-02-01
Coupled Linear Boltzmann Transport and hydrodynamics (CoLBT-hydro) is developed for co-current and event-by-event simulations of jet transport and jet-induced medium excitation (j.i.m.e.) in high-energy heavy-ion collisions. This is made possible by a GPU parallelized (3 + 1)D hydrodynamics that has a source term from the energy-momentum deposition by propagating jet shower partons and provides real time update of the bulk medium evolution for subsequent jet transport. Hadron spectra in γ-jet events of A+A collisions at RHIC and LHC are calculated for the first time that include hadrons from both the modified jet and j.i.m.e. CoLBT-hydro describes well experimental data at RHIC on the suppression of leading hadrons due to parton energy loss. It also predicts the enhancement of soft hadrons from j.i.m.e. The onset of soft hadron enhancement occurs at a constant transverse momentum due to the thermal nature of soft hadrons from j.i.m.e. which also have a significantly broadened azimuthal distribution relative to the jet direction. Soft hadrons in the γ direction are, on the other hand, depleted due to a diffusion wake behind the jet.
Effects of jet-induced medium excitation in γ-hadron correlation in A+A collisions
Chen, Wei; Cao, Shanshan; Luo, Tan; ...
2017-12-07
Coupled Linear Boltzmann Transport and hydrodynamics (CoLBT-hydro) is developed for co-current and event-by-event simulations of jet transport and jet-induced medium excitation (j.i.m.e.) in high-energy heavy-ion collisions. This is made possible by a GPU parallelized (3+1)D hydrodynamics that has a source term from the energy-momentum deposition by propagating jet shower partons and provides real time update of the bulk medium evolution for subsequent jet transport. Hadron spectra in γ-jet events of A+A collisions at RHIC and LHC are calculated for the first time that include hadrons from both the modified jet and j.i.m.e. CoLBT-hydro describes well experimental data at RHIC on the suppression of leading hadrons due to parton energy loss. It also predicts the enhancement of soft hadrons from j.i.m.e. The onset of soft hadron enhancement occurs at a constant transverse momentum due to the thermal nature of soft hadrons from j.i.m.e. which also have a significantly broadened azimuthal distribution relative to the jet direction. Soft hadrons in the γ direction are, on the other hand, depleted due to a diffusion wake behind the jet.
THE EFFECT OF NUCLEAR EXPLOSIONS ON COMMERCIALLY PACKAGED BEVERAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
McConnell, E.R.; Sampson, G.O.; Sharf, J.M.
Representative commercially packaged beverages, such as soft drinks and beer, in glass bottles and metal cans were exposed to the radiation from nuclear explosions. Preliminary experimental results were obtained from test layouts exposed to a detonation of approximately nominal yield. Extensive test layouts were subsequently exposed during Operation Cue, with a yield 50% greater than nominal, at varying distances from Ground Zero. These commercially packaged soft drinks and beer in glass bottles or metal cans survived the blast overpressures even as close as 1270 ft from Ground Zero, and at more remote distances, with most failures being caused by flying missiles, crushing by surrounding structures, or dislodgment from shelves. Induced radioactivity, subsequently measured on representative samples, was not great in either soft drinks or beer, even at the forward positions, and these beverages could be used as potable water sources for immediate emergency purposes as soon as the storage area was safe to enter after a nuclear explosion. Although containers showed some induced radioactivity, none of this activity was transferred to the contents. Some flavor change was found in the beverages by taste panels, more in beer than in soft drinks, but was insufficient to detract from their potential usage as emergency supplies of potable water. (auth)
Effects of jet-induced medium excitation in γ-hadron correlation in A+A collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei; Cao, Shanshan; Luo, Tan
Coupled Linear Boltzmann Transport and hydrodynamics (CoLBT-hydro) is developed for co-current and event-by-event simulations of jet transport and jet-induced medium excitation (j.i.m.e.) in high-energy heavy-ion collisions. This is made possible by a GPU parallelized (3+1)D hydrodynamics that has a source term from the energy-momentum deposition by propagating jet shower partons and provides real time update of the bulk medium evolution for subsequent jet transport. Hadron spectra in γ-jet events of A+A collisions at RHIC and LHC are calculated for the first time that include hadrons from both the modified jet and j.i.m.e. CoLBT-hydro describes well experimental data at RHIC on the suppression of leading hadrons due to parton energy loss. It also predicts the enhancement of soft hadrons from j.i.m.e. The onset of soft hadron enhancement occurs at a constant transverse momentum due to the thermal nature of soft hadrons from j.i.m.e. which also have a significantly broadened azimuthal distribution relative to the jet direction. Soft hadrons in the γ direction are, on the other hand, depleted due to a diffusion wake behind the jet.
NASA Astrophysics Data System (ADS)
Lee, Kang Il
2018-06-01
The present study aims to predict the temperature rise induced by high intensity focused ultrasound (HIFU) in soft tissues to assess tissue damage during HIFU thermal therapies. With the help of a MATLAB-based software package developed for HIFU simulation, the HIFU field was simulated by solving the axisymmetric Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation from the frequency-domain perspective, and the HIFU-induced temperature rise in a tissue-mimicking phantom was simulated by solving Pennes' bioheat transfer (BHT) equation. In order to verify the simulation results, we performed in-vitro heating experiments on a tissue-mimicking phantom by using a 1.1-MHz, single-element, spherically focused HIFU transducer. The temperature rise near the focal spot obtained from the HIFU simulator was in good agreement with that from the in-vitro experiments. This confirms that the HIFU simulator based on the KZK and the BHT equations captures the HIFU-induced temperature rise in soft tissues well enough to make it suitable for HIFU treatment planning.
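To make the second half of the simulation pipeline concrete, the sketch below integrates a one-dimensional form of Pennes' bioheat transfer equation with an explicit finite-difference scheme. It is a minimal sketch only: the Gaussian focal heat source stands in for the heat deposition that a KZK solver would supply, and the tissue, perfusion and source parameters are assumed order-of-magnitude values, not those of the phantom or the 1.1-MHz transducer in the paper.

```python
import numpy as np

# One-dimensional Pennes bioheat transfer (BHT) equation, explicit finite
# differences. All parameter values below are illustrative assumptions.
rho, c, k = 1000.0, 4000.0, 0.5            # tissue density, specific heat, conductivity (SI)
wb, rho_b, cb = 0.5e-3, 1000.0, 4000.0     # perfusion rate (1/s) and blood properties
Ta = 37.0                                  # arterial / baseline temperature (deg C)

L, N = 0.02, 201                           # 2 cm domain, number of grid points
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
alpha = k / (rho * c)                      # thermal diffusivity
dt = 0.2 * dx**2 / alpha                   # well inside the explicit stability limit

# Gaussian focal heating standing in for the KZK-computed HIFU heat deposition (W/m^3).
Q = 5.0e6 * np.exp(-((x - L / 2) / 1.0e-3) ** 2)

T = np.full(N, Ta)
t, t_end = 0.0, 5.0                        # 5 s of sonication
while t < t_end:
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (alpha * lap
                  - wb * rho_b * cb * (T - Ta) / (rho * c)
                  + Q / (rho * c))
    T[0] = T[-1] = Ta                      # Dirichlet boundaries at body temperature
    t += dt

print(f"peak temperature after {t_end:.0f} s of heating: {T.max():.2f} deg C")
```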
Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Moorthy, H. T.
1997-01-01
This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
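The idea of building candidate codewords from reliability-ordered test error patterns can be illustrated with a much simpler Chase-style sketch. The code below applies it to the (7,4) Hamming code, with a syndrome decoder as the algebraic decoder and a soft correlation metric for candidate selection; it omits the optimality test and the trellis-based search of the scheme described above, and the code choice and parameters are assumptions made for illustration.

```python
import itertools
import numpy as np

# (7,4) Hamming code: generator and parity-check matrices.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def algebraic_decode(hard):
    """Syndrome decoder: corrects any single bit error."""
    s = (H @ hard) % 2
    cw = hard.copy()
    if s.any():
        pos = np.where((H.T == s).all(axis=1))[0][0]
        cw[pos] ^= 1
    return cw

def chase_decode(r, n_lrp=3):
    """Chase-style soft-decision decoding: flip subsets of the least reliable
    positions, algebraically decode each test sequence, and keep the candidate
    with the largest soft correlation metric."""
    hard = (r < 0).astype(int)                    # BPSK: bit 0 -> +1, bit 1 -> -1
    lrp = np.argsort(np.abs(r))[:n_lrp]           # least reliable positions
    best_cw, best_metric = None, -np.inf
    for k in range(n_lrp + 1):
        for pos in itertools.combinations(lrp, k):
            test = hard.copy()
            test[list(pos)] ^= 1                  # apply test error pattern
            cw = algebraic_decode(test)
            metric = float(np.sum(r * (1 - 2 * cw)))
            if metric > best_metric:
                best_metric, best_cw = metric, cw
    return best_cw

# Example: transmit the all-zero codeword over a noisy channel and decode.
rng = np.random.default_rng(1)
received = 1.0 + 0.8 * rng.standard_normal(7)     # +1 symbols plus noise
print(chase_decode(received))
```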
Visuomotor adaptation needs a validation of prediction error by feedback error
Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle
2014-01-01
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and attributed pointing errors to themselves, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated. PMID:25408644
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.
Wrinkling pattern evolution of cylindrical biological tissues with differential growth.
Jia, Fei; Li, Bo; Cao, Yan-Ping; Xie, Wei-Hua; Feng, Xi-Qiao
2015-01-01
Three-dimensional surface wrinkling of soft cylindrical tissues induced by differential growth is explored. Differential volumetric growth can destabilize their morphology, leading to the formation of hexagonal and labyrinth wrinkles. During postbuckling, multiple bifurcations and morphological transitions may occur as a consequence of continuous growth in the surface layer. The physical mechanisms underpinning the morphological evolution are examined from the viewpoint of energy. Surface curvature is found to play a regulatory role in the pattern evolution. This study may not only help understand the morphogenesis of soft biological tissues, but also inspire novel routes for creating desired surface patterns of soft materials.
Djordjevic, Ivan B; Vasic, Bane
2006-05-29
A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs that are processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over the existing 10 Gb/s infrastructure.
Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Hashizume, Takuya; Kobayashi, Ikuo
2016-07-01
Our aim in this study was to derive an identification limit for a dosimeter so that it does not disturb a medical image when patients wear a small-type optically stimulated luminescence (OSL) dosimeter on their bodies during X-ray diagnostic imaging. For evaluation of the detection limit based on an analysis of X-ray spectra, we propose a new quantitative identification method. We performed experiments using diagnostic X-ray equipment, a soft-tissue-equivalent phantom (1-20 cm), and a CdTe X-ray spectrometer representing one pixel of the X-ray imaging detector. With the following two experimental settings, the corresponding X-ray spectra were measured at 40-120 kVp and 0.5-1000 mAs at a source-to-detector distance of 100 cm: (1) X-rays penetrating the soft-tissue-equivalent phantom with the OSL dosimeter attached directly to the phantom, and (2) X-rays penetrating only the soft-tissue-equivalent phantom. Next, the energy fluence and its errors were calculated from the spectra. When the energy fluences, within their errors, were indistinguishable between these two experimental conditions, we defined the condition as one in which the OSL dosimeter could not be identified on the X-ray image. Based on our analysis, we determined the identification limit of the dosimeter. We then compared our results with the general irradiation conditions used in clinics. We found that the OSL dosimeter could not be identified under the irradiation conditions of abdominal and chest radiography; that is, the OSL dosimeter can be used to measure the exposure dose within the X-ray irradiation field without disturbing medical images.
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-01-01
Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146
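The abstract above does not disclose the internals of the in-house localization algorithm, so the sketch below shows only a generic baseline for markerless template tracking: brute-force normalized cross-correlation of a tumour template against each beam's-eye-view frame. The template, image sizes and search strategy are assumptions; a real-time implementation at about 12.8 Hz would use FFT-based correlation and a restricted search window rather than this exhaustive loop.

```python
import numpy as np

def ncc_match(frame, template):
    """Return (row, col) of the best normalized cross-correlation match of
    `template` inside `frame` (both 2-D float arrays), brute force."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt(np.sum(t ** 2))
    best_score, best_pos = -np.inf, (0, 0)
    rows, cols = frame.shape
    for r in range(rows - th + 1):
        for c in range(cols - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt(np.sum(p ** 2)) * t_norm
            if denom == 0:
                continue
            score = np.sum(p * t) / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy example: embed a small template in a noisy synthetic frame and recover it.
rng = np.random.default_rng(0)
frame = rng.normal(size=(64, 48))
template = rng.normal(size=(9, 9))
frame[20:29, 10:19] += 5.0 * template          # plant the target at (20, 10)
print(ncc_match(frame, template))
```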
Vinnicombe, S J; Whelehan, P; Thomson, K; McLean, D; Purdie, C A; Jordan, L B; Hubbard, S; Evans, A J
2014-04-01
Shear wave elastography (SWE) is a promising adjunct to greyscale ultrasound in differentiating benign from malignant breast masses. The purpose of this study was to characterise breast cancers which are not stiff on quantitative SWE, to elucidate potential sources of error in clinical application of SWE to evaluation of breast masses. Three hundred and two consecutive patients examined by SWE who underwent immediate surgery for breast cancer were included. Characteristics of 280 lesions with suspicious SWE values (mean stiffness >50 kPa) were compared with 22 lesions with benign SWE values (<50 kPa). Statistical significance of the differences was assessed using non-parametric goodness-of-fit tests. Pure ductal carcinoma in situ (DCIS) masses were more often soft on SWE than masses representing invasive breast cancer. Invasive cancers that were soft were more frequently: histological grade 1, tubular subtype, ≤10 mm invasive size and detected at screening mammography. No significant differences were found with respect to the presence of invasive lobular cancer, vascular invasion, hormone and HER-2 receptor status. Lymph node positivity was less common in soft cancers. Malignant breast masses classified as benign by quantitative SWE tend to have better prognostic features than those correctly classified as malignant. • Over 90 % of cancers assessable with ultrasound have a mean stiffness >50 kPa. • 'Soft' invasive cancers are frequently small (≤10 mm), low grade and screen-detected. • Pure DCIS masses are more often soft than invasive cancers (>40 %). • Large symptomatic masses are better evaluated with SWE than small clinically occult lesions. • When assessing small lesions, 'softness' should not raise the threshold for biopsy.
Rodriguez, María J.; Brown, Joseph; Giordano, Jodie; Lin, Samuel J.; Omenetto, Fiorenzo G.; Kaplan, David L.
2016-01-01
In the field of soft tissue reconstruction, custom implants could address the need for materials that can fill complex geometries. Our aim was to develop a material system with optimal rheology for material extrusion, that can be processed in physiological and non-toxic conditions and provide structural support for soft tissue reconstruction. To meet this need we developed silk based bioinks using gelatin as a bulking agent and glycerol as a non-toxic additive to induce physical crosslinking. We developed these inks optimizing printing efficacy and resolution for patient-specific geometries that can be used for soft tissue reconstruction. We demonstrated in vitro that the material was stable under physiological conditions and could be tuned to match soft tissue mechanical properties. We demonstrated in vivo that the material was biocompatible and could be tuned to maintain shape and volume up to three months while promoting cellular infiltration and tissue integration. PMID:27940389
Cyclic stretching of soft substrates induces spreading and growth
Cui, Yidan; Hameed, Feroz M.; Yang, Bo; Lee, Kyunghee; Pan, Catherine Qiurong; Park, Sungsu; Sheetz, Michael
2015-01-01
In the body, soft tissues often undergo cycles of stretching and relaxation that may affect cell behaviour without changing matrix rigidity. To determine whether transient forces can substitute for a rigid matrix, we stretched soft pillar arrays. Surprisingly, 1–5% cyclic stretching over a frequency range of 0.01–10 Hz caused spreading and stress fibre formation (optimum 0.1 Hz) that persisted after 4 h of stretching. Similarly, stretching increased cell growth rates on soft pillars comparative to rigid substrates. Of possible factors linked to fibroblast growth, MRTF-A (myocardin-related transcription factor-A) moved to the nucleus in 2 h of cyclic stretching and reversed on cessation; but YAP (Yes-associated protein) moved much later. Knockdown of either MRTF-A or YAP blocked stretch-dependent growth. Thus, we suggest that the repeated pulling from a soft matrix can substitute for a stiff matrix in stimulating spreading, stress fibre formation and growth. PMID:25704457
Rodriguez, María J; Brown, Joseph; Giordano, Jodie; Lin, Samuel J; Omenetto, Fiorenzo G; Kaplan, David L
2017-02-01
In the field of soft tissue reconstruction, custom implants could address the need for materials that can fill complex geometries. Our aim was to develop a material system with optimal rheology for material extrusion, that can be processed in physiological and non-toxic conditions and provide structural support for soft tissue reconstruction. To meet this need we developed silk based bioinks using gelatin as a bulking agent and glycerol as a non-toxic additive to induce physical crosslinking. We developed these inks optimizing printing efficacy and resolution for patient-specific geometries that can be used for soft tissue reconstruction. We demonstrated in vitro that the material was stable under physiological conditions and could be tuned to match soft tissue mechanical properties. We demonstrated in vivo that the material was biocompatible and could be tuned to maintain shape and volume up to three months while promoting cellular infiltration and tissue integration. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sakuma, Noritsugu; Ohshima, Tsubasa; Shoji, Tetsuya; Suzuki, Yoshihito; Sato, Ryota; Wachi, Ayako; Kato, Akira; Kawai, Yoichiro; Manabe, Akira; Teranishi, Toshiharu
2011-04-26
Nanocomposite magnets (NCMs) consisting of hard and soft magnetic phases are expected to be instrumental in overcoming the current theoretical limit of magnet performance. In this study, structural analyses were performed on L1(0)-FePd/α-Fe NCMs with various hard/soft volume fractions, which were formed by annealing Pd/γ-Fe(2)O(3) heterostructured nanoparticles and pure Pd nanoparticles. The sample with a hard/soft volume ratio of 82/18 formed by annealing at 773 K had the largest maximum energy product (BH(max) = 10.3 MGOe). In such a sample, the interface between the hard and soft phases was coherent and the phase sizes were optimized, both of which effectively induced exchange coupling. This exchange coupling was directly observed by visualizing the magnetic interaction between the hard and soft phases using a first-order reversal curve diagram, which is a valuable tool to improve the magnetic properties of NCMs.
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients are manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise are correlated with δ using Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error with R = −0.58 and −0.19 for both studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in the images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
Weak light emission of soft tissues induced by heating
NASA Astrophysics Data System (ADS)
Spinelli, Antonello E.; Durando, Giovanni; Boschi, Federico
2018-04-01
The main goal of this work is to show that soft tissue interaction with high-intensity focused ultrasound (HIFU) or direct heating leads to a weak light emission detectable using a small animal optical imaging system. Our results show that the luminescence signal is detectable after 30 min of heating, resembling the time scale of delayed luminescence. The imaging of a soft tissue after heating it using an HIFU field shows that the luminescence pattern closely matches the shape of the cone typical of the HIFU beam. We conclude that heating a soft tissue using two different sources leads to the emission of a weak luminescence signal from the heated region with a decay half-life of a few minutes (4 to 6 min). The origin of such light emission needs to be further investigated.
Orifice-induced pressure error studies in Langley 7- by 10-foot high-speed tunnel
NASA Technical Reports Server (NTRS)
Plentovich, E. B.; Gloss, B. B.
1986-01-01
For some time it has been known that the presence of a static pressure measuring hole will disturb the local flow field in such a way that the sensed static pressure will be in error. Previous studies of the error induced by the pressure orifice were conducted at relatively low Reynolds numbers. Because of the advent of high Reynolds number transonic wind tunnels, a study was undertaken to assess the magnitude of this error at higher Reynolds numbers than previously published and to study a possible method of eliminating this pressure error. This study was conducted in the Langley 7- by 10-Foot High-Speed Tunnel on a flat plate. The model was tested at Mach numbers from 0.40 to 0.72 and at Reynolds numbers from 7.7 x 10^6 to 11 x 10^6 per meter (2.3 x 10^6 to 3.4 x 10^6 per foot), respectively. The results indicated that as orifice size increased, the pressure error also increased, but that a porous metal (sintered metal) plug inserted in an orifice could greatly reduce the pressure error induced by the orifice.
TOPICAL REVIEW: Anatomical imaging for radiotherapy
NASA Astrophysics Data System (ADS)
Evans, Philip M.
2008-06-01
The goal of radiation therapy is to achieve maximal therapeutic benefit expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state of the art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow-up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods which measure the basic physical characteristics of tissue such as their density and biological imaging techniques which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques. Biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. In ART a set of scans is taken on different days. Both allow planning to account for variability intrinsic to the patient. Treatment verification has been carried out using a variety of technologies including: MV portal imaging, kV portal/fluoroscopy, MVCT, conebeam kVCT, ultrasound and optical surface imaging. The various methods have their pros and cons. The four x-ray methods involve an extra radiation dose to normal tissue. The portal methods may not generally be used to visualize soft tissue, consequently they are often used in conjunction with implanted fiducial markers. The two CT-based methods allow measurement of inter-fraction variation only. Ultrasound allows soft-tissue measurement with zero dose but requires skilled interpretation, and there is evidence of systematic differences between ultrasound and other data sources, perhaps due to the effects of the probe pressure. Optical imaging also involves zero dose but requires good correlation between the target and the external measurement and thus is often used in conjunction with an x-ray method. The use of anatomical imaging in radiotherapy allows treatment uncertainties to be determined. These include errors between the mean position at treatment and that at planning (the systematic error) and the day-to-day variation in treatment set-up (the random error). Positional variations may also be categorized in terms of inter- and intra-fraction errors. 
Various empirical treatment margin formulae and intervention approaches exist to determine the optimum treatment strategies in the presence of these known errors. Other methods aim to reduce error margins drastically, including the currently available breath-hold techniques and the tracking methods that are largely still in development. This paper will review anatomical imaging techniques in radiotherapy and how they are used to boost the therapeutic benefit of the treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharkov, B. B.; Chizhik, V. I.; Dvinskikh, S. V., E-mail: sergeid@kth.se
2016-01-21
Dipolar recoupling is an essential part of current solid-state NMR methodology for probing atomic-resolution structure and dynamics in solids and soft matter. The recently described magic-echo amplitude- and phase-modulated cross-polarization heteronuclear recoupling strategy aims at efficient and robust recoupling over the entire range of coupling constants, both in rigid and in highly dynamic molecules. In the present study, the properties of this recoupling technique are investigated by theoretical analysis, spin-dynamics simulations, and experiments. The resonance conditions and the efficiency of suppressing rf field errors are examined and compared to those for other recoupling sequences based on similar principles. The experimental data obtained in a variety of rigid and soft solids illustrate the scope of the method and corroborate the results of the analytical and numerical calculations. The technique benefits from dipolar resolution over a wider range of coupling constants compared to other state-of-the-art methods and thus is advantageous in studies of complex solids with a broad range of dynamic processes and degrees of molecular mobility.
Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.
Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C
2015-05-13
This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability to twirl around objects as a real octopus arm does. Despite the simple design, the soft conical-shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network that learns the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and the forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with an accuracy of 8% relative average error with respect to the total arm length.
A Novel Soft Pneumatic Artificial Muscle with High-Contraction Ratio.
Han, Kwanghyun; Kim, Nam-Ho; Shin, Dongjun
2018-06-20
There is a growing interest in soft actuators for human-friendly robotic applications. However, it is very challenging for conventional soft actuators to achieve both a large working distance and high force. To address this problem, we present a high-contraction ratio pneumatic artificial muscle (HCRPAM), which has a novel actuation concept. The HCRPAM can contract substantially while generating a large force suitable for a wide range of robotic applications. Our proposed prototyping method allows for easy and quick fabrication, considering various design variables. We derived a mathematical model using the virtual work principle, and validated the model experimentally. We conducted simulations for the design optimization using this model. Our experimental results show that the HCRPAM has a 183.3% larger contraction ratio and 37.1% higher force output than the conventional pneumatic artificial muscle (McKibben muscle). Furthermore, the actuator has a position tracking performance of 1.0 Hz and a relatively low hysteresis error of 4.8%. Finally, we discuss the controllable bending characteristics of the HCRPAM, which uses heterogeneous materials and has an asymmetrical structure to make it comfortable for a human to wear.
Medium-Induced QCD Cascade: Democratic Branching and Wave Turbulence
NASA Astrophysics Data System (ADS)
Blaizot, J.-P.; Iancu, E.; Mehtar-Tani, Y.
2013-08-01
We study the average properties of the gluon cascade generated by an energetic parton propagating through a quark-gluon plasma. We focus on the soft, medium-induced emissions which control the energy transport at large angles with respect to the leading parton. We show that the effect of multiple branchings is important. In contrast with what happens in a usual QCD cascade in vacuum, medium-induced branchings are quasidemocratic, with offspring gluons carrying sizable fractions of the energy of their parent gluon. This results in an efficient mechanism for the transport of energy toward the medium, which is akin to wave turbulence with a scaling spectrum ˜1/ω. We argue that the turbulent flow may be responsible for the excess energy carried by very soft quanta, as revealed by the analysis of the dijet asymmetry observed in Pb-Pb collisions at the LHC.
Intraluminal bubble dynamics induced by lithotripsy shock wave
NASA Astrophysics Data System (ADS)
Song, Jie; Bai, Jiaming; Zhou, Yufeng
2016-12-01
Extracorporeal shock wave lithotripsy (ESWL) has been the first option in the treatment of calculi in the upper urinary tract since its introduction. ESWL-induced renal injury is also found after treatment and is assumed to associate with intraluminal bubble dynamics. To further understand the interaction of bubble expansion and collapse with the vessel wall, the finite element method (FEM) was used to simulate intraluminal bubble dynamics and calculate the distribution of stress in the vessel wall and surrounding soft tissue during cavitation. The effects of peak pressure, vessel size, and stiffness of soft tissue were investigated. Significant dilation on the vessel wall occurs after contacting with rapid and large bubble expansion, and then vessel deformation propagates in the axial direction. During bubble collapse, large shear stress is found to be applied to the vessel wall at a clinical lithotripter setting (i.e. 40 MPa peak pressure), which may be the mechanism of ESWL-induced vessel rupture. The decrease of vessel size and viscosity of soft tissue would enhance vessel deformation and, consequently, increase the generated shear stress and normal stresses. Meanwhile, a significantly asymmetric bubble boundary is also found due to faster axial bubble expansion and shrinkage than in radial direction, and deformation of the vessel wall may result in the formation of microjets in the axial direction. Therefore, this numerical work would illustrate the mechanism of ESWL-induced tissue injury in order to develop appropriate counteractive strategies for reduced adverse effects.
Chua, Hannah Daile P; Cheung, Lim Kwong
2012-07-01
The objective of this randomized controlled clinical trial was to compare the soft tissue changes after maxillary advancement using conventional orthognathic surgery (CO) and distraction osteogenesis (DO) in patients with cleft lip and palate (CLP). The study group of 39 CLP patients with maxillary hypoplasia underwent either CO or DO with 4 to 10 mm of maxillary advancement. Lateral cephalographs were taken preoperatively and postoperatively at regular intervals. A series of skeletal, dental, and soft tissue landmarks was used to evaluate the changes in the soft tissue and the correlation of hard and soft tissue changes and ratios. Significant differences were found between the CO and DO patients at A point in both maxillary advancement and downgrafting in the early follow-up period. On the soft tissue landmarks of pronasale, subnasale, and labial superius, significant differences were found between the 2 groups at 6 months postoperatively only with maxillary advancement. There was better correlation of hard and soft tissue changes with maxillary advancement. The nasal projection was significantly different between the 2 groups at the early and intermediate periods. Hard to soft tissue ratios in maxillary advancement were much more consistent with DO than with CO. Both CO and DO can induce significant soft tissue changes of the upper lip and nose, particularly with maxillary advancement. DO generates more consistent hard to soft tissue ratios. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Parisi, Alessandro; Argentiero, Ilenia; Fidelibus, Maria Dolores; Pellicani, Roberta; Spilotro, Giuseppe
2017-04-01
Considering a natural system without human-induced modifications, its resilience can be altered by many natural drivers (e.g. geological characteristics, climate) and their spatial modifications over time. Natural hazardous phenomena could therefore push a natural system over tipping points more or less easily. So long as a natural system does not involve human settlements or transport infrastructures, risk assessment for that system may not be a pressing topic. Nowadays, human activities have modified many natural systems, forming hybrid systems (both human and natural) in which natural and human-induced drivers modify the vulnerability of the hybrid system, decreasing or increasing its resilience: scientists call this new age the Anthropocene. In this context, dynamic risk assessment of hybrid systems is required in order to avoid disaster when hazardous phenomena occur, but it is a quite complex issue. In fact, the emerging signals of a soft crisis are difficult to identify because of wrong risk perception and lack of communication. Furthermore, natural and human-induced modifications are rarely registered and supervised by governments, so it is fairly difficult to define how a system's resilience changes over time. The inhabitants of Ginosa (Taranto, southern Italy) modified many old rock dwellings over a thousand years, starting in the Middle Ages. Indeed, they built three-storey houses on three hypogeum levels of rock dwellings along the ravine. The Matrice street collapse in Ginosa is an example of how natural and human-induced spatial modifications over time led a soft crisis to evolve into a disaster, fortunately without fatalities. The aim of this research is to revisit the events before the Matrice street collapse on 21 January 2014. The goal is to define the relationship between hybrid system resilience and soft crisis variation over time, and how human and natural drivers were involved in the shift.
[INVITED] On the mechanisms of single-pulse laser-induced backside wet etching
NASA Astrophysics Data System (ADS)
Tsvetkov, M. Yu.; Yusupov, V. I.; Minaev, N. V.; Akovantseva, A. A.; Timashev, P. S.; Golant, K. M.; Chichkov, B. N.; Bagratashvili, V. N.
2017-02-01
Laser-induced backside wet etching (LIBWE) of a silicate glass surface at the interface with a strongly absorbing aqueous dye solution is studied. The process of crater formation and the generated optoacoustic signals under the action of single 5 ns laser pulses at the wavelength of 527 nm are investigated. The single-pulse mode is used to avoid effects of incubation and saturation of the etched depth. Significant differences in the mechanisms of crater formation in the "soft" mode of laser action (at laser fluences smaller than 150-170 J/cm2) and in the "hard" mode (at higher laser fluences) are observed. In the "soft" single-pulse mode, LIBWE produces accurate craters with a depth of several hundred nanometers, good shape reproducibility and smooth walls. Estimates of the temperature and pressure of the dye solution heated by a single laser pulse indicate that these parameters can significantly exceed the corresponding critical values for water. We consider that chemical etching of the glass surface (or molten glass) by supercritical water, produced by laser heating of the aqueous dye solution, is the dominant mechanism responsible for the formation of craters in the "soft" mode. In the "hard" mode, the produced craters have a ragged shape and poor pulse-to-pulse reproducibility. Outside the laser-exposed area, cracks and splits are formed, which provide evidence for shock-induced glass fracture. By measuring the amplitude and spectrum of the generated optoacoustic signals, it is possible to conclude that in the "hard" mode of laser action, intense hydrodynamic processes induced by the formation and cavitation collapse of vapor-gas bubbles at the solid-liquid interface lead to the mechanical fracture of the glass. LIBWE material processing in the "soft" mode, based on chemical etching in supercritical fluids (in particular, supercritical water), is very promising for structuring of optical materials.
Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1
2016-03-01
Master's thesis. ...open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double data rate type three and secure digital card memories, an interface to the main satellite bus, and XILINX's soft error mitigation softcore. ...
Radio frequency tags systems to initiate system processing
NASA Astrophysics Data System (ADS)
Madsen, Harold O.; Madsen, David W.
1994-09-01
This paper describes the automatic identification technology which has been installed at Applied Magnetic Corp. MR fab. World class manufacturing requires technology exploitation. This system combines (1) FluoroTrac cassette and operator tracking, (2) CELLworks cell controller software tools, and (3) Auto-Soft Inc. software integration services. The combined system eliminates operator keystrokes and errors during normal processing within a semiconductor fab. The methods and benefits of this system are described.
Studies Of Single-Event-Upset Models
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.
1988-01-01
Report presents latest in series of investigations of "soft" bit errors known as single-event upsets (SEU). In this investigation, SEU response of low-power, Schottky-diode-clamped, transistor/transistor-logic (TTL) static random-access memory (RAM) observed during irradiation by Br and O ions in ranges of 100 to 240 and 20 to 100 MeV, respectively. Experimental data complete verification of computer model used to simulate SEU in this circuit.
Clean galena, contaminated lead, and soft errors in memory chips
NASA Astrophysics Data System (ADS)
Lykken, G. I.; Hustoft, J.; Ziegler, B.; Momcilovic, B.
2000-10-01
Lead (Pb) disks were exposed to a radon (Rn)-rich atmosphere and surface alpha particle emissions were detected over time. Cumulative 210Po alpha emission increased nearly linearly with time. Conversely, cumulative emission for each of 218Po and 214Po was constant after one and two hours, respectively. Processing of radiation-free Pb ore (galena) in inert atmospheres was compared with processing in ambient air. Galena processed within a flux heated in a graphite crucible while exposed to an inert atmosphere resulted in lead contaminated with 210Po (Trial 1). A glove box was next used to prepare a baseline radiation-free flux sample in an alumina crucible that was heated in an oven with an inert atmosphere (Trials 2 and 3). Ambient air was thereafter introduced, in place of the inert atmosphere, to the radiation-free flux mixture during processing (Trial 4). Ambient air introduced Rn and its progeny (RAD) into the flux during processing so that the processed Pb contained Po isotopes. A typical coke used in lead smelting also emitted numerous alpha particles. We postulate that alpha particles from tin/lead solder bumps, a cause of computer chip memory soft errors, may originate from Rn and RAD in the ambient air and/or coke used as a reducing agent in the standard galena smelting procedure.
Optical Assessment of Soft Contact Lens Edge-Thickness.
Tankam, Patrice; Won, Jungeun; Canavesi, Cristina; Cox, Ian; Rolland, Jannick P
2016-08-01
To assess the edge shape of soft contact lenses using Gabor-Domain Optical Coherence Microscopy (GD-OCM) with a 2-μm imaging resolution in three dimensions and to generate edge-thickness profiles at different distances from the edge tip of soft contact lenses. A high-speed custom-designed GD-OCM system was used to produce 3D images of the edge of an experimental soft contact lens (Bausch + Lomb, Rochester, NY) in four different configurations: in air, submerged into water, submerged into saline with contrast agent, and placed onto the cornea of a porcine eyeball. An algorithm to compute the edge-thickness was developed and applied to cross-sectional images. The proposed algorithm includes the accurate detection of the interfaces between the lens and the environment, and the correction of the refraction error. The sharply defined edge tip of a soft contact lens was visualized in 3D. Results showed precise thickness measurement of the contact lens edge profile. Fifty cross-sectional image frames for each configuration were used to test the robustness of the algorithm in evaluating the edge-thickness at any distance from the edge tip. The precision of the measurements was less than 0.2 μm. The results confirmed the ability of GD-OCM to provide high-definition images of soft contact lens edges. As a nondestructive, precise, and fast metrology tool for soft contact lens measurement, the integration of GD-OCM in the design and manufacturing of contact lenses will be beneficial for further improvement in edge design and quality control. In the clinical perspective, the in vivo evaluation of the lens fitted onto the cornea will advance our understanding of how the edge interacts with the ocular surface. The latter will provide insights into the impact of long-term use of contact lenses on the visual performance.
Optical Assessment of Soft Contact Lens Edge-Thickness
Tankam, Patrice; Won, Jungeun; Canavesi, Cristina; Cox, Ian; Rolland, Jannick P.
2016-01-01
Purpose: To assess the edge shape of soft contact lenses using Gabor-Domain Optical Coherence Microscopy (GD-OCM) with a 2 μm imaging resolution in three dimensions, and to generate edge-thickness profiles at different distances from the edge tip of soft contact lenses. Methods: A high-speed custom-designed GD-OCM system was used to produce 3D images of the edge of an experimental soft contact lens (Bausch + Lomb, Rochester NY) in four different configurations: in air, submerged into water, submerged into saline with contrast agent, and placed onto the cornea of a porcine eyeball. An algorithm to compute the edge-thickness was developed and applied to cross-sectional images. The proposed algorithm includes the accurate detection of the interfaces between the lens and the environment, and the correction of the refraction error. Results: The sharply defined edge tip of a soft contact lens was visualized in 3D. Results showed precise thickness measurement of the contact lens edge profile. Fifty cross-sectional image frames for each configuration were used to test the robustness of the algorithm in evaluating the edge-thickness at any distance from the edge tip. The precision of the measurements was less than 0.2 μm. Conclusions: The results confirmed the ability of GD-OCM to provide high definition images of soft contact lens edges. As a non-destructive, precise, and fast metrology tool for soft contact lens measurement, the integration of GD-OCM in the design and manufacturing of contact lenses will be beneficial for further improvement in edge design and quality control. In the clinical perspective, the in-vivo evaluation of the lens fitted onto the cornea will advance our understanding of how the edge interacts with the ocular surface. The latter will provide insights into the impact of long-term use of contact lenses on the visual performance. PMID:27232902
García-Arroyo, Fernando E; Cristóbal, Magdalena; Arellano-Buendía, Abraham S; Osorio, Horacio; Tapia, Edilia; Soto, Virgilia; Madero, Magdalena; Lanaspa, Miguel A; Roncal-Jiménez, Carlos; Bankir, Lise; Johnson, Richard J; Sánchez-Lozada, Laura-Gabriela
2016-07-01
Recurrent dehydration, such as commonly occurs with manual labor in tropical environments, has been recently shown to result in chronic kidney injury, likely through the effects of hyperosmolarity to activate both vasopressin and aldose reductase-fructokinase pathways. The observation that the latter pathway can be directly engaged by simple sugars (glucose and fructose) leads to the hypothesis that soft drinks (which contain these sugars) might worsen rather than benefit dehydration-associated kidney disease. Recurrent dehydration was induced in rats by exposure to heat (36°C) for 1 h/24 h followed by access for 2 h to plain water (W), an 11% fructose-glucose solution (FG, same composition as typical soft drinks), or water sweetened with noncaloric stevia (ST). After 4 wk, plasma and urine samples were collected, and kidneys were examined for oxidative stress, inflammation, and injury. Recurrent heat-induced dehydration with ad libitum water repletion resulted in plasma and urinary hyperosmolarity with stimulation of vasopressin (copeptin) levels and resulted in mild tubular injury and renal oxidative stress. Rehydration with the 11% FG solution, despite larger total fluid intake, resulted in greater dehydration (higher osmolarity and copeptin levels) and worse renal injury, with activation of aldose reductase and fructokinase, whereas rehydration with stevia water had opposite effects. In animals that are dehydrated, acute rehydration with soft drinks worsens dehydration and exacerbates dehydration-associated renal damage. These studies emphasize the danger of drinking soft drink-like beverages as an attempt to rehydrate following dehydration. Copyright © 2016 the American Physiological Society.
Repair of cocaine-related oronasal fistula with forearm radial free flap.
Colletti, Giacomo; Allevi, Fabiana; Valassina, Davide; Bertossi, Dario; Biglioli, Federico
2013-01-01
Cocaine snorting may cause significant local ischemic necrosis and the destruction of nasal and midfacial bones and soft tissues, leading to the development of a syndrome called cocaine-induced midline destructive lesion. A review of the English-language literature reveals only a few articles describing the treatment of hard and/or soft palatal perforation related to cocaine inhalation. Described here are 4 patients with a history of cocaine abuse showing palatal lesions. From 2010 to 2013, a total of 4 patients affected by cocaine-related midline destructive lesions were referred to our department. They all presented signs of a cocaine-induced midline destructive lesion. They showed wide midfacial destruction involving the nasal septum as well as the hard and soft palates, causing a large oronasal communication. In 3 patients, the oronasal communication was treated successfully using a personal technique based on a partially de-epithelialized forearm free flap. The fourth patient was treated only with local debridement because, when she came to our attention, her cocaine abuse was still unresolved. Different surgical options have been reported, such as local, regional, and free flaps for hard and soft palate reconstruction. However, because of the unpredictable vascularization of the palatal tissues and the scarceness of the local soft tissues, local flaps are at high risk for partial or complete failure. The transfer of free vascularized tissue, however, seems to be the most reliable and logical solution for medium- to large-sized fistulas. Among the various free flaps, we chose the radial forearm type because of the pedicle length and the flap thickness.
A comparison study of different facial soft tissue analysis methods.
Kook, Min-Suk; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Ryu, Sun-Youl; Cho, Jin-Hyoung; Lee, Jae-Seo; Yoon, Suk-Ja; Kim, Min-Soo; Shin, Hyo-Keun
2014-07-01
The purpose of this study was to evaluate several different facial soft tissue measurement methods. After marking 15 landmarks in the facial area of 12 mannequin heads of different sizes and shapes, facial soft tissue measurements were performed by the following 5 methods: Direct anthropometry, Digitizer, 3D CT, 3D scanner, and DI3D system. With these measurement methods, 10 measurement values representing the facial width, height, and depth were determined twice with a one week interval by one examiner. These data were analyzed with the SPSS program. The position created based on multi-dimensional scaling showed that direct anthropometry, 3D CT, digitizer, 3D scanner demonstrated relatively similar values, while the DI3D system showed slightly different values. All 5 methods demonstrated good accuracy and had a high coefficient of reliability (>0.92) and a low technical error (<0.9 mm). The measured value of the distance between the right and left medial canthus obtained by using the DI3D system was statistically significantly different from that obtained by using the digital caliper, digitizer and laser scanner (p < 0.05), but the other measured values were not significantly different. On evaluating the reproducibility of measurement methods, two measurement values (Ls-Li, G-Pg) obtained by using direct anthropometry, one measurement value (N'-Prn) obtained by using the digitizer, and four measurement values (EnRt-EnLt, AlaRt-AlaLt, ChRt-ChLt, Sn-Pg) obtained by using the DI3D system, were statistically significantly different. However, the mean measurement error in every measurement method was low (<0.7 mm). All measurement values obtained by using the 3D CT and 3D scanner did not show any statistically significant difference. The results of this study show that all 3D facial soft tissue analysis methods demonstrate favorable accuracy and reproducibility, and hence they can be used in clinical practice and research studies. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Read disturb errors in a CMOS static RAM chip [radiation hardened for spacecraft]
NASA Technical Reports Server (NTRS)
Wood, Steven H.; Marr, James C., IV; Nguyen, Tien T.; Padgett, Dwayne J.; Tran, Joe C.; Griswold, Thomas W.; Lebowitz, Daniel C.
1989-01-01
Results are reported from an extensive investigation into pattern-sensitive soft errors (read disturb errors) in the TCC244 CMOS static RAM chip. The TCC244, also known as the SA2838, is a radiation-hard single-event-upset-resistant 4 x 256 memory chip. This device is being used by the Jet Propulsion Laboratory in the Galileo and Magellan spacecraft, which will have encounters with Jupiter and Venus, respectively. Two aspects of the part's design are shown to result in the occurrence of read disturb errors: the transparence of the signal path from the address pins to the array of cells, and the large resistance in the Vdd and Vss lines of the cells in the center of the array. Probe measurements taken during a read disturb failure illustrate how address skews and the data pattern in the chip combine to produce a bit flip. A capacitive charge pump formed by the individual cell capacitances and the resistance in the supply lines pumps down both the internal cell voltage and the local supply voltage until a bit flip occurs.
Beam hardening correction in CT myocardial perfusion measurement
NASA Astrophysics Data System (ADS)
So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim
2009-05-01
This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and of the animals most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings of the correction algorithm, which depend upon the anatomy of the scanned subject, for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
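The correction loop described above (threshold, forward project, estimate the per-ray error with a polynomial, back-project, subtract a scaled error image) can be sketched as follows. The toy phantom, polynomial coefficients, threshold, scale factor, and the use of scikit-image's filtered back projection are illustrative assumptions, not the paper's calibrated implementation.

```python
import numpy as np
from skimage.transform import radon, iradon

def bh_correct(image, angles, high_thresh, poly_coeffs, scale):
    """Image-domain beam-hardening correction sketch."""
    high_map = np.where(image >= high_thresh, image, 0.0)      # bone/contrast material only
    high_sino = radon(high_map, theta=angles, circle=True)     # forward projection
    err_sino = np.polyval(poly_coeffs, high_sino)              # polynomial per-ray BH error
    err_image = iradon(err_sino, theta=angles,                 # reconstruct the error image
                       filter_name="ramp", circle=True)
    return image - scale * err_image                           # subtract scaled error image

# Toy phantom: a soft-tissue disc with two high-attenuation inserts.
n = 128
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phantom = np.where(xx**2 + yy**2 < 0.9**2, 1.0, 0.0)
phantom += np.where((xx - 0.4)**2 + yy**2 < 0.1**2, 2.0, 0.0)
phantom += np.where((xx + 0.4)**2 + yy**2 < 0.1**2, 2.0, 0.0)

angles = np.linspace(0.0, 180.0, 180, endpoint=False)
corrected = bh_correct(phantom, angles, high_thresh=1.5, poly_coeffs=[1e-4, 0.0, 0.0], scale=1.0)
print(corrected.shape)
```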
Link Performance Analysis and monitoring - A unified approach to divergent requirements
NASA Astrophysics Data System (ADS)
Thom, G. A.
Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit Error Rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, derived real-time BER from frame sync errors, and Data Quality Analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with his needs.
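One of the listed techniques, deriving a real-time BER from frame sync errors, can be illustrated under a simplified model: assume independent bit errors and declare a sync error whenever more than a tolerated number of bits in a fixed-length sync word disagree with the pattern. The sync length, tolerance, and counts below are hypothetical, and the paper's actual estimator may differ.

```python
import math

def ber_from_sync_errors(sync_errors, frames_observed, sync_len_bits, tolerance_bits=0):
    """Invert the sync-error probability numerically to estimate the channel BER."""
    p_sync_err = sync_errors / frames_observed
    lo, hi = 0.0, 0.5
    for _ in range(60):                       # bisection on the bit error probability p
        p = 0.5 * (lo + hi)
        p_ok = sum(math.comb(sync_len_bits, j) * p**j * (1 - p)**(sync_len_bits - j)
                   for j in range(tolerance_bits + 1))
        if 1.0 - p_ok < p_sync_err:
            lo = p
        else:
            hi = p
    return 0.5 * (lo + hi)

# Example: 37 sync-word errors observed in 1e6 frames with a 24-bit sync word.
print(f"estimated BER ~ {ber_from_sync_errors(37, 1_000_000, 24):.2e}")
```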
Classification Model for Forest Fire Hotspot Occurrences Prediction Using ANFIS Algorithm
NASA Astrophysics Data System (ADS)
Wijayanto, A. K.; Sani, O.; Kartika, N. D.; Herdiyeni, Y.
2017-01-01
This study proposed the application of a data mining technique, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS), to forest fire hotspot data to develop classification models for hotspot occurrence in Central Kalimantan. A hotspot is a point indicated as the location of a fire. In this study, the hotspot distribution is categorized into true alarms and false alarms. ANFIS is a soft computing method in which a given input-output data set is expressed in a fuzzy inference system (FIS). The FIS implements a nonlinear mapping from its input space to the output space. The method of this study classified hotspots as target objects by correlating spatial attribute data, using three folds in the ANFIS algorithm to obtain the best model. The best result, obtained from the 3rd fold, provided a low training error (error = 0.0093676) and also a low testing error (error = 0.0093676). Distance to road is the most influential attribute for the probability of true and false alarms, reflecting the higher level of human activity near roads. This classification model can be used to develop an early warning system for forest fires.
De Rosario, Helios; Page, Alvaro; Mata, Vicente
2014-05-07
This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by just adding a weighting factor equal to the square of angular velocity in Woltring׳s equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to make explicit calculations of the helical axis, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated to small and medium deformations, but it was successfully reduced by the POKE estimation. Copyright © 2014 Elsevier Ltd. All rights reserved.
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by PA = C[Ps^2 + Pi^2 + Pphi^2 - 2r·Ps·Pi]^(1/2), where PA, Ps, Pi, and Pphi are the percentage errors of the age, of the spontaneous track density, of the induced track density, and of the neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
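A worked numeric illustration of the quoted formula, with hypothetical percentage errors and the constant C taken as 1 for simplicity (the paper defines C and derives the inputs from counting statistics); it also shows how a positive correlation r improves the age error:

```python
import math

def age_percentage_error(P_s, P_i, P_phi, r, C=1.0):
    """P_A = C * sqrt(P_s^2 + P_i^2 + P_phi^2 - 2*r*P_s*P_i)."""
    return C * math.sqrt(P_s**2 + P_i**2 + P_phi**2 - 2.0 * r * P_s * P_i)

# Hypothetical inputs: 8% and 6% track-density errors and a 3% neutron-dose error.
for r in (0.0, 0.5, 0.9):
    print(f"r = {r:.1f}: P_A = {age_percentage_error(8.0, 6.0, 3.0, r):.2f}%")
```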
Error Analysis and Validation for Insar Height Measurement Induced by Slant Range
NASA Astrophysics Data System (ADS)
Zhang, X.; Li, T.; Fan, W.; Geng, X.
2018-04-01
The InSAR technique is an important method for large-area DEM extraction. Several factors have a significant influence on the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, the error propagation model was derived assuming no coupling among different factors, which directly characterises the relationship between slant range error and height measurement error. A theoretical analysis in combination with TanDEM-X parameters was then implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the slant-range-induced InSAR error model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
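As a rough illustration of how a slant range error maps into a height error, one can use the simplified side-looking geometry h = H − r·cos(θ), where H is the platform altitude, r the slant range, and θ the look angle; differentiating gives σ_h ≈ |cos θ|·σ_r. This flat-Earth approximation is for intuition only and is not the error propagation model derived in the paper; the look angles and slant-range error below are assumptions.

```python
import math

def height_error_from_slant_range(sigma_r, look_angle_deg):
    """Simplified flat-Earth mapping of a slant-range error into a height error."""
    return abs(math.cos(math.radians(look_angle_deg))) * sigma_r

# Hypothetical 1.0 m slant-range error at look angles spanning a typical SAR swath.
for theta in (20.0, 35.0, 50.0):
    print(f"look angle {theta:.0f} deg: sigma_h ~ {height_error_from_slant_range(1.0, theta):.2f} m")
```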
SU-E-J-125: Classification of CBCT Noises in Terms of Their Contribution to Proton Range Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brousmiche, S; Orban de Xivry, J; Macq, B
2014-06-01
Purpose: This study assesses the potential use of CBCT images in adaptive protontherapy by estimating the contribution of the main sources of noise and calibration errors to the proton range uncertainty. Methods: Measurements intended to highlight each particular source have been achieved by adapting either the testbench configuration, e.g. use of filtration, fan-beam collimation, beam stop arrays, phantoms and detector reset light, or the sequence of correction algorithms, including water precorrection. Additional Monte-Carlo simulations have been performed to complement these measurements, especially for the beam hardening and the scatter cases. Simulations of proton beam penetration through the resulting images have then been carried out to quantify the range change due to these effects. The particular case of a brain irradiation is considered, mainly because of the multiple effects that the skull bones have on the internal soft tissues. Results: On top of the range error sources is the undercorrection of scatter. Its influence has been analyzed from a comparison of fan-beam and full axial FOV acquisitions. In this case, large range errors of about 12 mm can be reached if the assumption is made that the scatter has only a constant contribution over the projection images. Even the detector lag, which a priori induces a much smaller effect, has been shown to contribute up to 2 mm to the overall error if its correction only aims at reducing the skin artefact. This last result can partially be understood by the larger interface between tissues and bones inside the skull. Conclusion: This study has set the basis for a more systematic analysis of the effect of CBCT noise on range uncertainties, based on a combination of measurements, simulations and theoretical results. With our method, even more subtle effects such as the cone-beam artifact or the detector lag can be assessed. SBR and JOR are financed by iMagX, a public-private partnership between the region Wallone of Belgium and IBA under convention #1217662.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sick, J; Rancilio, N; Fulkerson, C
Purpose: Ultrasound (US) is a noninvasive, nonradiographic imaging technique with high spatial and temporal resolution that can be used for localizing soft-tissue structures and tumors in real time during radiotherapy (inter- and intra-fraction). A detailed methodology integrating 3D-US within RT is presented. This method is easier to adopt into current treatment protocols than existing US-based systems and reduces user variability in image acquisition, thus eliminating the transducer-induced changes that limit the CT planning system. Methods: We designed an in-house integrated US manipulator and platform to relate the CT, 3D-US and linear accelerator coordinate systems. To validate the platform, an agar-based phantom with measured densities and speed of sound consistent with the tissues surrounding the bladder was rotated (0–45°), resulting in translations (up to 55 mm) relative to the CT and US coordinate systems. After acquiring and integrating CT and US images into the treatment planning system, US-to-US and US-to-CT images were co-registered to re-align the phantom relative to the linear accelerator. Errors in the transformation matrix components were calculated to determine the precision of this method under different patient positions. Results: Statistical errors from US-US registrations for different patient orientations ranged from 0.06–1.66 mm for the x, y, and z translational components, and 0.00–1.05° for the rotational components. Statistical errors from US-CT registrations were 0.23–1.18 mm for the x, y and z translational components, and 0.08–2.52° for the rotational components. Conclusion: Based on our results, this is consistent with currently used techniques for positioning prostate patients if couch re-positioning is less than a 5 degree rotation. We are now testing this on a dog patient to obtain both inter- and intra-fractional positional errors. Additional design considerations include the future use of ultrasound-based functionality (photoacoustics, radioacoustics, Doppler) to monitor blood flow and hypoxia and/or in-vivo dosimetry for applications in other therapeutic techniques, such as hyperthermia, anti-angiogenesis, and particle therapy.
Regulation of error-prone translesion synthesis by Spartan/C1orf124
Kim, Myoung Shin; Machida, Yuka; Vashisht, Ajay A.; Wohlschlegel, James A.; Pang, Yuan-Ping; Machida, Yuichi J.
2013-01-01
Translesion synthesis (TLS) employs low fidelity polymerases to replicate past damaged DNA in a potentially error-prone process. Regulatory mechanisms that prevent TLS-associated mutagenesis are unknown; however, our recent studies suggest that the PCNA-binding protein Spartan plays a role in suppression of damage-induced mutagenesis. Here, we show that Spartan negatively regulates error-prone TLS that is dependent on POLD3, the accessory subunit of the replicative DNA polymerase Pol δ. We demonstrate that the putative zinc metalloprotease domain SprT in Spartan directly interacts with POLD3 and contributes to suppression of damage-induced mutagenesis. Depletion of Spartan induces complex formation of POLD3 with Rev1 and the error-prone TLS polymerase Pol ζ, and elevates mutagenesis that relies on POLD3, Rev1 and Pol ζ. These results suggest that Spartan negatively regulates POLD3 function in Rev1/Pol ζ-dependent TLS, revealing a previously unrecognized regulatory step in error-prone TLS. PMID:23254330
Sun, Zhi-jian; Jin, Jin; Qiu, Gui-xing; Gao, Peng; Liu, Yong
2015-02-26
Tumor-induced osteomalacia (TIO) is a rare syndrome typically caused by mesenchymal tumors. It has been shown that complete tumor resection may be curative. However, to our knowledge, there has been no report of a large cohort examining different surgical approaches. This study aimed to assess outcomes of different surgical options in patients with tumor-induced osteomalacia at a single institution. Patients with extremity tumors treated in our hospital from January 2004 to July 2012 were identified. The minimum follow-up period was 12 months. Patient demographics, tumor location, preoperative preparation, and type of surgery were summarized, and clinical outcomes were recorded. Successful treatment was defined as significant symptom improvement, normal serum phosphorus, and significant improvement or normalization of bone mineral density at the last follow-up. Differences between patients with soft tissue tumors and bone tumors were compared. There were 40 patients (24 male and 16 female) identified, with an average age of 44 years. The tumors were isolated in either soft tissue (25 patients) or bone (12 patients), and combined soft tissue and bone invasion was observed in 3 patients. For the primary surgery, tumor resection and tumor curettage were performed. After initial surgical treatment, six patients received a second surgery. Four patients were found to have malignant tumors based on histopathology. With a minimum follow-up period of 12 months, 80% of patients (32/40) were treated successfully, including 50% of patients (2/4) with malignant tumors. Compared to patients with bone tumors, surgical results were better in patients with soft tissue tumors. Surgical treatment was an effective approach for TIO. Rather than tumor curettage, tumor resection is the preferred option for these tumors.
Bosy-Westphal, Anja; Later, Wiebke; Schautz, Britta; Lagerpusch, Merit; Goele, Kristin; Heller, Martin; Glüer, Claus-C; Müller, Manfred J
2011-07-01
Recent studies report a significant gain in bone mineral density (BMD) after diet-induced weight loss. This might be explained by a measurement artefact. We therefore investigated the impact of intra- and extra-osseous soft tissue composition on bone measurements by dual X-ray absorptiometry (DXA) in a longitudinal study of diet-induced weight loss and regain in 55 women and 17 men (19-46 years, BMI 28.2-46.8 kg/m(2)). Total and regional BMD were measured before and after 12.7 ± 2.2 week diet-induced weight loss and 6 months after significant weight regain (≥30%). Hydration of fat free mass (FFM) was assessed by a 3-compartment model. Skeletal muscle (SM) mass, extra-osseous adipose tissue, and bone marrow were measured by whole body magnetic resonance imaging (MRI). Mean weight loss was -9.2 ± 4.4 kg (P < 0.001) and was followed by weight regain in a subgroup of 24 subjects (+6.3 ± 2.9 kg; P < 0.001). With weight loss, bone marrow and extra-osseous adipose tissue decreased whereas BMD increased at the total body, lumbar spine, and the legs (women only) but decreased at the pelvis (men only, all P < 0.05). The decrease in BMD(pelvis) correlated with the loss in visceral adipose tissue (VAT) (P < 0.05). Increases in BMD(legs) were reversed after weight regain and inversely correlated with BMD(legs) decreases. No other associations between changes in BMD and intra- or extra-osseous soft tissue composition were found. In conclusion, changes in extra-osseous soft tissue composition had a minor contribution to changes in BMD with weight loss and decreases in bone marrow adipose tissue (BMAT) were not related to changes in BMD.
HYPAZ: Hypertension Induced by Pazopanib
2016-01-04
Renal Cell Carcinoma; Soft Tissue Sarcoma; Glioblastoma; Ovarian Cancer; Cervical Cancer; Breast Cancer; Non-small Cell Lung Cancer; Small Cell Lung Cancer; Pancreatic Cancer; Melanoma; Gastrointestinal Cancer
Tailoring superelasticity of soft magnetic materials
NASA Astrophysics Data System (ADS)
Cremer, Peet; Löwen, Hartmut; Menzel, Andreas M.
2015-10-01
Embedding magnetic colloidal particles in an elastic polymer matrix leads to smart soft materials that can reversibly be addressed from outside by external magnetic fields. We discover a pronounced nonlinear superelastic stress-strain behavior of such materials using numerical simulations. This behavior results from a combination of two stress-induced mechanisms: a detachment mechanism of embedded particle aggregates and a reorientation mechanism of magnetic moments. The superelastic regime can be reversibly tuned or even be switched on and off by external magnetic fields and thus be tailored during operation. Similarities to the superelastic behavior of shape-memory alloys suggest analogous applications, with the additional benefit of reversible switchability and a higher biocompatibility of soft materials.
Li, Beiwen; Liu, Ziping; Zhang, Song
2016-10-03
We propose a hybrid computational framework to reduce motion-induced measurement error by combining Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 is to extract continuous relative phase maps for each isolated object with the single-shot FTP method and spatial phase unwrapping; Step 2 is to obtain an absolute phase map of the entire scene using the PSP method, albeit motion-induced errors exist on the extracted absolute phase map; and Step 3 is to shift the continuous relative phase maps from Step 1 to generate final absolute phase maps for each isolated object by referring to the absolute phase map with error from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated rapidly moving objects.
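Step 3 above amounts to rigidly shifting each object's continuous relative phase by an integer multiple of 2π estimated from the (error-prone) PSP absolute phase over the object's pixels. A minimal sketch of that referencing step follows; the phase maps and mask are synthetic placeholders, and the FTP extraction and PSP unwrapping of Steps 1 and 2 are assumed to have been done already.

```python
import numpy as np

def reference_relative_phase(phi_rel, phi_abs_err, mask):
    """Shift an unwrapped relative phase map onto the absolute scale by one 2*pi multiple."""
    k = np.round(np.median((phi_abs_err[mask] - phi_rel[mask]) / (2.0 * np.pi)))
    return phi_rel + 2.0 * np.pi * k

# Synthetic example: the relative phase is offset from the truth by exactly 3 fringes.
rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 40.0, size=(32, 32))
phi_rel = truth - 3 * 2.0 * np.pi
phi_abs_err = truth + rng.normal(0.0, 0.5, size=truth.shape)   # noisy/erroneous PSP phase
mask = np.ones_like(truth, dtype=bool)
recovered = reference_relative_phase(phi_rel, phi_abs_err, mask)
print(np.allclose(recovered, truth))                            # True
```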
Favia, G; Mariggio, M A; Maiorano, F; Cassano, A; Capodiferro, S; Ribatti, D
2008-01-01
In this study we investigated the property of a new medical substance, in the form of a gel compound containing four amino acids (glycine, leucine, proline, lysine) and sodium hyaluronate (AMINOGAM), to accelerate the wound healing process of the soft oral tissues and to promote angiogenesis in vivo in the chick embryo chorioallantoic membrane (CAM) vascular proliferation assay. Furthermore, we investigated the capacity of AMINOGAM to induce the expression of an angiogenic cytokine, namely vascular endothelial growth factor (VEGF), in human fibroblasts in vitro. Results showed that AMINOGAM promoted wound healing in post-surgical wounds (after teeth extraction, oral laser surgery with secondary healing without direct suture of the surgical wound, and after dental implant insertion). It also stimulated angiogenesis in vivo in the CAM assay, and the response was similar to that obtained with vascular endothelial growth factor, a well-known angiogenic cytokine, tested in the same assay; this was confirmed by clinical outcomes, which showed a reduction of the healing time of oral soft tissues after three different kinds of surgery and also the absence of post-operative infections.
Petrungaro, Paul S; Gonzalez, Santiago; Villegas, Carlos
2018-02-01
As dental implants become more popular for the treatment of partial and total edentulism and treatment of "terminal dentitions," techniques for the management of the atrophic posterior maxillae continue to evolve. Although dental implants carry a high success rate long term, attention must be given to the growing numbers of revisions or retreatment of cases that have had previous dental implant treatment and/or advanced bone replacement procedures that, due to either poor patient compliance, iatrogenic error, or poor quality of the pre-existing alveolar and/or soft tissues, have led to large osseous defects, possibly with deficient soft-tissue volume. In the posterior maxillae, where the poorest quality of bone in the oral cavity exists, achieving regeneration of the alveolar bone and adequate volume of soft tissue remains a complex procedure. This is made even more difficult when dealing with loss of dental implants previously placed, aggressive bone reduction required in various implant procedures, and/or residual sinus infections precluding proper closure of the oral wound margins. The purpose of this article is to outline a technique for the total closure of large oro-antral communications, with underlying osseous defects greater than 15 mm in width and 30 mm in length, for which multiple previous attempts at closure had failed, to achieve not only the reconstruction of adequate volume and quality of soft tissues in the area of the previous fistula, but also total regeneration of the osseous structures in the area of the large void.
IMRT QA: Selecting gamma criteria based on error detection sensitivity.
Steers, Jennifer M; Fraass, Benedick A
2016-04-01
The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
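The error curve idea can be sketched with a brute-force 1D global gamma on a synthetic profile: induce a range of MU-like scaling errors, compute the passing rate for each, and read off the sensitivity from the resulting curve. The profile, grid, and induced errors below are hypothetical; the study uses measured 3D ArcCHECK data and clinical DMLC IMRT plans.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, dx_mm, dose_pct, dta_mm, threshold_pct):
    """Brute-force 1D global gamma passing rate (%) for points above the dose threshold."""
    x = np.arange(len(dose_ref)) * dx_mm
    norm_dose = dose_pct / 100.0 * dose_ref.max()    # global dose-difference normalization
    cutoff = threshold_pct / 100.0 * dose_ref.max()  # low-dose threshold
    gammas = []
    for i, d_e in enumerate(dose_eval):
        if dose_ref[i] < cutoff:
            continue
        g2 = ((x - x[i]) / dta_mm) ** 2 + ((dose_ref - d_e) / norm_dose) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

# Synthetic flat-topped "measured" profile on a 0.5 mm grid.
x = np.linspace(-50.0, 50.0, 201)
ref = np.exp(-(x / 30.0) ** 4)

# Error curve: induce MU-like scaling errors in the "calculation" and record pass rates.
for err in (-6, -3, 0, 3, 6):
    calc = ref * (1.0 + err / 100.0)
    rate = gamma_pass_rate(ref, calc, dx_mm=0.5, dose_pct=3.0, dta_mm=3.0, threshold_pct=10.0)
    print(f"MU error {err:+d}%: 3%/3 mm pass rate = {rate:.1f}%")
```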
Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander
2015-04-01
Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
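As a hedged illustration of the Random Forest approach described above, the sketch below fits a regressor on the three stated inputs (hourly traffic volume, percentage of heavy vehicles, average speed) and checks stability with 10-fold cross validation. The data here are synthetic placeholders; the original study used field measurements from Patiala city.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, KFold

# Synthetic stand-in data: traffic volume (veh/h), % heavy vehicles, average speed (km/h) -> Leq (dBA)
rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(200, 3000, 500),
                     rng.uniform(2, 40, 500),
                     rng.uniform(20, 70, 500)])
y = 10 * np.log10(X[:, 0]) + 0.1 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(f"10-fold CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```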
Cai, Jian-Hua
2017-09-01
To eliminate the random error of the derivative near-IR (NIR) spectrum and to improve model stability and the prediction accuracy of the gluten protein content, a combined method is proposed for pretreatment of the NIR spectrum based on both empirical mode decomposition and the wavelet soft-threshold method. The principle and the steps of the method are introduced and the denoising effect is evaluated. The wheat gluten protein content is calculated based on the denoised spectrum, and the results are compared with those of the nine-point smoothing method and the wavelet soft-threshold method. Experimental results show that the proposed combined method is effective in completing pretreatment of the NIR spectrum, and the proposed method improves the accuracy of detection of wheat gluten protein content from the NIR spectrum.
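A minimal sketch of the combined EMD plus wavelet soft-threshold pretreatment is given below, assuming the third-party PyEMD (EMD-signal) and PyWavelets packages are available. The choice of which intrinsic mode functions (IMFs) to threshold, the wavelet, and the threshold rule are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt
from PyEMD import EMD   # provided by the EMD-signal package (assumed available)

def emd_wavelet_denoise(spectrum, wavelet="db4", level=3, noisy_imfs=2):
    """Decompose into IMFs, soft-threshold the noisiest (high-frequency) IMFs, then recombine."""
    imfs = EMD()(spectrum)
    cleaned = []
    for k, imf in enumerate(imfs):
        if k < noisy_imfs:                                        # illustrative choice
            coeffs = pywt.wavedec(imf, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
            thr = sigma * np.sqrt(2 * np.log(len(imf)))           # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            imf = pywt.waverec(coeffs, wavelet)[:len(imf)]
        cleaned.append(imf)
    return np.sum(cleaned, axis=0)

# Example: denoise a synthetic derivative NIR spectrum
x = np.linspace(0, 1, 1024)
noisy = np.sin(8 * np.pi * x) * np.exp(-3 * x) + 0.05 * np.random.randn(x.size)
denoised = emd_wavelet_denoise(noisy)
```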
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, T; Kumaraswamy, L
Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (3 degree uniform shift). 2D and 3D gamma evaluations were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between the original plan and the error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, errors that a conventional gamma-based pre-treatment QA might not necessarily detect.
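The DVH-based comparison described above can be sketched in a few lines: compute a clinically relevant metric (here D95 of a target structure, chosen for illustration) from the planned and the error-induced voxel doses and report the difference, which may be clinically meaningful even when the gamma pass rate stays high. The voxel doses below are synthetic placeholders, not 3DVH output.

```python
import numpy as np

def dvh_d95(dose_in_structure):
    """D95: dose received by at least 95% of the structure volume, from voxel doses."""
    return np.percentile(dose_in_structure, 5)

rng = np.random.default_rng(1)
ptv_dose = rng.normal(60.0, 1.0, 5000)      # toy PTV voxel doses (Gy) from the original plan
error_dose = ptv_dose * 0.97                # e.g., a -3% delivered-dose (MU-like) error

d95_plan, d95_err = dvh_d95(ptv_dose), dvh_d95(error_dose)
print(f"D95 planned {d95_plan:.2f} Gy vs. delivered {d95_err:.2f} Gy "
      f"({100 * (d95_err - d95_plan) / d95_plan:+.1f}%)")
```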
Estimation of the laser cutting operating cost by support vector regression methodology
NASA Astrophysics Data System (ADS)
Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam
2016-09-01
Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position as well as the workpiece material. In this article, the process factors investigated were: laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. The intelligent soft computing scheme of support vector regression (SVR) was implemented. The performance of the proposed estimator was confirmed with the simulation results. The SVR results are then compared with artificial neural network and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through the SVR compared to other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
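A hedged sketch of an SVR-based cost estimator of the kind described above is shown below, using scikit-learn. The five inputs mirror the factors listed in the abstract, but the data, kernel, and hyperparameters are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical inputs: laser power (W), cutting speed (mm/min), gas pressure (bar),
# focal position (mm), sheet thickness (mm) -> operating cost (arbitrary units)
rng = np.random.default_rng(0)
X = rng.uniform([1000, 500, 5, -2, 1], [4000, 5000, 15, 2, 8], size=(300, 5))
y = 0.002 * X[:, 0] + 0.3 * X[:, 4] - 0.0005 * X[:, 1] + rng.normal(0, 0.2, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
svr.fit(X_tr, y_tr)
print("test RMSE:", mean_squared_error(y_te, svr.predict(X_te)) ** 0.5)
```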
Speech outcome after early repair of cleft soft palate using Furlow technique.
Abdel-Aziz, Mosaad
2013-01-01
The earlier a palatal cleft is closed, the better the speech outcome and the fewer the compensatory articulation errors; however, dissection on the hard palate may interfere with facial growth. In Furlow palatoplasty, dissection on the hard palate is not needed and surgery is usually limited to the soft palate, so the technique has no deleterious effect on facial growth. The aim of this study was to assess the efficacy of the Furlow palatoplasty technique on the speech of young infants with cleft soft palate. Twenty-one infants with cleft soft palate were included in this study; their ages ranged from 3 to 6 months. Their clefts were repaired using the Furlow technique. The patients were followed up for at least 4 years; at the end of the follow-up period they underwent flexible nasopharyngoscopy to assess velopharyngeal closure and speech analysis using auditory perceptual assessment. Eighteen cases (85.7%) showed complete velopharyngeal closure, 1 case (4.8%) showed borderline competence, and 2 cases (9.5%) showed borderline incompetence. Normal resonance was attained in 18 patients (85.7%) and mild hypernasality in 3 patients (14.3%); no patient demonstrated nasal emission of air. Speech therapy was beneficial for cases with residual hypernasality; no case needed secondary corrective surgery. Furlow palatoplasty at a younger age has a favorable speech outcome with no detectable morbidity. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Karuppanan, Udayakumar; Unni, Sujatha Narayanan; Angarai, Ganesan R
2017-01-01
Assessment of mechanical properties of soft matter is a challenging task in a purely noninvasive and noncontact environment. As tissue mechanical properties play a vital role in determining tissue health status, such noninvasive methods offer great potential in framing large-scale medical screening strategies. The digital speckle pattern interferometry (DSPI)-based image capture and analysis system described here is capable of extracting the deformation information from a single acquired fringe pattern. Such a method of analysis would be required in the case of the highly dynamic nature of speckle patterns derived from soft tissues while applying mechanical compression. Soft phantoms mimicking breast tissue optical and mechanical properties were fabricated and tested in the DSPI out of plane configuration set up. Hilbert transform (HT)-based image analysis algorithm was developed to extract the phase and corresponding deformation of the sample from a single acquired fringe pattern. The experimental fringe contours were found to correlate with numerically simulated deformation patterns of the sample using Abaqus finite element analysis software. The extracted deformation from the experimental fringe pattern using the HT-based algorithm is compared with the deformation value obtained using numerical simulation under similar conditions of loading and the results are found to correlate with an average %error of 10. The proposed method is applied on breast phantoms fabricated with included subsurface anomaly mimicking cancerous tissue and the results are analyzed.
Wu, Jinpeng; Sallis, Shawn; Qiao, Ruimin; Li, Qinghao; Zhuo, Zengqing; Dai, Kehua; Guo, Zixuan; Yang, Wanli
2018-04-17
Energy storage has become more and more a limiting factor of today's sustainable energy applications, including electric vehicles and green electric grid based on volatile solar and wind sources. The pressing demand of developing high-performance electrochemical energy storage solutions, i.e., batteries, relies on both fundamental understanding and practical developments from both the academy and industry. The formidable challenge of developing successful battery technology stems from the different requirements for different energy-storage applications. Energy density, power, stability, safety, and cost parameters all have to be balanced in batteries to meet the requirements of different applications. Therefore, multiple battery technologies based on different materials and mechanisms need to be developed and optimized. Incisive tools that could directly probe the chemical reactions in various battery materials are becoming critical to advance the field beyond its conventional trial-and-error approach. Here, we present detailed protocols for soft X-ray absorption spectroscopy (sXAS), soft X-ray emission spectroscopy (sXES), and resonant inelastic X-ray scattering (RIXS) experiments, which are inherently elemental-sensitive probes of the transition-metal 3d and anion 2p states in battery compounds. We provide the details on the experimental techniques and demonstrations revealing the key chemical states in battery materials through these soft X-ray spectroscopy techniques.
Intraoperative Radiation Therapy: Characterization and Application
1989-03-01
Carcinomas of the pancreas, stomach, colon, and rectum, and sarcomas of soft tissue are prime candidates for IORT.
Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A
2015-11-05
Soft tissue artifact (STA) distorts marker-based knee kinematics measures and makes them difficult to use in clinical practice. None of the current methods designed to compensate for STA is suitable, but multi-body optimization (MBO) has demonstrated encouraging results and can be improved. The goal of this study was to develop and validate the performance of knee joint models, with anatomical and subject-specific kinematic constraints, used in MBO to reduce STA errors. Twenty subjects were recruited: 10 healthy and 10 osteoarthritis (OA) subjects. Subject-specific knee joint models were evaluated by comparing dynamic knee kinematics recorded by a motion capture system (KneeKG™) and optimized with MBO to quasi-static knee kinematics measured by a low-dose, upright, biplanar radiographic imaging system (EOS(®)). Errors due to STA ranged from 1.6° to 22.4° for knee rotations and from 0.8 mm to 14.9 mm for knee displacements in healthy and OA subjects. Subject-specific knee joint models were most effective in compensating for STA in terms of abduction-adduction, internal-external rotation and antero-posterior displacement. Root mean square errors with subject-specific knee joint models ranged from 2.2±1.2° to 6.0±3.9° for knee rotations and from 2.4±1.1 mm to 4.3±2.4 mm for knee displacements in healthy and OA subjects, respectively. Our study shows that MBO can be improved with subject-specific knee joint models, and that the quality of the motion capture calibration is critical. Future investigations should focus on more refined knee joint models to reproduce specific OA knee geometry and physiology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong
2017-07-01
Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms-artificial neural networks (ANNs) and support vector regression (SVR)-are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3mm, beyond the general threshold of surgical accuracy and suitable for high fidelity AR systems. The SVR models perform better than the ANN's, with positional errors for SVR models reaching under 0.2mm. The results represent an improvement over existing deformation models for real time applications, providing smaller errors and high patient-specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue. Copyright © 2017 Elsevier B.V. All rights reserved.
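The surrogate-model idea described above (machine learning trained on pre-computed FEM runs, then evaluated in real time) can be sketched as follows. The training matrices here are random placeholders standing in for exported FEM displacements, and the load parametrization and SVR settings are illustrative assumptions.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set exported from pre-computed FEM simulations:
# inputs  = load magnitude, surface position (x, y, z), and two inclination angles
# outputs = displacement (dx, dy, dz) of each tumour-mesh node, flattened
rng = np.random.default_rng(0)
n_runs, n_nodes = 500, 50
X = rng.uniform(-1, 1, size=(n_runs, 6))
Y = rng.normal(0, 0.1, size=(n_runs, 3 * n_nodes))       # stand-in for FEM displacements

surrogate = MultiOutputRegressor(
    make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01)))
surrogate.fit(X, Y)

# At run time, a measured load vector yields the full deformation field almost instantly
load = rng.uniform(-1, 1, size=(1, 6))
displacements = surrogate.predict(load).reshape(n_nodes, 3)
print(displacements.shape)
```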
NASA Astrophysics Data System (ADS)
Sweeney, K.; Major, J. J.
2016-12-01
Advances in structure-from-motion (SfM) photogrammetry and point cloud comparison have fueled a proliferation of studies using modern imagery to monitor geomorphic change. These techniques also have obvious applications for reconstructing historical landscapes from vertical aerial imagery, but known challenges include insufficient photo overlap, systematic "doming" induced by photo-spacing regularity, missing metadata, and lack of ground control. Aerial imagery of landscape change in the North Fork Toutle River (NFTR) following the 1980 eruption of Mount St. Helens is a prime dataset to refine methodologies. In particular, (1) 14-μm film scans are available for 1:9600 images at 4-month intervals from 1980 - 1986, (2) the large magnitude of landscape change swamps systematic error and noise, and (3) stable areas (primary deposit features, roads, etc.) provide targets for both ground control and matching to modern lidar. Using AgiSoft PhotoScan, we create digital surface models from the NFTR imagery and examine how common steps in SfM workflows affect results. Tests of scan quality show high-resolution, professional film scans are superior to office scans of paper prints, reducing spurious points related to scan infidelity and image damage. We confirm earlier findings that cropping and rotating images improves point matching and the final surface model produced by the SfM algorithm. We demonstrate how the iterative closest point algorithm, implemented in CloudCompare and using modern lidar as a reference dataset, can serve as an adequate substitute for absolute ground control. Elevation difference maps derived from our surface models of Mount St. Helens show patterns consistent with field observations, including channel avulsion and migration, though systematic errors remain. We suggest that subtracting an empirical function fit to the long-wavelength topographic signal may be one avenue for correcting systematic error in similar datasets.
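The iterative closest point (ICP) step mentioned above, aligning an SfM-derived cloud to modern lidar in place of absolute ground control, could look roughly like the sketch below. It assumes the Open3D library; the file names are purely illustrative and the correspondence distance would need tuning to the data.

```python
import numpy as np
import open3d as o3d   # assumed available

# Load the SfM-derived point cloud and the reference lidar cloud (file names illustrative)
sfm = o3d.io.read_point_cloud("nftr_1981_sfm.ply")
lidar = o3d.io.read_point_cloud("nftr_modern_lidar.ply")

# ICP on stable areas substitutes for absolute ground control
result = o3d.pipelines.registration.registration_icp(
    sfm, lidar, max_correspondence_distance=5.0, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
sfm.transform(result.transformation)
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
```

In practice the clouds would first be cropped to stable areas (roads, primary deposit features) so the registration is not biased by real geomorphic change.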
Martin, James E.; Swol, Frank Van
2015-07-10
We show that multiaxial fields can induce time-averaged, noncentrosymmetric interactions between particles having polarization anisotropy, yet the multiaxial field itself does not exert either a force or a torque on an isolated particle. These induced interactions lead to particle assemblies whose energy is strongly dependent on both the translational and orientational degrees of freedom of the system. The situation is similar to a collection of permanent dipoles, but the symmetry of the time-averaged interaction is quite distinct, and the scale of the system energy can be dynamically controlled by the magnitude of the applied multiaxial field. In our paper, the case of polarizable rods is considered in detail, and it is suggested that collections of rods embedded in spheres can be used to create a material with a dynamically tunable magnetic permeability or dielectric permittivity. We report on Monte Carlo simulations performed to investigate the behavior of assemblies of both multiaxial-field induced dipoles and permanent dipoles arranged onto two-dimensional lattices. Lastly, the ground state of the induced dipoles is an orientational soft mode of aligned dipoles, whereas that of the permanent dipoles is a vortex state.
NASA Astrophysics Data System (ADS)
Liu, Liping; Sharma, Pradeep
2018-03-01
Soft robotics, energy harvesting, and large-deformation sensing and actuation are just some of the applications that can be enabled by soft dielectrics that demonstrate substantive electromechanical coupling. Most soft dielectrics including elastomers, however, are not piezoelectric and rely on the universally present electrostriction and the Maxwell stress effect to enable the aforementioned applications. Electrostriction is a one-way electromechanical coupling, and the induced elastic strain scales quadratically (∝ E²) with the applied electric field E. The quadratic dependence of electrostriction on the electric field and the one-way coupling imply that (i) a rather high voltage is required to induce appreciable strain, (ii) reversal of an applied bias will not reverse the sign of the deformation, and (iii) since it is a one-way coupling, i.e., electrical stimuli may cause mechanical deformation but electricity cannot be generated by mechanical deformation, prospects for energy harvesting are rather difficult. An interesting approach for realizing an apparent piezoelectric-like behavior is to dope soft dielectrics with immobile charges and dipoles. Such materials, called electrets, are rather unique composites where a secondary material (in principle) is not necessary. Both experiments and supporting theoretical work have shown that soft electrets can exhibit a very large electromechanical coupling, including a piezoelectric-like response. In this work, we present a homogenization theory for electret materials and provide, in addition to several general results, variational bounds and closed-form expressions for specific microstructures such as laminates and ellipsoidal inclusions. While we consider the nonlinear coupled problem, to make analytical progress we work within the small-deformation setting. The specific conditions necessary to obtain a piezoelectric-like response and enhanced electrostriction are highlighted. There are very few universal, microstructure-independent exact results in the theory of composites. We succeed in establishing several such relations in the context of electrets.
Suppression of Ghrelin Exacerbates HFCS-Induced Adiposity and Insulin Resistance
Ma, Xiaojun; Lin, Ligen; Yue, Jing; Wu, Chia-Shan; Guo, Cathy A.; Wang, Ruitao; Yu, Kai-Jiang; Devaraj, Sridevi; Murano, Peter; Chen, Zheng; Sun, Yuxiang
2017-01-01
High fructose corn syrup (HFCS) is widely used as sweetener in processed foods and soft drinks in the United States, largely substituting sucrose (SUC). The orexigenic hormone ghrelin promotes obesity and insulin resistance; ghrelin responds differently to HFCS and SUC ingestion. Here we investigated the roles of ghrelin in HFCS- and SUC-induced adiposity and insulin resistance. To mimic soft drinks, 10-week-old male wild-type (WT) and ghrelin knockout (Ghrelin−/−) mice were subjected to ad lib. regular chow diet supplemented with either water (RD), 8% HFCS (HFCS), or 10% sucrose (SUC). We found that SUC-feeding induced more robust increases in body weight and body fat than HFCS-feeding. Comparing to SUC-fed mice, HFCS-fed mice showed lower body weight but higher circulating glucose and insulin levels. Interestingly, we also found that ghrelin deletion exacerbates HFCS-induced adiposity and inflammation in adipose tissues, as well as whole-body insulin resistance. Our findings suggest that HFCS and SUC have differential effects on lipid metabolism: while sucrose promotes obesogenesis, HFCS primarily enhances inflammation and insulin resistance, and ghrelin confers protective effects for these metabolic dysfunctions. PMID:28629187
Error-Eliciting Problems: Fostering Understanding and Thinking
ERIC Educational Resources Information Center
Lim, Kien H.
2014-01-01
Student errors are springboards for analyzing, reasoning, and justifying. The mathematics education community recognizes the value of student errors, noting that "mistakes are seen not as dead ends but rather as potential avenues for learning." To induce specific errors and help students learn, choose tasks that might produce mistakes.…
Gain dynamics in a soft X-ray laser amplifier perturbed by a strong injected X-ray field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yong; Wang, Shoujun; Oliva, E
2014-01-01
Seeding soft X-ray plasma amplifiers with high harmonics has been demonstrated to generate high-brightness soft X-ray laser pulses with full spatial and temporal coherence. The interaction between the injected coherent field and the swept-gain medium has been modelled. However, no experiment has been conducted to probe the gain dynamics when perturbed by a strong external seed field. Here, we report the first X-ray pump X-ray probe measurement of the nonlinear response of a plasma amplifier perturbed by a strong soft X-ray ultra-short pulse. We injected a sequence of two time-delayed high-harmonic pulses (λ = 18.9 nm) into a collisionally excited nickel-like molybdenum plasma to measure with femtosecond resolution the gain depletion induced by the saturated amplification of the high-harmonic pump and its subsequent recovery. The measured fast gain recovery in 1.5-1.75 ps confirms the possibility to generate ultra-intense, fully phase-coherent soft X-ray lasers by chirped pulse amplification in plasma amplifiers.
Effects of Reynolds number on orifice induced pressure error
NASA Technical Reports Server (NTRS)
Plentovich, E. B.; Gloss, B. B.
1982-01-01
Data previously reported for orifice induced pressure errors are extended to the case of higher Reynolds number flows, and a remedy is presented in the form of a porous metal plug for the orifice. Test orifices with apertures 0.330, 0.660, and 1.321 cm in diam. were fabricated on a flat plate for trials in the NASA Langley wind tunnel at Mach numbers 0.40-0.72. A boundary layer survey rake was also mounted on the flat plate to allow measurement of the total boundary layer pressures at the orifices. At the high Reynolds number flows studied, the orifice induced pressure error was found to be a function of the ratio of the orifice diameter to the boundary layer thickness. The error was effectively eliminated by the insertion of a porous metal disc set flush with the orifice outside surface.
Schaser, Klaus-Dieter; Disch, Alexander C; Stover, John F; Lauffer, Annette; Bail, Herman J; Mittlmeier, Thomas
2007-01-01
Closed soft tissue injury induces progressive microvascular dysfunction and regional inflammation. The authors tested the hypothesis that adverse trauma-induced effects can be reduced by local cooling. While superficial cooling reduces swelling, pain, and cellular oxygen demand, the effects of cryotherapy on posttraumatic microcirculation are incompletely understood. Controlled laboratory study. After a standardized closed soft tissue injury to the left tibial compartment, male rats were randomly subjected to percutaneous perfusion for 6 hours with 0.9% NaCL (controls; room temperature) or cold NaCL (cryotherapy; 8 degrees C) (n = 7 per group). Uninjured rats served as shams (n = 7). Microcirculatory changes and leukocyte adherence were determined by intravital microscopy. Intramuscular pressure was measured, and invasion of granulocytes and macrophages was assessed by immunohistochemistry. Edema and tissue damage was quantified by gravimetry and decreased desmin staining. Closed soft tissue injury significantly decreased functional capillary density (240 +/- 12 cm(-1)); increased microvascular permeability (0.75 +/- 0.03), endothelial leukocyte adherence (995 +/- 77/cm(2)), granulocyte (182.0 +/- 25.5/mm(2)) and macrophage infiltration, edema formation, and myonecrosis (ratio: 2.95 +/- 0.45) within the left extensor digitorum longus muscle. Cryotherapy for 6 hours significantly restored diminished functional capillary density (393 +/- 35), markedly decreased elevated intramuscular pressure, reduced the number of adhering (462 +/- 188/cm(2)) and invading granulocytes (119 +/- 28), and attenuated tissue damage (ratio: 1.7 +/- 0.17). The hypothesis that prolonged cooling reduces posttraumatic microvascular dysfunction, inflammation, and structural impairment was confirmed. These results may have therapeutic implications as cryotherapy after closed soft tissue injury is a valuable therapeutic approach to improve nutritive perfusion and attenuate leukocyte-mediated tissue destruction. The risk for evolving compartment syndrome may be reduced, thereby preventing further irreversible aggravation.
Adaptation to sensory-motor reflex perturbations is blind to the source of errors.
Hudson, Todd E; Landy, Michael S
2012-01-06
In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.
Capgermacrenes A and B, Bioactive Secondary Metabolites from a Bornean Soft Coral, Capnella sp.
Phan, Chin-Soon; Ng, Shean-Yeaw; Kim, Eun-A; Jeon, You-Jin; Palaniveloo, Kishneth; Santhanaraju Vairappan, Charles
2015-01-01
Two new bicyclogermacrenes, capgermacrenes A (1) and B (2), were isolated with two known compounds, palustrol (3) and litseagermacrane (4), from a population of Bornean soft coral Capnella sp. The structures of these metabolites were elucidated based on spectroscopic data. Compound 1 was found to inhibit the accumulation of the LPS-induced pro-inflammatory IL-1β and NO production by down-regulating the expression of iNOS protein in RAW 264.7 macrophages. PMID:25996100
Visible-Light Modulation on Lattice Dielectric Responses of a Piezo-Phototronic Soft Material.
Huang, E-Wen; Hsu, Yu-Hsiang; Chuang, Wei-Tsung; Ko, Wen-Ching; Chang, Chung-Kai; Lee, Chih-Kung; Chang, Wen-Chi; Liao, Tzu-Kang; Thong, Hao Cheng
2015-12-16
In situ synchrotron X-ray diffraction is used to investigate a three-way piezo-phototronic soft material. This new system is composed of a semi-crystalline poly(vinylidene fluoride-co-trifluoroethylene) piezoelectric polymer and titanium oxide nanoparticles. Under light illumination, photon-induced piezoelectric responses are nearly two times higher at both the lattice-structure and the macroscopic level than under conditions without light illumination. A mechanistic model is proposed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Development of an errorable car-following driver model
NASA Astrophysics Data System (ADS)
Yang, H.-H.; Peng, H.
2010-06-01
An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates human driver's functions and can generate both nominal (error-free), as well as devious (with error) behaviours. This model was developed for evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on the statistical analysis of the driving data. Finally, time delay of human drivers was estimated through a recursive least-square identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar but somewhat higher than that reported in traffic statistics.
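The three error-inducing behaviours described above (perceptual limitation, distraction, and reaction-time delay) can be illustrated with a toy car-following simulation. The sketch below is not the authors' model; the control law, gains, and error parameters are all illustrative assumptions, but it shows how the error sources combine to occasionally produce rear-end collisions.

```python
import numpy as np

def errorable_car_following(t_end=60.0, dt=0.1, delay=0.8, seed=0):
    """Toy errorable driver: nominal gap/speed control plus perceptual quantization,
    random distraction, and reaction-time delay (parameters illustrative)."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    lead_v = 25 + 2 * np.sin(0.1 * np.arange(n) * dt)      # lead vehicle speed (m/s)
    x_lead, x_f, v_f = 50.0, 0.0, 25.0
    a_hist = np.zeros(n)
    delay_steps = int(delay / dt)
    for k in range(n):
        x_lead += lead_v[k] * dt
        gap = x_lead - x_f
        perceived_gap = np.round(gap / 2.0) * 2.0           # perceptual limitation (2 m resolution)
        distracted = rng.random() < 0.02                    # occasional non-driving task
        a_nominal = 0.5 * (perceived_gap - 30.0) + 0.8 * (lead_v[k] - v_f)
        a_hist[k] = 0.0 if distracted else np.clip(a_nominal, -6, 3)
        a_applied = a_hist[max(k - delay_steps, 0)]         # driver reaction-time delay
        v_f = max(v_f + a_applied * dt, 0.0)
        x_f += v_f * dt
        if gap <= 0:                                        # rear-end collision produced by the errors
            return k * dt
    return None

print("collision time (s):", errorable_car_following())
```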
Wave aberrations in rhesus monkeys with vision-induced ametropias
Ramamirtham, Ramkumar; Kee, Chea-su; Hung, Li-Fang; Qiao-Grider, Ying; Huang, Juan; Roorda, Austin; Smith, Earl L.
2007-01-01
The purpose of this study was to investigate the relationship between refractive errors and high-order aberrations in infant rhesus monkeys. Specifically, we compared the monochromatic wave aberrations measured with a Shack-Hartman wavefront sensor between normal monkeys and monkeys with vision-induced refractive errors. Shortly after birth, both normal monkeys and treated monkeys reared with optically induced defocus or form deprivation showed a decrease in the magnitude of high-order aberrations with age. However, the decrease in aberrations was typically smaller in the treated animals. Thus, at the end of the lens-rearing period, higher than normal amounts of aberrations were observed in treated eyes, both hyperopic and myopic eyes and treated eyes that developed astigmatism, but not spherical ametropias. The total RMS wavefront error increased with the degree of spherical refractive error, but was not correlated with the degree of astigmatism. Both myopic and hyperopic treated eyes showed elevated amounts of coma and trefoil and the degree of trefoil increased with the degree of spherical ametropia. Myopic eyes also exhibited a much higher prevalence of positive spherical aberration than normal or treated hyperopic eyes. Following the onset of unrestricted vision, the amount of high-order aberrations decreased in the treated monkeys that also recovered from the experimentally induced refractive errors. Our results demonstrate that high-order aberrations are influenced by visual experience in young primates and that the increase in high-order aberrations in our treated monkeys appears to be an optical byproduct of the vision-induced alterations in ocular growth that underlie changes in refractive error. The results from our study suggest that the higher amounts of wave aberrations observed in ametropic humans are likely to be a consequence, rather than a cause, of abnormal refractive development. PMID:17825347
Xue, Min; Pan, Shilong; Zhao, Yongjiu
2015-02-15
A novel optical vector network analyzer (OVNA) based on optical single-sideband (OSSB) modulation and balanced photodetection is proposed and experimentally demonstrated, which can eliminate the measurement error induced by the high-order sidebands in the OSSB signal. According to the analytical model of the conventional OSSB-based OVNA, if the optical carrier in the OSSB signal is fully suppressed, the measurement result is exactly the high-order-sideband-induced measurement error. By splitting the OSSB signal after the optical device-under-test (ODUT) into two paths, removing the optical carrier in one path, and then detecting the two signals in the two paths using a balanced photodetector (BPD), high-order-sideband-induced measurement error can be ideally eliminated. As a result, accurate responses of the ODUT can be achieved without complex post-signal processing. A proof-of-concept experiment is carried out. The magnitude and phase responses of a fiber Bragg grating (FBG) measured by the proposed OVNA with different modulation indices are superimposed, showing that the high-order-sideband-induced measurement error is effectively removed.
Synthesis and evaluation of thiophenyl derivatives as inhibitors of alkaline phosphatase.
Chang, Lei; Duy, Do Le; Mébarek, Saida; Popowycz, Florence; Pellet-Rostaing, Stéphane; Lemaire, Marc; Buchet, René
2011-04-15
Pathological calcifications induced by deposition of basic phosphate crystals or hydroxyapatite (HA) on soft tissues are a large family of diseases comprising of ankylosing spondylitis (AS), end-stage osteoarthritis (OA) and vascular calcification. High activity of tissue non-specific alkaline phosphatase (TNAP) is a hallmark of pathological calcifications induced by HA deposition. The use of TNAP inhibitor is a possible therapeutic option to address calcific diseases produced by HA deposition on soft tissues. We report the synthesis of a series of thiopheno-imidazo[2,1-b]thiazole derivatives which were evaluated as potential inhibitors of TNAP displaying a large range of IC(50) at pH 10.4 (from 42±13 μM to more than 800 μM). Copyright © 2011. Published by Elsevier Ltd.
Capillarity-induced folds fuel extreme shape changes in thin wicked membranes.
Grandgeorge, Paul; Krins, Natacha; Hourlier-Fargette, Aurélie; Laberty-Robert, Christel; Neukirch, Sébastien; Antkowiak, Arnaud
2018-04-20
Soft deformable materials are needed for applications such as stretchable electronics, smart textiles, or soft biomedical devices. However, the design of a durable, cost-effective, or biologically compatible version of such a material remains challenging. Living animal cells routinely cope with extreme deformations by unfolding preformed membrane reservoirs available in the form of microvilli or membrane folds. We synthetically mimicked this behavior by creating nanofibrous liquid-infused tissues that spontaneously form similar reservoirs through capillarity-induced folding. By understanding the physics of membrane buckling within the liquid film, we developed proof-of-concept conformable chemical surface treatments and stretchable basic electronic circuits. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
DNA sequence-directed shape change of photopatterned hydrogels via high-degree swelling
NASA Astrophysics Data System (ADS)
Cangialosi, Angelo; Yoon, ChangKyu; Liu, Jiayu; Huang, Qi; Guo, Jingkai; Nguyen, Thao D.; Gracias, David H.; Schulman, Rebecca
2017-09-01
Shape-changing hydrogels that can bend, twist, or actuate in response to external stimuli are critical to soft robots, programmable matter, and smart medicine. Shape change in hydrogels has been induced by global cues, including temperature, light, or pH. Here we demonstrate that specific DNA molecules can induce 100-fold volumetric hydrogel expansion by successive extension of cross-links. We photopattern up to centimeter-sized gels containing multiple domains that undergo different shape changes in response to different DNA sequences. Experiments and simulations suggest a simple design rule for controlled shape change. Because DNA molecules can be coupled to molecular sensors, amplifiers, and logic circuits, this strategy introduces the possibility of building soft devices that respond to diverse biochemical inputs and autonomously implement chemical control programs.
Augmenting endogenous repair of soft tissues with nanofibre scaffolds
Snelling, Sarah; Dakin, Stephanie; Carr, Andrew
2018-01-01
As our ability to engineer nanoscale materials has developed, we can now influence endogenous cellular processes with increasing precision. Consequently, the use of biomaterials to induce and guide the repair and regeneration of tissues is a rapidly developing area. This review focuses on soft tissue engineering; it will discuss the types of biomaterial scaffolds available before exploring physical, chemical and biological modifications to synthetic scaffolds. We will consider how these properties, in combination, can provide a precise design process, with the potential to meet the requirements of the injured and diseased soft tissue niche. Finally, we frame our discussions within clinical trial design and the regulatory framework, the consideration of which is fundamental to the successful translation of new biomaterials. PMID:29695606
Soft X-Ray Irradiation of Silicates: Implications for Dust Evolution in Protoplanetary Disks
NASA Astrophysics Data System (ADS)
Ciaravella, A.; Cecchi-Pestellini, C.; Chen, Y.-J.; Muñoz Caro, G. M.; Huang, C.-H.; Jiménez-Escobar, A.; Venezia, A. M.
2016-09-01
The processing of energetic photons on bare silicate grains was simulated experimentally on silicate films submitted to soft X-rays of energies up to 1.25 keV. The silicate material was prepared by means of a microwave assisted sol-gel technique. Its chemical composition reflects the Mg2SiO4 stoichiometry with residual impurities due to the synthesis method. The experiments were performed using the spherical grating monochromator beamline at the National Synchrotron Radiation Research Center in Taiwan. We found that soft X-ray irradiation induces structural changes that can be interpreted as an amorphization of the processed silicate material. The present results may have relevant implications in the evolution of silicate materials in X-ray-irradiated protoplanetary disks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuten, A.; Sapir, D.; Cohen, Y.
1985-03-01
Seven cases of soft tissue sarcoma developing after primary or postoperative radiotherapy for breast carcinoma are reported. The sarcomas occurred within the irradiated volume, after a latent period of 4-26 years. These cases conform well to established criteria for the diagnosis of radiation-induced sarcoma. Chemotherapy, consisting of the four-drug combination CYVADIC (cyclophosphamide, vincristine, adriamycin, DTIC) was employed in six of the seven patients. Only two of them achieved partial remission, lasting only 2 and 3 months, respectively. The effectiveness of adriamycin-containing chemotherapy regimens in soft tissue sarcomas as well as the remote hazard of radiation-related sarcoma in primary or postoperative breast irradiation are discussed.
Kermavnar, Tjaša; Power, Valerie; de Eyto, Adam; O'Sullivan, Leonard W
2018-02-01
In this article, we review the literature on quantitative sensory testing of deep somatic pain by means of computerized cuff pressure algometry (CPA) in search of pressure-related safety guidelines for wearable soft exoskeleton and robotics design. Most pressure-related safety thresholds to date are based on interface pressures and skin perfusion, although clinical research suggests the deep somatic tissues to be the most sensitive to excessive loading. With CPA, pain is induced in deeper layers of soft tissue at the limbs. The results indicate that circumferential compression leads to discomfort at ∼16-34 kPa, becomes painful at ∼20-27 kPa, and can become unbearable even below 40 kPa.
Salvage of mangled upper extremity using the Masquelet technique in a child: A case report.
Alassaf, Nabil; Alhoukail, Amro; Alsahli, Abdullah; Althubaiti, Ghazi
2017-01-01
To report our experience with the Masquelet concept in a pediatric upper extremity following an open injury to the elbow. A case report and literature review. An 11-year-old boy was transferred to our institution after a motor vehicle collision. There was a primary loss of the ulnohumeral articulation and the surrounding soft tissues as well as the ulnar nerve. Reconstruction used the Masquelet-induced membrane technique and a soft tissue flap. At the 30-month follow-up, the extremity was pain free and functional. This case highlights the value of the Masquelet technique in pediatric extremity injuries, where there is a loss of a major articular segment, as well as significant soft tissue compromise.
Photoacoustic imaging in both soft and hard biological tissue
NASA Astrophysics Data System (ADS)
Li, T.; Dewhurst, R. J.
2010-03-01
To date, most Photoacoustic (PA) imaging results have been from soft biotissues. In this study, a PA imaging system with a near-infrared pulsed laser source has been applied to obtain 2-D and 3-D images from both soft tissue and post-mortem dental samples. Imaging results showed that the PA technique has the potential to image human oral disease, such as early-stage teeth decay. For non-invasive photoacoustic imaging, the induced temperature and pressure rises within biotissues should not cause physical damage to the tissue. Several simulations based on the thermoelastic effect have been applied to predict initial temperature and pressure fields within a tooth sample. Predicted initial temperature and pressure rises are below corresponding safety limits.
Moran, Lauren V; Stoeckel, Luke E; Wang, Kristina; Caine, Carolyn E; Villafuerte, Rosemond; Calderon, Vanessa; Baker, Justin T; Ongur, Dost; Janes, Amy C; Evins, A Eden; Pizzagalli, Diego A
2018-03-01
Nicotine improves attention and processing speed in individuals with schizophrenia. Few studies have investigated the effects of nicotine on cognitive control. Prior functional magnetic resonance imaging (fMRI) research demonstrates blunted activation of dorsal anterior cingulate cortex (dACC) and rostral anterior cingulate cortex (rACC) in response to error and decreased post-error slowing in schizophrenia. Participants with schizophrenia (n = 13) and healthy controls (n = 12) participated in a randomized, placebo-controlled, crossover study of the effects of transdermal nicotine on cognitive control. For each drug condition, participants underwent fMRI while performing the stop signal task where participants attempt to inhibit prepotent responses to "go (motor activation)" signals when an occasional "stop (motor inhibition)" signal appears. Error processing was evaluated by comparing "stop error" trials (failed response inhibition) to "go" trials. Resting-state fMRI data were collected prior to the task. Participants with schizophrenia had increased nicotine-induced activation of right caudate in response to errors compared to controls (DRUG × GROUP effect: p corrected < 0.05). Both groups had significant nicotine-induced activation of dACC and rACC in response to errors. Using right caudate activation to errors as a seed for resting-state functional connectivity analysis, relative to controls, participants with schizophrenia had significantly decreased connectivity between the right caudate and dACC/bilateral dorsolateral prefrontal cortices. In sum, we replicated prior findings of decreased post-error slowing in schizophrenia and found that nicotine was associated with more adaptive (i.e., increased) post-error reaction time (RT). This proof-of-concept pilot study suggests a role for nicotinic agents in targeting cognitive control deficits in schizophrenia.
On the possibility of non-invasive multilayer temperature estimation using soft-computing methods.
Teixeira, C A; Pereira, W C A; Ruano, A E; Ruano, M Graça
2010-01-01
This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. The existence of reliable non-invasive temperature estimator models would improve the security and efficacy of thermal therapies. These points would lead to a broader acceptance of this kind of therapies. Several approaches based on medical imaging technologies were proposed, magnetic resonance imaging (MRI) being appointed as the only one to achieve the acceptable temperature resolutions for hyperthermia purposes. However, MRI intrinsic characteristics (e.g., high instrumentation cost) lead us to use backscattered ultrasound (BSU). Among the different BSU features, temporal echo-shifts have received a major attention. These shifts are due to changes of speed-of-sound and expansion of the medium. The originality of this work involves two aspects: the estimator model itself is original (based on soft-computing methods) and the application to temperature estimation in a three-layer phantom is also not reported in literature. In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder in the amount of 2% of the water weight to the above composition. The phantom was developed to have attenuation and speed-of-sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and the past temperature values were then considered as possible estimators inputs. A soft-computing methodology was applied to look for appropriate multilayered temperature estimators. The methodology involves radial-basis functions neural networks (RBFNN) with structure optimized by the multi-objective genetic algorithm (MOGA). In this work 40 operating conditions were considered, i.e. five 5-mm spaced spatial points and eight therapeutic intensities (I(SATA)): 0.3, 0.5, 0.7, 1.0, 1.3, 1.5, 1.7 and 2.0W/cm(2). Models were trained and selected to estimate temperature at only four intensities, then during the validation phase, the best-fitted models were analyzed in data collected at the eight intensities. This procedure leads to a more realistic evaluation of the generalisation level of the best-obtained structures. At the end of the identification phase, 82 (preferable) estimator models were achieved. The majority of them present an average maximum absolute error (MAE) inferior to 0.5 degrees C. The best-fitted estimator presents a MAE of only 0.4 degrees C for both the 40 operating conditions. This means that the gold-standard maximum error (0.5 degrees C) pointed for hyperthermia was fulfilled independently of the intensity and spatial position considered, showing the improved generalisation capacity of the identified estimator models. As the majority of the preferable estimator models, the best one presents 6 inputs and 11 neurons. In addition to the appropriate error performance, the estimator models present also a reduced computational complexity and then the possibility to be applied in real-time. A non-invasive temperature estimation model, based on soft-computing technique, was proposed for a three-layered phantom. 
The best-achieved estimator models presented an appropriate error performance regardless of the spatial point considered (inside or at the interface of the layers) and of the intensity applied. Other methodologies published so far estimate temperature only in homogeneous media. The main drawback of the proposed methodology is the necessity of a priori knowledge of the temperature behavior. Data used for training and optimisation should be representative, i.e., they should cover all possible physical situations of the estimation environment.
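The RBFNN estimator structure described above (a handful of inputs, a small Gaussian hidden layer, a linear output) can be sketched with a minimal hand-rolled RBF network: k-means centres, Gaussian activations, and least-squares output weights. The data, basis width, and the 6-input/11-neuron sizes are used only to echo the abstract; this is not the MOGA-optimised estimator itself.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf_network(X, y, n_neurons=11, width=3.0):
    """Minimal RBF network: k-means centres, Gaussian hidden layer, linear output weights."""
    centres = KMeans(n_clusters=n_neurons, n_init=10, random_state=0).fit(X).cluster_centers_
    def hidden(Z):
        d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))
    H = np.c_[hidden(X), np.ones(len(X))]                 # hidden activations plus bias
    w = np.linalg.lstsq(H, y, rcond=None)[0]
    return lambda Z: np.c_[hidden(Z), np.ones(len(Z))] @ w

# Hypothetical inputs: cumulative echo-shifts and past temperatures at one spatial point
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                             # 6 inputs, as in the preferred models
y = 37.0 + 0.2 * (X @ rng.normal(size=6)) + rng.normal(0, 0.05, 400)

estimator = fit_rbf_network(X, y)
mae = np.max(np.abs(estimator(X) - y))
print(f"maximum absolute error on training data: {mae:.2f} degrees C")
```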
NASA Astrophysics Data System (ADS)
Xuan, Yue
Background. Soft materials such as polymers and soft tissues have diverse applications in bioengineering, medical care, and industry. Quantitative mechanical characterization of soft materials at multiple scales is required to ensure that appropriate mechanical properties are present to support normal material function. Indentation testing has been widely used to characterize soft materials; however, measuring the in situ contact area is always difficult. Method of Approach. A transparent indenter method was introduced to characterize the nonlinear behavior of soft materials under large deformation. This approach made direct measurement of the contact area and local deformation possible. A microscope was used to capture the contact area evolution as well as the surface deformation. Based on this transparent indenter method, a novel transparent indentation measurement system has been built and multiple soft materials including polymers and pericardial tissue have been characterized. Seven different indenters have been used to study the strain distribution on the contact surface, inner layer, and vertical layer. Finite element models have been built to simulate the hyperelastic and anisotropic material behaviors. Proper material constants were obtained by fitting the experimental results. Results. Homogeneous and anisotropic silicone rubber and porcine pericardial tissue have been examined. Contact area and local deformation were measured by real-time imaging of the contact interface. The experimental results were compared with the predictions from the Hertzian equations. The accurate measurement of contact area results in a more reliable Young's modulus, which is critical for soft materials. For the fiber-reinforced anisotropic silicone rubber, the projected contact area under a hemispherical indenter exhibited an elliptical shape. The local surface deformation under the indenter was mapped using a digital image correlation program. Punch tests have been applied to thin films of silicone rubber and porcine pericardial tissue, and the results were analyzed using the same method. Conclusions. The transparent indenter testing system can effectively reduce measurement error in material properties by directly measuring the contact radii. The contact shape can provide valuable information on the anisotropic properties of the material. Local surface deformation, including the contact surface, inner layer, and vertical plane, can be accurately tracked and mapped to study the strain distribution. The potential use of the transparent indenter measurement system to investigate biological materials and biomaterials was verified. The experimental data, including the real-time contact area, combined with finite element simulation would be a powerful tool for studying the mechanical properties of soft materials and their relation to microstructure, which has potential in pathology studies such as tissue repair and surgical planning. Key words: transparent indenter, large deformation, soft material, anisotropic.
Verifying Stability of Dynamic Soft-Computing Systems
NASA Technical Reports Server (NTRS)
Wen, Wu; Napolitano, Marcello; Callahan, John
1997-01-01
Soft computing is a general term for algorithms that learn from human knowledge and mimic human skills. Examples of such algorithms are fuzzy inference systems and neural networks. Many applications, especially in control engineering, have demonstrated their suitability for building intelligent systems that are flexible and robust. Although recent research has shown that certain classes of neuro-fuzzy controllers can be proven bounded and stable, these proofs are implementation dependent and difficult to apply to the design and validation process. Many practitioners adopt a trial-and-error approach for system validation or resort to exhaustive testing using prototypes. In this paper, we describe our ongoing research towards establishing the necessary theoretical foundation as well as building practical tools for the verification and validation of soft-computing systems. A unified model for general neuro-fuzzy systems is adopted. Classic nonlinear control theory and recent results on its application to neuro-fuzzy systems are incorporated and applied to the unified model. It is hoped that general tools can be developed to help the designer visualize and manipulate the regions of stability and boundedness, much the same way Bode plots and root locus plots have helped conventional control design and validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects onto SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects onto subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors, in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner. • The nicotine-induced secondary motoneuron axonal pathfinding errors can occur independent of any muscle fiber alterations. • Nicotine exposure primarily affects dorsal projecting secondary motoneuron axons. • Nicotine-induced primary motoneuron axon pathfinding errors can influence secondary motoneuron axon morphology.
Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'.
Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'. PMID:29787586
Why do adult dogs (Canis familiaris) commit the A-not-B search error?
Sümegi, Zsófia; Kis, Anna; Miklósi, Ádám; Topál, József
2014-02-01
It has been recently reported that adult domestic dogs, like human infants, tend to commit perseverative search errors; that is, they select the previously rewarded empty location in Piagetian A-not-B search task because of the experimenter's ostensive communicative cues. There is, however, an ongoing debate over whether these findings reveal that dogs can use the human ostensive referential communication as a source of information or the phenomenon can be accounted for by "more simple" explanations like insufficient attention and learning based on local enhancement. In 2 experiments the authors systematically manipulated the type of human cueing (communicative or noncommunicative) adjacent to the A hiding place during both the A and B trials. Results highlight 3 important aspects of the dogs' A-not-B error: (a) search errors are influenced to a certain extent by dogs' motivation to retrieve the toy object; (b) human communicative and noncommunicative signals have different error-inducing effects; and (c) communicative signals presented at the A hiding place during the B trials but not during the A trials play a crucial role in inducing the A-not-B error and it can be induced even without demonstrating repeated hiding events at location A. These findings further confirm the notion that perseverative search error, at least partially, reflects a "ready-to-obey" attitude in the dog rather than insufficient attention and/or working memory.
Energy Performance Measurement and Simulation Modeling of Tactical Soft-Wall Shelters
2015-07-01
... was too low to measure was on the order of 5 hours. Because the research team did not have access to the site between 1700 and 0500 hours the ... Basic for Applications (VBA). The objective function was the root mean square (RMS) errors between modeled and measured heating load and the modeled ... References: Phase Change Energy Solutions. (2013). BioPCM web page, http://phasechange.com/index.php/en/about/our-material. Accessed 16 September.
Update on parts SEE susceptibility from heavy ions [Single Event Effects]
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Smith, L. S.; Schwartz, H. R.; Soli, G.; Watson, K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.
1991-01-01
JPL and the Aerospace Corporation have collected a fourth set of heavy ion single event effects (SEE) test data. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are displayed. All data are conveniently divided into two tables: one for MOS devices, and one for a shorter list of recently tested bipolar devices. In addition, a new table of data for latchup tests only (invariably CMOS processes) is given.
Temperature-based estimation of global solar radiation using soft computing methodologies
NASA Astrophysics Data System (ADS)
Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak
2016-07-01
Precise knowledge of solar radiation is essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation is appealing owing to the broad availability of measured air temperatures. In this study, the potential of soft computing techniques is evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation of three methodologies, adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly), is performed. Five combinations of Tmax, Tmin, and Tavg serve as inputs to develop the ANFIS, SVR-rbf, and SVR-poly models. The attained results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. For all techniques, the highest accuracies are achieved by models (5), which use Tmax − Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m2, 2.0716 MJ/m2, and 0.9380, respectively. These results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
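As a rough indication of how such a temperature-based SVR-rbf estimator can be put together, the sketch below uses scikit-learn. The synthetic data, the assumed Hargreaves-like ground truth, and the hyperparameter values are placeholders; the study's real data set, tuning, and full set of input combinations are not reproduced here.

```python
# Hedged sketch of an SVR-rbf model estimating DHGSR from Tmax - Tmin and Tmax
# (input combination (5) above). All data and hyperparameters are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_days = 1000
t_min = rng.uniform(0.0, 25.0, n_days)
t_max = t_min + rng.uniform(5.0, 20.0, n_days)
# Hypothetical Hargreaves-like relation plus noise, in MJ/m^2/day.
dhgsr = 2.5 * np.sqrt(t_max - t_min) + 0.3 * t_max + rng.normal(0.0, 1.5, n_days)

X = np.column_stack([t_max - t_min, t_max])
X_tr, X_te, y_tr, y_te = train_test_split(X, dhgsr, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = np.sqrt(np.mean((pred - y_te) ** 2))
mabe = np.mean(np.abs(pred - y_te))
r = np.corrcoef(pred, y_te)[0, 1]
print(f"RMSE = {rmse:.3f} MJ/m2, MABE = {mabe:.3f} MJ/m2, r = {r:.3f}")
```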
FPGA implementation of advanced FEC schemes for intelligent aggregation networks
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
In state-of-the-art fiber-optic communication systems, a fixed forward error correction (FEC) scheme and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code rate range, by FPGA-based emulation, making it a viable solution for next-generation high-speed intelligent aggregation networks.
FPGA implementation of high-performance QC-LDPC decoder for optical communications
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2015-01-01
Forward error correction is one of the key technologies enabling next-generation high-speed fiber-optic communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the most promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis structure and trellis-based decoding for block codes, in contrast, remained inactive for a long time. There were two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and, hence, not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, including Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder.
The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
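To make the trellis-based MLD idea concrete, the following sketch implements a soft-decision Viterbi decoder for a small textbook rate-1/2 convolutional code (generators 7 and 5 in octal, constraint length 3) with squared-Euclidean branch metrics over BPSK. It is a generic illustration of the algorithm family discussed above, not code taken from the monograph.

```python
# Generic soft-decision Viterbi decoding sketch for the rate-1/2, K=3
# convolutional code with generators (7, 5) octal. BPSK mapping: bit 0 -> +1,
# bit 1 -> -1; branch metric = squared Euclidean distance. Illustrative only.
import numpy as np

G = ((1, 1, 1), (1, 0, 1))                     # generator taps on (u, m1, m2)

def conv_encode(bits):
    m1 = m2 = 0
    out = []
    for u in bits:
        for g in G:
            out.append((g[0] & u) ^ (g[1] & m1) ^ (g[2] & m2))
        m2, m1 = m1, u
    return np.array(out)

def viterbi_decode(rx_symbols, n_bits):
    """rx_symbols: noisy BPSK values, two per information bit."""
    n_states = 4
    next_state = np.zeros((n_states, 2), dtype=int)
    out_bits = np.zeros((n_states, 2, 2), dtype=int)
    for s in range(n_states):
        m1, m2 = (s >> 1) & 1, s & 1
        for u in (0, 1):
            out_bits[s, u] = (u ^ m1 ^ m2, u ^ m2)
            next_state[s, u] = (u << 1) | m1
    rx = np.asarray(rx_symbols, dtype=float).reshape(n_bits, 2)
    pm = np.full(n_states, np.inf)
    pm[0] = 0.0                                # encoder starts in the all-zero state
    back = np.zeros((n_bits, n_states, 2), dtype=int)
    for t in range(n_bits):
        new_pm = np.full(n_states, np.inf)
        for s in range(n_states):
            if not np.isfinite(pm[s]):
                continue
            for u in (0, 1):
                expected = 1.0 - 2.0 * out_bits[s, u]        # bits -> BPSK
                metric = pm[s] + np.sum((rx[t] - expected) ** 2)
                ns = next_state[s, u]
                if metric < new_pm[ns]:
                    new_pm[ns] = metric
                    back[t, ns] = (s, u)
        pm = new_pm
    s = int(np.argmin(pm))                     # best surviving path (no tail bits)
    decoded = []
    for t in range(n_bits - 1, -1, -1):
        s, u = back[t, s]
        decoded.append(int(u))
    return decoded[::-1]

rng = np.random.default_rng(0)
info = rng.integers(0, 2, 200)
tx = 1.0 - 2.0 * conv_encode(info)             # BPSK modulation
rx = tx + rng.normal(0.0, 0.6, tx.shape)       # AWGN channel
print("bit errors:", int(np.sum(np.array(viterbi_decode(rx, len(info))) != info)))
```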
Errors induced by catalytic effects in premixed flame temperature measurements
NASA Astrophysics Data System (ADS)
Pita, G. P. A.; Nina, M. N. R.
The evaluation of instantaneous temperature in a premixed flame using fine-wire Pt/Pt-(13 pct)Rh thermocouples was found to be subject to significant errors due to catalytic effects. An experimental study was undertaken to assess the influence of local fuel/air ratio, thermocouple wire diameter, and gas velocity on the thermocouple reading errors induced by the catalytic surface reactions. Measurements made with both coated and uncoated thermocouples showed that the catalytic effect imposes severe limitations on the accuracy of mean and fluctuating gas temperature in the radical-rich flame zone.
Analysis of soft-decision FEC on non-AWGN channels.
Cho, Junho; Xie, Chongjin; Winzer, Peter J
2012-03-26
Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise model (CBGN) to analyze the impact of correlations among the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain are attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.
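As a toy numerical illustration of why correlation information matters (not the paper's CBGN density-evolution analysis), the sketch below compares per-symbol LLRs computed with and without knowledge of the noise covariance for BPSK corrupted by a strongly elliptic bivariate Gaussian noise cloud; the covariance values and the simple hard-decision comparison are assumptions for demonstration only.

```python
# LLRs for BPSK under correlated bivariate Gaussian noise: the matched detector
# uses LLR = 2 mu^T Sigma^-1 y; the mismatched one ignores the I/Q correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
bits = rng.integers(0, 2, n)
mu = np.array([1.0, 0.0])                      # BPSK on the in-phase axis
tx = np.where(bits[:, None] == 0, mu, -mu)     # bit 0 -> +mu, bit 1 -> -mu

Sigma = np.array([[0.50, 0.45],                # severely elliptic noise cloud
                  [0.45, 0.50]])
noise = rng.multivariate_normal([0.0, 0.0], Sigma, size=n)
rx = tx + noise

Sigma_inv = np.linalg.inv(Sigma)
llr_cbgn = 2.0 * rx @ Sigma_inv @ mu           # uses the full covariance
llr_awgn = 2.0 * rx[:, 0] / Sigma[0, 0]        # AWGN-style, correlation ignored

for name, llr in (("CBGN-aware", llr_cbgn), ("AWGN-assumed", llr_awgn)):
    ber = np.mean((llr < 0).astype(int) != bits)
    print(f"{name:13s} hard-decision BER = {ber:.4f}")
```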
Biomineralization of a Self-assembled, Soft-Matrix Precursor: Enamel
NASA Astrophysics Data System (ADS)
Snead, Malcolm L.
2015-04-01
Enamel is the bioceramic covering of teeth, a composite tissue composed of hierarchically organized hydroxyapatite crystallites fabricated by cells under physiologic pH and temperature. Enamel material properties resist wear and fracture to serve a lifetime of chewing. Understanding the cellular and molecular mechanisms of enamel formation may allow a biology-inspired approach to material fabrication based on self-assembling proteins that control form and function. A genetic understanding of human diseases draws insight from nature's errors by exposing critical fabrication events that can be validated experimentally and duplicated in mice, using genetic engineering to phenocopy the human disease so that it can be explored in detail. This approach led to an assessment of amelogenin protein self-assembly that, when altered, disrupts fabrication of the soft enamel protein matrix. A misassembled protein matrix precursor results in loss of cell-to-matrix contacts essential to fabrication and mineralization.
NASA Astrophysics Data System (ADS)
Kong, Gyuyeol; Choi, Sooyong
2017-09-01
An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While the previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gains for better bit error rate performances. For achieving significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code in order to enlarge the free distance on the trellis through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation result shows that the proposed four-ary modulation code has more than 1 dB gains compared with the conventional four-ary modulation code.
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors-Laplacian prior for DCT coefficients, sparsity prior, and graph-signal smoothness prior for image patches-to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with the previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal outperforms the state-of-the-art soft decoding algorithms in both objective and subjective evaluations noticeably.
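For orientation, the sketch below shows one way to form a pixel-adjacency graph for a patch, the associated graph-signal smoothness term, and the random walk graph Laplacian I − D⁻¹W whose left eigenvectors the LERaG prior builds on. The edge-weight definition and parameters are illustrative assumptions, not the paper's exact construction.

```python
# Pixel graph for one patch, graph-signal smoothness, and the random walk
# Laplacian I - D^{-1}W. Weights and parameter values are assumed for demo.
import numpy as np

def patch_graph(patch, sigma_int=10.0):
    """Adjacency weights of a 4-connected pixel graph from intensity similarity."""
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):          # right / down neighbours
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    wgt = np.exp(-(float(patch[r, c]) - float(patch[rr, cc])) ** 2
                                 / (2.0 * sigma_int ** 2))
                    W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = wgt
    return W

def smoothness_and_laplacian(patch):
    W = patch_graph(patch)
    d = W.sum(axis=1)
    L = np.diag(d) - W                               # combinatorial Laplacian
    L_rw = np.eye(len(d)) - W / d[:, None]           # random walk Laplacian
    x = patch.reshape(-1).astype(float)
    return x @ L @ x, L_rw                           # x^T L x is small for smooth x

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0.0, 255.0, 8), (8, 1))  # smooth ramp patch
noisy = smooth + rng.normal(0.0, 25.0, smooth.shape)  # typically a larger value
s_smooth, _ = smoothness_and_laplacian(smooth)
s_noisy, _ = smoothness_and_laplacian(noisy)
print(f"smoothness(smooth ramp) = {s_smooth:.1f}")
print(f"smoothness(noisy ramp)  = {s_noisy:.1f}")
```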
Disentangling AGN and Star Formation in Soft X-Rays
NASA Technical Reports Server (NTRS)
LaMassa, Stephanie M.; Heckman, T. M.; Ptak, A.
2012-01-01
We have explored the interplay of star formation and active galactic nucleus (AGN) activity in soft X-rays (0.5-2 keV) in two samples of Seyfert 2 galaxies (Sy2s). Using a combination of low-resolution CCD spectra from Chandra and XMM-Newton, we modeled the soft emission of 34 Sy2s using power-law and thermal models. For the 11 sources with high signal-to-noise Chandra imaging of the diffuse host galaxy emission, we estimate the luminosity due to star formation by removing the AGN, fitting the residual emission. The AGN and star formation contributions to the soft X-ray luminosity (i.e., L(sub x,AGN) and L(sub x,SF)) for the remaining 24 Sy2s were estimated from the power-law and thermal luminosities derived from spectral fitting. These luminosities were scaled based on a template derived from XSINGS analysis of normal star-forming galaxies. To account for errors in the luminosities derived from spectral fitting and the spread in the scaling factor, we estimated L(sub x,AGN) and L(sub x,SF) from Monte Carlo simulations. These simulated luminosities agree with L(sub x,AGN) and L(sub x,SF) derived from Chandra imaging analysis within a 3 sigma confidence level. Using the infrared [Ne II] 12.8 micron and [O IV] 26 micron lines as a proxy of star formation and AGN activity, respectively, we independently disentangle the contributions of these two processes to the total soft X-ray emission. This decomposition generally agrees with L(sub x,SF) and L(sub x,AGN) at the 3 sigma level. In the absence of resolvable nuclear emission, our decomposition method provides a reasonable estimate of emission due to star formation in galaxies hosting type 2 AGNs.
Pokhai, Gabriel G; Oliver, Michele L; Gordon, Karen D
2009-09-01
Determination of the biomechanical properties of soft tissues such as tendons and ligaments is dependent on the accurate measurement of their cross-sectional area (CSA). Measurement methods, which involve contact with the specimen, are problematic because soft tissues are easily deformed. Noncontact measurement methods are preferable in this regard, but may experience difficulty in dealing with the complex cross-sectional shapes and glistening surfaces seen in soft tissues. Additionally, existing CSA measurement systems are separated from the materials testing machine, resulting in the inability to measure CSA during testing. Furthermore, CSA measurements are usually made in a different orientation, and with a different preload, prior to testing. To overcome these problems, a noncontact laser reflectance system (LRS) was developed. Designed to fit in an Instron 8872 servohydraulic test machine, the system measures CSA by orbiting a laser transducer in a circular path around a soft tissue specimen held by tissue clamps. CSA measurements can be conducted before and during tensile testing. The system was validated using machined metallic specimens of various shapes and sizes, as well as different sizes of bovine tendons. The metallic specimens could be measured to within 4% accuracy, and the tendons to within an average error of 4.3%. Statistical analyses showed no significant differences between the measurements of the LRS and those of the casting method, an established measurement technique. The LRS was successfully used to measure the changing CSA of bovine tendons during uniaxial tensile testing. The LRS developed in this work represents a simple, quick, and accurate way of reconstructing complex cross-sectional profiles and calculating cross-sectional areas. In addition, the LRS represents the first system capable of automatically measuring changing CSA of soft tissues during tensile testing, facilitating the calculation of more accurate biomechanical properties.
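As a rough sketch of how an orbiting distance profile can be turned into a cross-sectional area, the code below assumes the boundary is star-shaped about the orbit centre and applies a polar polygon (shoelace-style) area formula. The orbit radius, sampling, and test shape are invented, and this is not the published LRS reconstruction algorithm.

```python
# Cross-sectional area from equally spaced laser-to-surface distances measured
# around a circular orbit (assumed geometry; illustrative only).
import numpy as np

def csa_from_profile(laser_to_surface_mm, orbit_radius_mm):
    """Polygon-area estimate from polar samples of the specimen boundary."""
    r = orbit_radius_mm - np.asarray(laser_to_surface_mm)  # boundary radii
    dtheta = 2.0 * np.pi / len(r)
    return 0.5 * np.sum(r * np.roll(r, -1) * np.sin(dtheta))  # shoelace in polar form

# Synthetic check: an elliptical tendon-like section, semi-axes 4 mm x 2 mm.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
a, b = 4.0, 2.0
r_true = a * b / np.sqrt((b * np.cos(theta)) ** 2 + (a * np.sin(theta)) ** 2)
measured_distance = 20.0 - r_true                          # 20 mm orbit radius
print(csa_from_profile(measured_distance, 20.0), "vs", np.pi * a * b)
```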
NASA Astrophysics Data System (ADS)
Guo, Xiaohui; Huang, Ying; Cai, Xia; Liu, Caixia; Liu, Ping
2016-04-01
To achieve wearable comfort for electronic skin (e-skin), a capacitive sensor printed on a flexible textile substrate with a carbon black (CB)/silicone rubber (SR) composite dielectric is demonstrated in this paper. An organo-silicone conductive silver adhesive serves as the flexible electrodes/shielding layer. The structure design, sensing mechanism, and the influence of conductive filler content and temperature variations on the sensor performance were investigated. The proposed device effectively enhances flexibility and wearing comfort; the sensing element achieves a sensitivity of 0.02536%/kPa, a hysteresis error of 5.6%, and a dynamic response time of ~89 ms over the range of 0-700 kPa. The drift induced by temperature variations was calibrated by introducing a temperature compensation model. The time-space distribution of plantar pressure and a manipulator soft-grasping experiment were investigated with the introduced device, and the experimental results indicate that the capacitive flexible textile tactile sensor has good stability and tactile perception capacity. This study provides a good candidate for wearable artificial skin.
Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.
Larsson, Elisabeth; Abrahamsson, Leif
2003-05-01
The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.
Magnetic-field sensing with quantum error detection under the effect of energy relaxation
NASA Astrophysics Data System (ADS)
Matsuzaki, Yuichiro; Benjamin, Simon
2017-03-01
A solid state spin is an attractive system with which to realize an ultrasensitive magnetic field sensor. A spin superposition state will acquire a phase induced by the target field, and we can estimate the field strength from this phase. Recent studies have aimed at improving sensitivity through the use of quantum error correction (QEC) to detect and correct any bit-flip errors that may occur during the sensing period. Here we investigate the performance of a two-qubit sensor employing QEC under the effect of energy relaxation. Surprisingly, we find that the standard QEC technique of detecting and recovering from an error does not improve the sensitivity compared with single-qubit sensors. This is a consequence of the fact that energy relaxation induces both phase-flip and bit-flip noise, where the former cannot be distinguished from the relative phase induced by the target field. However, we find that we can improve the sensitivity if we adopt postselection to discard the state when an error is detected. Even when quantum error detection is moderately noisy, and allowing for the cost of the postselection technique, we find that this two-qubit system shows an advantage in sensing over a single qubit under the same conditions.
Tan, Qiu-Wen; Zhang, Yi; Luo, Jing-Cong; Zhang, Di; Xiong, Bin-Jun; Yang, Ji-Qiao; Xie, Hui-Qi; Lv, Qing
2017-06-01
Decellularized extracellular matrix (ECM) scaffolds from human adipose tissue, characterized by impressive adipogenic induction ability, are promising for soft tissue augmentation. However, scaffolds from autologous human adipose tissue are limited by the availability of tissue resources and the time necessary for scaffold fabrication. The objective of the current study was to investigate the adipogenic properties of hydrogels of decellularized porcine adipose tissue (HDPA). HDPA induced the adipogenic differentiation of human adipose-derived stem cells (ADSCs) in vitro, with significantly increased expression of adipogenic genes. Subcutaneous injection of HDPA in immunocompetent mice induced host-derived adipogenesis without cell seeding, and adipogenesis was significantly enhanced with ADSCs seeding. The newly formed adipocytes were frequently located on the basal side in the non-seeding group, but this trend was not observed in the ADSCs seeding group. Our results indicated that, similar to human adipose tissue, the ECM scaffold derived from porcine adipose tissue could provide an adipogenic microenvironment for adipose tissue regeneration and is a promising biomaterial for soft tissue augmentation. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 105A: 1756-1764, 2017.
Flexocoupling-induced soft acoustic modes and the spatially modulated phases in ferroelectrics
NASA Astrophysics Data System (ADS)
Morozovska, Anna N.; Glinchuk, Maya D.; Eliseev, Eugene A.; Vysochanskii, Yulian M.
2017-09-01
Using the Landau-Ginzburg-Devonshire theory and a one-component approximation, we examined the conditions for the appearance of a soft acoustic phonon mode (A-mode) in a ferroelectric (FE) depending on the magnitude of the flexoelectric coefficient f and temperature T. If the flexocoefficient f is equal to the temperature-dependent critical value fcr(T) at some temperature T = TIC, the A-mode frequency tends to zero at wave vector k = k0cr, and the spontaneous polarization becomes spatially modulated in the temperature range T
Biomimetic 3D tissue printing for soft tissue regeneration.
Pati, Falguni; Ha, Dong-Heon; Jang, Jinah; Han, Hyun Ho; Rhie, Jong-Won; Cho, Dong-Woo
2015-09-01
Engineered adipose tissue constructs that are capable of reconstructing soft tissue with adequate volume would be worthwhile in plastic and reconstructive surgery. Tissue printing offers the possibility of fabricating anatomically relevant tissue constructs by delivering suitable matrix materials and living cells. Here, we devise a biomimetic approach for printing adipose tissue constructs employing decellularized adipose tissue (DAT) matrix bioink encapsulating human adipose tissue-derived mesenchymal stem cells (hASCs). We designed and printed precisely-defined and flexible dome-shaped structures with engineered porosity using DAT bioink that facilitated high cell viability over 2 weeks and induced expression of standard adipogenic genes without any supplemented adipogenic factors. The printed DAT constructs expressed adipogenic genes more intensely than did non-printed DAT gel. To evaluate the efficacy of our printed tissue constructs for adipose tissue regeneration, we implanted them subcutaneously in mice. The constructs did not induce chronic inflammation or cytotoxicity postimplantation, but supported positive tissue infiltration, constructive tissue remodeling, and adipose tissue formation. This study demonstrates that direct printing of spatially on-demand customized tissue analogs is a promising approach to soft tissue regeneration. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Vincent W.C., E-mail: htvinwu@polyu.edu.hk; Tse, Teddy K.H.; Ho, Cola L.M.
2013-07-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) by the Eclipse treatment planning system and multigrid superposition (MGS) by the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using their respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air), were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. Besides, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than AAA in all types of cancers (p<0.001). With regards to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than that of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of the MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
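The comparison metric above reduces to a simple calculation; the sketch below computes the mean absolute percentage error (MAPE) of algorithm doses against the Monte Carlo reference for a handful of invented reference-point doses.

```python
# MAPE of algorithm doses relative to the Monte Carlo reference.
# The point doses below are invented for illustration.
import numpy as np

def mape(algorithm_dose, mc_dose):
    """Mean absolute percentage deviation from the Monte Carlo reference."""
    algorithm_dose = np.asarray(algorithm_dose, dtype=float)
    mc_dose = np.asarray(mc_dose, dtype=float)
    return 100.0 * np.mean(np.abs(algorithm_dose - mc_dose) / mc_dose)

# Hypothetical point doses (Gy) at Soft/Bone boundary reference points.
mc  = [2.00, 1.85, 1.92, 2.10]
aaa = [2.06, 1.79, 1.99, 2.03]
mgs = [2.03, 1.83, 1.95, 2.06]
print(f"AAA MAPE = {mape(aaa, mc):.1f}%   MGS MAPE = {mape(mgs, mc):.1f}%")
```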
NASA Astrophysics Data System (ADS)
Wood, R.; Monson, J.; Coughlin, T.
1999-03-01
The presence of a soft magnetic layer adjacent to a magnetic recording medium reduces the demagnetization of both perpendicular and longitudinal recording media. However, for perpendicular media, there is no reduction in the worst case, DC, demagnetizing field and no lessening of the decay. For longitudinal media, the highest demagnetizing fields occur at high densities. The soft layer or keeper can reduce these fields significantly and slow the initial decay. The soft underlayer also induces a small anisotropy field that assists the thermal stability of a perpendicular medium. A similar layer with a longitudinal medium, however, causes a small reduction in thermal stability, but only at low levels of demagnetizing field. For longitudinal recording media the overall effect of the keeper on thermal stability is quite complicated: the initial decay may be delayed significantly (a factor of ten in time) but the final decay to zero may still proceed more rapidly.
Development of a soft untethered robot using artificial muscle actuators
NASA Astrophysics Data System (ADS)
Cao, Jiawei; Qin, Lei; Lee, Heow Pueh; Zhu, Jian
2017-04-01
Soft robots have attracted much interest recently, due to their potential capability to work effectively in unstructured environments. Soft actuators are key components in soft robots. Dielectric elastomer actuators are one class of soft actuators, which can deform in response to voltage. Dielectric elastomer actuators exhibit interesting attributes including large voltage-induced deformation and high energy density. These attributes make dielectric elastomer actuators capable of functioning as artificial muscles for soft robots. It is important to develop untethered robots, since connecting cables to external power sources greatly limits a robot's functionality, especially autonomous movement. In this paper we develop a soft untethered robot based on dielectric elastomer actuators. This robot mainly consists of a deformable robotic body and two paper-based feet. The robotic body is essentially a dielectric elastomer actuator, which expands or shrinks as the voltage is switched on or off. In addition, the two feet can achieve adhesion or detachment based on the mechanism of electroadhesion. In general, the entire robotic system can be controlled by voltage. By optimizing the mechanical design of the robot (the size and weight of the electric circuits), we put all the components (batteries, voltage amplifiers, control circuits, etc.) onto the robotic feet, and the robot is capable of autonomous movement. Experiments are conducted to study the robot's locomotion. The finite element method is employed to interpret the deformation of the dielectric elastomer actuators, and the simulations are qualitatively consistent with the experimental observations.
Karuppanan, Udayakumar; Unni, Sujatha Narayanan; Angarai, Ganesan R.
2017-01-01
Assessment of the mechanical properties of soft matter is a challenging task in a purely noninvasive and noncontact environment. As tissue mechanical properties play a vital role in determining tissue health status, such noninvasive methods offer great potential in framing large-scale medical screening strategies. The digital speckle pattern interferometry (DSPI)-based image capture and analysis system described here is capable of extracting deformation information from a single acquired fringe pattern. Such a method of analysis is required because of the highly dynamic nature of speckle patterns derived from soft tissues under mechanical compression. Soft phantoms mimicking breast tissue optical and mechanical properties were fabricated and tested in the DSPI out-of-plane configuration setup. A Hilbert transform (HT)-based image analysis algorithm was developed to extract the phase, and corresponding deformation, of the sample from a single acquired fringe pattern. The experimental fringe contours were found to correlate with numerically simulated deformation patterns of the sample using Abaqus finite element analysis software. The deformation extracted from the experimental fringe pattern using the HT-based algorithm is compared with the deformation value obtained using numerical simulation under similar loading conditions, and the results are found to correlate with an average error of 10%. The proposed method is applied to breast phantoms fabricated with an included subsurface anomaly mimicking cancerous tissue and the results are analyzed. PMID:28180134
Technical Note: Deep learning based MRAC using rapid ultra-short echo time imaging.
Jang, Hyungseok; Liu, Fang; Zhao, Gengyan; Bradshaw, Tyler; McMillan, Alan B
2018-05-15
In this study, we explore the feasibility of a novel framework for MR-based attenuation correction for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo CT image based on ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. MR images for MRAC are acquired using dual echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a short single acquisition (35 sec). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional random field based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on 8 human subjects, where Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76±0.03, 0.96±0.006, and 0.88±0.01. In PET quantification, the proposed MRAC method produced relative PET errors less than 1% within most brain regions. The proposed MRAC method utilizing deep learning with transfer learning and an efficient dRHE acquisition enables reliable PET quantification with accurate and rapid pseudo CT generation. This article is protected by copyright. All rights reserved.
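For reference, the Dice similarity coefficient used to score the air, soft-tissue, and bone labels can be computed as in the sketch below; the label codes and the synthetic volumes are assumptions for illustration.

```python
# Dice similarity coefficient per tissue label on two label volumes.
import numpy as np

def dice(label_map_a, label_map_b, label):
    a = np.asarray(label_map_a) == label
    b = np.asarray(label_map_b) == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

AIR, SOFT, BONE = 0, 1, 2                       # hypothetical label encoding
rng = np.random.default_rng(1)
ct_labels = rng.integers(0, 3, size=(64, 64, 64))
est_labels = ct_labels.copy()
flip = rng.random(ct_labels.shape) < 0.05       # corrupt 5% of voxels
est_labels[flip] = rng.integers(0, 3, size=int(flip.sum()))

for name, lab in (("air", AIR), ("soft tissue", SOFT), ("bone", BONE)):
    print(f"Dice({name}) = {dice(est_labels, ct_labels, lab):.3f}")
```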
The development and role of megavoltage cone beam computerized tomography in radiation oncology
NASA Astrophysics Data System (ADS)
Morin, Olivier
External beam radiation therapy now has the ability to deliver doses that conform tightly to a tumor volume. The steep dose gradients planned in these treatments make it increasingly important to reproduce the patient position and anatomy at each treatment fraction. For this reason, considerable research now focuses on in-room three-dimensional imaging. This thesis describes the first clinical megavoltage cone beam computed tomography (MVCBCT) system, which utilizes a conventional linear accelerator equipped with an amorphous silicon flat panel detector. The document covers the system development and investigation of its clinical applications over the last 4-5 years. The physical performance of the system was evaluated and optimized for soft-tissue contrast resolution, leading to recommendations of imaging protocols to use for specific clinical applications and body sites. MVCBCT images can resolve differences of 5% in electron density for a mean dose of 9 cGy. Hence, the image quality of this system is sufficient to differentiate some soft-tissue structures. The absolute positioning accuracy with MVCBCT is better than 1 mm. The accuracy of isodose lines calculated using MVCBCT images of head and neck patients is within 3% and 3 mm. The system shows excellent stability in image quality, CT# calibration, radiation exposure, and absolute positioning over a period of 8 months. A procedure for MVCBCT quality assurance was developed. In our clinic, MVCBCT has been used to detect nonrigid spinal cord distortions, to position a patient with a paraspinous tumor close to metallic hardware, to position prostate cancer patients using gold markers or soft-tissue landmarks, to monitor head and neck anatomical changes and their dosimetric consequences, and to complement the conventional CT for treatment planning in the presence of metallic implants. MVCBCT imaging is changing the clinical practice of our department by increasingly revealing patient-specific errors. New verification protocols are being developed to minimize those errors, thus moving the practice of radiation therapy one step closer to personalized medicine.
Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z
2018-05-01
Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUV max was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-09-21
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
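A heavily hedged sketch of how such a ZTE-derived continuous attenuation map might be assembled is given below; the segmentation label codes, the linear ZTE-to-HU coefficients, the fixed linear attenuation coefficients, and the naive HU-to-LAC scaling are all placeholder assumptions rather than the paper's fitted values (real pipelines use a piecewise CT-to-μ conversion with a shallower bone slope).

```python
# Toy assembly of a continuous ZTE-based attenuation map; all values assumed.
import numpy as np

MU_AIR, MU_SOFT = 0.0, 0.096          # cm^-1 at 511 keV (commonly used values)
A, B = 2000.0, -1800.0                # hypothetical fit: HU = A + B * ZTE_norm (bone)

def hu_to_lac(hu):
    """Naive linear HU -> 511 keV LAC scaling, for illustration only."""
    return MU_SOFT * (1.0 + hu / 1000.0)

def build_ac_map(zte_norm, labels, AIR=0, SOFT=1, BONE=2):
    """labels: per-voxel air/soft-tissue/bone segmentation of the ZTE volume."""
    mu = np.full(zte_norm.shape, MU_AIR, dtype=float)
    mu[labels == SOFT] = MU_SOFT
    bone = labels == BONE
    mu[bone] = hu_to_lac(A + B * zte_norm[bone])   # continuous bone mu values
    return mu

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=(8, 8, 8))        # toy segmentation
zte_norm = rng.uniform(0.0, 1.0, size=labels.shape)
print(build_ac_map(zte_norm, labels).max())        # largest bone LAC in the toy map
```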
Thrust at N^3LL with power corrections and a precision global fit for α_s(m_Z)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbate, Riccardo; Stewart, Iain W.; Fickinger, Michael
2011-04-01
We give a factorization formula for the e+e- thrust distribution dσ/dτ with τ = 1 - T based on the soft-collinear effective theory. The result is applicable for all τ, i.e. in the peak, tail, and far-tail regions. The formula includes O(α_s^3) fixed-order QCD results, resummation of singular partonic α_s^j ln^k(τ)/τ terms with N^3LL accuracy, hadronization effects from fitting a universal nonperturbative soft function defined with field theory, bottom quark mass effects, QED corrections, and the dominant top mass dependent terms from the axial anomaly. We do not rely on Monte Carlo generators to determine nonperturbative effects since they are not compatible with higher order perturbative analyses. Instead our treatment is based on fitting nonperturbative matrix elements in field theory, which are moments Ω_i of a nonperturbative soft function. We present a global analysis of all available thrust data measured at center-of-mass energies Q = 35-207 GeV in the tail region, where a two-parameter fit to α_s(m_Z) and the first moment Ω_1 suffices. We use a short-distance scheme to define Ω_1, called the R-gap scheme, thus ensuring that the perturbative dσ/dτ does not suffer from an O(Λ_QCD) renormalon ambiguity. We find α_s(m_Z) = 0.1135 ± (0.0002)_expt ± (0.0005)_hadr ± (0.0009)_pert, with χ²/dof = 0.91, where the displayed 1-sigma errors are the total experimental error, the hadronization uncertainty, and the perturbative theory uncertainty, respectively. The hadronization uncertainty in α_s is significantly decreased compared to earlier analyses by our two-parameter fit, which determines Ω_1 = 0.323 GeV with 16% uncertainty.
NASA Astrophysics Data System (ADS)
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-10-01
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
Jacobs, H; Bast, A; Peters, G J; van der Vijgh, W J F; Haenen, G R M M
2011-01-01
Background: Despite therapeutic advances, the prognosis of patients with metastatic soft tissue sarcoma (STS) remains extremely poor. The results of a recent clinical phase II study, evaluating the protective effects of the semisynthetic flavonoid 7-mono-O-(β-hydroxyethyl)-rutoside (monoHER) on doxorubicin-induced cardiotoxicity, suggest that monoHER enhances the antitumour activity of doxorubicin in STSs. Methods: To molecularly explain this unexpected finding, we investigated the effect of monoHER on the cytotoxicity of doxorubicin, and the potential involvement of glutathione (GSH) depletion and nuclear factor-κB (NF-κB) inactivation in the chemosensitising effect of monoHER. Results: MonoHER potentiated the antitumour activity of doxorubicin in the human liposarcoma cell line WLS-160. Moreover, the combination of monoHER with doxorubicin induced more apoptosis in WLS-160 cells compared with doxorubicin alone. MonoHER did not reduce intracellular GSH levels. On the other hand, monoHER pretreatment significantly reduced doxorubicin-induced NF-κB activation. Conclusion: These results suggest that reduction of doxorubicin-induced NF-κB activation by monoHER, which sensitises cancer cells to apoptosis, is involved in the chemosensitising effect of monoHER in human liposarcoma cells. PMID:21245867
SOFT X-RAY IRRADIATION OF SILICATES: IMPLICATIONS FOR DUST EVOLUTION IN PROTOPLANETARY DISKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciaravella, A.; Cecchi-Pestellini, C.; Jiménez-Escobar, A.
2016-09-01
The processing of energetic photons on bare silicate grains was simulated experimentally on silicate films exposed to soft X-rays of energies up to 1.25 keV. The silicate material was prepared by means of a microwave-assisted sol-gel technique. Its chemical composition reflects the Mg2SiO4 stoichiometry with residual impurities due to the synthesis method. The experiments were performed using the spherical grating monochromator beamline at the National Synchrotron Radiation Research Center in Taiwan. We found that soft X-ray irradiation induces structural changes that can be interpreted as an amorphization of the processed silicate material. The present results may have relevant implications for the evolution of silicate materials in X-ray-irradiated protoplanetary disks.
Salvage of mangled upper extremity using the Masquelet technique in a child: A case report
Alassaf, Nabil; Alhoukail, Amro; Alsahli, Abdullah; Althubaiti, Ghazi
2017-01-01
Aim: To report our experience with the Masquelet concept in a pediatric upper extremity following an open injury to the elbow. Methods: A case report and literature review. Results: An 11-year-old boy was transferred to our institution after a motor vehicle collision. There was a primary loss of the ulnohumeral articulation and the surrounding soft tissues as well as the ulnar nerve. Reconstruction used the Masquelet-induced membrane technique and a soft tissue flap. At the 30-month follow-up, the extremity was pain free and functional. Conclusion: This case highlights the value of the Masquelet technique in pediatric extremity injuries, where there is a loss of a major articular segment, as well as significant soft tissue compromise. PMID:29201370
Modeling Inborn Errors of Hepatic Metabolism Using Induced Pluripotent Stem Cells.
Pournasr, Behshad; Duncan, Stephen A
2017-11-01
Inborn errors of hepatic metabolism are caused by deficiencies, most commonly of a single enzyme, that arise from heritable mutations in the genome. Individually such diseases are rare, but collectively they are common. Advances in genome-wide association studies and DNA sequencing have helped researchers identify the underlying genetic basis of such diseases. Unfortunately, cellular and animal models that accurately recapitulate these inborn errors of hepatic metabolism in the laboratory have been lacking. Recently, investigators have exploited molecular techniques to generate induced pluripotent stem cells from patients' somatic cells. Induced pluripotent stem cells can differentiate into a wide variety of cell types, including hepatocytes, thereby offering an innovative approach to unravel the mechanisms underlying inborn errors of hepatic metabolism. Moreover, such cell models could potentially provide a platform for the discovery of therapeutics. In this mini-review, we present a brief overview of the state-of-the-art in using pluripotent stem cells for such studies. © 2017 American Heart Association, Inc.
2015-01-01
Durotaxis, biased cell movement up a stiffness gradient on culture substrates, is one of the useful taxis behaviors for manipulating cell migration on engineered biomaterial surfaces. In this study, long-term durotaxis was investigated on gelatinous substrates containing a soft band 20, 50, or 150 μm in width fabricated using photolithographic elasticity patterning; sharp elasticity boundaries with a gradient strength of 300 kPa/50 μm were achieved. Time-dependent migratory behaviors of 3T3 fibroblast cells were observed over a period of 3 days. During the first day, most of the cells were strongly repelled by the soft band independent of bandwidth, exhibiting the typical durotaxis behavior. However, the repellency of the soft band diminished, and more cells crossed the soft band or exhibited other mixed migratory behaviors during the course of the observation. It was found that durotaxis strength is weakened on the substrate with the narrowest soft band and that adherent affinity-induced entrapment becomes apparent on the widest soft band with time. Factors likely contributing to the apparent weakening of durotaxis during the extended culture, such as changes in surface topography, elasticity, and/or chemistry, were examined. Immunofluorescence analysis indicated preferential deposition onto the soft band of collagen secreted by the fibroblast cells, resulting in an increasing contribution of haptotaxis toward the soft band over time. The deposited collagen did not affect surface topography or surface elasticity but did change surface chemistry, especially on the soft band. The observed time-dependent durotaxis behaviors are the result of mixed mechanical and chemical cues. In studies and applications of cell migratory behavior under a controlled stimulus, it is important to thoroughly examine other (hidden) confounding stimuli in order to accurately interpret data and to design suitable biomaterials for manipulating cell migration. PMID:24851722
Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors
NASA Astrophysics Data System (ADS)
Jacobson, N. Tobias; Witzel, Wayne M.; Nielsen, Erik; Carroll, Malcolm S.
2013-03-01
Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Gating of neural error signals during motor learning
Kimpo, Rhea R; Rinaldi, Jacob M; Kim, Christina K; Payne, Hannah L; Raymond, Jennifer L
2014-01-01
Cerebellar climbing fiber activity encodes performance errors during many motor learning tasks, but the role of these error signals in learning has been controversial. We compared two motor learning paradigms that elicited equally robust putative error signals in the same climbing fibers: learned increases and decreases in the gain of the vestibulo-ocular reflex (VOR). During VOR-increase training, climbing fiber activity on one trial predicted changes in cerebellar output on the next trial, and optogenetic activation of climbing fibers to mimic their encoding of performance errors was sufficient to implant a motor memory. In contrast, during VOR-decrease training, there was no trial-by-trial correlation between climbing fiber activity and changes in cerebellar output, and climbing fiber activation did not induce VOR-decrease learning. Our data suggest that the ability of climbing fibers to induce plasticity can be dynamically gated in vivo, even under conditions where climbing fibers are robustly activated by performance errors. DOI: http://dx.doi.org/10.7554/eLife.02076.001 PMID:24755290
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
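As a concrete illustration of the matrix-multiplication ABFT idea mentioned above, here is a sketch of the standard Huang-Abraham checksum scheme; it is illustrative only and not necessarily the exact variant used onboard:

    import numpy as np

    def abft_matmul(A, B, tol=1e-8):
        """Matrix multiply with algorithm-based fault tolerance (ABFT).

        A checksum row (column sums) is appended to A and a checksum column
        (row sums) to B; after the multiplication, the checksum row/column of
        the product must equal the sums over the data rows/columns, otherwise
        a (possibly radiation-induced) error is flagged.
        """
        Ac = np.vstack([A, A.sum(axis=0, keepdims=True)])   # column-checksum form of A
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum form of B
        C = Ac @ Br
        data = C[:-1, :-1]                                   # the ordinary product A @ B
        consistent = (np.allclose(C[-1, :-1], data.sum(axis=0), atol=tol) and
                      np.allclose(C[:-1, -1], data.sum(axis=1), atol=tol))
        return data, consistent

    # Usage: any silent corruption of an entry of C before the consistency
    # check breaks the row/column checksums and sets the flag to False.
    A, B = np.random.rand(4, 3), np.random.rand(3, 5)
    product, ok = abft_matmul(A, B)
    print(ok)   # True for an error-free multiply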
Oncogenes in Hematopoietic and Hepatic Fish Neoplasms
1990-09-01
[Abstract not recoverable: the record preserves only fragments of a results table covering histologically normal versus abnormal tissue, DEN-induced lesions (focal biliary hyperplasia, rhabdomyosarcoma, cholangiocarcinoma), transfection of medaka cholangiocarcinoma DNA, and growth in soft agar.]
Regulation of Pectate Lyase Synthesis in Pseudomonas fluorescens and Erwinia carotovora
Zucker, Milton; Hankin, Lester
1970-01-01
Inducible synthesis of extracellular pectate lyase occurs in Erwinia carotovora, a bacterial soft-rot pathogen of plants, and, to a lesser extent, in a nonpathogenic isolate of Pseudomonas fluorescens. A combination of pectin and a heat-labile factor in fresh potato tissue or acetone powders of the tissue provided the best carbon source for induction. Yields of inducible pectate lyase were much greater than those usually reported. The pathogen, but not the saprophyte, produced a small amount of constitutive enzyme when grown on glucose. The relatively low level or absence of constitutive synthesis in these bacteria did not result from catabolite repression. Attempts were made to relieve any existing catabolite repression by restricting growth through slow feeding of glucose or by growing the organisms on glycerol. These conditions did not significantly alter the differential rate of lyase synthesis compared with changes observed in the presence of inducers. Previous growth history did not affect induction in the pathogen. However, P. fluorescens previously cultured on glucose required 10 to 20 generations of growth on inducing medium before appreciable lyase synthesis occurred. Differences between the pathogen and nonpathogen suggest that regulation of pectate lyase synthesis is related to pathogenicity of soft-rot bacteria. PMID:5473883
NASA Astrophysics Data System (ADS)
Gavilan, Lisseth; Remusat, Laurent; Roskosz, Mathieu; Popescu, Horia; Jaouen, Nicolas; Sandt, Christophe; Jäger, Cornelia; Henning, Thomas; Simionovici, Alexandre; Lemaire, Jean Louis; Mangin, Denis; Carrasco, Nathalie
2017-05-01
The deuterium enrichment of organics in the interstellar medium, protoplanetary disks, and meteorites has been proposed to be the result of ionizing radiation. The goal of this study is to simulate and quantify the effects of soft X-rays (0.1-2 keV), an important component of stellar radiation fields illuminating protoplanetary disks, on the refractory organics present in the disks. We prepared tholins, nitrogen-rich organic analogs to solids found in several astrophysical environments, e.g., Titan's atmosphere, cometary surfaces, and protoplanetary disks, via plasma deposition. Controlled irradiation experiments with soft X-rays at 0.5 and 1.3 keV were performed at the SEXTANTS beamline of the SOLEIL synchrotron, and were immediately followed by ex-situ infrared, Raman, and isotopic diagnostics. Infrared spectroscopy revealed the preferential loss of singly bonded groups (N-H, C-H, and R-N≡C) and the formation of sp^3 carbon defects with signatures at ~1250-1300 cm^-1. Raman analysis revealed that, while the length of polyaromatic units is only slightly modified, the introduction of defects leads to structural amorphization. Finally, tholins were measured via secondary ion mass spectrometry to quantify the D, H, and C elemental abundances in the irradiated versus non-irradiated areas. Isotopic analysis revealed that significant D-enrichment is induced by X-ray irradiation. Our results are compared to previous experimental studies involving the thermal degradation and electron irradiation of organics. The penetration depth of soft X-rays in μm-sized tholins leads to volume rather than surface modifications: lower-energy X-rays (0.5 keV) induce a larger D-enrichment than 1.3 keV X-rays, reaching a plateau for doses larger than 5 × 10^27 eV cm^-3. Synchrotron fluences fall within the expected soft X-ray fluences in protoplanetary disks, and thus provide evidence of a new non-thermal pathway to deuterium fractionation of organic matter.
DiGirolamo, Gregory J; Smelson, David; Guevremont, Nathan
2015-08-01
Cue-induced craving is a clinically important aspect of cocaine addiction influencing ongoing use and sobriety. However, little is known about the relationship between cue-induced craving and cognitive control toward cocaine cues. While studies suggest that cocaine users have an attentional bias toward cocaine cues, the present study extends this research by testing if cocaine use disorder patients (CDPs) can control their eye movements toward cocaine cues and whether their response varied by cue-induced craving intensity. Thirty CDPs underwent a cue exposure procedure to dichotomize them into high and low craving groups followed by a modified antisaccade task in which subjects were asked to control their eye movements toward either a cocaine or neutral drug cue by looking away from the suddenly presented cue. The relationship between breakdowns in cognitive control (as measured by eye errors) and cue-induced craving (changes in self-reported craving following cocaine cue exposure) was investigated. CDPs overall made significantly more errors toward cocaine cues compared to neutral cues, with higher cravers making significantly more errors than lower cravers even though they did not differ significantly in addiction severity, impulsivity, anxiety, or depression levels. Cue-induced craving was the only specific and significant predictor of subsequent errors toward cocaine cues. Cue-induced craving directly and specifically relates to breakdowns of cognitive control toward cocaine cues in CDPs, with higher cravers being more susceptible. Hence, it may be useful identifying high cravers and target treatment toward curbing craving to decrease the likelihood of a subsequent breakdown in control. Copyright © 2015 Elsevier Ltd. All rights reserved.
Swift J1822.3-1606: Optical spectroscopy of the counterpart candidates from the 10.4m GTC
NASA Astrophysics Data System (ADS)
de Ugarte Postigo, A.; Munoz-Darias, T.
2011-07-01
We have performed optical spectroscopy of the two objects (S1 and S2; ATEL #3496, #3502) present within the Swift/XRT error circle of the Soft Gamma-ray Repeater (SGR) candidate, Swift J1822.3-1606 (ATEL #3488, #3489, #3490, #3491, #3493, #3501, #3503). Observations were performed on July 20, 2011 using the OSIRIS spectrograph at the 10.4m Gran Telescopio de Canarias (GTC) telescope in La Palma, Spain.
Investigation of the Use of Erasures in a Concatenated Coding Scheme
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Marriott, Philip J.
1997-01-01
A new method for declaring erasures in a concatenated coding scheme is investigated. This method is used with the rate 1/2 K = 7 convolutional code and the (255, 223) Reed Solomon code. Errors and erasures Reed Solomon decoding is used. The erasure method proposed uses a soft output Viterbi algorithm and information provided by decoded Reed Solomon codewords in a deinterleaving frame. The results show that a gain of 0.3 dB is possible using a minimum amount of decoding trials.
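For context, a standard property of Reed Solomon codes underlies the benefit of declaring erasures: an (n, k) = (255, 223) code with minimum distance d_min = n - k + 1 = 33 can correct e symbol errors together with f declared erasures whenever

    2e + f \le n - k = 32,

so a correctly declared erasure consumes only half the decoding capability of an undetected error, which is what the SOVA-based erasure declaration exploits.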
[Latency problems with smothering using soft cover].
Wirth, Ingo; Strauch, Hansjürg; Schmeling, Andreas
2007-01-01
Smothering by covering the respiratory orifices with soft material is one of the rarely established forms of mechanically induced death by asphyxia. An important reason for this latency is that this kind of homicide leaves almost no traces. The two cases described here, from the autopsy material of the Institute of Legal Medicine in Berlin (CCM), show the limits of medico-legal interpretation and the resulting special responsibility of the investigator. In the first case the defendant denied the offence and was acquitted of the charge, while in the second case the self-confessed offender was convicted.
Krempfielins Q and R, Two New Eunicellin-Based Diterpenoids from the Soft Coral Cladiella krempfi
Tai, Chi-Jen; Chokkalingam, Uvarani; Cheng, Yang; Shih, Shou-Ping; Lu, Mei-Chin; Su, Jui-Hsin; Hwang, Tsong-Long; Sheu, Jyh-Horng
2014-01-01
Two new eunicellin-based diterpenoids, krempfielins Q and R (1 and 2), and one known compound cladieunicellin K (3) have been isolated from a Formosan soft coral Cladiella krempfi. The structures of these two new metabolites were elucidated by extensive spectroscopic analysis. Anti-inflammatory activity of new metabolites to inhibit the superoxide anion generation and elastase release in N-formyl-methionyl-leucyl phenylalanine/cytochalasin B (FMLP/CB)-induced human neutrophil cells and cytotoxicity of both new compounds toward five cancer cell lines were reported. PMID:25437917
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, B.; Sutherland, B.; Bennett, P. V.
We tested the ability of melatonin (N-acetyl-5-methoxytryptamine), a highly effective radical scavenger and human hormone, to protect DNA in solution and in human cells against the induction of complex DNA clusters and biological damage induced by low or high linear energy transfer radiation (100 kVp X-rays, 970 MeV/nucleon Fe ions). Plasmid DNA in solution was treated with increasing concentrations of melatonin (0.0-3.5 mM) and irradiated with X-rays. Human cells (28SC monocytes) were also irradiated with X-rays and Fe ions with and without 2 mM melatonin. Agarose plugs containing genomic DNA were subjected to Contour Clamped Homogeneous Electrophoretic Field (CHEF) separation followed by imaging, and clustered DNA damages were measured using Number Average length analysis. Transformation experiments using a soft agar colony assay were carried out on human primary fibroblast cells irradiated with Fe ions with or without 2 mM melatonin. In plasmid DNA in solution, melatonin reduced the induction of single- and double-strand breaks. Pretreatment of human 28SC cells with 2 mM melatonin for 24 h before irradiation reduced the level of X-ray-induced double-strand breaks by ~50%, of abasic clustered damages by about 40%, and of Fe ion-induced double-strand breaks (41% reduction) and abasic clusters (34% reduction). It decreased transformation to soft agar growth of human primary cells by a factor of 10, but reduced killing by Fe ions by only 20-40%. Melatonin's effective reduction of radiation-induced critical DNA damages, cell killing, and striking decrease of transformation suggest that it is an excellent candidate as a countermeasure against radiation exposure, including radiation exposure to astronaut crews in space travel.
Avoidance of APOBEC3B-induced mutation by error-free lesion bypass
Hoopes, James I.; Hughes, Amber L.; Hobson, Lauren A.; Cortez, Luis M.; Brown, Alexander J.
2017-01-01
APOBEC cytidine deaminases mutate cancer genomes by converting cytidines into uridines within ssDNA during replication. Although uracil DNA glycosylases limit APOBEC-induced mutation, it is unknown if subsequent base excision repair (BER) steps function on replication-associated ssDNA. Hence, we measured APOBEC3B-induced CAN1 mutation frequencies in yeast deficient in BER endonucleases or DNA damage tolerance proteins. Strains lacking Apn1, Apn2, Ntg1, Ntg2 or Rev3 displayed wild-type frequencies of APOBEC3B-induced canavanine resistance (CanR). However, strains without error-free lesion bypass proteins Ubc13, Mms2 and Mph1 displayed respective 4.9-, 2.8- and 7.8-fold higher frequencies of APOBEC3B-induced CanR. These results indicate that mutations resulting from APOBEC activity are avoided by deoxyuridine conversion to abasic sites ahead of nascent lagging strand DNA synthesis and subsequent bypass by error-free template switching. We found this mechanism also functions during telomere re-synthesis, but with a diminished requirement for Ubc13. Interestingly, reduction of G to C substitutions in Ubc13-deficient strains uncovered a previously unknown role of Ubc13 in controlling the activity of the translesion synthesis polymerase, Rev1. Our results highlight a novel mechanism for error-free bypass of deoxyuridines generated within ssDNA and suggest that the APOBEC mutation signature observed in cancer genomes may under-represent the genomic damage these enzymes induce. PMID:28334887
Errors in Aviation Decision Making: Bad Decisions or Bad Luck?
NASA Technical Reports Server (NTRS)
Orasanu, Judith; Martin, Lynne; Davison, Jeannie; Null, Cynthia H. (Technical Monitor)
1998-01-01
Despite efforts to design systems and procedures to support 'correct' and safe operations in aviation, errors in human judgment still occur and contribute to accidents. In this paper we examine how an NDM (naturalistic decision making) approach might help us to understand the role of decision processes in negative outcomes. Our strategy was to examine a collection of identified decision errors through the lens of an aviation decision process model and to search for common patterns. The second, and more difficult, task was to determine what might account for those patterns. The corpus we analyzed consisted of tactical decision errors identified by the NTSB (National Transportation Safety Board) from a set of accidents in which crew behavior contributed to the accident. A common pattern emerged: about three quarters of the errors represented plan-continuation errors, that is, a decision to continue with the original plan despite cues that suggested changing the course of action. Features in the context that might contribute to these errors were identified: (a) ambiguous dynamic conditions and (b) organizational and socially-induced goal conflicts. We hypothesize that 'errors' are mediated by underestimation of risk and failure to analyze the potential consequences of continuing with the initial plan. Stressors may further contribute to these effects. Suggestions for improving performance in these error-inducing contexts are discussed.
Michaud, Langis; Brazeau, Daniel; Corbeil, Marie-Eve; Forcier, Pierre; Bernard, Pierre-Jean
2013-12-01
This study aims to report the measured in vivo contribution of soft lenses of various powers to the optics of a piggyback system (PBS). This prospective, non-dispensing clinical study was conducted on regular contact lens wearers with regular corneal profiles. Subjects were masked to the products used. The study involved the use of a spherical soft lens of three different powers in a PBS, used as a carrier for a rigid gas permeable (RGP) lens. Baseline data were collected and soft lenses were then fitted on both eyes of each subject. Both lenses were assessed for position and movement. Over-refraction was obtained. The soft lens power contribution to the optics (SLPC) of the PBS was estimated by computing the initial ametropia, lacrimal lens power, rigid lens power and over-refraction. One set of data per subject (one eye) was kept for statistical analysis. Thirty subjects (12 males, 18 females), aged 24.4 (±4.5) years, were enrolled. The use of plus-powered soft lenses enhanced initial RGP lens centration. Once an optimal fit was achieved, all lenses showed normal movement. SLPC represented 21.3% of the initial soft lens power when using a -6.00 carrier, and 20.6% for a +6.00. A +0.50 carrier did not contribute any power to the system. These results are generally in accordance with theoretical models developed in the past. On average, except for the low-powered carrier, the use of a spherical soft lens provided 20.9% of its marked power. To achieve better results, the use of a plus-powered carrier is recommended. Copyright © 2013 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
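As a quick worked example of the average 20.9% contribution reported above (illustrative arithmetic only):

    0.209 \times (-6.00\ \mathrm{D}) \approx -1.25\ \mathrm{D},

i.e. a -6.00 D soft carrier adds roughly -1.25 D of effective power to the piggyback system, which must be accounted for in the rigid lens, whereas the +0.50 D carrier adds essentially none.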
Context sensitivity and ambiguity in component-based systems design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bespalko, S.J.; Sindt, A.
1997-10-01
Designers of component-based, real-time systems need to guarantee the correctness of software and its output. Complexity of a system, and thus the propensity for error, is best characterized by the number of states a component can encounter. In many cases, large numbers of states arise where the processing is highly dependent on context. In these cases, states are often missed, leading to errors. The following are proposals for compactly specifying system states which allow the factoring of complex components into a control module and a semantic processing module. Further, the need for methods that allow for the explicit representation of ambiguity and uncertainty in the design of components is discussed. Presented herein are examples of real-world problems which are highly context-sensitive or are inherently ambiguous.
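A minimal sketch of the control-module/semantic-module factoring proposed above, with hypothetical states, events, and a placeholder domain computation (the report does not specify a concrete notation):

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        ACQUIRING = auto()
        PROCESSING = auto()
        FAULT = auto()

    # Control module: an explicit, compact enumeration of (state, event) pairs.
    TRANSITIONS = {
        (State.IDLE, "start"): State.ACQUIRING,
        (State.ACQUIRING, "data_ready"): State.PROCESSING,
        (State.PROCESSING, "done"): State.IDLE,
        (State.PROCESSING, "error"): State.FAULT,
    }

    # Semantic processing module: context-dependent work, keyed only by state.
    def process(state, payload):
        if state is State.PROCESSING and payload is not None:
            return sum(payload) / len(payload)   # placeholder domain computation
        return None

    def step(state, event, payload=None):
        """Advance the component; unhandled contexts are surfaced explicitly
        rather than being silently missed."""
        next_state = TRANSITIONS.get((state, event))
        if next_state is None:
            raise ValueError(f"unhandled context: {state.name} on '{event}'")
        return next_state, process(next_state, payload)

    # Example: state, result = step(State.ACQUIRING, "data_ready", [1.0, 2.0, 3.0])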
Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits
NASA Astrophysics Data System (ADS)
Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
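A simplified first-order model (an illustrative textbook-style approximation, not the authors' formulation) captures why longer combinational delay helps: an upset latched into a flip-flop produces a captured error only if it occurs early enough in the clock cycle for the corrupted value to propagate through the downstream logic before the next latching edge, so the effective rate is derated roughly as

    \mathrm{SER}_{\mathrm{eff}} \approx \mathrm{SER}_{\mathrm{FF}} \cdot \frac{T_{\mathrm{clk}} - t_{\mathrm{logic}} - t_{\mathrm{setup}}}{T_{\mathrm{clk}}},

with the temporally masked fraction growing as the logic delay approaches the clock period.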
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt
2004-08-01
It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.
Observer detection of image degradation caused by irreversible data compression processes
NASA Astrophysics Data System (ADS)
Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David
1991-05-01
Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.
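A minimal sketch of the difference-image power-spectrum characterization mentioned above (a generic numpy/FFT recipe, not the authors' exact processing chain):

    import numpy as np

    def radial_power_spectrum(original, decompressed, n_bins=64):
        """Characterize compression error via the radially averaged power
        spectrum of the difference image (decompressed - original)."""
        diff = decompressed.astype(float) - original.astype(float)
        power = np.abs(np.fft.fftshift(np.fft.fft2(diff))) ** 2
        ny, nx = diff.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(x - nx / 2.0, y - ny / 2.0)             # radial spatial frequency (pixels)
        bins = np.linspace(0.0, r.max(), n_bins + 1)
        idx = np.digitize(r.ravel(), bins)                   # frequency bin index per pixel
        sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins + 2)
        counts = np.bincount(idx, minlength=n_bins + 2)
        return sums[1:n_bins + 1] / np.maximum(counts[1:n_bins + 1], 1)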
McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.
2010-01-01
The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
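A small simulation in the spirit of the bias argument above (illustrative only; it uses a naive "detected at least once" summary rather than the dynamic occupancy models actually fitted in the study, and the rates are hypothetical):

    import numpy as np

    rng = np.random.default_rng(1)

    def apparent_occupancy(psi=0.5, p=0.5, fp=0.01, n_sites=5000, n_visits=5):
        """Fraction of sites with >= 1 detection when occupied sites are
        detected with probability p per visit and unoccupied sites yield a
        false positive with probability fp per visit."""
        occupied = rng.random(n_sites) < psi
        p_any = np.where(occupied,
                         1.0 - (1.0 - p) ** n_visits,    # true detections
                         1.0 - (1.0 - fp) ** n_visits)   # false positives only
        return float((rng.random(n_sites) < p_any).mean())

    print(apparent_occupancy(fp=0.00))  # ~0.48: false negatives only
    print(apparent_occupancy(fp=0.01))  # higher: ~5% of empty sites now record a detection

An occupancy model that corrects for false negatives but assumes no false positives would absorb those spurious detections as genuine occupancy, which is the kind of overestimation the authors describe.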