Hayashi, Norio; Miyati, Tosiaki; Takanaga, Masako; Ohno, Naoki; Hamaguchi, Takashi; Kozaka, Kazuto; Sanada, Shigeru; Yamamoto, Tomoyuki; Matsui, Osamu
2011-01-01
In parallel magnetic resonance imaging (MRI), the sensitivity of a phased-array coil falls significantly in the direction perpendicular to the coil arrangement. Moreover, in 3.0 tesla (3T) abdominal MRI, image quality is degraded by changes in relaxation times, a stronger magnetic susceptibility effect, and related factors. Because of its high resonant frequency, 3T MRI also shows reduced signal in the depths (central part) of the trunk. SCIC, a sensitivity correction process, corrects inadequately, for example by emphasizing edges while correcting the central part. We therefore investigated a nonuniformity compensation process for the sensitivity of 3T abdominal MR images based on a Gaussian distribution. The correction consisted of the following steps: 1) the center of gravity of the human-body region in an abdominal MR image was calculated; 2) a correction coefficient map was created around the center of gravity using the Gaussian distribution; and 3) the sensitivity-corrected image was created from the correction coefficient map and the original image. With the Gaussian correction, the uniformity calculated using the NEMA method improved significantly compared with the original phantom image. In a visual evaluation by radiologists, uniformity also improved significantly with the Gaussian correction. Given the improvement in the homogeneity of abdominal images taken with 3T MRI, the Gaussian correction process is considered a very useful technique.
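A minimal sketch of the three correction steps, assuming a 2-D magnitude image in a NumPy array; the Gaussian width, its scaling, and the body-segmentation threshold are free parameters not specified in the abstract:

```python
import numpy as np

def gaussian_sensitivity_correction(img, sigma, threshold=0.05, strength=1.0):
    """Sketch of the three-step Gaussian correction described above.

    img: 2-D abdominal magnitude image (float array).
    sigma, strength, threshold: free parameters; the abstract does not
    specify how the Gaussian is scaled, so this mapping is illustrative.
    """
    # 1) Center of gravity of the human-body region.
    body = img > threshold * img.max()
    ys, xs = np.nonzero(body)
    cy, cx = ys.mean(), xs.mean()

    # 2) Correction-coefficient map: a Gaussian centered on the center
    #    of gravity, boosting the signal-depressed central region.
    y, x = np.indices(img.shape)
    r2 = (y - cy) ** 2 + (x - cx) ** 2
    corr = 1.0 + strength * np.exp(-r2 / (2.0 * sigma ** 2))

    # 3) Sensitivity-corrected image = coefficient map * original image.
    return img * corr
```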
NASA Technical Reports Server (NTRS)
Chin, R. T.; Beaudet, P. R.
1981-01-01
Large antenna multi-channel microwave radiometer (LAMMR) software specifications were written for LAMMR ground processing. There is a need for more computationally efficient antenna temperature correction methods for compensating side-lobe contributions, especially near continents, islands, and weather fronts. One of the major conclusions was that the antenna pattern correction (APC) processes did not accomplish the implied goal of compensating for the antenna side-lobe influences on brightness temperature. A priori knowledge of land/water locations was shown to be needed and had to be incorporated in a context-sensitive APC process if the artifacts caused by land presence are to be avoided. The high temperatures in land regions can severely bias the lower ocean response.
Parameter Sensitivity Study of the Wall Interference Correction System (WICS)
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Everhart, Joel L.; Iyer, Venkit
2001-01-01
An off-line version of the Wall Interference Correction System (WICS) has been implemented for the NASA Langley National Transonic Facility. The correction capability is currently restricted to corrections for solid wall interference in the model pitch plane for Mach numbers less than 0.45 due to a limitation in tunnel calibration data. A study to assess output sensitivity to the aerodynamic parameters of Reynolds number and Mach number was conducted on this code to further ensure quality during the correction process. In addition, this paper includes an investigation into possible corrections for a semispan test technique using a nonmetric standoff and an improvement to the standard data rejection algorithm.
Sensitivity Study of the Wall Interference Correction System (WICS) for Rectangular Tunnels
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Everhart, Joel L.; Iyer, Venkit
2001-01-01
An off-line version of the Wall Interference Correction System (WICS) has been implemented for the NASA Langley National Transonic Facility. The correction capability is currently restricted to corrections for solid wall interference in the model pitch plane for Mach numbers less than 0.45 due to a limitation in tunnel calibration data. A study to assess output sensitivity to measurement uncertainty was conducted to determine standard operational procedures and guidelines to ensure data quality during the testing process. Changes to the current facility setup and design recommendations for installing the WICS code into a new facility are reported.
Sensitivity to Stroke Emerges in Kindergartners Reading Chinese Script
Li, Su; Yin, Li
2017-01-01
To what extent are young children sensitive to individual stroke, the smallest unit of writing in Chinese that carries no phonological or semantic information? The present study examined Chinese kindergartners' sensitivity to stroke and the contribution of reading ability and age to stroke sensitivity. Fifty-five children from Beijing, including 28 4-year-olds (M age = 4.55 years, SD = 0.28, 16 males) and 29 5-year-olds (M age = 5.58 years, SD = 0.30, 14 males), were administered an orthographic matching task and assessed on non-verbal IQ and Chinese word reading. In the orthographic matching task, children were asked to decide whether two items were exactly the same or different in three conditions, with stimuli being correctly written characters (e.g., ""), stroke-missing or redundant characters (e.g., ""), and Tibetan alphabets (e.g., ""), respectively. The stimuli were presented with E-prime 2.0 software and were displayed on a Surface Pro. Children responded by touching the screen, and reaction time was used as a measure of processing efficiency. The 5-year-olds but not the 4-year-olds processed correctly written characters more efficiently than stroke-missing/redundant characters, suggesting emergence of stroke sensitivity from age 5. The 4- and 5-year-olds both processed correctly written characters more efficiently than Tibetan alphabets, ruling out the possibility that the 5-year-olds' sensitivity to stroke was due to the unusual look of the stimuli. Hierarchical regression analyses showed that Chinese word reading explained 10% additional variance in stroke sensitivity after statistically controlling for age. Age did not account for additional variance in stroke sensitivity after Chinese word reading was considered. Taken together, the findings of this study revealed that despite the visually highly complex nature of Chinese and the fact that individual strokes carry no phonological or semantic information, children develop sensitivity to stroke from age 5, and such sensitivity is significantly associated with reading experience. PMID:28626438
Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay
Huang, Jian
2013-03-12
A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.
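A heavily simplified, hypothetical sketch of applying such per-cycle correction factors; the functional forms and coefficients below are placeholders, not the patent's actual model:

```python
import numpy as np

def correct_vibration_signal(vib, motoring_pressure_peak, k_decay, k_sens):
    """Hypothetical per-cycle correction of a knock-sensor signal.

    vib: sampled vibration signal for one engine cycle (array).
    motoring_pressure_peak: peak of the estimated motoring pressure for
        the same cycle.
    k_decay, k_sens: placeholder coefficients; the patent derives the
        actual correction factors from the motoring pressure estimate
        and the measured signal itself.
    """
    # Stand-in charge-decay correction keyed to the signal's own energy.
    decay_gain = 1.0 + k_decay * np.abs(vib).sum()
    # Stand-in sensitivity correction keyed to the operating condition.
    sens_gain = 1.0 + k_sens * motoring_pressure_peak
    return vib * decay_gain * sens_gain
```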
Context-Sensitive Spelling Correction of Consumer-Generated Content on Health Care
Chen, Rudan; Zhao, Xianyang; Xu, Wei; Cheng, Wenqing; Lin, Simon
2015-01-01
Background: Consumer-generated content, such as postings on social media websites, can serve as an ideal source of information for studying health care from a consumer’s perspective. However, consumer-generated content on health care topics often contains spelling errors, which, if not corrected, will be obstacles for downstream computer-based text analysis. Objective: In this study, we proposed a framework with a spelling correction system designed for consumer-generated content and a novel ontology-based evaluation system which was used to efficiently assess the correction quality. Additionally, we emphasized the importance of context sensitivity in the correction process, and demonstrated why correction methods designed for electronic medical records (EMRs) failed to perform well with consumer-generated content. Methods: First, we developed our spelling correction system based on Google Spell Checker. The system processed postings acquired from MedHelp, a biomedical bulletin board system (BBS), and saved misspelled words (eg, sertaline) and corresponding corrected words (eg, sertraline) into two separate sets. Second, to reduce the number of words needing manual examination in the evaluation process, we respectively matched the words in the two sets with terms in two biomedical ontologies: RxNorm and Systematized Nomenclature of Medicine -- Clinical Terms (SNOMED CT). The ratio of words which could be matched and appropriately corrected was used to evaluate the correction system’s overall performance. Third, we categorized the misspelled words according to the types of spelling errors. Finally, we calculated the ratio of abbreviations in the postings, which remarkably differed between EMRs and consumer-generated content and could largely influence the overall performance of spelling checkers. Results: An uncorrected word and the corresponding corrected word was called a spelling pair, and the two words in the spelling pair were its members. In our study, there were 271 spelling pairs detected, among which 58 (21.4%) pairs had one or two members matched in the selected ontologies. The ratio of appropriate correction in the 271 overall spelling errors was 85.2% (231/271). The ratio of that in the 58 spelling pairs was 86% (50/58), close to the overall ratio. We also found that linguistic errors took up 31.4% (85/271) of all errors detected, and only 0.98% (210/21,358) of words in the postings were abbreviations, which was much lower than the ratio in the EMRs (33.6%). Conclusions: We conclude that our system can accurately correct spelling errors in consumer-generated content. Context sensitivity is indispensable in the correction process. Additionally, it can be confirmed that consumer-generated content differs from EMRs in that consumers seldom use abbreviations. Also, the evaluation method, taking advantage of biomedical ontology, can effectively estimate the accuracy of the correction system and reduce manual examination time. PMID:26232246
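The ontology-based evaluation step reduces to a set lookup over the detected spelling pairs; a minimal sketch, where the tiny term set and pair list stand in for the RxNorm/SNOMED CT term lists used in the paper:

```python
def ontology_match_ratio(spelling_pairs, ontology_terms):
    """Estimate correction quality by matching pair members to ontology terms.

    spelling_pairs: list of (misspelled, corrected) word pairs.
    ontology_terms: set of lowercase terms, e.g. drawn from RxNorm and
        SNOMED CT term lists (loading those lists is assumed here).
    """
    matched = [(m, c) for (m, c) in spelling_pairs
               if m.lower() in ontology_terms or c.lower() in ontology_terms]
    # Pairs with at least one member found in the ontologies can be judged
    # automatically; the remainder would need manual examination.
    return matched, len(matched) / len(spelling_pairs)

# Toy example mirroring the paper's sertaline -> sertraline pair:
pairs = [("sertaline", "sertraline"), ("hte", "the")]
matched, ratio = ontology_match_ratio(pairs, {"sertraline"})
```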
Data-driven sensitivity inference for Thomson scattering electron density measurement systems.
Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro
2017-01-01
We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel-to-channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on 210 sets of experimental data, we evaluated the correction factor of the sensitivity and the noise amplitude for each channel. The correction factor varies by ≈10% and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of the spatial-derivative inference was also demonstrated.
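A toy version of the inference idea, with a running-mean smoother standing in for the paper's Gaussian-process model of the latent density profiles:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def infer_channel_sensitivity(data, width=7, n_iter=5):
    """Alternating estimate of per-channel sensitivity factors.

    data: array (n_shots, n_channels) of repeated profile measurements.
    The paper infers these factors jointly with a Gaussian-process model;
    here the smoother is a crude stand-in for the GP latent function.
    """
    s = np.ones(data.shape[1])
    for _ in range(n_iter):
        corrected = data / s                  # remove current estimate
        latent = uniform_filter1d(corrected, size=width, axis=1)
        # Sensitivity = typical ratio of raw reading to latent profile.
        s = np.median(data / np.clip(latent, 1e-12, None), axis=0)
        s /= s.mean()                         # fix the overall scale
    return s
```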
SENSIT.FOR: A program for sensitometric reduction
NASA Astrophysics Data System (ADS)
Maury, A.; Marchal, J.
1984-09-01
A FORTRAN program for sensitometric evaluation of processes involved in hypering astronomical plates was written. It contains subroutines for full or quick description of the operation being done; choice of type of sensitogram; creation of 16 subfiles in the scan; density filtering; correction for area; specular PDS to diffuse ISO density calibration; and fog correction.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)
2001-01-01
A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system, based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
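The instantaneous multivariate sensitivities can be sketched as a Jacobian of the fitted model; the finite-difference estimate below is a stand-in for the analytic derivatives of the trained network:

```python
import numpy as np

def sensitivities(model_fn, x, eps=1e-4):
    """Instantaneous sensitivities d(output_i)/d(input_j) at state x.

    model_fn: fitted one-step model of the dynamical system (e.g. the
        prediction function of a trained neural network), mapping a
        state vector to the next state.
    Estimated by central finite differences; the paper computes the
    equivalent quantities from the network itself.
    """
    n = x.size
    jac = np.empty((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        jac[:, j] = (model_fn(x + dx) - model_fn(x - dx)) / (2 * eps)
    return jac  # jac[i, j] = sensitivity of variable i to variable j
```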
Artifact Correction in Temperature-Dependent Attenuated Total Reflection Infrared (ATR-IR) Spectra.
Sobieski, Brian; Chase, Bruce; Noda, Isao; Rabolt, John
2017-08-01
A spectral processing method was developed and tested for analyzing temperature-dependent attenuated total reflection infrared (ATR-IR) spectra of aliphatic polyesters. Spectra of a bio-based, biodegradable polymer, 3.9 mol% 3HHx poly[(R)-3-hydroxybutyrate-co-(R)-3-hydroxyhexanoate] (PHBHx), were corrected prior to analysis with two-dimensional correlation spectroscopy (2D-COS). Removal of the temperature variation of the diamond absorbance, correction of the baseline, ATR correction, and appropriate normalization were key to generating more reliable data. Both the processing steps and their order were important. A comparison with differential scanning calorimetry (DSC) analysis indicated that the normalization method should be chosen with caution to avoid unintentional trends and distortions of the crystalline-sensitive bands.
NASA Technical Reports Server (NTRS)
Meister, Gerhard; Franz, Bryan A.
2011-01-01
The Moderate-Resolution Imaging Spectroradiometer (MODIS) on NASA's Earth Observing System (EOS) satellite Terra provides global coverage of top-of-atmosphere (TOA) radiances that have been successfully used for terrestrial and atmospheric research. The MODIS Terra ocean color products, however, have been compromised by an inadequate radiometric calibration at the short wavelengths. The Ocean Biology Processing Group (OBPG) at NASA has derived radiometric corrections using ocean color products from the SeaWiFS sensor as truth fields. In the R2010.0 reprocessing, these corrections have been applied to the whole mission life span of 10 years. This paper presents the corrections to the radiometric gains and to the instrument polarization sensitivity, demonstrates the improvement to the Terra ocean color products, and discusses issues that need further investigation. Although the global averages of MODIS Terra ocean color products are now in excellent agreement with those of SeaWiFS and MODIS Aqua, and image quality has been significantly improved, the large corrections applied to the radiometric calibration and polarization sensitivity require additional caution when using the data.
Finding the bottom and using it
Sandoval, Ruben M.; Wang, Exing; Molitoris, Bruce A.
2014-01-01
Maximizing the 2-photon parameters used in acquiring images for quantitative intravital microscopy, especially when high sensitivity is required, remains an open area of investigation. Here we present data on correctly setting the black level of the photomultiplier-tube amplifier by adjusting the offset to allow for accurate quantitation of low-intensity processes. When the black level is set too high, some low-intensity pixel values become zero and a nonlinear degradation in sensitivity occurs, rendering otherwise quantifiable low-intensity values virtually undetectable. Initial studies using a series of increasing offsets for a sequence of concentrations of fluorescent albumin in vitro revealed a loss of sensitivity at higher offsets for lower albumin concentrations. A similar decrease in sensitivity, and therefore in the ability to correctly determine the glomerular permeability coefficient of albumin, occurred in vivo at higher offsets. Finding the offset that yields accurate and linear data is essential for quantitative analysis when high sensitivity is required. PMID:25313346
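A small simulation of the described clipping effect, with illustrative gain and noise values; it shows how an excessive black-level offset makes the mean signal nonlinear in concentration:

```python
import numpy as np

rng = np.random.default_rng(0)
concentrations = np.array([1, 2, 4, 8, 16], dtype=float)

def mean_signal(conc, offset, gain=10.0, noise=3.0, n=100_000):
    # Pixel values: signal plus noise, shifted down by the black-level
    # offset, then clipped at zero by the digitizer.
    pixels = gain * conc + rng.normal(0, noise, n) - offset
    return np.clip(pixels, 0, None).mean()

for offset in (0.0, 20.0, 40.0):
    readings = [mean_signal(c, offset) for c in concentrations]
    print(offset, np.round(readings, 1))
# With a large offset, low concentrations collapse toward zero and the
# response is no longer linear in concentration.
```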
ERIC Educational Resources Information Center
Cronin, Linda L.; Padilla, Michael J.
1984-01-01
Describes science activities related to endangered species designed to sensitize students to the process of extinction, teach about the human role in that process, and emphasize the importance of National Wildlife Week. Provides activities and games such as drift traps, webbing, making corrections, What Animal Am I?, and Noah's Ark. (JM)
Conflict and Criterion Setting in Recognition Memory
ERIC Educational Resources Information Center
Curran, Tim; DeBuse, Casey; Leynes, P. Andrew
2007-01-01
Recognition memory requires both retrieval processes and control processes such as criterion setting. Decision criteria were manipulated by offering different payoffs for correct "old" versus "new" responses. Criterion setting influenced the following late-occurring (1,000+ ms), conflict-sensitive event-related brain potential (ERP) components:…
Taylor, Paul A; Alhamud, A; van der Kouwe, Andre; Saleh, Muhammad G; Laughton, Barbara; Meintjes, Ernesta
2016-12-01
Diffusion tensor imaging (DTI) is susceptible to several artifacts due to eddy currents, echo planar imaging (EPI) distortion and subject motion. While several techniques correct for individual distortion effects, no optimal combination of DTI acquisition and processing has been determined. Here, the effects of several motion correction techniques are investigated while also correcting for EPI distortion: prospective correction, using navigation; retrospective correction, using two different popular packages (FSL and TORTOISE); and the combination of both methods. Data from a pediatric group that exhibited incidental motion in varying degrees are analyzed. Comparisons are carried out while implementing eddy current and EPI distortion correction. DTI parameter distributions, white matter (WM) maps and probabilistic tractography are examined. The importance of prospective correction during data acquisition is demonstrated. In contrast to some previous studies, results also show that the inclusion of retrospective processing improved ellipsoid fits and both the sensitivity and specificity of group tractographic results, even for navigated data. Matches with anatomical WM maps are highest throughout the brain for data that have been both navigated and processed using TORTOISE. The inclusion of both prospective and retrospective motion correction with EPI distortion correction is important for DTI analysis, particularly when studying subject populations that are prone to motion. Hum Brain Mapp 37:4405-4424, 2016. © 2016 Wiley Periodicals, Inc.
Assessment of Terra MODIS On-Orbit Polarization Sensitivity Using Pseudoinvariant Desert Sites
NASA Technical Reports Server (NTRS)
Wu, Aisheng; Geng, Xu; Wald, Andrew; Angal, Amit; Xiong, Xiaoxiong
2017-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) is currently flying on NASA's Earth Observing System Terra and Aqua satellites, launched in 1999 and 2002, respectively. MODIS reflective solar bands in the visible wavelength range are known to be sensitive to polarized light based on prelaunch polarization sensitivity tests. After about five years of on-orbit operations, it was discovered that the polarization sensitivity at short wavelengths had shown a noticeable increase. In this paper, we examine the impact of polarization on measured top-of-atmosphere (TOA) reflectance based on MODIS Collection-6 L1B over pseudo-invariant desert sites. The standard polarization correction equation is used in combination with at-sensor radiances simulated by the Second Simulation of a Satellite Signal in the Solar Spectrum, Vector, radiative transfer code (6SV). We ignore the polarization contribution from the surface, and a ratio approach is used for both the 6SV-derived input parameters and the observed TOA reflectance. Results indicate that significant gain corrections up to 25% are required near the end of scan for the 412 and 443 nm bands. The polarization correction reduces the seasonal fluctuations in reflectance trends and mirror side ratios from 30% and 12% to 10% and 5%, respectively, for the two bands. Comparison of the effectiveness of the polarization correction with the results from the NASA Ocean Biology Processing Group shows a good agreement in the corrected reflectance trending results and their seasonal fluctuations.
Monitoring others' errors: The role of the motor system in early childhood and adulthood.
Meyer, Marlene; Braukmann, Ricarda; Stapel, Janny C; Bekkering, Harold; Hunnius, Sabine
2016-03-01
Previous research demonstrates that from early in life, our cortical sensorimotor areas are activated both when performing and when observing actions (mirroring). Recent findings suggest that the adult motor system is also involved in detecting others' rule violations. Yet, how this translates to everyday action errors (e.g., accidentally dropping something) and how error-sensitive motor activity for others' actions emerges are still unknown. In this study, we examined the role of the motor system in error monitoring. Participants observed successful and unsuccessful pincer grasp actions while their electroencephalogram (EEG) was recorded. We tested infants (8- and 14-month-olds) at different stages of learning the pincer grasp and adults as advanced graspers. Power in Alpha- and Beta-frequencies was analysed to assess motor and visual processing. Adults showed enhanced motor activity when observing erroneous actions. However, neither 8- nor 14-month-olds displayed this error sensitivity, despite showing motor activity for both actions. All groups did show similar visual activity, that is, more Alpha-suppression when observing correct actions. Thus, while correct and erroneous actions were processed as visually distinct in all age groups, only the adults' motor system was sensitive to action correctness. The functionality of different brain oscillations in the development of error monitoring and mirroring is discussed. © 2015 The British Psychological Society.
Can corrective feedback improve recognition memory?
Kantner, Justin; Lindsay, D Stephen
2010-06-01
An understanding of the effects of corrective feedback on recognition memory can inform both recognition theory and memory training programs, but few published studies have investigated the issue. Although the evidence to date suggests that feedback does not improve recognition accuracy, few studies have directly examined its effect on sensitivity, and fewer have created conditions that facilitate a feedback advantage by encouraging controlled processing at test. In Experiment 1, null effects of feedback were observed following both deep and shallow encoding of categorized study lists. In Experiment 2, feedback robustly influenced response bias by allowing participants to discern highly uneven base rates of old and new items, but sensitivity remained unaffected. In Experiment 3, a false-memory procedure, feedback failed to attenuate false recognition of critical lures. In Experiment 4, participants were unable to use feedback to learn a simple category rule separating old items from new items, despite the fact that feedback was of substantial benefit in a nearly identical categorization task. The recognition system, despite a documented ability to utilize controlled strategic or inferential decision-making processes, appears largely impenetrable to a benefit of corrective feedback.
Underlying Information Technology Tailored Quantum Error Correction
2006-07-28
typically constructed by using an optical beam splitter. • We used a decoherence-free-subspace encoding to reduce the sensitivity of an optical Deutsch... simplification of design constraints in solid state QC (incl. quantum dots and superconducting qubits), hybrid quantum error correction and prevention methods... process tomography on one- and two-photon polarisation states, from full and partial data. • Accomplished complete two-photon QPT. • Discovered surprising...
[Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].
Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie
2013-11-01
In order to improve the accuracy of AES quantitative analysis, we combined XPS with AES and studied how to reduce the error of AES quantification. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as samples, and XPS was used to correct the AES quantitative results by adjusting the Auger relative sensitivity factors until the two techniques gave similar compositions. We then verified the accuracy of AES quantification with the revised sensitivity factors on other samples with different composition ratios; the results showed that the corrected relative sensitivity factors reduce the error of AES quantitative analysis to less than 10%. In the integral form of the AES spectrum, peak definition is difficult because choosing the starting and ending points of the characteristic Auger peak intensity area involves great uncertainty. To make the analysis easier, we also processed the data in differential form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and again verified the accuracy on samples with different composition ratios. In this case the analytical error of AES quantification was reduced to less than 9%. These results show that the accuracy of AES quantitative analysis can be greatly improved by using XPS to correct the Auger sensitivity factors, since matrix effects are then taken into account. The good consistency obtained demonstrates the feasibility of this method.
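The underlying quantification step is the standard relative-sensitivity-factor formula C_i = (I_i / S_i) / Σ_j (I_j / S_j); a minimal sketch with illustrative numbers:

```python
def atomic_fractions(intensities, sensitivity_factors):
    """Relative-sensitivity-factor quantification:
    C_i = (I_i / S_i) / sum_j (I_j / S_j).

    Correcting the factors against XPS, as in the paper, amounts to
    substituting the revised S_i values before this step.
    """
    weighted = [i / s for i, s in zip(intensities, sensitivity_factors)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Toy two-element example (intensities and factors illustrative only):
print(atomic_fractions([1200.0, 800.0], [0.9, 1.1]))
```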
Corrections for the geometric distortion of the tube detectors on SANS instruments at ORNL
He, Lilin; Do, Changwoo; Qian, Shuo; ...
2014-11-25
Small-angle neutron scattering (SANS) instruments at the Oak Ridge National Laboratory's High Flux Isotope Reactor were upgraded from the large, single-volume crossed-wire area detectors originally installed to staggered arrays of linear position-sensitive detectors (LPSDs). The specific geometry of the LPSD array requires that traditional approaches to data reduction be modified. Here, two methods for correcting the geometric distortion produced by the LPSD array are presented and compared. The first method applies a correction derived from a detector sensitivity measurement performed in the same configuration as the sample measurements. In the second method, a solid-angle correction is derived that can be applied to data collected in any instrument configuration during the data reduction process, in conjunction with a detector sensitivity measurement collected at a sufficiently long camera length where the geometric distortions are negligible. Both methods produce consistent results and yield a maximum deviation of corrected data from isotropic scattering samples of less than 5% for scattering angles up to a maximum of 35°. The results are broadly applicable to any SANS instrument employing LPSD array detectors, which will be increasingly common as instruments having higher incident flux are constructed at neutron scattering facilities around the world.
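For a flat detector, the first-order geometric term can be sketched as a cos³(2θ) solid-angle map; this is a generic approximation, and the actual LPSD tube geometry adds terms not modeled here:

```python
import numpy as np

def solid_angle_weights(nx, ny, pixel_size, distance):
    """Per-pixel solid-angle weights for an idealized flat detector.

    A pixel at scattering angle 2theta subtends a solid angle that falls
    off roughly as cos^3(2theta); dividing measured counts by this map
    flattens the purely geometric part of the distortion.
    """
    y, x = np.indices((ny, nx), dtype=float)
    r = np.hypot((x - nx / 2) * pixel_size, (y - ny / 2) * pixel_size)
    two_theta = np.arctan2(r, distance)
    return np.cos(two_theta) ** 3

# Usage: counts_corrected = counts / solid_angle_weights(nx, ny, p, L)
```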
Hybrid overlay metrology for high order correction by using CDSEM
NASA Astrophysics Data System (ADS)
Leray, Philippe; Halder, Sandip; Lorusso, Gian; Baudemprez, Bart; Inoue, Osamu; Okagawa, Yutaka
2016-03-01
Overlay control has become one of the most critical issues for semiconductor manufacturing. Advanced lithographic scanners use high-order corrections or correction per exposure to reduce the residual overlay. Traditional overlay feedback based on after-develop-inspection (ADI) wafers is not sufficient, because overlay error also depends on other processes (etching, film stress, etc.); high-accuracy overlay measurement on after-etch-inspection (AEI) wafers is needed. Wafer-induced shift (WIS) is the main issue for optical overlay, both image-based overlay (IBO) and diffraction-based overlay (DBO). We design dedicated SEM overlay targets for the dual damascene process of N10 by i-ArF multi-patterning; locally, the pattern is the same as the device pattern. Optical overlay tools use segmented patterns to reduce WIS, but segmentation has limits, especially for via patterns, in keeping sensitivity and accuracy. We evaluate the difference at AEI between the via pattern and relaxed-pitch gratings similar to optical overlay targets. CDSEM can estimate the asymmetry of a target from the image of the pattern edge. We compare the full map of SEM overlay to the full map of optical overlay for high-order correction (correctables and residual fingerprints).
Settivari, Raja S; Gehen, Sean C; Amado, Ricardo Acosta; Visconti, Nicolo R; Boverhof, Darrell R; Carney, Edward W
2015-07-01
Assessment of skin sensitization potential is an important component of the safety evaluation process for agrochemical products. Recently, non-animal approaches including the KeratinoSens™ assay have been developed for predicting skin sensitization potential. Assessing the utility of the KeratinoSens™ assay for use with multi-component mixtures such as agrochemical formulations has not been previously evaluated and is a significant need. This study was undertaken to evaluate the KeratinoSens™ assay prediction potential for agrochemical formulations. The assay was conducted for 8 agrochemical active ingredients (AIs) including 3 sensitizers (acetochlor, meptyldinocap, triclopyr), 5 non-sensitizers (aminopyralid, clopyralid, florasulam, methoxyfenozide, oxyfluorfen) and 10 formulations for which in vivo sensitization data were available. The KeratinoSens™ assay correctly predicted the sensitization potential of all the AIs. For agrochemical formulations it was necessary to modify the standard assay procedure whereby the formulation was assumed to have a common molecular weight. The resultant approach correctly predicted the sensitization potential for 3 of 4 sensitizing formulations and all 6 non-sensitizing formulations when compared to in vivo data. Only the meptyldinocap-containing formulation was misclassified, as a result of high cytotoxicity. These results demonstrate the promising utility of the KeratinoSens™ assay for evaluating the skin sensitization potential of agrochemical AIs and formulations. Copyright © 2015 Elsevier Inc. All rights reserved.
Seuss, Hannes; Dankerl, Peter; Cavallaro, Alexander; Uder, Michael; Hammon, Matthias
2016-05-20
To evaluate screening and diagnostic accuracy for the detection of osteoblastic rib lesions using an advanced post-processing package enabling in-plane rib reading in CT-images. We retrospectively assessed the CT-data of 60 consecutive prostate cancer patients by applying dedicated software enabling in-plane rib reading. Reading the conventional multiplanar reconstructions was considered to be the reference standard. To simulate clinical practice, the reader was given 10 s to screen for sclerotic rib lesions in each patient applying both approaches. Afterwards, every rib was evaluated individually with both approaches without a time limit. Sensitivities, specificities, positive/negative predictive values and the time needed for detection were calculated depending on the lesion's size (largest diameter < 5 mm, 5-10 mm, > 10 mm). In 53 of 60 patients, all ribs were properly displayed in plane, in five patients ribs were partially displayed correctly, and in two patients none of the ribs were displayed correctly. During the 10-s screening approach all patients with sclerotic rib lesions were correctly identified reading the in-plane images (including the patients without a correct rib segmentation), whereas 14 of 23 patients were correctly identified reading conventional multiplanar images. Overall screening sensitivity, specificity, and positive/negative predictive values were 100/27.0/46.0/100 %, respectively, for in-plane reading and 60.9/100/100/80.4 %, respectively, for multiplanar reading. Overall diagnostic (no time limit) sensitivity, specificity, and positive/negative predictive values of in-plane reading were 97.8/92.8/74.6/99.5 %, respectively. False positive results predominantly occurred for lesions <5 mm in size. In-plane reading of the ribs allows reliable detection of osteoblastic lesions for screening purposes. The limited specificity results from false positives predominantly occurring for small lesions.
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
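A toy software model of the claimed scheme, with hypothetical register names; the patent implements the duplicate-register check in processor hardware, so this is only an illustration of the table-plus-shadow idea:

```python
# The compiler marks registers whose corruption would be costly, the
# error-correction table records them, and the hardware (simulated here)
# keeps a shadow copy of each to detect upsets on read.
error_correction_table = {"r3", "r7"}        # sensitive logical registers

registers = {"r3": 42, "r7": 7, "r9": 0}
shadows = {r: registers[r] for r in error_correction_table}

def write(reg, value):
    registers[reg] = value
    if reg in error_correction_table:
        shadows[reg] = value                 # update the duplicate register

def read(reg):
    value = registers[reg]
    if reg in error_correction_table and shadows[reg] != value:
        raise RuntimeError(f"bit upset detected in {reg}")
    return value

write("r3", 100)
registers["r3"] ^= 1                         # simulate a single-event upset
try:
    read("r3")
except RuntimeError as e:
    print(e)
```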
Dissipative quantum error correction and application to quantum sensing with trapped ions.
Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A
2017-11-28
Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
A UMLS-based spell checker for natural language processing in vaccine safety.
Tolentino, Herman D; Matters, Michael D; Walop, Wikke; Law, Barbara; Tong, Wesley; Liu, Fang; Fontelo, Paul; Kohl, Katrin; Payne, Daniel C
2007-02-12
The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. We developed spell checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74-75), 100% (95% CI: 100-100), and 47% (95% CI: 46%-48%), respectively. We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS Specialist Lexicon as the primary source of dictionary terms and the WordNet lexicon as a secondary source. We used the UMLS as a domain-specific source of dictionary terms to compare potentially misspelled words in the corpus. The prototype sensitivity was comparable to currently available tools, but the specificity was much superior. The slow processing speed may be improved by trimming it down to the most useful component algorithms. Other investigators may find the methods we developed useful for cleaning text using lexicons specific to their area of interest.
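The four-step pipeline can be sketched with a generic dictionary and fuzzy matching; the tiny word list below stands in for the UMLS Specialist Lexicon and WordNet sources, and the disambiguation step is reduced to picking the closest match:

```python
import re
from difflib import get_close_matches

dictionary = {"fever", "injection", "swelling", "arm", "after", "and",
              "of", "the"}   # stand-in for the UMLS/WordNet lexicons

def spell_check(text):
    corrected = []
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in dictionary:                         # (1) error detection
            corrected.append(word)
            continue
        candidates = get_close_matches(word, dictionary, n=3)  # (2) word list
        # (3) disambiguation: here simply the closest candidate; the paper
        # ranks candidates before (4) applying the correction.
        corrected.append(candidates[0] if candidates else word)
    return " ".join(corrected)

print(spell_check("Feaver and sweling of the arm after injecton"))
```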
Challenges and requirements of mask data processing for multi-beam mask writer
NASA Astrophysics Data System (ADS)
Choi, Jin; Lee, Dong Hyun; Park, Sinjeung; Lee, SookHyun; Tamamushi, Shuichi; Shin, In Kyun; Jeon, Chan Uk
2015-07-01
To overcome the resolution and throughput limits of current mask writers for advanced lithography technologies, the e-beam writer platform has evolved through developments in writer hardware and software. In particular, aggressive optical proximity correction (OPC) for the unprecedented extension of optical lithography, together with the need for low-sensitivity resists for high resolution, pushes the variable-shaped-beam writer widely used in mass production to its limits. The multi-beam mask writer is an attractive candidate for photomask writing for sub-10 nm devices because of its high speed and its large degree of freedom, which enables a high dose and dose modulation for each pixel. However, the higher dose and an almost unlimited appetite for dose modulation challenge mask data processing (MDP) in terms of extreme data volume and correction methods. Here, we discuss the requirements of mask data processing for the multi-beam mask writer and present new challenges in data format, data flow, and correction methods for users and suppliers of MDP tools.
Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook
2007-03-01
To develop a novel approach for calculating the accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the accurate sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve-fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image contents. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired from the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the use of the proposed correction method, the intensity variation was reduced to 6.1% from 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of the reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied for image reconstruction in parallel imaging.
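A sketch of the profile-estimation idea using scikit-image's Radon transform, with a low-order polynomial fit standing in for the paper's specific nonlinear curve:

```python
import numpy as np
from skimage.transform import radon, iradon

def estimate_sensitivity(img, n_angles=90, degree=4):
    """Estimate a smooth coil-sensitivity map in the spirit of the paper:
    fit a low-order curve to every projection view through the object,
    discarding high-frequency image content, then filtered back-project
    the fitted projections.
    """
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(img, theta=theta)           # projections, one per angle
    t = np.arange(sino.shape[0])
    smooth = np.empty_like(sino)
    for k in range(sino.shape[1]):
        coeffs = np.polyfit(t, sino[:, k], degree)   # low-frequency fit
        smooth[:, k] = np.polyval(coeffs, t)
    sens = iradon(smooth, theta=theta)        # FBP of the fitted views
    return sens / sens.max()

# Usage: corrected = img / np.clip(estimate_sensitivity(img), 0.05, None)
```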
Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I
2018-02-01
Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular, in oncology screening. dMRI demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively developed and used. In the present work we assess the effect of different pre-processing procedures such as a noise correction, different smoothing algorithms and spatial interpolation of raw diffusion data, with respect to the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades we chose the derived scalar metrics from diffusion and kurtosis tensor imaging as well as the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies. Copyright © 2017. Published by Elsevier GmbH.
Geeleher, Paul; Cox, Nancy J; Huang, R Stephanie
2016-09-21
We show that variability in general levels of drug sensitivity in pre-clinical cancer models confounds biomarker discovery. However, using a very large panel of cell lines, each treated with many drugs, we could estimate a general level of sensitivity to all drugs in each cell line. By conditioning on this variable, biomarkers were identified that were more likely to be effective in clinical trials than those identified using a conventional uncorrected approach. We find that differences in general levels of drug sensitivity are driven by biologically relevant processes. We developed a gene expression based method that can be used to correct for this confounder in future studies.
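A minimal sketch of the conditioning step, assuming the general sensitivity level has already been estimated (e.g., as each cell line's mean response across many drugs, per the paper's description):

```python
import numpy as np

def biomarker_assoc(expression, response, general_level):
    """Slope of drug response on a gene's expression, naive versus
    conditioned on each cell line's general level of drug sensitivity.

    expression, response, general_level: 1-D arrays over cell lines.
    """
    naive = np.polyfit(expression, response, 1)[0]
    # Residualize the response on the general sensitivity level, then
    # re-estimate the biomarker association on the residuals.
    fit = np.polyfit(general_level, response, 1)
    resid = response - np.polyval(fit, general_level)
    conditioned = np.polyfit(expression, resid, 1)[0]
    return naive, conditioned
```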
Skoruppa, Katrin; Rosen, Stuart
2014-06-01
In this study, the authors explored phonological processing in connected speech in children with hearing loss. Specifically, the authors investigated these children's sensitivity to English place assimilation, by which alveolar consonants like t and n can adapt to following sounds (e.g., the word ten can be realized as tem in the phrase ten pounds). Twenty-seven 4- to 8-year-old children with moderate to profound hearing impairments, using hearing aids (n = 10) or cochlear implants (n = 17), and 19 children with normal hearing participated. They were asked to choose between pictures of familiar (e.g., pen) and unfamiliar objects (e.g., astrolabe) after hearing t- and n-final words in sentences. Standard pronunciations (Can you find the pen dear?) and assimilated forms in correct (… pem please?) and incorrect contexts (… pem dear?) were presented. As expected, the children with normal hearing chose the familiar object more often for standard forms and correct assimilations than for incorrect assimilations. Thus, they are sensitive to word-final place changes and compensate for assimilation. However, the children with hearing impairment demonstrated reduced sensitivity to word-final place changes, and no compensation for assimilation. Restricted analyses revealed that children with hearing aids who showed good perceptual skills compensated for assimilation in plosives only.
Automated plasma control with optical emission spectroscopy
NASA Astrophysics Data System (ADS)
Ward, P. P.
Plasma etching and desmear processes for printed wiring board (PWB) manufacture are difficult to predict and control. The non-uniformity of most plasma processes and their sensitivity to environmental changes make it difficult to maintain process stability from day to day. To assure plasma process performance, weight-loss coupons or post-plasma destructive testing must be used. These techniques are not real-time methods, however, and do not allow immediate diagnosis and process correction. They often require scrapping some fraction of a batch to insure the integrity of the rest. Since these tests verify a successful cycle with post-plasma diagnostics, poor test results often mean that a batch is substandard and the resulting parts unusable. These tests are a costly part of the overall fabrication cost. A more efficient method of testing would allow for constant monitoring of plasma conditions and process control. Process anomalies should be detected and corrected before the parts being treated are damaged. Real-time monitoring would allow for instantaneous corrections. Multiple-site monitoring would allow for process mapping within one system or simultaneous monitoring of multiple systems. Optical emission spectroscopy conducted external to the plasma apparatus would allow for this sort of multifunctional analysis without perturbing the glow discharge. In this paper, optical emission spectroscopy for non-intrusive, in situ process control is explored, along with applications of this technique to process control, failure analysis and endpoint determination in PWB manufacture.
Loop corrections to primordial fluctuations from inflationary phase transitions
NASA Astrophysics Data System (ADS)
Wu, Yi-Peng; Yokoyama, Jun'ichi
2018-05-01
We investigate loop corrections to the primordial fluctuations in the single-field inflationary paradigm from spectator fields that experience a smooth transition of their vacuum expectation values. We show that when the phase transition involves a classical evolution effectively driven by a negative mass term from the potential, important corrections to the curvature perturbation can be generated by field perturbations that are frozen outside the horizon by the time of the phase transition, yet the correction to tensor perturbation is naturally suppressed by the spatial derivative couplings between spectator fields and graviton. At one-loop level, the dominant channel for the production of primordial fluctuations comes from a pair-scattering of free spectator fields that decay into the curvature perturbations, and this decay process is only sensitive to field masses comparable to the Hubble scale of inflation.
ERIC Educational Resources Information Center
Singh, Leher; Tan, Aloysia; Wewalaarachchi, Thilanga D.
2017-01-01
Children undergo gradual progression in their ability to differentiate correct and incorrect pronunciations of words, a process that is crucial to establishing a native vocabulary. For the most part, the development of mature phonological representations has been researched by investigating children's sensitivity to consonant and vowel variation,…
Gordon, H R; Du, T; Zhang, T
1997-09-20
We provide an analysis of the influence of instrument polarization sensitivity on the radiance measured by spaceborne ocean color sensors. Simulated examples demonstrate the influence of polarization sensitivity on the retrieval of the water-leaving reflectance rho(w). A simple method for partially correcting for polarization sensitivity--replacing the linear polarization properties of the top-of-atmosphere reflectance with those from a Rayleigh-scattering atmosphere--is provided and its efficacy is evaluated. It is shown that this scheme improves rho(w) retrievals as long as the polarization sensitivity of the instrument does not vary strongly from band to band. Of course, a complete polarization-sensitivity characterization of the ocean color sensor is required to implement the correction.
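One common approximate form of such a correction divides the measured reflectance by a polarization-sensitivity factor, substituting the Rayleigh atmosphere's polarization state for the unknown true one; this is a sketch of the general idea, not necessarily the paper's exact expression:

```python
import numpy as np

def polarization_correct(rho_m, m12, m13, q_ray, u_ray):
    """Approximate removal of instrument polarization sensitivity.

    rho_m:        measured TOA reflectance for one band/pixel.
    m12, m13:     the band's reduced Mueller-matrix elements
                  (instrument polarization-sensitivity terms).
    q_ray, u_ray: Q/I and U/I of a pure Rayleigh-scattering atmosphere,
                  substituted for the true TOA polarization state as the
                  paper proposes.
    """
    return rho_m / (1.0 + m12 * q_ray + m13 * u_ray)
```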
Development of a drift-correction procedure for a direct-reading spectrometer
NASA Technical Reports Server (NTRS)
Chapman, G. B., II; Gordon, W. A.
1977-01-01
A procedure that provides automatic correction for drifts in the radiometric sensitivity of each detector channel in a direct-reading emission spectrometer is described. Such drifts are customarily controlled by the regular analysis of standards, which provides corrections for changes in the excitational, optical, and electronic components of the instrument. The procedure described here, however, corrects only for the optical and electronic drifts; separating these from excitational drift is a necessary step if the time, effort, and cost of processing standards are to be minimized. This method of radiometric drift correction uses a 1,000-W tungsten-halogen reference lamp to illuminate each detector through the same optical path as that traversed during sample analysis. The responses of the detector channels to this reference light are regularly compared with the channel responses to the same light intensity at the time of analytical calibration in order to determine and correct for drift. Except for placing the lamp in position, the procedure is fully automated and compensates for changes in spectral intensity due to variations in lamp current. A discussion of the implementation of this drift-correction system is included.
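The core of the correction is a per-channel response ratio against the reference lamp; a minimal sketch:

```python
def drift_corrected(reading, lamp_now, lamp_at_cal):
    """Radiometric drift correction for one detector channel.

    lamp_at_cal: channel response to the tungsten-halogen reference lamp
                 recorded at the time of analytical calibration.
    lamp_now:    channel response to the same lamp intensity today.
    The channel gain is assumed to have drifted by lamp_now / lamp_at_cal,
    so analytical readings are rescaled by the inverse ratio.
    """
    return reading * (lamp_at_cal / lamp_now)
```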
Assessment of bias correction under transient climate change
NASA Astrophysics Data System (ADS)
Van Schaeybroeck, Bert; Vannitsem, Stéphane
2015-04-01
Calibration of climate simulations is necessary since large systematic discrepancies are generally found between the model climate and the observed climate. Recent studies have cast doubt upon the common assumption that the bias is stationary as the climate changes. This has led to the development of new methods, mostly based on a linear sensitivity of the biases as a function of time or forcing (Kharin et al. 2012). However, recent studies using both low-order systems (Vannitsem 2011) and climate models have uncovered more fundamental problems, showing that the biases may display complicated non-linear variations under climate change. This last analysis focused on biases derived from the equilibrium climate sensitivity, thereby ignoring the effect of the transient climate sensitivity. Based on linear response theory, a general method of bias correction is therefore proposed that can be applied to any climate forcing scenario. The validity of the method is addressed using twin experiments with LOVECLIM, a climate model of intermediate complexity (Goosse et al., 2010). We evaluate to what extent the bias change is sensitive to the structure (frequency) of the applied forcing (here greenhouse gases) and whether linear response theory is valid for global and/or local variables. To answer these questions we perform large-ensemble simulations using different 300-year scenarios of forced carbon-dioxide concentrations. Reality and simulations are assumed to differ by a model error, emulated as a parametric error in the wind drag or in the radiative scheme. References [1] H. Goosse et al., 2010: Description of the Earth system model of intermediate complexity LOVECLIM version 1.2, Geosci. Model Dev., 3, 603-633. [2] S. Vannitsem, 2011: Bias correction and post-processing under climate change, Nonlin. Processes Geophys., 18, 911-924. [3] V.V. Kharin, G. J. Boer, W. J. Merryfield, J. F. Scinocca, and W.-S. Lee, 2012: Statistical adjustment of decadal predictions in a changing climate, Geophys. Res. Lett., 39, L19705.
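For orientation, the simpler linear-sensitivity approach attributed above to Kharin et al. (2012), not the authors' response-theory method, can be sketched in a few lines: regress the bias on the forcing in a training period, then extrapolate the fitted relation to correct any scenario. Names and the regression form are illustrative.

```python
import numpy as np

# Sketch of a forcing-dependent bias correction: bias ~ b0 + b1 * F(t),
# fitted on a training period where model, observations and forcing overlap.

def fit_bias_vs_forcing(model_train, obs_train, forcing_train):
    bias = model_train - obs_train
    b1, b0 = np.polyfit(forcing_train, bias, 1)  # slope, intercept
    return b0, b1

def correct(model_scenario, forcing_scenario, b0, b1):
    """Remove the forcing-dependent bias from a new scenario run."""
    return model_scenario - (b0 + b1 * forcing_scenario)
```

The non-linear bias variations reported in the abstract are precisely the regime in which such a linear fit breaks down, which motivates the more general response-theory formulation.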
Liu, Rong; Zhou, Jiawei; Zhao, Haoxin; Dai, Yun; Zhang, Yudong; Tang, Yong; Zhou, Yifeng
2014-01-01
This study aimed to explore the neural development status of the visual system of children (around 8 years old) using contrast sensitivity. We achieved this by eliminating the influence of higher order aberrations (HOAs) with adaptive optics correction. We measured HOAs, modulation transfer functions (MTFs) and contrast sensitivity functions (CSFs) of six children and five adults with both corrected and uncorrected HOAs. We found that when HOAs were corrected, children and adults both showed improvements in MTF and CSF. However, the CSF of children was still lower than the adult level, indicating the difference in contrast sensitivity between groups cannot be explained by differences in optical factors. Further study showed that the difference between the groups also could not be explained by differences in non-visual factors. With these results we concluded that the neural systems underlying vision in children of around 8 years old are still immature in contrast sensitivity. PMID:24732728
Nasrallah, Maha; Carmel, David; Lavie, Nilli
2009-01-01
Enhanced sensitivity to information of negative (compared to positive) valence has an adaptive value, for example, by expediting the correct choice of avoidance behavior. However, previous evidence for such enhanced sensitivity has been inconclusive. Here we report a clear advantage for negative over positive words in categorizing them as emotional. In 3 experiments, participants classified briefly presented (33 ms or 22 ms) masked words as emotional or neutral. Categorization accuracy and valence-detection sensitivity were both higher for negative than for positive words. The results were not due to differences between emotion categories in either lexical frequency, extremeness of valence ratings, or arousal. These results conclusively establish enhanced sensitivity for negative over positive words, supporting the hypothesis that negative stimuli enjoy preferential access to perceptual processing. PMID:19803583
Peripheral refractive correction and automated perimetric profiles.
Wild, J M; Wood, J M; Crews, S J
1988-06-01
The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated in a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0°, 20° and 40° along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetric sensitivity was then measured at these eccentricities (stimulus sizes 0 and III), with and without the appropriate peripheral refractive correction, using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.
Data assimilation of GNSS zenith total delays from a Nordic processing centre
NASA Astrophysics Data System (ADS)
Lindskog, Magnus; Ridal, Martin; Thorsteinsson, Sigurdur; Ning, Tong
2017-11-01
Atmospheric moisture-related information estimated from Global Navigation Satellite System (GNSS) ground-based receiver stations by the Nordic GNSS Analysis Centre (NGAA) have been used within a state-of-the-art kilometre-scale numerical weather prediction system. Different processing techniques have been implemented to derive the moisture-related GNSS information in the form of zenith total delays (ZTDs) and these are described and compared. In addition full-scale data assimilation and modelling experiments have been carried out to investigate the impact of utilizing moisture-related GNSS data from the NGAA processing centre on a numerical weather prediction (NWP) model initial state and on the ensuing forecast quality. The sensitivity of results to aspects of the data processing, station density, bias-correction and data assimilation have been investigated. Results show benefits to forecast quality when using GNSS ZTD as an additional observation type. The results also show a sensitivity to thinning distance applied for GNSS ZTD observations but not to modifications to the number of predictors used in the variational bias correction applied. In addition, it is demonstrated that the assimilation of GNSS ZTD can benefit from more general data assimilation enhancements and that there is an interaction of GNSS ZTD with other types of observations used in the data assimilation. Future plans include further investigation of optimal thinning distances and application of more advanced data assimilation techniques.
Dutton, Daniel J; McLaren, Lindsay
2014-05-06
National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18-65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23-28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association.
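Applying such a correction equation is mechanically simple. The sketch below uses hypothetical placeholder coefficients; real equations are estimated (typically by sex) from surveys containing both self-reported and measured values, such as the CCHS cycles described above.

```python
# Sketch of a weight-only correction equation applied to self-reported data.
# B0 and B1 are hypothetical placeholders, not published coefficients.

B0, B1 = 1.08, 1.02   # hypothetical intercept (kg) and slope

def corrected_bmi(self_weight_kg, self_height_m):
    """Adjust self-reported weight, keep self-reported height."""
    weight = B0 + B1 * self_weight_kg
    return weight / self_height_m ** 2

def obesity_flag(bmi):
    return bmi >= 30.0   # standard WHO cut-point

# Sensitivity/specificity against measured BMI can then be tabulated by
# comparing obesity_flag(corrected_bmi(...)) with flags from measured values.
```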
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
Vessel-Mounted ADCP Data Calibration and Correction
NASA Astrophysics Data System (ADS)
de Andrade, A. F.; Barreira, L. M.; Violante-Carvalho, N.
2013-05-01
A set of scripts for vessel-mounted ADCP (Acoustic Doppler Current Profiler) data processing is presented. The need for corrections to data measured by a ship-mounted ADCP, together with the complexity of installing, implementing, and identifying the tasks performed by currently available processing systems, motivated the development of a system that is more practical to operate, open source, and more manageable for the user. The proposed processing system consists of a set of scripts developed in the MATLAB programming language. The system reads the binary files produced by the proprietary Teledyne RD Instruments data acquisition program VMDAS (Vessel Mounted Data Acquisition System), calculates calibration factors, corrects the data, and visualizes them after correction. To use the new system, the ADCP data collected with VMDAS need only be placed in a processing directory on a computer with MATLAB installed. The algorithms were extensively tested with ADCP data obtained during the Oceano Sul III (Southern Ocean III - OSIII) cruise, conducted by the Brazilian Navy aboard the R/V "Antares" from March 26th to May 10th, 2007, in the oceanic region between the states of São Paulo and Rio Grande do Sul. The data were read with the function rdradcp.m, developed by Rich Pawlowicz and available on his website (http://www.eos.ubc.ca/~rich/#RDADCP). To calculate the calibration factors, the alignment error (α) and sensitivity error (β) in Water Tracking and Bottom Tracking modes, equations deduced by Joyce (1998), Pollard & Read (1989) and Trump & Marmorino (1996) were implemented in MATLAB. To validate the calibration factors obtained with the new processing system, they were compared with the factors provided by the CODAS (Common Ocean Data Access System, available at http://currents.soest.hawaii.edu/docs/doc/index.html) post-processing program; for the same data, the factors provided by both systems were similar. The calibrated factors were then used to correct the data, and the corrected matrices were saved for plotting. The volume transport of the Brazil Current (BC) calculated from the data corrected by the two systems agreed closely, confirming the quality of the system's correction.
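The calibration step admits a compact formulation. The sketch below, in Python rather than the system's MATLAB, estimates the alignment error (α) and sensitivity error (β) by fitting a single complex gain between bottom-track and GPS-derived ship velocities, one standard way of expressing the Joyce / Pollard & Read approach; function and variable names are illustrative, and sign conventions are assumed simple.

```python
import numpy as np

# Sketch of ADCP calibration: find the rotation (alpha) and scale (1 + beta)
# that make the bottom-track ship velocity match the GPS ship velocity,
# then apply the same transformation to the measured currents.

def calibrate(u_bt, v_bt, u_gps, v_gps):
    """Estimate alpha (degrees) and beta from matched velocity time series."""
    w_bt = u_bt + 1j * v_bt     # complex bottom-track ship velocity
    w_gps = u_gps + 1j * v_gps  # complex GPS ship velocity
    # Least-squares complex gain g minimizing |w_gps - g * w_bt|^2
    g = np.sum(w_gps * np.conj(w_bt)) / np.sum(np.abs(w_bt) ** 2)
    alpha = np.degrees(np.angle(g))  # transducer misalignment angle
    beta = np.abs(g) - 1.0           # sensitivity (scale) error
    return alpha, beta

def apply_correction(u, v, alpha, beta):
    """Rotate and rescale measured currents: w' = (1 + beta) e^{i alpha} w."""
    w = (1.0 + beta) * np.exp(1j * np.radians(alpha)) * (u + 1j * v)
    return w.real, w.imag
```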
Visual context processing deficits in schizophrenia: effects of deafness and disorganization.
Horton, Heather K; Silverstein, Steven M
2011-07-01
Visual illusions allow for strong tests of perceptual functioning. Perceptual impairments can produce superior performance on certain tasks (i.e., more veridical perception), thereby avoiding generalized deficit confounds while tapping mechanisms that are largely outside of conscious control. Using a task based on the Ebbinghaus illusion, a perceptual phenomenon where the perceived size of a central target object is affected by the size of surrounding inducers, we tested hypotheses related to visual integration in deaf (n = 31) and hearing (n = 34) patients with schizophrenia. In past studies, psychiatrically healthy samples displayed increased visual integration relative to schizophrenia samples and thus were less able to correctly judge target sizes. Deafness, and especially the use of sign language, leads to heightened sensitivity to peripheral visual cues and increased sensitivity to visual context. Therefore, relative to hearing subjects, deaf subjects were expected to display increased context sensitivity (i.e., a more normal illusion effect, as evidenced by a decreased ability to correctly judge central target sizes). Confirming the hypothesis, deaf signers were significantly more sensitive to the illusion than nonsigning hearing patients. Moreover, an earlier age of sign language acquisition, higher levels of linguistic ability, and shorter illness duration were significantly related to increased context sensitivity. As predicted, disorganization was associated with reduced context sensitivity for all subjects. The primary implications of these data are that perceptual organization impairment in schizophrenia is plastic and that it is related to a broader failure in coordinating cognitive activity.
ERIC Educational Resources Information Center
Menenti, Laura; Petersson, Karl Magnus; Scheeringa, Rene; Hagoort, Peter
2009-01-01
Both local discourse and world knowledge are known to influence sentence processing. We investigated how these two sources of information conspire in language comprehension. Two types of critical sentences, correct and world knowledge anomalies, were preceded by either a neutral or a local context. The latter made the world knowledge anomalies…
Developmental model of static allometry in holometabolous insects.
Shingleton, Alexander W; Mirth, Christen K; Bates, Peter W
2008-08-22
The regulation of static allometry is a fundamental developmental process, yet little is understood of the mechanisms that ensure organs scale correctly across a range of body sizes. Recent studies have revealed the physiological and genetic mechanisms that control nutritional variation in the final body and organ size in holometabolous insects. The implications these mechanisms have for the regulation of static allometry is, however, unknown. Here, we formulate a mathematical description of the nutritional control of body and organ size in Drosophila melanogaster and use it to explore how the developmental regulators of size influence static allometry. The model suggests that the slope of nutritional static allometries, the 'allometric coefficient', is controlled by the relative sensitivity of an organ's growth rate to changes in nutrition, and the relative duration of development when nutrition affects an organ's final size. The model also predicts that, in order to maintain correct scaling, sensitivity to changes in nutrition varies among organs, and within organs through time. We present experimental data that support these predictions. By revealing how specific physiological and genetic regulators of size influence allometry, the model serves to identify developmental processes upon which evolution may act to alter scaling relationships.
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Mcclain, Charles R.; Comiso, Josefino C.; Fraser, Robert S.; Firestone, James K.; Schieber, Brian D.; Yeh, Eueng-Nan; Arrigo, Kevin R.; Sullivan, Cornelius W.
1994-01-01
Although the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Calibration and Validation Program relies on the scientific community for the collection of bio-optical and atmospheric correction data as well as for algorithm development, it does have the responsibility for evaluating and comparing the algorithms and for ensuring that the algorithms are properly implemented within the SeaWiFS Data Processing System. This report consists of a series of sensitivity and algorithm (bio-optical, atmospheric correction, and quality control) studies based on Coastal Zone Color Scanner (CZCS) and historical ancillary data undertaken to assist in the development of SeaWiFS specific applications needed for the proper execution of that responsibility. The topics presented are as follows: (1) CZCS bio-optical algorithm comparison, (2) SeaWiFS ozone data analysis study, (3) SeaWiFS pressure and oxygen absorption study, (4) pixel-by-pixel pressure and ozone correction study for ocean color imagery, (5) CZCS overlapping scenes study, (6) a comparison of CZCS and in situ pigment concentrations in the Southern Ocean, (7) the generation of ancillary data climatologies, (8) CZCS sensor ringing mask comparison, and (9) sun glint flag sensitivity study.
El Hadri, Hind; Petersen, Elijah J.; Winchester, Michael R.
2016-01-01
The effect of ICP-MS instrument sensitivity drift on the accuracy of NP size measurements using single particle (sp)ICP-MS is investigated. Theoretical modeling and experimental measurements of the impact of instrument sensitivity drift are in agreement and indicate that drift can impact the measured size of spherical NPs by up to 25 %. Given this substantial bias in the measured size, a method was developed using an internal standard to correct for the impact of drift and was shown to accurately correct for a decrease in instrument sensitivity of up to 50 % for 30 nm and 60 nm gold nanoparticles. PMID:26894759
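The internal-standard correction can be made concrete. The sketch below rescales event intensities by an interpolated drift factor before converting mass to diameter; the calibration factor, names, and numbers are illustrative placeholders, not the paper's method details.

```python
import numpy as np

# Sketch of drift correction in single-particle ICP-MS: instrument
# sensitivity is tracked via an internal standard, and particle-event
# intensities are rescaled before converting intensity to size.

def drift_factor(t, t_std, s_std):
    """Internal-standard signal interpolated to event times, normalized
    to the sensitivity at the start of the run."""
    return np.interp(t, t_std, s_std) / s_std[0]

def particle_diameter(intensity, t, t_std, s_std, k_mass, density):
    """Spherical-equivalent diameter from a corrected event intensity.
    k_mass: calibration factor (counts per gram of analyte), assumed known;
    density in g/cm^3, so the returned diameter is in cm."""
    i_corr = intensity / drift_factor(t, t_std, s_std)  # undo the drift
    mass = i_corr / k_mass                               # grams per particle
    volume = mass / density
    return (6.0 * volume / np.pi) ** (1.0 / 3.0)
```

Because diameter scales as the cube root of particle mass, an uncorrected sensitivity drift of 50 % biases the inferred diameter by only about 20-26 %, consistent with the magnitudes quoted above.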
An Online Tilt Estimation and Compensation Algorithm for a Small Satellite Camera
NASA Astrophysics Data System (ADS)
Lee, Da-Hyun; Hwang, Jai-hyuk
2018-04-01
In the case of a satellite camera designed to execute an Earth observation mission, even after a pre-launch precision alignment process has been carried out, misalignment will occur due to external factors during launch and in the operating environment. In particular, for high-resolution satellite cameras, which require submicron accuracy of alignment between optical components, misalignment is a major cause of image quality degradation. To compensate for this, most high-resolution satellite cameras undergo a precise realignment process, called refocusing, before and during operation. However, conventional Earth observation satellites execute refocusing only for de-space errors. Thus, in this paper, an online tilt estimation and compensation algorithm is proposed that can be utilized after the de-space correction has been executed. Although the sensitivity of optical performance degradation to misalignment is highest for de-space, the MTF can be increased further by correcting tilt after refocusing. The proposed algorithm estimates the amount of tilt from star images and carries out automatic tilt corrections by employing a compensation mechanism that gives angular motion to the secondary mirror. Crucially, the algorithm is developed as an online processing system so that it can operate without communication with the ground.
The language of arithmetic across the hemispheres: An event-related potential investigation.
Dickson, Danielle S; Federmeier, Kara D
2017-05-01
Arithmetic expressions, like verbal sentences, incrementally lead readers to anticipate potential appropriate completions. Existing work in the language domain has helped us understand how the two hemispheres differently participate in and contribute to the cognitive process of sentence reading, but comparatively little work has been done on mathematical equation processing. In this study, we address this gap by examining the ERP response to provided answers to simple multiplication problems, which varied both in correctness (given an equation context) and in visual field of presentation (joint attention in central presentation, or biased processing to the left or right hemisphere through contralateral visual field presentation). When answers were presented to any of the visual fields (hemispheres), there was an effect of correctness prior to the traditional N400 time window, which we interpret as a P300 in response to a detected target item (the correct answer). In addition to this response, equation answers also elicited a late positive complex (LPC) for incorrect answers. Notably, this LPC effect was most prominent in the left visual field (right hemisphere), and it was also sensitive to the confusability of the wrong answer - incorrect answers that were closely related to the correct answer elicited a smaller LPC. This suggests a special, prolonged role for the right hemisphere during answer evaluation. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kilpatrick, Brian M.; Lewis, Nikole K.; Kataria, Tiffany; Deming, Drake; Ingalls, James G.; Krick, Jessica E.; Tucker, Gregory S.
2017-01-01
We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.
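The PLD approach lends itself to a compact linear formulation. The following is a minimal sketch of the idea only, not the pipeline used in the paper: the intrapixel systematics are modeled as a linear combination of normalized per-pixel light curves, and the eclipse depth is obtained as one coefficient of a joint least-squares fit. All names are illustrative.

```python
import numpy as np

# Sketch of pixel-level decorrelation (PLD): flux ~ sum_i c_i * phat_i(t)
# + baseline + depth * (eclipse_shape(t) - 1), fit by linear least squares.

def pld_design_matrix(pixels, eclipse_shape):
    """pixels: (n_time, n_pix) raw pixel light curves around the target;
    eclipse_shape: (n_time,) unit-depth model, 1 out of eclipse and
    0 at mid-eclipse."""
    phat = pixels / pixels.sum(axis=1, keepdims=True)  # normalized fractions
    ones = np.ones((pixels.shape[0], 1))               # constant baseline
    return np.hstack([phat, ones, eclipse_shape[:, None] - 1.0])

def fit_eclipse_depth(flux, pixels, eclipse_shape):
    A = pld_design_matrix(pixels, eclipse_shape)
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return coef[-1]  # the last coefficient is the eclipse depth
```

The appeal of the formulation is that the normalized pixel fractions carry the pointing-induced systematics while summing to unity, so astrophysical signal and detector effects separate cleanly in a single linear fit.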
Harrison, Tondi M
2013-01-01
Explore relationships among autonomic nervous system (ANS) function, child behavior, and maternal sensitivity in three-year-old children with surgically corrected transposition of the great arteries (TGA) and in children healthy at birth. Children surviving complex congenital heart defects are at risk for behavior problems. ANS function is associated with behavior and with maternal sensitivity. Child ANS function (heart rate variability) and maternal sensitivity (Parent-Child Early Relational Assessment) were measured during a challenging task. Mothers completed the Child Behavior Checklist. Data were analyzed descriptively and graphically. Children with TGA had less responsive autonomic function and more behavior problems than healthy children. Autonomic function improved with greater maternal sensitivity. Alterations in ANS function may continue years after surgical correction in children with TGA, potentially impacting behavioral regulation. Maternal sensitivity may be associated with ANS function in this population. Continued research on relationships among ANS function, child behavior, and maternal sensitivity is warranted. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tooker, Brian C.; Brindley, Stephen M.; Chiarappa-Zucca, Marina L.
We report that exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing detection techniques with sufficient sensitivity for beryllium has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at the levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, we determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20 times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.
New Swift UVOT data reduction tools and AGN variability studies
NASA Astrophysics Data System (ADS)
Gelbord, Jonathan; Edelson, Rick
2017-08-01
The efficient slewing and flexible scheduling of the Swift observatory have made it possible to conduct monitoring campaigns that are both intensive and prolonged, with multiple visits per day sustained over weeks and months. Recent Swift monitoring campaigns of a handful of AGN provide simultaneous optical, UV and X-ray light curves that can be used to measure variability and interband correlations on timescales from hours to months, providing new constraints for the structures within AGN and the relationships between them. However, the first of these campaigns, thrice-per-day observations of NGC 5548 through four months, revealed anomalous dropouts in the UVOT light curves (Edelson, Gelbord, et al. 2015). We identified the cause as localized regions of reduced detector sensitivity that are not corrected by standard processing. Properly interpreting the light curves required identifying and screening out the affected measurements. We are now using archival Swift data to better characterize these low sensitivity regions. Our immediate goal is to produce a more complete mapping of their locations so that affected measurements can be identified and screened before further analysis. Our longer-term goal is to build a more quantitative model of the effect in order to define a correction for measured fluxes, if possible, or at least to put limits on the impact upon any observation. We will combine data from numerous background stars in well-monitored fields in order to quantify the strength of the effect as a function of filter as well as location on the detector, and to test for other dependencies such as evolution over time or sensitivity to the count rate of the target. Our UVOT sensitivity maps and any correction tools will be provided to the community of Swift users.
Neurobiological correlates of emotional intelligence in voice and face perception networks.
Karle, Kathrin N; Ethofer, Thomas; Jacob, Heike; Brück, Carolin; Erb, Michael; Lotze, Martin; Nizielski, Sophia; Schütz, Astrid; Wildgruber, Dirk; Kreifelts, Benjamin
2018-02-01
Facial expressions and voice modulations are among the most important communicational signals to convey emotional information. The ability to correctly interpret this information is highly relevant for successful social interaction and represents an integral component of emotional competencies that have been conceptualized under the term emotional intelligence. Here, we investigated the relationship of emotional intelligence as measured with the Salovey-Caruso-Emotional-Intelligence-Test (MSCEIT) with cerebral voice and face processing using functional and structural magnetic resonance imaging. MSCEIT scores were positively correlated with increased voice-sensitivity and gray matter volume of the insula accompanied by voice-sensitivity enhanced connectivity between the insula and the temporal voice area, indicating generally increased salience of voices. Conversely, in the face processing system, higher MSCEIT scores were associated with decreased face-sensitivity and gray matter volume of the fusiform face area. Taken together, these findings point to an alteration in the balance of cerebral voice and face processing systems in the form of an attenuated face-vs-voice bias as one potential factor underpinning emotional intelligence.
Pulse-height loss in the signal readout circuit of compound semiconductor detectors
NASA Astrophysics Data System (ADS)
Nakhostin, M.; Hitomi, K.
2018-06-01
Compound semiconductor detectors such as CdTe, CdZnTe, HgI2 and TlBr are known to exhibit large variations in their charge collection times. This paper considers the effect of such variations on the measurement of induced charge pulses with resistive-feedback charge-sensitive preamplifiers. It is shown that, due to the finite decay-time constant of the preamplifiers, the capacitive decay during signal readout leads to a variable deficit in the measurement of ballistic signals, and a digital pulse processing method is employed to correct for it. The method is examined experimentally using sampled pulses from a TlBr detector coupled to a charge-sensitive preamplifier with a decay-time constant of 150 μs; a 20% improvement in the energy resolution of the detector at 662 keV is achieved. The implications of the capacitive decay for the correction of charge-trapping effects using the depth-sensing technique are also considered.
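The deficit and its digital correction can be illustrated briefly. The sketch below shows one common deconvolution of the preamplifier's exponential decay (not necessarily the authors' exact algorithm): with per-sample decay factor a = exp(-T/τ), the first difference v[n] - a·v[n-1] recovers the charge deposited in each sample, and its running sum restores the ballistic step.

```python
import numpy as np

# Sketch of digital removal of a resistive-feedback preamplifier's
# exponential decay. For an impulse of charge q, the output decays as
# q * a^n with a = exp(-T/tau); the first difference below returns q at
# the deposition sample and zero afterwards, so the cumulative sum
# reconstructs an ideal step free of the capacitive decay deficit.

def deconvolve_decay(v, tau, T):
    a = np.exp(-T / tau)
    increments = np.empty_like(v)
    increments[0] = v[0]
    increments[1:] = v[1:] - a * v[:-1]   # undo the capacitive decay
    return np.cumsum(increments)          # step-like ballistic signal

# Example parameters: tau = 150e-6 s, with an assumed (illustrative)
# sampling period T = 10e-9 s for the digitized preamplifier output.
```

Because the slow, variable charge collection of these detectors spreads the increments over many samples, summing them rather than reading the decaying peak removes the collection-time-dependent part of the ballistic deficit.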
Air density correction in ionization dosimetry.
Christ, G; Dohm, O S; Schüle, E; Gaupp, S; Martin, M
2004-05-21
Air density must be taken into account when ionization dosimetry is performed with unsealed ionization chambers. The German dosimetry protocol DIN 6800-2 states an air density correction factor for which current barometric pressure and temperature and their reference values must be known. It also states that differences between air density and the attendant reference value, as well as changes in ionization chamber sensitivity, can be determined using a radioactive check source. Both methods have advantages and drawbacks which the paper discusses in detail. Barometric pressure at a given height above sea level can be determined by using a suitable barometer, or data downloaded from airport or weather service internet sites. The main focus of the paper is to show how barometric data from measurement or from the internet are correctly processed. Therefore the paper also provides all the requisite equations and terminological explanations. Computed and measured barometric pressure readings are compared, and long-term experience with air density correction factors obtained using both methods is described.
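For concreteness, the two computations discussed, reducing a sea-level (airport or weather-service) pressure report to the measurement height and forming the air density correction factor, can be sketched as follows. The reference values and the standard-atmosphere reduction used here are the conventional ones, and variable names are illustrative.

```python
# Sketch of the air density correction for vented ionization chambers,
# k = (p0 / p) * (T / T0), with assumed reference values T0 = 293.15 K
# and p0 = 1013.25 hPa. Station pressure is reconstructed from a
# sea-level report via the barometric height formula with the
# standard-atmosphere lapse rate; illustrative, not normative.

T0_K = 293.15      # reference temperature (20 deg C)
P0_HPA = 1013.25   # reference pressure

def station_pressure(p_sealevel_hpa, height_m):
    """Reduce a sea-level pressure report to the measurement height."""
    return p_sealevel_hpa * (1.0 - 0.0065 * height_m / 288.15) ** 5.255

def air_density_correction(t_celsius, p_hpa):
    """Multiplicative factor applied to the chamber reading."""
    return (P0_HPA / p_hpa) * ((t_celsius + 273.15) / T0_K)

# Example: a 1013 hPa sea-level report for a site 500 m above sea level,
# with 22 deg C in the treatment room
p = station_pressure(1013.0, 500.0)
k = air_density_correction(22.0, p)
```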
Walker, Alexandra J; Batchelor, Jennifer; Shores, E Arthur; Jones, Mike
2009-11-01
Despite the sensitivity of neuropsychological tests to educational level, improved diagnostic accuracy for demographically corrected scores has yet to be established. Diagnostic efficiency statistics of Wechsler Adult Intelligence Scale-III (WAIS-III) and Wechsler Memory Scale-III (WMS-III) indices that were corrected for education, sex, and age (demographically corrected) were compared with age corrected indices in individuals aged 16 to 75 years with moderate to severe traumatic brain injury (TBI) and 12 years or less education. TBI participants (n = 100) were consecutive referrals to an outpatient rehabilitation service and met careful selection criteria. Controls (n = 100) were obtained from the WAIS-III/WMS-III standardization sample. Demographically corrected indices did not provide higher diagnostic efficiency than age corrected indices and this result was supported by reanalysis of the TBI group against a larger and unmatched control group. Processing Speed Index provided comparable diagnostic accuracy to that of combined indices. Demographically corrected indices were associated with higher cut-scores to maximize overall classification, reflecting the upward adjustment of those scores in a lower education sample. This suggests that, in clinical practice, the test results of individuals with limited education may be more accurately interpreted with the application of demographic corrections. Diagnostic efficiency statistics are presented, and future research directions are discussed.
McCabe, Ciara; Rocha-Rego, Vanessa
2016-01-01
Dysfunctional neural responses to appetitive and aversive stimuli have been investigated as possible biomarkers for psychiatric disorders. However, it is not clear to what degree these are separate processes across the brain or in fact overlapping systems. To help clarify this issue we used Gaussian process classifier (GPC) analysis to examine appetitive and aversive processing in the brain. 25 healthy controls underwent functional MRI whilst seeing pictures and receiving tastes of pleasant and unpleasant food. We applied GPCs to discriminate between the appetitive and aversive sights and tastes using functional activity patterns. The accuracy of the GPC in discriminating appetitive taste from the neutral condition was 86.5% (specificity = 81%, sensitivity = 92%, p = 0.001). If a participant experienced appetitive taste stimuli, the probability of correct classification was 92%. The accuracy to discriminate aversive from neutral taste stimuli was 82.5% (specificity = 73%, sensitivity = 92%, p = 0.001) and appetitive from aversive taste stimuli was 73% (specificity = 77%, sensitivity = 69%, p = 0.001). In the sight modality, the accuracy to discriminate appetitive from the neutral condition was 88.5% (specificity = 85%, sensitivity = 92%, p = 0.001), to discriminate aversive from neutral sight stimuli was 92% (specificity = 92%, sensitivity = 92%, p = 0.001), and to discriminate aversive from appetitive sight stimuli was 63.5% (specificity = 73%, sensitivity = 54%, p = 0.009). Our results demonstrate the predictive value of neurofunctional data in discriminating emotional and neutral networks of activity in the healthy human brain. It would be of interest to use pattern recognition techniques and fMRI to examine network dysfunction in the processing of appetitive, aversive and neutral stimuli in psychiatric disorders, especially where problems with reward and punishment processing have been implicated in the pathophysiology of the disorder.
In-situ biogas upgrading process: Modeling and simulations aspects.
Lovato, Giovanna; Alvarado-Morales, Merlin; Kovalovszki, Adam; Peprah, Maria; Kougias, Panagiotis G; Rodrigues, José Alberto Domingues; Angelidaki, Irini
2017-12-01
Biogas upgrading processes by in-situ hydrogen (H2) injection are still challenging and could benefit from a mathematical model to predict system performance. Therefore, a previous model of anaerobic digestion was updated and expanded to include the effect of H2 injection into the liquid phase of a fermenter, with the aim of modeling and simulating these processes. This was done by including hydrogenotrophic methanogen kinetics for H2 consumption and an inhibition effect on the acetogenic steps. Special attention was paid to gas-to-liquid transfer of H2. The final model was successfully validated against a set of case studies. Biogas composition and H2 utilization were correctly predicted, with overall deviation below 10% compared to experimental measurements. Parameter sensitivity analysis revealed that the model is highly sensitive to the H2 injection rate and the mass transfer coefficient. The model is an effective tool for predicting process performance in biogas upgrading scenarios. Copyright © 2017 Elsevier Ltd. All rights reserved.
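The H2 balance at the heart of such a model can be sketched compactly. The fragment below couples a kLa gas-liquid transfer term to Monod uptake by hydrogenotrophic methanogens (4 H2 + CO2 -> CH4 + 2 H2O); all parameter values are illustrative placeholders, not the paper's calibrated parameters.

```python
from scipy.integrate import solve_ivp

# Sketch of the dissolved-H2 balance in an in-situ upgrading model:
# supply by gas-liquid transfer, removal by Monod-type methanogenic uptake.
# All numbers are illustrative placeholders.

KLA = 200.0       # 1/d, gas-liquid mass transfer coefficient
H2_SAT = 1.2e-3   # kg COD/m3, saturation concentration at injection pressure
MU_MAX = 2.0      # 1/d, maximum specific uptake by hydrogenotrophs
KS = 2.5e-5       # kg COD/m3, half-saturation constant
X_H2 = 0.5        # kg COD/m3, hydrogenotrophic biomass (held fixed here)

def dh2_dt(t, s):
    transfer = KLA * (H2_SAT - s[0])             # injection into the liquid
    uptake = MU_MAX * s[0] / (KS + s[0]) * X_H2  # methanogenic consumption
    return [transfer - uptake]

sol = solve_ivp(dh2_dt, (0.0, 5.0), [0.0], max_step=0.01)
print(f"steady dissolved H2 ~ {sol.y[0, -1]:.2e} kg COD/m3")
```

The sensitivity the paper reports follows directly from this structure: the quasi-steady dissolved H2 level is set by the balance between the kLa transfer term and the uptake term, so both the injection rate and the mass transfer coefficient enter at leading order.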
Tsai, Tsung-Heng; Tsai, Hao-Cheng; Wu, Tien-Keng
2014-10-01
This paper presents a capacitive tactile sensor fabricated in a standard CMOS process. Both the sensor and the readout circuits are integrated on a single chip using TSMC 0.35 μm CMOS MEMS technology. In order to improve the sensitivity, a T-shaped protrusion is proposed and implemented. The sensor comprises the metal layer and the dielectric layer without extra thin-film deposition, and can be completed with few post-processing steps. Measured with a nano-indenter, the spring constant of the T-shaped structure is 2.19 kN/m. A fully differential correlated double sampling capacitance-to-voltage converter (CDS-CVC) and reference capacitor correction are utilized to compensate for process variations and improve the accuracy of the readout circuits. The measured displacement-to-voltage transductance is 7.15 mV/nm, and the sensitivity is 3.26 mV/μN. The overall power dissipation is 132.8 μW.
Correcting the MoCA for education: effect on sensitivity.
Gagnon, Genevieve; Hansen, Kevin T; Woolmore-Goodwin, Sarah; Gutmanis, Iris; Wells, Jennie; Borrie, Michael; Fogarty, Jennifer
2013-09-01
The goal of this study was to quantify the impact of the suggested education correction on the sensitivity and specificity of the Montreal Cognitive Assessment (MoCA). Twenty-five outpatients with dementia and 39 with amnestic mild cognitive impairment (aMCI) underwent a diagnostic evaluation, which included the MoCA. Thirty-seven healthy controls also completed the MoCA and psychiatric, medical, neurological, functional, and cognitive difficulties were ruled out. For the total MoCA score, unadjusted for education, a cut-off score of 26 yielded the best balance between sensitivity and specificity (80% and 89% respectively) in identifying cognitive impairment (people with either dementia or aMCI, versus controls). When applying the education correction, sensitivity decreased from 80% to 69% for a small specificity increase (89% to 92%). The cut-off score yielding the best balance between sensitivity and specificity for the education adjusted MoCA score fell to 25 (61% and 97%, respectively). Adjusting the MoCA total score for education had a detrimental effect on sensitivity with only a slight increase in specificity. Clinically, this loss in sensitivity can lead to an increased number of false negatives, as education level does not always correlate to premorbid intellectual function. Clinical judgment about premorbid status should guide interpretation. However, as this effect may be cohort specific, age and education corrected norms and cut-offs should be developed to help guide MoCA interpretation.
Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.
Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas
2016-01-01
While perceptual learning increases objective sensitivity, its effects on the constant interaction between the process of perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity, estimated from certainty ratings by a bias-free signal detection theoretic approach, in contrast, did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself, determined the subjects' visual perceptual learning. Improvements in objective performance and in the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
Santin, Gaëlle; Geoffroy, Béatrice; Bénézet, Laetitia; Delézire, Pauline; Chatelot, Juliette; Sitta, Rémi; Bouyer, Jean; Gueguen, Alice
2014-06-01
To show how reweighting can correct for unit nonresponse bias in an occupational health surveillance survey by using data from administrative databases in addition to classic sociodemographic data. In 2010, about 10,000 workers covered by a French health insurance fund were randomly selected and were sent a postal questionnaire. Simultaneously, auxiliary data from routine health insurance and occupational databases were collected for all these workers. To model the probability of response to the questionnaire, logistic regressions were performed with these auxiliary data to compute weights for correcting unit nonresponse. Corrected prevalences of questionnaire variables were estimated under several assumptions regarding the missing data process. The impact of reweighting was evaluated by a sensitivity analysis. Respondents had more reimbursement claims for medical services than nonrespondents but fewer reimbursements for medical prescriptions or hospitalizations. Salaried workers, workers in service companies, or who had held their job longer than 6 months were more likely to respond. Corrected prevalences after reweighting were slightly different from crude prevalences for some variables but meaningfully different for others. Linking health insurance and occupational data effectively corrects for nonresponse bias using reweighting techniques. Sociodemographic variables may be not sufficient to correct for nonresponse. Copyright © 2014 Elsevier Inc. All rights reserved.
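The reweighting scheme follows the usual inverse-propensity pattern. Below is a minimal sketch, assuming the auxiliary health-insurance and occupational variables are assembled in a design matrix; all names are hypothetical stand-ins.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of nonresponse reweighting with auxiliary administrative data:
# model the response probability from variables known for the full sample,
# then weight respondents by the inverse of their estimated propensity.

def nonresponse_weights(X_aux, responded):
    """X_aux: (n, k) auxiliary data for all sampled workers;
    responded: boolean array, True for questionnaire respondents."""
    result = sm.Logit(responded.astype(float), sm.add_constant(X_aux)).fit(disp=0)
    p_hat = result.predict()           # estimated response propensities
    return 1.0 / p_hat[responded]      # weights for respondents only

def weighted_prevalence(y_resp, weights):
    """Nonresponse-corrected prevalence of a questionnaire variable."""
    return np.sum(weights * y_resp) / np.sum(weights)
```

The sensitivity analysis described above amounts to re-running this estimation under different choices of auxiliary predictors and missing-data assumptions and comparing the corrected prevalences.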
NASA Technical Reports Server (NTRS)
Butler, C. F.
1979-01-01
A computer sensitivity analysis was performed to determine the uncertainties involved in the calculation of volcanic aerosol dispersion in the stratosphere using a two-dimensional model. The Fuego volcanic event of 1974 was used. The aerosol dispersion processes included are: transport, sedimentation, gas-phase sulfur chemistry, and aerosol growth. Calculated uncertainties are established from variations in the stratospheric aerosol layer decay times at 37° latitude for each dispersion process. Model profiles are also compared with lidar measurements. Results of the computer study are quite sensitive (factor of 2) to the assumed volcanic aerosol source function and the large variations in the parameterized transport between 15 and 20 km at subtropical latitudes. Sedimentation effects are uncertain by up to a factor of 1.5 because of the lack of aerosol size distribution data. The aerosol chemistry and growth, assuming that the stated mechanisms are correct, are essentially complete within several months after the eruption and cannot explain the differences between measured and modeled results.
The impact of recreational MDMA 'ecstasy' use on global form processing.
White, Claire; Edwards, Mark; Brown, John; Bell, Jason
2014-11-01
The ability to integrate local orientation information into a global form percept was investigated in long-term ecstasy users. Evidence suggests that ecstasy disrupts the serotonin system, with the visual areas of the brain being particularly susceptible. Previous research has found altered orientation processing in the primary visual area (V1) of users, thought to be due to disrupted serotonin-mediated lateral inhibition. The current study aimed to investigate whether orientation deficits extend to higher visual areas involved in global form processing. Forty-five participants completed a psychophysical (Glass pattern) study allowing an investigation into the mechanisms underlying global form processing and sensitivity to changes in the offset of the stimuli (jitter). A subgroup of polydrug-ecstasy users (n=6) with high ecstasy use had significantly higher thresholds for the detection of Glass patterns than controls (n=21, p=0.039) after Bonferroni correction. There was also a significant interaction between jitter level and drug-group, with polydrug-ecstasy users showing reduced sensitivity to alterations in jitter level (p=0.003). These results extend previous research, suggesting disrupted global form processing and reduced sensitivity to orientation jitter with ecstasy use. Further research is needed to investigate this finding in a larger sample of heavy ecstasy users and to differentiate the effects of other drugs. © The Author(s) 2014.
Method and apparatus for optical phase error correction
DeRose, Christopher; Bender, Daniel A.
2014-09-02
The phase value of a phase-sensitive optical device, which includes an optical transport region, is modified by laser processing. At least a portion of the optical transport region is exposed to a laser beam such that the phase value is changed from a first phase value to a second phase value, where the second phase value is different from the first phase value. The portion of the optical transport region that is exposed to the laser beam can be a surface of the optical transport region or a portion of the volume of the optical transport region. In an embodiment of the invention, the phase value of the optical device is corrected by laser processing. At least a portion of the optical transport region is exposed to a laser beam until the phase value of the optical device is within a specified tolerance of a target phase value.
Gong, Rui; Yang, Bi; Liu, Longqian; Dai, Yun; Zhang, Yudong; Zhao, Haoxin
2016-06-01
We conducted this study to explore the influence of changes in the ocular residual aberrations on the contrast sensitivity (CS) function in eyes undergoing orthokeratology, using an adaptive optics technique. Nineteen eyes of nineteen subjects were included in this study. The subjects were between 12 and 20 years (14.27 ± 2.23 years) of age. An adaptive optics (AO) system was adopted to measure and compensate the residual aberrations through a 4-mm artificial pupil, and at the same time the contrast sensitivities were measured at five spatial frequencies (2, 4, 8, 16, and 32 cycles per degree). The CS measurements with and without AO correction were completed. The sequence of the measurements with and without AO correction was randomly arranged without informing the observers. A two-interval forced-choice procedure was used for the CS measurements. The paired t-test was used to compare the contrast sensitivity with and without AO correction at each spatial frequency. The results revealed that the AO system decreased the mean total root mean square (RMS) from 0.356 μm to 0.160 μm (t = 10.517, P < 0.001), and the mean total higher-order RMS from 0.246 μm to 0.095 μm (t = 10.113, P < 0.001). The difference in log contrast sensitivity with and without AO correction was significant only at 8 cpd (t = -2.51, P = 0.02). We thereby concluded that correcting the ocular residual aberrations using an adaptive optics technique could improve the contrast sensitivity function at intermediate spatial frequencies in patients undergoing orthokeratology.
NASA Astrophysics Data System (ADS)
Li, Zhe; Chang, Wenhan; Gao, Chengchen; Hao, Yilong
2018-04-01
In this paper, a novel five-wire micro-fabricated anemometer with 3D directionality based on the calorimetric principle is proposed, which is capable of measuring low-speed airflow. The structure is realized by vertically bonding two different dies, which can be fabricated on the same wafer, resulting in a simple fabrication process. Experiments at speeds lower than 200 mm s⁻¹ are conducted, showing good repeatability and directionality. The speed of the airflow is controlled by the volumetric flow rate. The measured velocity sensitivity is 9.4 mV·s·m⁻¹, with a relative direction sensitivity of 37.1 dB. The deviation between the expected and the measured directivity is analyzed by both theory and simulation. A correction procedure is proposed and proves useful in eliminating this deviation. To further explore the potential of our device, we exposed it to acoustic plane waves in a standing-wave tube, observing a consistent planar figure-of-eight directivity. The measured velocity sensitivity at 1 kHz and 120 dBC is 4.4 mV·s·m⁻¹, with a relative direction sensitivity of 27.0 dB. Using the correction method proposed above, the maximum angle error is about ±2°, demonstrating good directional accuracy.
NASA Astrophysics Data System (ADS)
Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula
2009-10-01
State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.
Atmospheric correction for hyperspectral ocean color sensors
NASA Astrophysics Data System (ADS)
Ibrahim, A.; Ahmad, Z.; Franz, B. A.; Knobelspiesse, K. D.
2017-12-01
NASA's heritage Atmospheric Correction (AC) algorithm for multi-spectral ocean color sensors is inadequate for the new generation of spaceborne hyperspectral sensors, such as NASA's first hyperspectral Ocean Color Instrument (OCI) onboard the anticipated Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite mission. The AC process must estimate and remove from the measured top-of-atmosphere (TOA) radiance the atmospheric path radiance contribution due to Rayleigh scattering by air molecules and due to aerosols. Further, it must also compensate for absorption by atmospheric gases and correct for reflection and refraction at the air-sea interface. We present and evaluate an improved AC for hyperspectral sensors that goes beyond the heritage approach by utilizing the additional spectral information of the hyperspectral sensor. The study encompasses a theoretical radiative transfer sensitivity analysis as well as a practical application to the Hyperspectral Imager for the Coastal Ocean (HICO) and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensors.
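For orientation, the correction chain described above amounts to removing modeled path-radiance terms and transmittance losses from the TOA signal. The sketch below shows that budget in its simplest heritage-style form; the function, its argument names, and the numbers are illustrative assumptions, not the algorithm evaluated in the paper.

```python
# Minimal sketch of the top-of-atmosphere (TOA) reflectance budget that an
# atmospheric correction (AC) must invert; all inputs are illustrative.

def water_reflectance(rho_toa, rho_rayleigh, rho_aerosol, t_gas, t_diffuse):
    """Recover water-leaving reflectance from a TOA measurement.

    rho_toa      : measured TOA reflectance (not yet gas-corrected)
    rho_rayleigh : modeled Rayleigh path reflectance
    rho_aerosol  : estimated aerosol (plus coupling) path reflectance
    t_gas        : total gaseous transmittance (e.g., O3, NO2, water vapor)
    t_diffuse    : combined diffuse transmittance sun -> surface -> sensor
    """
    rho_path_removed = rho_toa / t_gas - rho_rayleigh - rho_aerosol
    return rho_path_removed / t_diffuse

# Example with made-up numbers for a blue band:
rho_w = water_reflectance(0.12, 0.08, 0.015, 0.97, 0.90)
print(f"water-leaving reflectance ~ {rho_w:.4f}")
```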
New readout integrated circuit using continuous time fixed pattern noise correction
NASA Astrophysics Data System (ADS)
Dupont, Bertrand; Chammings, G.; Rapellin, G.; Mandier, C.; Tchagaspanian, M.; Dupont, Benoit; Peizerat, A.; Yon, J. J.
2008-04-01
LETI has been involved in IRFPA development since 1978; its design department (LETI/DCIS) has focused on new ROIC architectures for many years. The trend is to integrate advanced functions into the CMOS design to achieve cost-efficient sensor production. The thermal imaging market increasingly demands systems with instant-on capability and low power consumption. The purpose of this paper is to present the latest developments in continuous-time fixed pattern noise correction. Several architectures are proposed, some based on hardwired digital processing and some purely analog; both use scene-based algorithms. Moreover, a new method is proposed for the simultaneous correction of pixel offsets and sensitivities. In this scope, a new readout integrated circuit architecture has been implemented in 0.18 μm CMOS technology. The specification and application of the ROIC are discussed in detail.
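The paper's continuous-time, scene-based architectures are not spelled out in the abstract, so the sketch below illustrates the underlying idea of jointly correcting pixel offsets and sensitivities with a textbook two-point non-uniformity correction calibrated from two uniform reference frames; all names and numbers are assumptions.

```python
import numpy as np

# Generic two-point non-uniformity correction (NUC): per-pixel gain and
# offset are calibrated from two uniform (blackbody) reference frames.
# This is a textbook scheme, not LETI's continuous-time method.

def calibrate_nuc(frame_cold, frame_hot, level_cold, level_hot):
    gain = (level_hot - level_cold) / (frame_hot - frame_cold)
    offset = level_cold - gain * frame_cold
    return gain, offset

def apply_nuc(raw, gain, offset):
    return gain * raw + offset

rng = np.random.default_rng(0)
true_scene = np.full((4, 4), 120.0)
pix_gain = 1 + 0.05 * rng.standard_normal((4, 4))  # sensitivity spread
pix_off = 2.0 * rng.standard_normal((4, 4))        # offset spread

cold, hot = 50.0, 200.0
g, o = calibrate_nuc(pix_gain * cold + pix_off,
                     pix_gain * hot + pix_off, cold, hot)
corrected = apply_nuc(pix_gain * true_scene + pix_off, g, o)
print(np.allclose(corrected, true_scene))  # True: fixed pattern removed
```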
Context sensitivity and ambiguity in component-based systems design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bespalko, S.J.; Sindt, A.
1997-10-01
Designers of component-based, real-time systems need to guarantee the correctness of software and its output. The complexity of a system, and thus its propensity for error, is best characterized by the number of states a component can encounter. In many cases, large numbers of states arise where the processing is highly dependent on context; in these cases, states are often missed, leading to errors. The following are proposals for compactly specifying system states that allow complex components to be factored into a control module and a semantic processing module. Further, the need for methods that allow the explicit representation of ambiguity and uncertainty in the design of components is discussed. Presented herein are examples of real-world problems that are highly context-sensitive or inherently ambiguous.
Tooker, Brian C.; Brindley, Stephen M.; Chiarappa-Zucca, Marina L.; ...
2014-06-16
We report that exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated, and the lack of sufficiently sensitive techniques for beryllium detection has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at the levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, we determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20 times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.
Development of a Pressure Sensitive Paint System with Correction for Temperature Variation
NASA Technical Reports Server (NTRS)
Simmons, Kantis A.
1995-01-01
Pressure Sensitive Paint (PSP) is known to provide a global image of pressure over a model surface; however, improvements in its accuracy and reliability are needed. Several factors contribute to the inaccuracy of PSP, one major factor being that luminescence is temperature dependent. To correct the luminescence of the pressure-sensing component for changes in temperature, a temperature-sensitive luminophore incorporated in the paint allows the user to measure both pressure and temperature simultaneously on the surface of a model. Magnesium octaethylporphine (MgOEP) was used as the temperature-sensing luminophore, together with the pressure-sensing luminophore, platinum octaethylporphine (PtOEP), to correct for temperature variations in model surface pressure measurements.
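As a rough illustration of how a dual-luminophore paint can decouple the two quantities, the sketch below first infers temperature from the MgOEP channel and then applies a temperature-dependent Stern-Volmer conversion to the PtOEP channel. The functional forms and every coefficient are hypothetical placeholders, not the paper's calibration.

```python
# Hedged sketch of a dual-luminophore PSP reduction: the MgOEP channel
# yields a temperature estimate, which selects temperature-dependent
# Stern-Volmer coefficients for the PtOEP pressure channel.
# All coefficients below are hypothetical.

def temperature_from_mgoep(i_ref_over_i, k=0.012, t_ref=293.0):
    # Assume a simple linear intensity-temperature response for illustration.
    return t_ref + (i_ref_over_i - 1.0) / k

def pressure_from_ptoep(i_ref_over_i, T, p_ref=101.3):
    # Stern-Volmer: I_ref / I = A(T) + B(T) * (P / P_ref)
    A = 0.15 + 0.0005 * (T - 293.0)   # hypothetical calibration fit
    B = 0.85 + 0.0020 * (T - 293.0)   # hypothetical calibration fit
    return p_ref * (i_ref_over_i - A) / B

T = temperature_from_mgoep(1.06)      # ~298 K for this made-up ratio
print(pressure_from_ptoep(1.10, T))   # pressure in kPa
```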
Motion Correction in PROPELLER and Turboprop-MRI
Tamhane, Ashish A.; Arfanakis, Konstantinos
2009-01-01
PROPELLER and Turboprop-MRI are characterized by greatly reduced sensitivity to motion, compared to their predecessors, fast spin-echo and gradient and spin-echo, respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop-MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and to determine acquisition strategies for optimal motion correction. It was demonstrated that blades with multiple lines allow more accurate estimation of motion than blades with fewer lines. Also, it was shown that Turboprop-MRI is less sensitive to motion than PROPELLER. Furthermore, it was demonstrated that the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction were discussed for PROPELLER and Turboprop-MRI. PMID:19365858
ERIC Educational Resources Information Center
Burck, Andrew M.; Laux, John M.; Ritchie, Martin; Baker, David
2008-01-01
In this study, the authors examined the Substance Abuse Subtle Screening Inventory-3 Correctional scale's sensitivity and specificity at detecting college students' illegal behaviors. Sensitivity was strong, but specificity was weak. Implications for counseling and suggestions for future research are included. (Contains 3 tables.)
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose: Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher-order modeling of the diffusion-weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods: Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions is employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain DKI-derived parameter estimates. Results: As long as the signal-to-noise ratio (SNR) of the most heavily diffusion-weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5% with noise correction, but as high as 44% for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion: The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
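A minimal sketch of a subtraction-based noise-bias correction in the spirit described above (not necessarily the authors' exact pipeline): two independent magnitude acquisitions give a voxel-wise noise-variance estimate, whose bias contribution is then subtracted from the squared signal.

```python
import numpy as np

# With two independent repeats m1, m2 of the same image, the difference
# image carries only noise, giving a noise-variance estimate; the
# squared-signal bias E[m^2] ~ s^2 + 2*sigma^2 is subtracted before
# taking the square root. Numbers are illustrative.

def noise_corrected_signal(m1, m2):
    sigma2 = 0.5 * np.var(m1 - m2)          # per-image noise variance
    sq_mean = 0.5 * (m1.astype(float) ** 2 + m2.astype(float) ** 2)
    s2 = np.clip(sq_mean - 2.0 * sigma2, 0.0, None)
    return np.sqrt(s2)

rng = np.random.default_rng(1)
truth = np.full(10000, 3.0)                 # SNR ~ 3 with sigma = 1
noisy = lambda: np.abs(truth + rng.standard_normal(truth.size)
                       + 1j * rng.standard_normal(truth.size))
print(noise_corrected_signal(noisy(), noisy()).mean())  # ~3.0, bias removed
```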
Optical and neural anisotropy in peripheral vision
Zheleznyak, Len; Barbot, Antoine; Ghosh, Atanu; Yoon, Geunyoung
2016-01-01
Optical blur in the peripheral retina is known to be highly anisotropic due to non-rotationally symmetric wavefront aberrations such as astigmatism and coma. At the neural level, the visual system exhibits anisotropies in orientation sensitivity across the visual field. In the fovea, the visual system shows higher sensitivity for cardinal than for diagonal orientations, which is referred to as the oblique effect. In the peripheral retina, however, the neural visual system becomes more sensitive to radially oriented signals, a phenomenon known as the meridional effect. Here, we examined the relative contributions of optics and neural processing to the meridional effect in 10 participants at 0°, 10°, and 20° in the temporal retina. Optical anisotropy was quantified by measuring the eye's habitual wavefront aberrations. Neural anisotropy, in turn, was evaluated by measuring contrast sensitivity (at 2 and 4 cyc/deg) while correcting the eye's aberrations with an adaptive optics vision simulator, thus bypassing any optical factors. As eccentricity increased, optical and neural anisotropy increased in magnitude. The average ratio of horizontal to vertical optical MTF (at 2 and 4 cyc/deg) at 0°, 10°, and 20° was 0.96 ± 0.14, 1.41 ± 0.54 and 2.15 ± 1.38, respectively. Similarly, the average ratio of horizontal to vertical contrast sensitivity with full optical correction at 0°, 10°, and 20° was 0.99 ± 0.15, 1.28 ± 0.28 and 1.75 ± 0.80, respectively. These results indicate that the neural system's orientation sensitivity coincides with habitual blur orientation. These findings support the neural origin of the meridional effect and raise important questions regarding the role of peripheral anisotropic optical quality in the development of the meridional effect and emmetropization. PMID:26928220
Richardson, Michael L; Petscavage, Jonelle M
2011-11-01
The sensitivity and specificity of magnetic resonance imaging (MRI) for diagnosis of meniscal tears has been studied extensively, with tears usually verified by surgery. However, surgically unverified cases are often not considered in these studies, leading to verification bias, which can falsely increase the sensitivity and decrease the specificity estimates. Our study suggests that such bias may be very common in the meniscal MRI literature, and illustrates techniques to detect and correct for such bias. PubMed was searched for articles estimating sensitivity and specificity of MRI for meniscal tears. These were assessed for verification bias, deemed potentially present if a study included any patients whose MRI findings were not surgically verified. Retrospective global sensitivity analysis (GSA) was performed when possible. Thirty-nine of the 314 studies retrieved from PubMed specifically dealt with meniscal tears. All 39 included unverified patients, and hence, potential verification bias. Only seven articles included sufficient information to perform GSA. Of these, one showed definite verification bias, two showed no bias, and four others showed bias within certain ranges of disease prevalence. Only 9 of 39 acknowledged the possibility of verification bias. Verification bias is underrecognized and potentially common in published estimates of the sensitivity and specificity of MRI for the diagnosis of meniscal tears. When possible, it should be avoided by proper study design. If unavoidable, it should be acknowledged. Investigators should tabulate unverified as well as verified data. Finally, verification bias should be estimated; if present, corrected estimates of sensitivity and specificity should be used. Our online web-based calculator makes this process relatively easy. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
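One standard remedy for the problem described above, correcting estimates when verification depends on the MRI result, can be sketched as Begg-Greenes-style inverse-probability weighting; the article's own calculator may differ, and the counts below are invented for illustration.

```python
# Begg-Greenes-style correction for verification bias: surgically verified
# counts are weighted by the inverse probability of verification given the
# MRI result. Illustrative numbers only.

def corrected_se_sp(n_pos, n_neg, tp, fp, fn, tn):
    """n_pos, n_neg: all MRI-positive / MRI-negative patients (verified or not).
    tp, fp: verified MRI-positives with / without a tear at surgery.
    fn, tn: verified MRI-negatives with / without a tear at surgery."""
    v_pos = (tp + fp) / n_pos              # verification rate, MRI-positive
    v_neg = (fn + tn) / n_neg              # verification rate, MRI-negative
    d_pos, d_neg = tp / v_pos, fn / v_neg  # estimated tears in each arm
    h_pos, h_neg = fp / v_pos, tn / v_neg  # estimated intact menisci
    se = d_pos / (d_pos + d_neg)
    sp = h_neg / (h_neg + h_pos)
    return se, sp

# 200 MRI-positive, 300 MRI-negative, but MRI-negatives rarely verified:
print(corrected_se_sp(n_pos=200, n_neg=300, tp=170, fp=20, fn=6, tn=24))
# Naive (verified-only) sensitivity would be 170/176 ~ 97%; the corrected
# value is substantially lower, illustrating the inflation the bias causes.
```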
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter-observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
Han, Hyemin; Glenn, Andrea L
2018-06-01
In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections allowing for more sensitivity may be beneficial, even though they also admit more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too many regions or encompassing too many additional regions) than clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
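Two of the compared corrections are easy to state concretely; the sketch below contrasts Bonferroni familywise control with Benjamini-Hochberg false discovery rate control on simulated voxelwise p-values. (RFT-based voxelwise FWE, the method favored above, requires image smoothness estimates and is not reproduced here.)

```python
import numpy as np

# Bonferroni vs. Benjamini-Hochberg (BH) on a vector of voxelwise p-values.
# The simulated data mix 50 strong "true" effects into 9950 null voxels.

def bonferroni_mask(p, alpha=0.05):
    return p < alpha / p.size

def bh_fdr_mask(p, q=0.05):
    order = np.argsort(p)
    thresh = q * np.arange(1, p.size + 1) / p.size   # step-up thresholds
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(p.size, dtype=bool)
    mask[order[:k]] = True                           # reject k smallest
    return mask

rng = np.random.default_rng(2)
p = np.concatenate([rng.uniform(0, 1e-5, 50),    # true effects
                    rng.uniform(0, 1, 9950)])    # null voxels
print(bonferroni_mask(p).sum(), bh_fdr_mask(p).sum())  # FDR keeps more voxels
```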
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Hyunjo, E-mail: hjjeong@wku.ac.kr; Cho, Sungjong; Zhang, Shuzeng
2016-04-15
In recent studies with nonlinear Rayleigh surface waves, harmonic generation measurements have been successfully employed to characterize material damage and microstructural changes, and have been found to be sensitive to early stages of the damage process. A nonlinearity parameter of Rayleigh surface waves was derived and is frequently measured to quantify the level of damage. Accurate measurement of the nonlinearity parameter generally requires corrections for beam diffraction and medium attenuation. These effects are not generally known for nonlinear Rayleigh waves, and were therefore not properly considered in most previous studies. In this paper, the nonlinearity parameter for a Rayleigh surface wave is defined from the plane-wave displacement solutions. We explicitly define the attenuation and diffraction corrections for fundamental and second-harmonic Rayleigh wave beams radiated from a uniform line source. Attenuation corrections are obtained from the quasilinear theory of the plane Rayleigh wave equations. To obtain closed-form expressions for the diffraction corrections, multi-Gaussian beam (MGB) models are employed to represent the integral solutions derived from the quasilinear theory of the full two-dimensional wave equation without the parabolic approximation. Diffraction corrections are presented for a couple of transmitter-receiver geometries, and the effects of making attenuation and diffraction corrections are examined through the simulation of nonlinearity parameter determination in a solid sample.
NASA Astrophysics Data System (ADS)
Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal
2013-10-01
Thermal imagers, and the infrared array sensors used in them, are subject to a calibration procedure and an evaluation of their voltage sensitivity to incident radiation during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not expected to meet such elevated standards, it is still important that the image faithfully represent temperature variations across the scene. Detectors in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. Often an optical system, when exposed to a uniform Lambertian source, forms a non-uniform irradiation distribution in its image plane. In order to carry out an accurate non-uniformity correction, it is essential to correctly predict the irradiation distribution from a uniform source. In this article a non-uniformity correction method is presented that takes the optical system's radiometry into account. Predictions of the irradiation distribution have been confronted with measured irradiance values. The presented radiometric model allows fast and accurate non-uniformity correction to be carried out.
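As a hedged illustration of the idea, the sketch below predicts the image-plane falloff a uniform source would produce, with a first-order cos^4 relative-illumination law standing in for the paper's full radiometric model, and divides it out of the flat-field gain map so that optical roll-off is not attributed to the detectors.

```python
import numpy as np

# Fold the optics' predicted irradiance falloff into the flat-field gain so
# optical roll-off is not mistaken for detector non-uniformity. The cos^4
# law and all numbers below are simplifying assumptions.

def predicted_illumination(ny, nx, focal_px):
    y, x = np.indices((ny, nx), dtype=float)
    r = np.hypot(x - (nx - 1) / 2, y - (ny - 1) / 2)
    return np.cos(np.arctan(r / focal_px)) ** 4   # field-angle falloff

ny, nx, focal_px = 240, 320, 400.0
illum = predicted_illumination(ny, nx, focal_px)

rng = np.random.default_rng(3)
detector_gain = 1 + 0.03 * rng.standard_normal((ny, nx))
flat_frame = detector_gain * illum                # uniform-source exposure

gain_map = flat_frame / illum                     # optics divided out
print(np.allclose(gain_map, detector_gain))       # True
```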
Simulating correction of adjustable optics for an x-ray telescope
NASA Astrophysics Data System (ADS)
Aldcroft, Thomas L.; Schwartz, Daniel A.; Reid, Paul B.; Cotroneo, Vincenzo; Davis, William N.
2012-10-01
The next generation of large X-ray telescopes with sub-arcsecond resolution will require very thin, highly nested grazing-incidence optics. To correct the low-order figure errors resulting from initial manufacture, the mounting process, and the change from 1 g during ground alignment to zero g on orbit, we plan to adjust the shapes via piezoelectric "cells" deposited on the backs of the reflecting surfaces. This presentation investigates how well the corrections might be made. We take a benchmark conical glass element, 410 × 205 mm, with a 20 × 20 array of piezoelectric cells 19 × 9 mm in size. We use finite element analysis to calculate the influence function of each cell. We then simulate the correction via matrix pseudoinversion to calculate the stress to be applied by each cell, considering distortion due to gravity as calculated by finite element analysis, and putative low-order manufacturing distortions described by Legendre polynomials. We describe our algorithm and its performance, and the implications for the sensitivity of the resulting slope errors to the optimization strategy.
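The correction step lends itself to a compact least-squares statement: stack the finite-element influence functions as columns of a matrix and solve for the cell actuations that best cancel the measured figure error. The sketch below uses random matrices as stand-ins for the real influence functions; dimensions and noise levels are illustrative.

```python
import numpy as np

# Each column of A is one piezo cell's influence function sampled at the
# figure-measurement points; d is the measured figure error (gravity plus
# manufacturing distortion). Solving A x ~ d in the least-squares sense
# gives actuations whose negative is applied to cancel the error.

rng = np.random.default_rng(4)
n_points, n_cells = 2000, 400        # figure samples, 20 x 20 piezo cells
A = rng.standard_normal((n_points, n_cells))       # stand-in influences
x_true = rng.standard_normal(n_cells)
d = A @ x_true + 0.01 * rng.standard_normal(n_points)  # figure error

x, *_ = np.linalg.lstsq(A, d, rcond=None)  # pseudoinverse solution
residual = d - A @ x                       # error left after applying -x
print(np.std(d), np.std(residual))         # residual figure error shrinks
```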
NASA Astrophysics Data System (ADS)
Biedermann, Benedikt; Denner, Ansgar; Hofer, Lars
2017-10-01
The production of a neutral and a charged vector boson with subsequent decays into three charged leptons and a neutrino is a very important process for precision tests of the Standard Model of elementary particles and in searches for anomalous triple-gauge-boson couplings. In this article, the first computation of next-to-leading-order electroweak corrections to the production of the four-lepton final states μ+μ−e+νe, μ+μ−e−ν̄e, μ+μ−μ+νμ, and μ+μ−μ−ν̄μ at the Large Hadron Collider is presented. We use the complete matrix elements at leading and next-to-leading order, including all off-shell effects of intermediate massive vector bosons and virtual photons. The relative electroweak corrections to the fiducial cross sections from quark-induced partonic processes vary between -3% and -6%, depending significantly on the event selection. At the level of differential distributions, we observe large negative corrections of up to -30% in the high-energy tails of distributions, originating from electroweak Sudakov logarithms. Photon-induced contributions at next-to-leading order raise the leading-order fiducial cross section by +2%. Interference effects in final states with equal-flavour leptons are at the permille level for the fiducial cross section, but can lead to sizeable effects in off-shell sensitive phase-space regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric L.
The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool for monitoring urbanization and assessing socioeconomic activity at large scales. However, incompatible digital number (DN) values and geometric errors severely limit the application of nighttime lights image data to multi-year quantitative research. In this study we extend and improve previous work on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly, and find that sum light (the summed DN value of pixels in a nighttime lights image) maintains a clear increasing trend under relatively large GDP growth rates but neither increases nor decreases under relatively small GDP growth rates. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, by analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced clear nighttime lights development in 1992-1997 and 2001-2008, respectively, and that the US suffered nighttime lights decay over large areas after 2001.
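The inter-calibration step is conventionally done (following Elvidge and colleagues) by regressing DN values of a target satellite-year against a reference satellite-year over a presumed-invariant region; the sketch below fits such a second-order polynomial to synthetic data. The coefficients and region choice are illustrative assumptions, not this study's fit.

```python
import numpy as np

# Second-order polynomial inter-calibration of DMSP-OLS DN values:
# DN_adjusted = c2 * DN^2 + c1 * DN + c0, fit over a stable reference area.

def fit_intercalibration(dn_target, dn_reference):
    return np.polyfit(dn_target, dn_reference, deg=2)

def apply_intercalibration(dn, coeffs):
    return np.clip(np.polyval(coeffs, dn), 0, 63)   # OLS DN range is 0-63

rng = np.random.default_rng(5)
dn_t = rng.uniform(0, 63, 500)
dn_r = 0.002 * dn_t**2 + 0.8 * dn_t + 0.5 + 0.5 * rng.standard_normal(500)
coeffs = fit_intercalibration(dn_t, dn_r)
print(np.round(coeffs, 3))   # recovers roughly [0.002, 0.8, 0.5]
```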
Chloride and bromide sources in water: Quantitative model use and uncertainty
NASA Astrophysics Data System (ADS)
Horner, Kyle N.; Short, Michael A.; McPhail, D. C.
2017-06-01
Dissolved chloride is a commonly used geochemical tracer in hydrological studies. The assumptions underlying many chloride-based tracer methods do not hold where processes such as halide-bearing mineral dissolution, fluid mixing, or diffusion modify dissolved Cl- concentrations. Failure to identify, quantify, or correct for such processes can introduce significant uncertainty into chloride-based tracer calculations. Mass balance or isotopic techniques offer a means to address this uncertainty; however, concurrent evaporation or transpiration can complicate the corrections. In this study Cl/Br ratios are used to derive equations that can correct a solution's total dissolved Cl- and Br- concentrations for inputs from mineral dissolution and/or binary mixing, and we demonstrate the equations' applicability to waters modified by evapotranspiration. The equations can be used to quickly determine the maximum proportion of dissolved Cl- and Br- from each end member, provided that no halide-bearing minerals have precipitated and the Cl/Br ratio of each end member is known. This allows rapid evaluation of halite dissolution or binary mixing contributions to total dissolved Cl- and Br-. Equation sensitivity to heterogeneity and analytical uncertainty is demonstrated through bench-top experiments simulating halite dissolution and variable degrees of evapotranspiration, as commonly occur in arid environments. The predictions agree with the experimental results to within 6% and typically much less, with the sensitivity of the predicted results varying as a function of end-member compositions and analytical uncertainty. Finally, we present a case study illustrating how the equations presented here can be used to quantify Cl- and Br- sources and sinks in surface water and groundwater, and how the equations can be applied to constrain uncertainty in chloride-based tracer calculations.
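For the simplest binary-mixing case subsumed by the equations above, the end-member fraction follows from simultaneous Cl and Br balances, and agreement between the two estimates supports pure mixing. The sketch below uses generic seawater-like and dilute end members; it omits the halite-dissolution terms of the paper's full treatment.

```python
# Two-end-member mass balance on Cl and Br (mg/L). Agreement between the
# Cl-derived and Br-derived mixing fractions is consistent with binary
# mixing; divergence would point to an extra source such as halite.

def mixing_fraction(cl_s, br_s, cl_1, br_1, cl_2, br_2):
    """Fraction f of end member 1 such that
    cl_s = f*cl_1 + (1-f)*cl_2 and, ideally, br_s = f*br_1 + (1-f)*br_2."""
    f_cl = (cl_s - cl_2) / (cl_1 - cl_2)
    f_br = (br_s - br_2) / (br_1 - br_2)
    return f_cl, f_br

# Seawater-like vs. dilute groundwater end members; sample in between:
print(mixing_fraction(cl_s=5000, br_s=17.2,
                      cl_1=19000, br_1=65, cl_2=30, br_2=0.1))
# ~ (0.262, 0.264): the two fractions agree, supporting binary mixing.
```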
Utility and limitations of a peptide reactivity assay to predict fragrance allergens in vitro.
Natsch, A; Gfeller, H; Rothaupt, M; Ellis, G
2007-10-01
A key step in the skin sensitization process is the formation of a covalent adduct between the skin sensitizer and endogenous proteins and/or peptides in the skin. A published peptide depletion assay was used to relate the in vitro reactivity of fragrance molecules to LLNA data. Using the classical assay, 22 of 28 tested moderate to strong sensitizers were positive. The prediction of weak sensitizers proved to be more difficult with only 50% of weak sensitizers giving a positive response, but for some compounds this could also be due to false-positive results from the LLNA. LC-MS analysis yielded the expected mass of the peptide adducts in several cases, whereas in other cases putative oxidation reactions led to adducts of unexpected molecular weight. Several moderately sensitizing aldehydes were correctly predicted by the depletion assay, but no adducts were found and the depletion appears to be due to an oxidation of the parent peptide catalyzed by the test compound. Finally, alternative test peptides derived from a physiological reactive protein with enhanced sensitivity for weak Michael acceptors were found, further increasing the sensitivity of the assay.
Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures
NASA Astrophysics Data System (ADS)
Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.
2017-12-01
Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by the unprecedented changes in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF) to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty, coupled with the differences in water level forecasts from varying bias correction methods, is important for water management and long-term planning in the Great Lakes region.
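One conventional member of such a bias-correction suite is empirical quantile mapping, sketched below on synthetic data: each modeled value is replaced by the observed historical value at the same quantile of the modeled historical distribution. The distributions here are invented for illustration, not the GFDL-CM3/WRF output.

```python
import numpy as np

# Empirical quantile mapping: map a modeled future value to its quantile in
# the modeled historical distribution, then read off the observed value at
# that same quantile.

def quantile_map(modeled_future, modeled_hist, observed_hist):
    quantiles = (np.searchsorted(np.sort(modeled_hist), modeled_future)
                 / len(modeled_hist))
    return np.quantile(observed_hist, np.clip(quantiles, 0, 1))

rng = np.random.default_rng(6)
obs = rng.gamma(4.0, 0.9, 5000)        # observed overlake precipitation
mod_hist = rng.gamma(3.0, 1.4, 5000)   # biased model, historical period
mod_fut = rng.gamma(3.2, 1.4, 5000)    # model projection
print(mod_fut.mean(),                               # raw projection
      quantile_map(mod_fut, mod_hist, obs).mean(),  # debiased projection
      obs.mean())                                   # observed baseline
```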
Ray, N J; Fowler, S; Stein, J F
2005-04-01
The magnocellular system plays an important role in visual motion processing, controlling vergence eye movements, and in reading. Yellow filters may boost magnocellular activity by eliminating inhibitory blue input to this pathway. It was found that wearing yellow filters increased motion sensitivity, convergence, and accommodation in many children with reading difficulties, both immediately and after three months using the filters. Motion sensitivity was not increased using control neutral density filters. Moreover, reading-impaired children showed significant gains in reading ability after three months wearing the filters compared with those who had used a placebo. It was concluded that yellow filters can improve magnocellular function permanently. Hence, they should be considered as an alternative to corrective lenses, prisms, or exercises for treating poor convergence and accommodation, and also as an aid for children with reading problems.
Supercontinuum Fourier transform spectrometry with balanced detection on a single photodiode
Goncharov, Vasily; Hall, Gregory
2016-08-25
Here, we have developed phase-sensitive signal detection and processing algorithms for Fourier transform spectrometers fitted with supercontinuum sources, for applications requiring ultimate sensitivity. Similar to the well-established approach of source-noise cancellation through balanced detection of monochromatic light, our method is capable of reducing the relative intensity noise of polychromatic light by 40 dB. Unlike conventional balanced detection, which relies on differential absorption measured with a well-matched pair of photodetectors, our algorithm utilizes phase-sensitive differential detection on a single photodiode and is capable of real-time correction for instabilities in the supercontinuum spectral structure over a broad range of wavelengths. The resulting method is universal in terms of applicable wavelengths and compatible with commercial spectrometers. We present a proof-of-principle experimental …
Third-generation intelligent IR focal plane arrays
NASA Astrophysics Data System (ADS)
Caulfield, H. John; Jack, Michael D.; Pettijohn, Kevin L.; Schlesselmann, John D.; Norworth, Joe
1998-03-01
SBRC is at the forefront of the industry in developing IR focal plane arrays, including multi-spectral technology and "third-generation" functions that mimic the human eye. Third-generation devices conduct advanced processing on or near the FPA, reducing bandwidth while performing needed functions such as automatic target recognition, uniformity correction and dynamic range enhancement. These devices represent a solution for processing the exorbitantly high bandwidth coming off large-area FPAs without sacrificing system sensitivity. SBRC's two-color approach leverages the company's HgCdTe technology to provide simultaneous multiband coverage, from short-wave through long-wave IR, with near-theoretical performance. IR systems that are sensitive to different spectral bands achieve enhanced capabilities for target identification and advanced discrimination. This paper provides a summary of the issues, the technology and the benefits of SBRC's third-generation smart and two-color FPAs.
Furutani, Shunsuke; Hagihara, Yoshihisa; Nagai, Hidenori
2017-09-01
Correct labeling of foods is critical for consumers who wish to avoid a specific meat species for religious or cultural reasons. Therefore, gene-based point-of-care food analysis by real-time polymerase chain reaction (PCR) is expected to contribute to quality control in the food industry. In this study, we perform rapid identification of meat species with our portable rapid real-time PCR system, following a very simple DNA extraction method. Applying these techniques, we correctly identified beef, pork, chicken, rabbit, horse, and mutton in processed foods in 20 min. Our system was sensitive enough to detect the interfusion of about 0.1% chicken egg-derived DNA in a processed food sample. Our rapid real-time PCR system is expected to contribute to quality control in the food industry because it can be applied to the identification of meat species, and future applications can expand its functionality to the detection of genetically modified organisms or mutations. Copyright © 2017 Elsevier Ltd. All rights reserved.
TH-AB-201-04: Clinical Protocol for Reuse of Optically Stimulated Luminescence Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graeper, G; Pillai, S
Purpose: OSLDs made of Al2O3:C have many useful dosimetric characteristics, including their ability to be reused; the signal on an OSLD can be removed through heat or light. The objective of this study was to characterize the change in sensitivity associated with annealing OSLDs with light and, in doing so, define a range through which reuse is viable. Methods: Four groups of nanoDot OSLDs were repeatedly irradiated and bleached to create an accumulated dose history. Each group's repeated irradiation remained constant at either 50, 100, 200, or 500 cGy. Before both irradiation and bleaching, the OSLDs were read out, giving the dose reading and ensuring that ample bleaching had occurred, respectively. New and used OSLDs were compared in several clinical situations to verify accuracy. One final test involved correcting the readout dose based on the loss of sensitivity seen in the accumulated dose data. Results: In the first 40 Gy of accumulated dose the sensitivity can be broken into two regions: a region of sensitivity change and a region of no sensitivity change. From 0 cGy to an average of 1080 cGy the sensitivity does not change. After 1080 cGy the sensitivity begins to decrease linearly, with an average slope of 0.00456 cGy lost per cGy accumulated after the cutoff point. The slope and cutoff point were used to correct readings in the final test, reducing the error from 6.8% to 3.9%. Conclusion: In the region of no sensitivity change, OSLDs can be reused without concern for the validity of their results. Readings must be corrected if OSLDs are to be used in the region of sensitivity change, above 1080 cGy. After 40 Gy, OSLDs must be retired because the sensitivity change reverses, making linear correction no longer feasible.
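The reported cutoff and slope suggest a simple linear sensitivity-loss correction of the following form. One interpretive assumption is flagged in the code: the abstract states the slope in absolute units (cGy of reading lost per cGy accumulated), which is normalized here by the dose delivered per irradiation to obtain a fractional loss.

```python
# Linear sensitivity-loss correction sketched from the reported fit:
# stable sensitivity up to ~1080 cGy accumulated dose, then linear decline.

CUTOFF_CGY = 1080.0
SLOPE_CGY_PER_CGY = 0.00456   # as reported; units are cGy lost per cGy

def corrected_reading(raw_cgy, accumulated_cgy, delivered_cgy):
    if accumulated_cgy <= CUTOFF_CGY:
        return raw_cgy                       # stable-sensitivity region
    # Assumption: convert the absolute slope to a fractional loss by
    # normalizing to the nominal dose delivered per irradiation.
    frac_loss = (SLOPE_CGY_PER_CGY * (accumulated_cgy - CUTOFF_CGY)
                 / delivered_cgy)
    return raw_cgy / (1.0 - frac_loss)

# nanoDot read after ~3000 cGy of history, nominal 200 cGy irradiations:
print(corrected_reading(raw_cgy=191.2, accumulated_cgy=3000.0,
                        delivered_cgy=200.0))   # ~200 cGy after correction
```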
Tamhane, Ashish A; Arfanakis, Konstantinos
2009-07-01
Periodically-rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) and Turboprop MRI are characterized by greatly reduced sensitivity to motion, compared to their predecessors, fast spin-echo (FSE) and gradient and spin-echo (GRASE), respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and determine acquisition strategies for optimal motion correction. It was demonstrated that blades with multiple lines allow more accurate estimation of motion than blades with fewer lines. Also, it was shown that Turboprop MRI is less sensitive to motion than PROPELLER. Furthermore, it was demonstrated that the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction are discussed for PROPELLER and Turboprop MRI. (c) 2009 Wiley-Liss, Inc.
Fuzzy method for pre-diagnosis of breast cancer from the Fine Needle Aspirate analysis
2012-01-01
Background: Across the globe, breast cancer is one of the leading causes of death among women and, currently, Fine Needle Aspirate (FNA) with visual interpretation is the easiest and fastest biopsy technique for the diagnosis of this deadly disease. Unfortunately, the ability of this method to diagnose cancer correctly when the disease is present varies greatly, from 65% to 98%. This article introduces a method to assist in the diagnosis and second opinion of breast cancer from the analysis of descriptors extracted from smears of breast mass obtained by FNA, with the use of computational intelligence resources - in this case, fuzzy logic. Methods: For FNA data acquisition, the Wisconsin Diagnostic Breast Cancer Data (WDBC), from the University of California at Irvine (UCI) Machine Learning Repository, available on the internet through the UCI domain, was used. The knowledge acquisition process was carried out by extraction and analysis of the numerical data of the WDBC and by interviews and discussions with medical experts. The PDM-FNA-Fuzzy was developed in four steps: 1) fuzzification stage; 2) rule base; 3) inference stage; and 4) defuzzification stage. Cross-validation was used in the tests, with three databases of gold-standard clinical cases randomly extracted from the WDBC. The final validation was performed by medical specialists in pathology, mastology and general practice, with gold-standard clinical cases, i.e. with known and clinically confirmed diagnoses. Results: The fuzzy method developed provides breast cancer pre-diagnosis with 98.59% sensitivity (correct pre-diagnosis of malignancies) and 85.43% specificity (correct pre-diagnosis of benign cases). Given the high sensitivity achieved, these results are considered satisfactory, both in the opinion of medical specialists in the aforementioned areas and by comparison with other studies involving breast cancer diagnosis using FNA. Conclusions: This paper presents an intelligent method to assist in the diagnosis and second opinion of breast cancer, using a fuzzy method capable of processing and sorting data extracted from smears of breast mass obtained by FNA, with satisfactory levels of sensitivity and specificity. The main contribution of the proposed method is the reduction of the variation in detection of malignant cases compared to the visual interpretation currently applied in FNA diagnosis. While the PDM-FNA-Fuzzy features a stable sensitivity of 98.59%, diagnosis by visual interpretation provides a sensitivity varying from 65% to 98%, a range that includes sensitivity levels below those considered satisfactory by medical specialists. Note that this method will be used in an Intelligent Virtual Environment to assist decision-making (IVEMI), which amplifies its contribution. PMID:23122391
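A minimal Mamdani-style sketch of the four listed stages, reduced to a single made-up descriptor and two toy rules; the actual PDM-FNA-Fuzzy uses many WDBC descriptors and an expert-derived rule base.

```python
# Toy fuzzy pre-diagnosis: fuzzification -> rule base -> inference ->
# defuzzification, on one hypothetical FNA descriptor (mean cell radius).

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def prediagnose(cell_radius):
    # 1) Fuzzification of the descriptor into linguistic terms
    small = tri(cell_radius, 0.0, 10.0, 15.0)
    large = tri(cell_radius, 12.0, 20.0, 30.0)
    # 2-3) Rule base and inference (two toy rules):
    #      IF radius IS small THEN benign; IF radius IS large THEN malignant
    benign, malignant = small, large
    # 4) Defuzzification: weighted average of class centroids (0 and 1)
    if benign + malignant == 0.0:
        return 0.5                       # no rule fires: undecided
    return malignant / (benign + malignant)   # > 0.5 suggests malignancy

print(prediagnose(11.0), prediagnose(22.0))   # 0.0 (benign), 1.0 (malignant)
```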
NASA Astrophysics Data System (ADS)
Ohmae, Etsuko; Nishio, Shinichiro; Oda, Motoki; Suzuki, Hiroaki; Suzuki, Toshihiko; Ohashi, Kyoichi; Koga, Shunsaku; Yamashita, Yutaka; Watanabe, Hiroshi
2014-06-01
Near-infrared spectroscopy (NIRS) has been used for noninvasive assessment of oxygenation in living tissue. For muscle measurements by NIRS, the measurement sensitivity to muscle (S) is strongly influenced by fat thickness (FT). In this study, we investigated the influence of FT and developed a correction curve for S with an optode distance (3 cm) sufficiently large to probe the muscle. First, we measured the hemoglobin concentration in the forearm (n = 36) and thigh (n = 6) during arterial occlusion using a time-resolved spectroscopy (TRS) system, and then FT was measured by ultrasound. The correction curve was derived from the ratio of the partial mean optical path length of the muscle layer …
Hsu, Shu-Hui; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Feng, Mary; Grodzki, David M.; Balter, James M.
2015-01-01
Accurate separation of air and bone is critical for creating synthetic CT from MRI to support the radiation oncology workflow. This study compares two different ultrashort echo-time sequences for the separation of air from bone, and evaluates post-processing methods that correct intensity nonuniformity of images and account for intensity gradients at tissue boundaries to improve this discriminatory power. CT and MRI scans were acquired for 12 patients under an institutional review board-approved prospective protocol. The two MRI sequences tested were ultrashort-TE imaging using 3D radial acquisition (UTE) and pointwise encoding time reduction with radial acquisition (PETRA). Gradient nonlinearity correction was applied to both MR image volumes after acquisition. MRI intensity nonuniformity was corrected by vendor-provided normalization methods, and then further corrected using the N4itk algorithm. To overcome the intensity gradient at air-tissue boundaries, spatial dilations from 0 to 4 mm were applied to threshold-defined air regions from the MR images. Receiver operating characteristic (ROC) analyses, comparing predicted (MR-defined) versus "true" (CT-defined) regions of air and bone, were performed with and without residual bias field correction and local spatial expansion. The post-processing corrections increased the areas under the ROC curves (AUC) from 0.944 ± 0.012 to 0.976 ± 0.003 for UTE images, and from 0.850 ± 0.022 to 0.887 ± 0.012 for PETRA images, compared to no corrections. When expanding the threshold-defined air volumes, as expected, the sensitivity of air identification decreased as the specificity of bone discrimination increased, but in a non-linear fashion. A 1-mm air mask expansion yielded AUC increases of 1% and 4% for UTE and PETRA images, respectively. UTE images had significantly greater discriminatory power in separating air from bone than PETRA images. Post-processing strategies improved the discriminatory power for both UTE and PETRA images, and reduced the difference between the two imaging sequences. Both post-processed UTE and PETRA images demonstrated sufficient power to discriminate air from bone to support synthetic CT generation from MRI data. PMID:25776205
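The boundary-gradient workaround described above, thresholding then dilating the air mask, can be sketched as follows; the threshold, voxel size, and test volume are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

# Air voxels are first defined by an intensity threshold on the
# (bias-corrected) MR image, then dilated by a fixed margin to absorb the
# intensity gradient at air-tissue boundaries.

def dilated_air_mask(image, threshold, dilation_mm, voxel_mm=1.0):
    air = image < threshold                 # low signal -> candidate air
    if dilation_mm <= 0:
        return air
    iters = int(round(dilation_mm / voxel_mm))
    return ndimage.binary_dilation(air, iterations=iters)

rng = np.random.default_rng(7)
img = rng.uniform(0.1, 1.0, (32, 32, 32))   # synthetic tissue signal
img[8:16, 8:16, 8:16] = 0.02                # synthetic air pocket
mask = dilated_air_mask(img, threshold=0.05, dilation_mm=1.0)
print(mask.sum())   # 896: the 8^3 pocket grown by one voxel on each face
```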
Rosskopf, Johannes; Müller, Hans-Peter; Dreyhaupt, Jens; Gorges, Martin; Ludolph, Albert C; Kassubek, Jan
2015-03-01
Diffusion tensor imaging (DTI) for assessing ALS-associated white matter alterations has still not reached the level of a neuroimaging biomarker. Since large-scale multicentre DTI studies in ALS may be hampered by differences in scanning protocols, an approach for pooling DTI data acquired with different protocols was investigated. Three hundred and nine datasets from 170 ALS patients and 139 controls were collected ex post facto from a monocentric database, reflecting different scanning protocols. A 3D correction algorithm was introduced for combined analysis of DTI metrics despite the different acquisition protocols, with the focus on the corticospinal tract (CST) as the tract correlate of ALS neuropathological stage 1. A homogeneous set of data was obtained by application of 3D correction matrices. Results showed that a fractional anisotropy (FA) threshold of 0.41 could be defined to discriminate ALS patients from controls (sensitivity/specificity, 74%/72%). For the remaining test sample, sensitivity/specificity values of 68%/74% were obtained. In conclusion, the objective was to merge data recorded with different DTI protocols using 3D correction matrices for analysis at the group level. These post-processing tools might facilitate the analysis of large study samples in a multicentre setting, to aid in establishing DTI as a non-invasive biomarker for ALS.
Rigorous ILT optimization for advanced patterning and design-process co-optimization
NASA Astrophysics Data System (ADS)
Selinidis, Kosta; Kuechler, Bernd; Cai, Howard; Braam, Kyle; Hoppe, Wolfgang; Domnenko, Vitaly; Poonawala, Amyn; Xiao, Guangming
2018-03-01
Despite the large difficulties involved in extending 193i multiple patterning and the slow ramp of EUV lithography to full manufacturing readiness, the pace of development of new technology-node variations has been accelerating. Multiple new variations of new and existing technology nodes have been introduced for a range of device applications, each variation with at least a few new process integration methods, layout constructs and/or design rules. This has led to a strong increase in demand for predictive technology tools that can be used to quickly guide important patterning and design co-optimization decisions. In this paper, we introduce a novel hybrid predictive patterning method combining two patterning technologies, each of which has individually been widely used for process tuning, mask correction and process-design co-optimization: rigorous lithography simulation and inverse lithography technology (ILT). Rigorous lithography simulation has been used extensively for process development and tuning, lithography tool setup, photoresist hot-spot detection, photoresist-etch interaction analysis, lithography-TCAD interactions and sensitivities, source optimization and basic lithography design rule exploration. ILT has been used extensively in a range of lithographic areas including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies therefore has a wide range of useful applications. We investigate the benefits of the new functionality for a few of these advanced applications, including correction for photoresist top-loss and resist-scumming hotspots.
McCabe, Ciara; Rocha-Rego, Vanessa
2016-01-01
Background: Dysfunctional neural responses to appetitive and aversive stimuli have been investigated as possible biomarkers for psychiatric disorders. However, it is not clear to what degree these are separate processes across the brain or in fact overlapping systems. To help clarify this issue we used Gaussian process classifier (GPC) analysis to examine appetitive and aversive processing in the brain. Method: 25 healthy controls underwent functional MRI whilst seeing pictures and receiving tastes of pleasant and unpleasant food. We applied GPCs to discriminate between the appetitive and aversive sights and tastes using functional activity patterns. Results: The accuracy of the GPC in discriminating appetitive taste from the neutral condition was 86.5% (specificity = 81%, sensitivity = 92%, p = 0.001); if a participant experienced neutral taste stimuli, the probability of correct classification was 92%. The accuracy in discriminating aversive from neutral taste stimuli was 82.5% (specificity = 73%, sensitivity = 92%, p = 0.001), and appetitive from aversive taste stimuli 73% (specificity = 77%, sensitivity = 69%, p = 0.001). In the sight modality, the accuracy in discriminating the appetitive from the neutral condition was 88.5% (specificity = 85%, sensitivity = 92%, p = 0.001), aversive from neutral sight stimuli 92% (specificity = 92%, sensitivity = 92%, p = 0.001), and aversive from appetitive sight stimuli 63.5% (specificity = 73%, sensitivity = 54%, p = 0.009). Conclusions: Our results demonstrate the predictive value of neurofunctional data in discriminating emotional and neutral networks of activity in the healthy human brain. It would be of interest to use pattern recognition techniques and fMRI to examine network dysfunction in the processing of appetitive, aversive and neutral stimuli in psychiatric disorders, especially where problems with reward and punishment processing have been implicated in the pathophysiology of the disorder. PMID:27870866
Effect of aberrations in human eye on contrast sensitivity function
NASA Astrophysics Data System (ADS)
Quan, Wei; Wang, Feng-lin; Wang, Zhao-qi
2011-06-01
The quantitative analysis of the effect of aberrations in the human eye on vision has important clinical value for the correction of aberrations. The wave-front aberrations of human eyes were measured with a Hartmann-Shack wave-front sensor, and the modulation transfer function (MTF) was computed from the wave-front aberrations. The contrast sensitivity function (CSF) was obtained from the MTF and the retinal aerial image modulation (AIM). It is shown that the 2nd- to 6th-order Zernike aberrations deteriorate the contrast sensitivity function, and that when these aberrations are corrected a high contrast sensitivity function can be obtained.
[The design and applications of a non-invasive intelligent detector for cardiovascular functions].
Li, Feng; Xing, Wu; Chen, Ming-zhi; Shang, Huai
2006-05-01
An apparatus for detecting cardiovascular function, based on a highly sensitive sensor, is introduced in this paper. Intelligent detection technologies, such as syntactic pattern recognition and a medical expert system, are used in this detector. Its embedded single-chip microcomputer processes and analyzes pulse signals to automatically obtain parameters related to the heart, blood vessels, and blood, so as to support health evaluation, correct medical diagnosis, and prediction of cardiovascular diseases.
Pakhomov, Serguei Vs; Shah, Nilay D; Hanson, Penny; Balasubramaniam, Saranya C; Smith, Steven A
2010-01-01
Low-dose aspirin reduces cardiovascular risk; however, monitoring over-the-counter medication use relies on the time-consuming and costly manual review of medical records. Our objective is to validate natural language processing (NLP) of the electronic medical record (EMR) for extracting medication exposure and contraindication information. The text of EMRs for 499 patients with type 2 diabetes was searched using NLP for evidence of aspirin use and its contraindications. The results were compared to a standardised manual records review. Of the 499 patients, 351 (70%) were using aspirin and 148 (30%) were not, according to manual review. NLP correctly identified 346 of the 351 aspirin-positive and 134 of the 148 aspirin-negative patients, indicating a sensitivity of 99% (95% CI 97-100) and specificity of 91% (95% CI 88-97). Of the 148 aspirin-negative patients, 66 (45%) had contraindications and 82 (55%) did not, according to manual review. NLP search for contraindications correctly identified 61 of the 66 patients with contraindications and 58 of the 82 patients without, yielding a sensitivity of 92% (95% CI 84-97) and a specificity of 71% (95% CI 60-80). NLP of the EMR is accurate in ascertaining documented aspirin use and could potentially be used for epidemiological research as a source of cardiovascular risk factor information.
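As a quick worked check, the reported operating characteristics follow directly from the stated counts:

```python
# Sensitivity and specificity of the NLP aspirin-use search, computed from
# the counts given in the abstract (346/351 positives, 134/148 negatives).

def se_sp(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

se, sp = se_sp(tp=346, fn=351 - 346, tn=134, fp=148 - 134)
print(f"sensitivity = {se:.1%}, specificity = {sp:.1%}")  # ~98.6%, ~90.5%
```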
NASA Astrophysics Data System (ADS)
Koon, Daniel W.; Wang, Fei; Petersen, Dirch Hjorth; Hansen, Ole
2014-10-01
We derive exact, analytic expressions for the sensitivity of sheet resistance and Hall sheet resistance measurements to local inhomogeneities for the cases of nonzero magnetic fields, strong perturbations, and perturbations over a finite area, extending our earlier results on weak perturbations. We express these sensitivities for conductance tensor components and for other charge transport quantities. Both the resistive and Hall sensitivities, for a van der Pauw specimen in a finite magnetic field, are a superposition of the zero-field sensitivities to both sheet resistance and Hall sheet resistance. Strong perturbations produce a nonlinear correction term that depends on the strength of the inhomogeneity. Solution of the specific case of a finite-sized circular inhomogeneity coaxial with a circular specimen suggests a first-order correction for the general case. Our results are confirmed by computer simulations on both a linear four-point probe array on a large circular disc and a van der Pauw square geometry. Furthermore, the results agree well with the experimental results published by Náhlík et al. for physical holes in a circular copper foil disc.
Neural evidence for enhanced error detection in major depressive disorder.
Chiu, Pearl H; Deldin, Patricia J
2007-04-01
Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.
Sensitivity estimation in time-of-flight list-mode positron emission tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herraiz, J. L.; Sitek, A., E-mail: sarkadiu@gmail.com
Purpose: An accurate quantification of the images in positron emission tomography (PET) requires knowing the actual sensitivity at each voxel, which represents the probability that a positron emitted in that voxel is finally detected as a coincidence of two gamma rays in a pair of detectors in the PET scanner. This sensitivity depends on the characteristics of the acquisition, as it is affected by the attenuation of the annihilation gamma rays in the body and by possible variations in the sensitivity of the scanner detectors. In this work, the authors propose a new approach to handle time-of-flight (TOF) list-mode PET data, which allows performing either or both a self-attenuation correction and a self-normalization correction based on emission data only. Methods: The authors derive the theory using a fully Bayesian statistical model of complete data. The authors perform an initial evaluation of algorithms derived from that theory and proposed in this work using numerical 2D list-mode simulations with different TOF resolutions and total numbers of detected coincidences. Effects of randoms and scatter are not simulated. Results: The authors found that the proposed algorithms successfully correct for unknown attenuation and scanner normalization for simulated 2D list-mode TOF-PET data. Conclusions: A new method is presented that can be used for corrections for attenuation and normalization (sensitivity) using TOF list-mode data.
Sensitivity estimation in time-of-flight list-mode positron emission tomography.
Herraiz, J L; Sitek, A
2015-11-01
An accurate quantification of the images in positron emission tomography (PET) requires knowing the actual sensitivity at each voxel, which represents the probability that a positron emitted in that voxel is finally detected as a coincidence of two gamma rays in a pair of detectors in the PET scanner. This sensitivity depends on the characteristics of the acquisition, as it is affected by the attenuation of the annihilation gamma rays in the body and by possible variations in the sensitivity of the scanner detectors. In this work, the authors propose a new approach to handling time-of-flight (TOF) list-mode PET data, which allows performing a self-attenuation correction, a self-normalization correction, or both, based on emission data only. The authors derive the theory using a fully Bayesian statistical model of complete data. They perform an initial evaluation of algorithms derived from that theory and proposed in this work, using numerical 2D list-mode simulations with different TOF resolutions and total numbers of detected coincidences. Effects of randoms and scatter are not simulated. The authors found that the proposed algorithms successfully correct for unknown attenuation and scanner normalization for simulated 2D list-mode TOF-PET data. A new method is presented that can be used for corrections for attenuation and normalization (sensitivity) using TOF list-mode data.
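As an illustration of the sensitivity the abstract above defines, here is a minimal sketch of how a voxel-wise sensitivity image is assembled in a discretized model; the system matrix P, attenuation factors, and detector-pair efficiencies are hypothetical inputs, not the authors' Bayesian estimator:

```python
import numpy as np

def sensitivity_image(P, atten, eff):
    """Voxel sensitivity s_j = sum_i P[i, j] * atten[i] * eff[i]: the
    probability that a decay in voxel j is detected as a coincidence.

    P     : (n_lors, n_voxels) geometric detection probabilities
    atten : (n_lors,) survival probability of both gammas along each LOR
    eff   : (n_lors,) combined efficiency of the detector pair
    """
    return P.T @ (atten * eff)
```

The paper's contribution is estimating the attenuation and efficiency factors from the TOF emission data themselves, rather than from separate transmission or normalization scans.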
NASA Astrophysics Data System (ADS)
Lavender, Samantha; Brito, Fabrice; Aas, Christina; Casu, Francesco; Ribeiro, Rita; Farres, Jordi
2014-05-01
Data challenges are becoming a new method of promoting innovation within data-intensive applications, building or evolving user communities and potentially developing sustainable commercial services. These challenges can utilise the vast amount of information (both in scope and volume) that is available online, and profit from reduced processing costs. Data challenges are also closely related to the recent paradigm shift towards e-Science, also referred to as "data-intensive science". The E-CEO project aims to deliver a collaborative platform that, through Data Challenge Contests, will improve the adoption and outreach of new applications and methods to process Earth Observation (EO) data. Underneath, the backbone must be a common environment where the applications can be developed, deployed and executed. The results then need to be easily published in a common visualization platform for effective validation, evaluation and transparent peer comparison. Contest #3 is based around the atmospheric correction (AC) of ocean colour data, with a particular focus on the use of auxiliary data files for processing Level 1 products (Top of Atmosphere, TOA, calibrated radiances/reflectances) to Level 2 products (Bottom of Atmosphere, BOA, calibrated radiances/reflectances and derived products). Scientific researchers commonly accept the auxiliary inputs they have been provided with and/or use the climatological data that accompanies the processing software, often because it can be difficult to obtain multiple data sources and convert them into a format the software accepts. It is therefore proposed to compare various ocean colour AC approaches and, in the process, to study the uncertainties associated with using different meteorological auxiliary products for the processing of Medium Resolution Imaging Spectrometer (MERIS) data, i.e. the sensitivity of the atmospheric correction to different input assumptions.
The accuracy of parent-reported height and weight for 6-12 year old U.S. children.
Wright, Davene R; Glanz, Karen; Colburn, Trina; Robson, Shannon M; Saelens, Brian E
2018-02-12
Previous studies have examined correlations between BMI calculated using parent-reported and directly measured child height and weight. The objective of this study was to validate correction factors for parent-reported child measurements. Concordance between parent-reported and investigator-measured child height, weight, and BMI (kg/m²) among participants in the Neighborhood Impact on Kids Study (n = 616) was examined using the Lin coefficient, where a value of ±1.0 indicates perfect concordance and a value of zero denotes non-concordance. A correction model for parent-reported height, weight, and BMI based on commonly collected demographic information was developed using 75% of the sample. This model was used to estimate corrected measures for the remaining 25% of the sample and to measure concordance between corrected parent-reported and investigator-measured values. Accuracy of corrected values in classifying children as overweight/obese was assessed by sensitivity and specificity. Concordance between parent-reported and measured height, weight and BMI was low (0.007, -0.039, and -0.005 respectively). Concordance in the corrected test samples improved to 0.752 for height, 0.616 for weight, and 0.227 for BMI. Sensitivity of corrected parent-reported measures for predicting overweight and obesity among children in the test sample decreased from 42.8 to 25.6%, while specificity improved from 79.5 to 88.6%. Correction factors improved concordance for height and weight but did not improve the sensitivity of parent-reported measures for measuring child overweight and obesity. Future research should be conducted using larger and more nationally representative samples that allow researchers to fully explore demographic variance in correction coefficients.
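The Lin concordance coefficient used in this validation has a closed form; a minimal sketch follows (variable names are illustrative, and this is not the study's code):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient: 1 means perfect agreement,
    0 means non-concordance; it penalizes both poor correlation and
    location/scale shifts between the two sets of measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov_xy = np.cov(x, y, bias=True)[0, 1]
    return 2 * cov_xy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

A correction model in the spirit described above would be an ordinary regression of measured values on parent-reported values plus demographics, fitted on the 75% training split and applied to the held-out 25%.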
van Honk, Jack; Schutter, Dennis J L G
2007-08-01
Elevated levels of testosterone have repeatedly been associated with antisocial behavior, but the psychobiological mechanisms underlying this effect are unknown. However, testosterone is evidently capable of altering the processing of facial threat, and facial signals of fear and anger serve sociality through their higher-level empathy-provoking and socially corrective properties. We investigated the hypothesis that testosterone predisposes people to antisocial behavior by reducing conscious recognition of facial threat. In a within-subjects design, testosterone (0.5 mg) or placebo was administered to 16 female volunteers. Afterward, a task with morphed stimuli indexed their sensitivity for consciously recognizing the facial expressions of threat (disgust, fear, and anger) and nonthreat (surprise, sadness, and happiness). Testosterone induced a significant reduction in the conscious recognition of facial threat overall. Separate analyses for the three categories of threat faces indicated that this effect was reliable for angry facial expressions exclusively. This testosterone-induced impairment in the conscious detection of the socially corrective facial signal of anger may predispose individuals to antisocial behavior.
Wyma, John M.; Herron, Timothy J.; Yund, E. William; Reed, Bruce
2018-01-01
The Paced Auditory Serial Addition Test (PASAT) is widely used to evaluate processing speed and executive function in patients with multiple sclerosis, traumatic brain injury, and other neurological disorders. In the PASAT, subjects listen to sequences of digits while continuously reporting the sum of the last two digits presented. Four different stimulus onset asynchronies (SOAs) are usually tested, with difficulty increasing as SOAs are reduced. Ceiling effects are common at long SOAs, while the digit delivery rate often exceeds the subject's processing capacity at short SOAs, causing some subjects to stop performing altogether. In addition, subjects may adopt an "alternate answer" strategy at short SOAs, which reduces the test's demands on working memory and processing speed. Consequently, studies have shown that the number of dyads (consecutive correct answers) is a more sensitive measure of PASAT performance than the overall number of correct sums. Here, we describe a 2.5-minute computerized test, the Dyad-Adaptive PASAT (DA-PASAT), where SOAs are adjusted with a 2:1 staircase, decreasing after each pair of correct responses and increasing after misses. Processing capacity is reflected in the minimum SOA (minSOA) achieved in 54 trials. Experiment 1 gathered normative data in two large populations: 1617 subjects in New Zealand ranging in age from 18 to 65 years, and 214 Californians ranging in age from 18 to 82 years. Minimum SOAs were influenced by age, education, and daily hours of computer use. Minimum SOA z-scores, calculated after factoring out the influence of these factors, were virtually identical in the two control groups, as were response times (RTs) and dyad ratios (the proportion of hits occurring in dyads). Experiment 2 measured the test-retest reliability of the DA-PASAT in 44 young subjects who underwent three test sessions at weekly intervals. High intraclass correlation coefficients (ICCs) were found for minSOAs (0.87), response times (0.76), and dyad ratios (0.87). Performance improved across test sessions for all measures. Experiment 3 investigated the effects of simulated malingering in 50 subjects: 42% of simulated malingerers produced abnormal (p < 0.05) minSOA z-scores. Simulated malingerers with abnormal scores were distinguished with 87% sensitivity and 69% specificity from control subjects with abnormal scores by excessive differences between training performance and the actual test. Experiment 4 investigated patients with traumatic brain injury (TBI): patients with mild TBI performed within the normal range, while patients with severe TBI showed deficits. The DA-PASAT reduces the time and stress of PASAT assessment while gathering sensitive measures of dyad processing that reveal the effects of aging, malingering, and traumatic brain injury on performance. PMID:29677192
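The 2:1 staircase at the heart of the DA-PASAT is easy to state in code; the step size and SOA bounds below are assumptions for illustration, not the test's published parameters:

```python
def update_soa(soa, correct, streak, step=0.1, lo=0.5, hi=4.0):
    """2:1 adaptive staircase: the stimulus onset asynchrony (SOA, in s)
    shortens after every pair of consecutive correct responses and
    lengthens after every miss. Returns (new_soa, new_streak)."""
    if correct:
        streak += 1
        if streak == 2:                    # two in a row -> make it harder
            return max(lo, soa - step), 0
        return soa, streak
    return min(hi, soa + step), 0          # miss -> make it easier
```

Tracking the smallest SOA reached across the 54 trials yields the minSOA capacity measure described above.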
On the independence of visual awareness and metacognition: a signal detection theoretic analysis.
Jachs, Barbara; Blanco, Manuel J; Grantham-Hill, Sarah; Soto, David
2015-04-01
Classically, visual awareness and metacognition are thought to be intimately linked, with our knowledge of the correctness of perceptual choices (henceforth metacognition) being dependent on the level of stimulus awareness. Here we used a signal detection theoretic approach involving a Gabor orientation discrimination task in conjunction with trial-by-trial ratings of perceptual awareness and response confidence in order to gauge estimates of type-1 (perceptual) orientation sensitivity and type-2 (metacognitive) sensitivity at different levels of stimulus awareness. Data from three experiments indicate that while the level of stimulus awareness had a profound impact on type-1 perceptual sensitivity, the awareness effect on type-2 metacognitive sensitivity was far lower by comparison. The present data pose a challenge for signal detection theoretic models in which both type-1 (perceptual) and type-2 (metacognitive) processes are assumed to operate on the same input. More broadly, the findings challenge the commonly held view that metacognition is tightly coupled to conscious states. (c) 2015 APA, all rights reserved.
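A sketch of the two sensitivity estimates being contrasted above, under the standard equal-variance Gaussian model; a full meta-d′ treatment needs more machinery, so this only conveys the shape of the analysis:

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, eps=1e-4):
    """Type-1 (perceptual) sensitivity; rates are clipped away from 0 and 1
    so the z-transform stays finite."""
    h = min(max(hit_rate, eps), 1 - eps)
    f = min(max(fa_rate, eps), 1 - eps)
    return norm.ppf(h) - norm.ppf(f)

def type2_sensitivity(p_highconf_correct, p_highconf_error):
    """Type-2 (metacognitive) analogue: confidence discriminates correct
    from incorrect choices, so 'high confidence given correct' plays the
    role of a hit and 'high confidence given error' of a false alarm."""
    return d_prime(p_highconf_correct, p_highconf_error)
```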
Role of chemoreception in cardiorespiratory acclimatization to, and deacclimatization from, hypoxia
Powell, Frank L.; Bisgard, Gerald E.; Blain, Gregory M.; Poulin, Marc J.; Smith, Curtis A.
2013-01-01
During sojourn to high altitudes, progressive time-dependent increases occur in ventilation and in sympathetic nerve activity over several days, and these increases persist upon acute restoration of normoxia. We discuss evidence concerning potential mediators of these changes, including the following: 1) correction of alkalinity in cerebrospinal fluid; 2) increased sensitivity of carotid chemoreceptors; and 3) augmented translation of carotid chemoreceptor input (at the level of the central nervous system) into increased respiratory motor output via sensitization of hypoxic sensitive neurons in the central nervous system and/or an interdependence of central chemoreceptor responsiveness on peripheral chemoreceptor sensory input. The pros and cons of chemoreceptor sensitization and cardiorespiratory acclimatization to hypoxia and intermittent hypoxemia are also discussed in terms of their influences on arterial oxygenation, the work of breathing, sympathoexcitation, systemic blood pressure, and exercise performance. We propose that these adaptive processes may have negative implications for the cardiovascular health of patients with sleep apnea and perhaps even for athletes undergoing regimens of “sleep high-train low”! PMID:24371017
Role of chemoreception in cardiorespiratory acclimatization to, and deacclimatization from, hypoxia.
Dempsey, Jerome A; Powell, Frank L; Bisgard, Gerald E; Blain, Gregory M; Poulin, Marc J; Smith, Curtis A
2014-04-01
During sojourn to high altitudes, progressive time-dependent increases occur in ventilation and in sympathetic nerve activity over several days, and these increases persist upon acute restoration of normoxia. We discuss evidence concerning potential mediators of these changes, including the following: 1) correction of alkalinity in cerebrospinal fluid; 2) increased sensitivity of carotid chemoreceptors; and 3) augmented translation of carotid chemoreceptor input (at the level of the central nervous system) into increased respiratory motor output via sensitization of hypoxic sensitive neurons in the central nervous system and/or an interdependence of central chemoreceptor responsiveness on peripheral chemoreceptor sensory input. The pros and cons of chemoreceptor sensitization and cardiorespiratory acclimatization to hypoxia and intermittent hypoxemia are also discussed in terms of their influences on arterial oxygenation, the work of breathing, sympathoexcitation, systemic blood pressure, and exercise performance. We propose that these adaptive processes may have negative implications for the cardiovascular health of patients with sleep apnea and perhaps even for athletes undergoing regimens of "sleep high-train low"!
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy. The CLAHE technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, quotient-based filtering, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique, such as CLAHE, to fundus images showed good potential for enhancing vasculature segmentation.
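A minimal sketch of the dividing method and the CLAHE step with standard Python imaging tools; the kernel size and clip limit are illustrative choices, not the values used in the study:

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure

def divide_correct(channel, kernel=65):
    """Dividing method: estimate the background illumination with a large
    median filter, divide it out, and rescale to [0, 1]."""
    background = median_filter(channel.astype(float), size=kernel)
    corrected = channel / np.maximum(background, 1e-6)
    return corrected / corrected.max()

def clahe(channel, clip=0.01):
    """Contrast-limited adaptive histogram equalization; expects a
    float image scaled to [0, 1]."""
    return exposure.equalize_adapthist(channel, clip_limit=clip)
```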
Rosenthal, Eben L; Moore, Lindsay S; Tipirneni, Kiranya; de Boer, Esther; Stevens, Todd M; Hartman, Yolanda E; Carroll, William R; Zinn, Kurt R; Warram, Jason M
2017-08-15
Purpose: Comprehensive cervical lymphadenectomy can be associated with significant morbidity and poor quality of life. This study evaluated the sensitivity and specificity of cetuximab-IRDye800CW to identify metastatic disease in patients with head and neck cancer. Experimental Design: Consenting patients scheduled for curative resection were enrolled in a clinical trial to evaluate the safety and specificity of cetuximab-IRDye800CW. Patients (n = 12) received escalating doses of the study drug. Where indicated, cervical lymphadenectomy accompanied primary tumor resection, which occurred 3 to 7 days following intravenous infusion of cetuximab-IRDye800CW. All 471 dissected lymph nodes were imaged with a closed-field, near-infrared imaging device during gross processing of the fresh specimens. Intraoperative imaging of exposed neck levels was performed with an open-field fluorescence imaging device. Blinded assessments of the fluorescence data were compared to histopathology to calculate sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV). Results: Of the 35 nodes diagnosed pathologically positive, 34 were correctly identified with fluorescence imaging, yielding a sensitivity of 97.2%. Of the 435 pathologically negative nodes, 401 were correctly assessed using fluorescence imaging, yielding a specificity of 92.7%. The NPV was determined to be 99.7%, and the PPV was 50.7%. When 37 fluorescently false-positive nodes were sectioned deeper (1 mm) into their respective blocks, metastatic cancer was found in 8.1% of the recut nodal specimens, which altered staging in two of those cases. Conclusions: Fluorescence imaging of lymph nodes after systemic cetuximab-IRDye800CW administration demonstrated high sensitivity and was capable of identifying additional positive nodes on deep sectioning. Clin Cancer Res; 23(16); 4744-52. ©2017 American Association for Cancer Research.
Grandchamp, Romain; Delorme, Arnaud
2011-01-01
In electroencephalography, the classical event-related potential model often proves to be a limited method to study complex brain dynamics. For this reason, spectral techniques adapted from signal processing, such as event-related spectral perturbation (ERSP) and its variants event-related synchronization and event-related desynchronization, have been used over the past 20 years. They represent average spectral changes in response to a stimulus. These spectral methods do not have a strong consensus for comparing pre- and post-stimulus activity. When computing ERSP, pre-stimulus baseline removal is usually performed after averaging the spectral estimates of multiple trials. Correcting the baseline of each single trial prior to averaging spectral estimates is an alternative baseline correction method. However, we show that this method leads to positively skewed post-stimulus ERSP values. We eventually present new single-trial-based ERSP baseline correction methods that perform trial normalization or centering prior to applying classical baseline correction methods. We show that single-trial correction methods minimize the contribution of artifactual data trials with high-amplitude spectral estimates and are robust to outliers when performing statistical inference testing. We then characterize these methods in terms of their time-frequency responses and behavior compared to classical ERSP methods. PMID:21994498
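One family of single-trial corrections in the spirit of the abstract above (trial normalization before classical baseline division) can be sketched as follows, for a trials × frequencies × times array of spectral power; this is an illustration, not the authors' code:

```python
import numpy as np

def ersp_single_trial(power, base_idx):
    """power: (trials, freqs, times) spectral estimates; base_idx: index
    slice of pre-stimulus samples. Each trial is first normalized by its
    own full-epoch mean power (damping high-amplitude artifact trials),
    then divided by its own baseline, averaged, and converted to dB."""
    norm = power / power.mean(axis=2, keepdims=True)
    base = norm[:, :, base_idx].mean(axis=2, keepdims=True)
    return 10 * np.log10((norm / base).mean(axis=0))
```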
NASA Astrophysics Data System (ADS)
Israel, Holger; Massey, Richard; Prod'homme, Thibaut; Cropper, Mark; Cordes, Oliver; Gow, Jason; Kohley, Ralf; Marggraf, Ole; Niemi, Sami; Rhodes, Jason; Short, Alex; Verhoeve, Peter
2015-10-01
Radiation damage to space-based charge-coupled device detectors creates defects which result in an increasing charge transfer inefficiency (CTI) that causes spurious image trailing. Most of the trailing can be corrected during post-processing, by modelling the charge trapping and moving electrons back to where they belong. However, such correction is not perfect - and damage is continuing to accumulate in orbit. To aid future development, we quantify the limitations of current approaches, and determine where imperfect knowledge of model parameters most degrades measurements of photometry and morphology. As a concrete application, we simulate 1.5 × 10⁹ 'worst-case' galaxy and 1.5 × 10⁸ star images to test the performance of the Euclid visual instrument detectors. There are two separable challenges. If the model used to correct CTI is perfectly the same as that used to add CTI, 99.68 per cent of spurious ellipticity is corrected in our setup. This is because readout noise is not subject to CTI, but gets overcorrected during correction. Secondly, if we assume the first issue to be solved, knowledge of the charge trap density within Δρ/ρ = (0.0272 ± 0.0005) per cent and the characteristic release time of the dominant species to be known within Δτ/τ = (0.0400 ± 0.0004) per cent will be required. This work presents the next level of definition of in-orbit CTI calibration procedures for Euclid.
NASA Astrophysics Data System (ADS)
Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan
2015-06-01
An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters in the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14 year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using the Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using the Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods, and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 out of 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential to ET, whereas reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns mainly depend on hydroclimatic regime, as well as vegetation type and soil texture.
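A compact sketch of the quantitative stage using the SALib package in place of the authors' MARS-emulated Sobol' computation (a plain substitution; the parameter names, bounds, and toy model are placeholders, not CSSP quantities):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["ksat_ref_depth", "stomatal_slope", "soil_b"],  # illustrative
    "bounds": [[0.1, 10.0], [4.0, 12.0], [2.0, 12.0]],
}

def model(x):
    # stand-in for one CSSP run scored by NSE; toy nonlinear response
    return x[0] + x[1] * x[2]

X = saltelli.sample(problem, 1024)       # Saltelli sampling design
Y = np.array([model(x) for x in X])      # one score per parameter sample
Si = sobol.analyze(problem, Y)           # Si["S1"], Si["ST"], Si["S2"]
```

The first-order indices Si["S1"] attribute variance to single parameters, while the gap between Si["ST"] and Si["S1"] reflects the interaction effects the authors found to dominate.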
Brain single-photon emission CT physics principles.
Accorsi, R
2008-08-01
The basic principles of scintigraphy are reviewed and extended to 3D imaging. Single-photon emission computed tomography (SPECT) is a sensitive and specific 3D technique to monitor in vivo functional processes in both clinical and preclinical studies. SPECT/CT systems are becoming increasingly common and can provide accurately registered anatomic information as well. In general, SPECT is affected by low photon-collection efficiency, but in brain imaging, not all of the large field of view (FOV) of clinical gamma cameras is needed: the use of fan- and cone-beam collimation trades off the unused FOV for increased sensitivity and resolution. The design of dedicated cameras aims at increased angular coverage and resolution by minimizing the distance from the patient. The corrections needed for quantitative imaging are challenging but can take advantage of the relative spatial uniformity of attenuation and scatter. Preclinical systems can provide submillimeter resolution in small animal brain imaging with workable sensitivity.
Number-counts slope estimation in the presence of Poisson noise
NASA Technical Reports Server (NTRS)
Schmitt, Juergen H. M. M.; Maccacaro, Tommaso
1986-01-01
The slope determination of a power-law number-flux relationship is considered for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.
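For contrast with the Poisson treatment above, the noise-free slope MLE has a closed form; this sketch deliberately omits the measurement-error, background, and varying-sensitivity corrections that are the paper's subject:

```python
import numpy as np

def slope_mle(fluxes, s_min):
    """Closed-form MLE of the differential power-law index alpha for
    n(S) proportional to S**(-alpha) with S >= s_min, assuming fluxes
    are measured exactly (no Poisson error, single flux limit)."""
    s = np.asarray(fluxes, float)
    s = s[s >= s_min]
    return 1.0 + s.size / np.log(s / s_min).sum()
```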
Lu, Weiping; Gu, Dayong; Chen, Xingyun; Xiong, Renping; Liu, Ping; Yang, Nan; Zhou, Yuanguo
2010-10-01
The traditional techniques for diagnosis of invasive fungal infections in the clinical microbiology laboratory need improvement. These techniques are prone to delayed results due to their time-consuming processes, or to misidentification of the fungus due to low sensitivity or low specificity. The aim of this study was to develop a method for the rapid detection and identification of fungal pathogens. The internal transcribed spacer 2 fragments of fungal ribosomal DNA were amplified using the polymerase chain reaction for all samples. Next, the products were hybridized with probes immobilized on the surface of a microarray. These species-specific probes were designed to detect nine different clinical pathogenic fungi, including Candida albicans, Candida tropicalis, Candida glabrata, Candida parapsilosis, Candida krusei, Candida lusitaniae, Candida guilliermondii, Candida kefyr, and Cryptococcus neoformans. The hybridization signals were enhanced with gold nanoparticles and silver deposition, and detected using a flatbed scanner or visually. Fifty-nine strains of fungal pathogens, including standard and clinically isolated strains, were correctly identified by this method. The sensitivity of the assay for Candida albicans was 10 cells/mL. Ten cultures from clinical specimens and 12 clinical samples spiked with fungi were also identified correctly. This technique offers a reliable alternative to conventional methods for the detection and identification of fungal pathogens. It has higher efficiency, specificity and sensitivity compared with other methods commonly used in the clinical laboratory.
Induced polarization for characterizing and monitoring soil stabilization processes
NASA Astrophysics Data System (ADS)
Saneiyan, S.; Ntarlagiannis, D.; Werkema, D. D., Jr.
2017-12-01
Soil stabilization is critical in addressing engineering problems related to building foundation support, road construction and soil erosion, among others. To increase soil strength, the stiffness of the soil is enhanced through injection/precipitation of chemical agents or minerals. Methods such as cement injection and microbially induced carbonate precipitation (MICP) are commonly applied. Verification of a successful soil stabilization project is often challenging, as treatment areas are spatially extensive and invasive sampling is expensive, time consuming and limited to sporadic points at discrete times. The geophysical method of complex conductivity (CC) is sensitive to mineral surface properties, and hence a promising method to monitor soil stabilization projects. Previous laboratory work has established the sensitivity of CC to MICP processes. We performed a field MICP soil stabilization project and collected CC data for the duration of the treatment (15 days). Subsurface images show small, but very clear, changes in the area of MICP treatment; the changes observed fully agree with the bio-geochemical monitoring and previous laboratory experiments. Our results strongly suggest that CC is sensitive to field MICP treatments. Finally, our results show that good quality data alone are not adequate for the correct interpretation of field CC data, at least when the signals are low. Informed data processing routines and inverse modeling parameters are required to produce optimal results.
Karge, Lukas; Gilles, Ralph
2017-01-01
An improved data-reduction procedure is proposed and demonstrated for small-angle neutron scattering (SANS) measurements. Its main feature is the correction of geometry- and wavelength-dependent intensity variations on the detector in a separate step from the different pixel sensitivities: the geometric and wavelength effects can be corrected analytically, while pixel sensitivities have to be calibrated to a reference measurement. The geometric effects are treated for position-sensitive ³He proportional counter tubes, where they are anisotropic owing to the cylindrical geometry of the gas tubes. For the calibration of pixel sensitivities, a procedure is developed that is valid for isotropic and anisotropic signals. The proposed procedure can save a significant amount of beamtime which has hitherto been used for calibration measurements. PMID:29021734
Modeling motivated misreports to sensitive survey questions.
Böckenholt, Ulf
2014-07-01
Asking sensitive or personal questions in surveys or experimental studies can both lower response rates and increase item non-response and misreports. Although non-response is easily diagnosed, misreports are not. However, misreports cannot be ignored because they give rise to systematic bias. The purpose of this paper is to present a modeling approach that identifies misreports and corrects for them. Misreports are conceptualized as a motivated process under which respondents edit their answers before they report them. For example, systematic bias introduced by overreports of socially desirable behaviors or underreports of less socially desirable ones can be modeled, leading to more-valid inferences. The proposed approach is applied to a large-scale experimental study and shows that respondents who feel powerful tend to overclaim their knowledge.
Villiger, Martin; Zhang, Ellen Ziyi; Nadkarni, Seemantini K.; Oh, Wang-Yuhl; Vakoc, Benjamin J.; Bouma, Brett E.
2013-01-01
Polarization mode dispersion (PMD) has been recognized as a significant barrier to sensitive and reproducible birefringence measurements with fiber-based, polarization-sensitive optical coherence tomography systems. Here, we present a signal processing strategy that reconstructs the local retardation robustly in the presence of system PMD. The algorithm uses a spectral binning approach to limit the detrimental impact of system PMD and benefits from the final averaging of the PMD-corrected retardation vectors of the spectral bins. The algorithm was validated with numerical simulations and experimental measurements of a rubber phantom. When applied to the imaging of human cadaveric coronary arteries, the algorithm was found to yield a substantial improvement in the reconstructed birefringence maps. PMID:23938487
Reproducing American Sign Language sentences: cognitive scaffolding in working memory
Supalla, Ted; Hauser, Peter C.; Bavelier, Daphne
2014-01-01
The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects' recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies when they failed to recall the sentence correctly. A qualitative error analysis allows us to capture generalizations about the relationship between error pattern and the cognitive scaffolding which governs the sentence reproduction process. Highly fluent signers and less-fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with equivalent meaning to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less-fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less-fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are considered. PMID:25152744
Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo
2016-01-01
The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH group. This outcome pattern sustained when comparisons were restricted to subgroups of AHI and CHI adults, matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences, presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.
Patlewicz, Grace; Casati, Silvia; Basketter, David A; Asturiol, David; Roberts, David W; Lepoittevin, Jean-Pierre; Worth, Andrew P; Aschberger, Karin
2016-12-01
Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal tests such as the Local Lymph Node Assay (LLNA). In recent years, regulations in the cosmetics and chemicals sectors have provided strong impetus to develop non-animal alternatives. Three test methods have undergone OECD validation: the direct peptide reactivity assay (DPRA), the KeratinoSens™ and the human Cell Line Activation Test (h-CLAT). Whilst these methods perform relatively well in predicting LLNA results, a concern raised is their ability to predict chemicals that need activation to be sensitizing (pre- or pro-haptens). The current study reviewed an EURL ECVAM dataset of 127 substances for which information was available in the LLNA and three non-animal test methods. Twenty-eight of the sensitizers needed to be activated, with the majority being pre-haptens. These were correctly identified by one or more of the test methods. Six substances were categorized exclusively as pro-haptens, but were correctly identified by at least one of the cell-based assays. The analysis here showed that skin metabolism was not likely to be a major consideration for assessing sensitization potential and that sensitizers requiring activation could be identified correctly using one or more of the current non-animal methods. Published by Elsevier Inc.
Effective classification of the prevalence of Schistosoma mansoni.
Mitchell, Shira A; Pagano, Marcello
2012-12-01
To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs for the number of positive slides), which account for imperfect sensitivity, both with a simple adjustment for fixed sensitivity and with a more complex adjustment for sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more markedly than the upper cut-off, correctly classifying regions as moderate rather than low prevalence, so that they receive life-saving treatment. The pooled method goes directly to classification on the basis of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
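The decision rules above reduce to cut-offs on the count of positive slides or pools; a schematic version follows (the cut-off values are placeholders, since the paper derives them from the De Vlas model and the sensitivity adjustment):

```python
def classify_prevalence(n_positive, low_cut=3, high_cut=12):
    """Classify a community from the number of positive slides/pools.
    In the paper the cut-offs are chosen to account for imperfect,
    prevalence-dependent slide sensitivity; the defaults here are
    placeholders for illustration only."""
    if n_positive < low_cut:
        return "low"
    if n_positive < high_cut:
        return "moderate"
    return "high"
```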
Alvarez-Jimenez, Ricardo; Groeneveld, Geert Jan; van Gerven, Joop M A; Goulooze, Sebastiaan C; Baakman, Anne Catrien; Hay, Justin L; Stevens, Jasper
2016-10-01
Subjects with increasing age are more sensitive to the effects of the anti-muscarinic agent scopolamine, which is used (among other indications) to induce temporary cognitive dysfunction in early phase drug studies with cognition enhancing compounds. The enhanced sensitivity has always been attributed to incipient cholinergic neuronal dysfunction, as a part of the normal aging process. The aim of the study was to correlate age-dependent pharmacodynamic neuro-physiologic effects of scopolamine after correcting for differences in individual exposure. We applied a pharmacokinetic and pharmacodynamic modelling approach to describe individual exposure and neurocognitive effects of intravenous scopolamine administration in healthy subjects. A two-compartment linear kinetics model best described the plasma concentrations of scopolamine. The estimated scopolamine population mean apparent central and peripheral volumes of distribution were 2.66 ± 1.050 l and 62.10 ± 10.100 l, respectively, and the clearance was 1.09 ± 0.096 l min⁻¹. Age was not related to a decrease in performance on the tests following scopolamine administration in older subjects. Only the saccadic peak velocity showed a positive correlation between age and sensitivity to scopolamine. Age was, however, correlated at baseline with an estimated slower reaction time while performing the cognitive tests and with higher global δ and frontal θ frequency bands measured with the surface EEG. Most of the differences in response to scopolamine administration between young and older subjects could be explained by pharmacokinetic differences (lower clearance) and not by enhanced sensitivity when corrected for exposure levels. © 2016 The British Pharmacological Society.
Groeneveld, Geert Jan; van Gerven, Joop M. A.; Goulooze, Sebastiaan C.; Baakman, Anne Catrien; Hay, Justin L.; Stevens, Jasper
2016-01-01
Aim: Subjects with increasing age are more sensitive to the effects of the anti-muscarinic agent scopolamine, which is used (among other indications) to induce temporary cognitive dysfunction in early phase drug studies with cognition enhancing compounds. The enhanced sensitivity has always been attributed to incipient cholinergic neuronal dysfunction, as a part of the normal aging process. The aim of the study was to correlate age-dependent pharmacodynamic neuro-physiologic effects of scopolamine after correcting for differences in individual exposure. Methods: We applied a pharmacokinetic and pharmacodynamic modelling approach to describe individual exposure and neurocognitive effects of intravenous scopolamine administration in healthy subjects. Results: A two-compartment linear kinetics model best described the plasma concentrations of scopolamine. The estimated scopolamine population mean apparent central and peripheral volumes of distribution were 2.66 ± 1.050 l and 62.10 ± 10.100 l, respectively, and the clearance was 1.09 ± 0.096 l min⁻¹. Age was not related to a decrease in performance on the tests following scopolamine administration in older subjects. Only the saccadic peak velocity showed a positive correlation between age and sensitivity to scopolamine. Age was, however, correlated at baseline with an estimated slower reaction time while performing the cognitive tests and with higher global δ and frontal θ frequency bands measured with the surface EEG. Conclusions: Most of the differences in response to scopolamine administration between young and older subjects could be explained by pharmacokinetic differences (lower clearance) and not by enhanced sensitivity when corrected for exposure levels. PMID:27273555
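The fitted structural model above is a standard two-compartment linear IV model; here is a simulation sketch using the reported population means. The intercompartmental clearance Q and the 0.5 mg bolus dose are not reported in the abstract and are assumed purely for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

CL, V1, V2 = 1.09, 2.66, 62.10   # L/min, L, L -- population means from the abstract
Q = 1.0                          # L/min, intercompartmental clearance -- assumed

def rhs(t, a):
    """a[0], a[1]: drug amounts (mg) in central and peripheral compartments."""
    c1, c2 = a[0] / V1, a[1] / V2
    return [-CL * c1 - Q * (c1 - c2), Q * (c1 - c2)]

sol = solve_ivp(rhs, (0.0, 120.0), [0.5, 0.0], max_step=1.0)  # 0.5 mg IV bolus
conc_central = sol.y[0] / V1     # mg/L over sol.t (minutes)
```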
Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin
2012-01-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561
Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin
2011-09-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.
B1- non-uniformity correction of phased-array coils without measuring coil sensitivity.
Damen, Frederick C; Cai, Kejia
2018-04-18
Parallel imaging can be used to increase SNR and shorten acquisition times, albeit at the cost of image non-uniformity. B1- non-uniformity correction techniques are confounded by signal that varies not only due to coil-induced B1- sensitivity variation, but also due to the object's own intrinsic signal. Herein, we propose a method that makes minimal assumptions and uses only the coil images themselves to produce a single combined B1- non-uniformity-corrected complex image with the highest available SNR. A novel background noise classifier is used to select voxels of sufficient quality to avoid the need for regularization. Unique properties of the magnitude and phase were used to reduce the B1- sensitivity to two joint additive models for estimation of the B1- inhomogeneity. The complementary corruption of the imaged object across the coil images is used to abate individual coil correction imperfections. Results are presented from two anatomical cases: (a) an abdominal image that is challenging in both extreme B1- sensitivity and intrinsic tissue signal variation, and (b) a brain image with moderate B1- sensitivity and intrinsic tissue signal variation. A new relative Signal-to-Noise Ratio (rSNR) quality metric is proposed to evaluate the performance of the proposed method and the RF receiving coil array. The proposed method has been shown to be robust to imaged objects with widely inhomogeneous intrinsic signal, and resilient to poorly performing coil elements. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Maintas, Dimitris; Houzard, Claire; Ksyar, Rachid; Mognetti, Thomas; Maintas, Catherine; Scheiber, Christian; Itti, Roland
2006-12-01
It is considered that one of the great strengths of PET imaging is the ability to correct for body attenuation. This enables better lesion uptake quantification and quality of PET images. The aim of this work is to compare the sensitivity of non-attenuation-corrected (NAC) PET images, gamma-photon attenuation-corrected (GPAC) images and CT attenuation-corrected (CTAC) images in the detection and staging of lung cancer. We have studied 66 patients undergoing PET/CT examinations for detecting and staging NSC lung cancer. The patients were injected with 18-FDG (5 MBq/kg) under fasting conditions, and the examination was started 60 min later. Transmission data were acquired by a spiral CT X-ray tube and by a gamma-photon-emitting Cs-137 source, and were used for patient body attenuation correction without correction for respiratory motion. In 55 of 66 patients we performed both attenuation correction procedures, and in 11 patients only CT attenuation correction. In seven patients with solitary nodules PET was negative, and in 59 patients with lung cancer PET/CT was positive for pulmonary or other localization. In the group of 55 patients we found 165 areas of focal increased 18-FDG uptake in NAC, 165 in CTAC and 164 in GPAC PET images. In the patients with only CTAC we found 58 areas of increased 18-FDG uptake on NAC and 58 lesions on CTAC. In the patients with positive PET we found 223 areas of focal increased uptake in NAC and 223 areas in CTAC images. The sensitivity of NAC was equal to the sensitivity of CTAC and GPAC images. The visualization of peripheral lesions was better in NAC images, and the lesions were better localized in attenuation-corrected images. In three lesions of the thorax the localization was better in GPAC and fused images than in CTAC images.
Williams, Javonda; Nelson-Gardell, Debra; Coulborn Faller, Kathleen; Tishelman, Amy; Cordisco-Steele, Linda
2014-01-01
Using data from a survey of perceptions of 932 child welfare professionals about the utility of extended assessments, the researchers constructed a scale to measure respondents' views about sensitivity (ensuring sexually abused children are correctly identified) and specificity (ensuring nonabused children are correctly identified) in child sexual abuse evaluations. On average, respondents scored high (valuing sensitivity) on the sensitivity versus specificity scale. Next, the researchers undertook bivariate analyses to identify independent variables significantly associated with the sensitivity versus specificity scale. Then those variables were entered into a multiple regression. Four independent variables were significantly related to higher sensitivity scores: encountering cases requiring extended assessments, valuing extended assessments among scarce resources, less concern about proving cases in court, and viewing the goal of extended assessments as understanding the needs of the child and family (adjusted R² = .34).
A novel methodology for litho-to-etch pattern fidelity correction for SADP process
NASA Astrophysics Data System (ADS)
Chen, Shr-Jia; Chang, Yu-Cheng; Lin, Arthur; Chang, Yi-Shiang; Lin, Chia-Chi; Lai, Jun-Cheng
2017-03-01
For 2x nm node semiconductor devices and beyond, more aggressive resolution enhancement techniques (RETs) such as source-mask co-optimization (SMO), litho-etch-litho-etch (LELE) and self-aligned double patterning (SADP) are utilized for low-k1 lithography processes. In the SADP process, pattern fidelity is extremely critical, since a slight photoresist (PR) top-loss or profile roughness may impact the later core trim process, owing to that process's sensitivity to its environment. During the subsequent sidewall formation and core removal processes, a weak core trim profile may worsen and induce serious defects that affect the final electrical performance. To predict PR top-loss, a rigorous lithography simulation can provide a reference for modifying mask layouts, but it takes a much longer run time and is not capable of full-field mask data preparation. In this paper, we first present an algorithm which utilizes multiple intensity levels from conventional aerial image simulation to assess the physical profile through the lithography and core trim etching steps. Subsequently, a novel correction method is utilized to improve the post-etch pattern fidelity without sacrificing the lithography process window. The results not only matched the PR top-loss in rigorous lithography simulation, but also agreed with post-etch wafer data. Furthermore, this methodology can be incorporated with OPC and post-OPC verification to improve the core trim profile and final pattern fidelity at an early stage.
Fuentes-Claramonte, Paola; Ávila, César; Rodríguez-Pujadas, Aina; Costumero, Víctor; Ventura-Campos, Noelia; Bustamante, Juan Carlos; Rosell-Negre, Patricia; Barrós-Loscertales, Alfonso
2016-01-01
A "disinhibited" cognitive profile has been proposed for individuals with high reward sensitivity, characterized by increased engagement in goal-directed responses and reduced processing of negative or unexpected cues, which impairs adequate behavioral regulation after feedback in these individuals. This pattern is manifested through deficits in inhibitory control and/or increases in RT variability. In the present work, we aimed to test whether this profile is associated with the activity of functional networks during a stop-signal task using independent component analysis (ICA). Sixty-one participants underwent fMRI while performing a stop-signal task, during which a manual response had to be inhibited. ICA was used to mainly replicate the functional networks involved in the task (Zhang and Li, 2012): two motor networks involved in the go response, the left and right fronto-parietal networks for stopping, a midline error-processing network, and the default-mode network (DMN), which was further subdivided into its anterior and posterior parts. Reward sensitivity was mainly associated with greater activity of motor networks, reduced activity in the midline network during correct stop trials and, behaviorally, increased RT variability. All these variables explained 36% of variance of the SR scores. This pattern of associations suggests that reward sensitivity involves greater motor engagement in the dominant response, more distractibility and reduced processing of salient or unexpected events, which may lead to disinhibited behavior. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Guo, Xiaowei; Chen, Mingyong; Zhu, Jianhua; Ma, Yanqin; Du, Jinglei; Guo, Yongkang; Du, Chunlei
2006-01-01
A novel method for the fabrication of continuous micro-optical components is presented in this paper. It employs a computer-controlled digital micromirror device (DMD™) as a switchable projection mask and silver-halide sensitized gelatin (SHSG) as the recording material. By etching the SHSG with an enzyme solution, micro-optical components with relief modulation can be generated through special processing procedures. The principles of etching SHSG with enzyme and a theoretical analysis of deep etching are discussed in detail, and detailed quantitative experiments on the processing procedures were conducted to determine the optimum technique parameters. A good linear relationship between exposure dose and relief depth was obtained experimentally within a depth range of 4 μm. Finally, a microlens array with 256.8 μm radius and 2.572 μm depth was achieved. This method is simple and cheap, and aberrations introduced by the processing procedures can be corrected at the mask design step, so it is a practical method for fabricating good continuous profiles for low-volume production.
Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros
2013-01-01
Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709
Plasma process control with optical emission spectroscopy
NASA Astrophysics Data System (ADS)
Ward, P. P.
Plasma processes for the cleaning, etching, and desmear of electronic components and printed wiring boards (PWB) are difficult to predict and control. The non-uniformity of most plasma processes and their sensitivity to environmental changes make it difficult to maintain process stability from day to day. To assure plasma process performance, weight-loss coupons or post-plasma destructive testing must be used. The problem with these techniques is that they are not real-time methods and do not allow for immediate diagnosis and process correction. These methods often require scrapping some fraction of a batch to ensure the integrity of the rest. Since these methods verify a successful cycle with post-plasma diagnostics, poor test results often determine that a batch is substandard and the resulting parts unusable. Both of these methods add significantly to the overall fabrication cost. A more efficient method of testing would allow for constant monitoring of plasma conditions and process control. Process failures could then be detected before the parts being treated are damaged. Real-time monitoring would allow for instantaneous corrections. Multiple-site monitoring would allow for process mapping within one system or simultaneous monitoring of multiple systems. Optical emission spectroscopy conducted external to the plasma apparatus would allow for this sort of multifunctional analysis without perturbing the glow discharge. In this paper, optical emission spectroscopy for non-intrusive, in situ process control is explored. A discussion of this technique as it applies to process control, failure analysis, and endpoint determination is conducted. Methods for identifying process failures, progress, and the end of etch-back and desmear processes are discussed.
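One common OES endpoint strategy, sketched below under stated assumptions (it is not necessarily the paper's method): track a single emission-line intensity over time and declare endpoint once the smoothed derivative settles after the main transition. The window size and slope tolerance are illustrative parameters.

```python
# Hedged sketch: endpoint detection from an emission-line intensity trace.
import numpy as np

def detect_endpoint(intensity, dt=1.0, window=5, slope_tol=0.02):
    """Return the index where the smoothed slope settles below slope_tol
    after the main transition, or None if it never settles."""
    kernel = np.ones(window) / window
    smooth = np.convolve(intensity, kernel, mode="valid")
    slope = np.abs(np.gradient(smooth, dt))
    transition = slope.argmax()                 # main etch transition
    settled = np.where(slope[transition:] < slope_tol * slope.max())[0]
    return transition + settled[0] if settled.size else None

t = np.linspace(0, 60, 300)
trace = 1.0 / (1.0 + np.exp((t - 40) / 2)) + 0.01 * np.random.randn(t.size)
print("endpoint index:", detect_endpoint(trace, dt=t[1] - t[0]))
```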
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast-limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and showed a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
NASA Technical Reports Server (NTRS)
Abbott, Mark R.
1996-01-01
Our first activity is based on delivery of code to Bob Evans (University of Miami) for integration and eventual delivery to the MODIS Science Data Support Team. As we noted in our previous semi-annual report, coding required the development and analysis of an end-to-end model of fluorescence line height (FLH) errors and sensitivity. This model is described in a paper in press in Remote Sensing of Environment. Now that the code has been delivered to Miami, we continue to use this error analysis to evaluate proposed changes in MODIS sensor specifications and performance. Simply evaluating such changes on a band-by-band basis may obscure the true impacts of changes in sensor performance that are manifested in the complete algorithm. This is especially true for FLH, which is sensitive to band placement and width. The error model will be used by Howard Gordon (Miami) to evaluate the effects of absorbing aerosols on FLH algorithm performance. Presently, FLH relies only on simple corrections for atmospheric effects (viewing geometry, Rayleigh scattering) without correcting for aerosols. Our analysis suggests that aerosols should have a small impact relative to changes in the quantum yield of fluorescence in phytoplankton. However, the effect of absorbing aerosols is a new process and will be evaluated by Gordon.
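For context, the standard FLH construction subtracts a linearly interpolated baseline from the radiance at the fluorescence band. The sketch below uses nominal MODIS band centers purely as an illustration; the exact bands and units belong to the operational algorithm, not this snippet.

```python
# Sketch of the fluorescence line height: radiance at the fluorescence band
# minus a baseline interpolated between two flanking bands (nominal centers).
def flh(L667, L678, L748, lam=(667.0, 678.0, 748.0)):
    l1, l2, l3 = lam
    baseline = L748 + (L667 - L748) * (l3 - l2) / (l3 - l1)
    return L678 - baseline

print(flh(0.50, 0.55, 0.30))  # water-leaving radiances in arbitrary units
```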
Peptide de novo sequencing of mixture tandem mass spectra
Hotta, Stéphanie Yuki Kolbeck; Verano‐Braga, Thiago; Kjeldsen, Frank
2016-01-01
The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co‐isolation and thus prone to false identifications. The deconvolution approach matched complementary b‐, y‐ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co‐isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20–35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. PMID:27329701
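A short sketch of the mass-complementarity relation that such deconvolution schemes exploit: for singly protonated b- and y-ions from the same peptide, m(b_i) + m(y_{n-i}) equals the neutral peptide mass plus two proton masses, so fragment pairs can be attributed to a specific co-isolated precursor. The tolerance and example masses are illustrative assumptions.

```python
# Hedged sketch: find b/y fragment pairs consistent with one precursor mass.
PROTON = 1.007276  # Da

def complementary_pairs(fragments, neutral_mass, tol=0.02):
    """Return (b, y) fragment m/z pairs whose sum matches M + 2 protons."""
    target = neutral_mass + 2 * PROTON
    return [(b, y) for b in fragments for y in fragments
            if b < y and abs((b + y) - target) <= tol]

# Toy example: peptide of neutral mass 799.36 Da with one matching pair.
print(complementary_pairs([201.12, 300.20, 600.25], 799.36))
```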
The Snapshot A-Star SurveY (SASSY)
NASA Astrophysics Data System (ADS)
Garani, Jasmine; Nielsen, Eric L.; Marchis, Franck; Liu, Michael C.; Macintosh, Bruce; Rajan, Abhijith; De Rosa, Robert J.; Wang, Jason; Esposito, Thomas; Best, William M. J.; Bowler, Brendan P.; Dupuy, Trent J.; Ruffio, Jean-Baptise
2017-01-01
We present the Snapshot A-Star SurveY (SASSY), an adaptive optics survey conducted using NIRC2 on the Keck II telescope to search for young, self-luminous planets and brown dwarfs (M > 5MJup) around high mass stars (M > 1.5 M⊙). We describe a custom data-reduction pipeline developed for the coronagraphic observations of our 200 target stars. Our data analysis method includes basic near infrared data processing (flat-field correction, bad pixel removal, distortion correction) as well as PSF subtraction through a Reference Differential Imaging algorithm based on a library of PSFs derived from the observations using the pyKLIP routine. We present early results from the survey, including planet and brown dwarf candidates and the status of ongoing follow-up observations. Utilizing the high contrast of Keck NIRC2 coronagraphic observations, SASSY reaches sensitivity to brown dwarfs and planetary-mass companions at separations between 0.6'' and 4''. With over 200 stars observed we are tripling the number of high-mass stars imaged at these contrasts and sensitivities compared to previous surveys. This work was supported by the NSF REU program at the SETI Institute and NASA grant NNX14AJ80G.
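To make the reference-differential-imaging step concrete, here is a generic PCA-projection sketch in the spirit of KLIP; it is a stand-in, not the pyKLIP code used in the survey, and the frame sizes and mode count are arbitrary assumptions.

```python
# Hedged RDI/KLIP-style sketch: model the target frame with the leading
# principal components of a reference PSF library and subtract the model.
import numpy as np

def rdi_subtract(target, ref_library, k=5):
    """target: (npix,) flattened frame; ref_library: (nref, npix)."""
    mean_psf = ref_library.mean(axis=0)
    refs = ref_library - mean_psf
    tgt = target - mean_psf
    _, _, vt = np.linalg.svd(refs, full_matrices=False)
    basis = vt[:k]                        # (k, npix) eigen-images
    model = basis.T @ (basis @ tgt)       # projection onto the PSF modes
    return tgt - model                    # residual: candidate companions

rng = np.random.default_rng(0)
library = rng.normal(size=(30, 64 * 64))  # stand-in reference PSF library
frame = rng.normal(size=64 * 64)
frame[2080] += 5.0                        # injected point source
print(rdi_subtract(frame, library, k=5)[2080])
```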
An Integrative Object-Based Image Analysis Workflow for UAV Images
NASA Astrophysics Data System (ADS)
Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong
2016-06-01
In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
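The over-segmentation step can be sketched with scikit-image's SLIC implementation, which here stands in for the one used in the paper's pipeline; the sample image and segment count are placeholders.

```python
# Sketch: SLIC superpixel over-segmentation as the initial partition.
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()                  # stand-in for a mosaicked UAV panorama
labels = slic(image, n_segments=500, compactness=10.0, start_label=1)
print("superpixels:", labels.max()) # initial partition fed to the BPT
```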
When speaker identity is unavoidable: Neural processing of speaker identity cues in natural speech.
Tuninetti, Alba; Chládková, Kateřina; Peter, Varghese; Schiller, Niels O; Escudero, Paola
2017-11-01
Speech sound acoustic properties vary widely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, which facilitates the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language. In a multi-deviant oddball paradigm, native and nonnative speakers of Dutch were exposed to naturally produced Dutch vowels varying in speaker, sex, accent, and phoneme identity. Unexpectedly, the analysis of mismatch negativity (MMN) amplitudes elicited by each type of change shows a large degree of early perceptual sensitivity to non-linguistic cues. This finding on the perception of naturally produced stimuli contrasts with previous studies examining the perception of synthetic stimuli, wherein adult listeners automatically disregard acoustic cues to speaker identity. The present finding bears relevance to speech normalization theories, suggesting that at an unattended level of processing, listeners are indeed sensitive to changes in fundamental frequency in natural speech tokens. Copyright © 2017 Elsevier Inc. All rights reserved.
Power, Jonathan D; Plitt, Mark; Kundu, Prantik; Bandettini, Peter A; Martin, Alex
2017-01-01
Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).
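A common head-motion summary consistent with the recommendation above is framewise displacement computed from the six realignment parameters; computing it on raw, pre-interpolation data follows the paper's advice. The 50 mm rotation radius is a convention assumed in this sketch.

```python
# Hedged sketch: framewise displacement from realignment parameters,
# summing absolute frame-to-frame changes, rotations converted to mm.
import numpy as np

def framewise_displacement(params, radius=50.0):
    """params: (T, 6) array; columns = 3 translations (mm), 3 rotations (rad)."""
    deltas = np.abs(np.diff(params, axis=0))
    deltas[:, 3:] *= radius                  # arc length on a 50 mm sphere
    return np.concatenate([[0.0], deltas.sum(axis=1)])

motion = np.cumsum(np.random.default_rng(1).normal(0, 0.05, (200, 6)), axis=0)
print("mean FD (mm):", framewise_displacement(motion).mean())
```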
Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring
Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu
2013-01-01
Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
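An illustrative sketch of the inter-slice-discontinuity idea (not the exact cISID definition, which includes its own correction terms): score each slice by how strongly its mean intensity departs from its two neighbors, since motion-corrupted DWI slices typically show signal dropout.

```python
# Hedged sketch: per-slice intensity-discontinuity scores for a DWI volume.
import numpy as np

def slice_discontinuity(volume):
    """volume: (nx, ny, nslices); returns one discontinuity score per slice."""
    means = volume.mean(axis=(0, 1))
    scores = np.zeros_like(means)
    scores[1:-1] = np.abs(means[1:-1] - 0.5 * (means[:-2] + means[2:]))
    return scores

vol = np.full((64, 64, 30), 100.0)
vol[:, :, 14] *= 0.4                       # simulated motion-induced dropout
print("most suspect slice:", slice_discontinuity(vol).argmax())
```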
Schulz, Kurt P; Clerkin, Suzanne M; Fan, Jin; Halperin, Jeffrey M; Newcorn, Jeffrey H
2013-03-01
Functional interactions between limbic regions that process emotions and frontal networks that guide response functions provide a substrate for emotional cues to influence behavior. Stimulation of postsynaptic α₂ adrenoceptors enhances the function of prefrontal regions in these networks. However, the impact of this stimulation on the emotional biasing of behavior has not been established. This study tested the effect of the postsynaptic α₂ adrenoceptor agonist guanfacine on the emotional biasing of response execution and inhibition in prefrontal cortex. Fifteen healthy young adults were scanned twice with functional magnetic resonance imaging while performing a face emotion go/no-go task following counterbalanced administration of single doses of oral guanfacine (1 mg) and placebo in a double-blind, cross-over design. Lower perceptual sensitivity and less response bias for sad faces resulted in fewer correct responses compared to happy and neutral faces but had no effect on correct inhibitions. Guanfacine increased the sensitivity and bias selectively for sad faces, resulting in response accuracy comparable to happy and neutral faces, and reversed the valence-dependent variation in response-related activation in left dorsolateral prefrontal cortex (DLPFC), resulting in enhanced activation for response execution cued by sad faces relative to happy and neutral faces, in line with other frontoparietal regions. These results provide evidence that guanfacine stimulation of postsynaptic α₂ adrenoceptors moderates DLPFC activation associated with the emotional biasing of response execution processes. The findings have implications for the α₂ adrenoceptor agonist treatment of attention-deficit hyperactivity disorder.
Black, Bryan A; Griffin, Daniel; van der Sleen, Peter; Wanamaker, Alan D; Speer, James H; Frank, David C; Stahle, David W; Pederson, Neil; Copenheaver, Carolyn A; Trouet, Valerie; Griffin, Shelly; Gillanders, Bronwyn M
2016-07-01
High-resolution biogenic and geologic proxies in which one increment or layer is formed per year are crucial to describing natural ranges of environmental variability in Earth's physical and biological systems. However, dating controls are necessary to ensure temporal precision and accuracy; simple counts cannot ensure that all layers are placed correctly in time. Originally developed for tree-ring data, crossdating is the only such procedure that ensures all increments have been assigned the correct calendar year of formation. Here, we use growth-increment data from two tree species, two marine bivalve species, and a marine fish species to illustrate sensitivity of environmental signals to modest dating error rates. When falsely added or missed increments are induced at one and five percent rates, errors propagate back through time and eliminate high-frequency variability, climate signals, and evidence of extreme events while incorrectly dating and distorting major disturbances or other low-frequency processes. Our consecutive Monte Carlo experiments show that inaccuracies begin to accumulate in as little as two decades and can remove all but decadal-scale processes after as little as two centuries. Real-world scenarios may have even greater consequence in the absence of crossdating. Given this sensitivity to signal loss, the fundamental tenets of crossdating must be applied to fully resolve environmental signals, a point we underscore as the frontiers of growth-increment analysis continue to expand into tropical, freshwater, and marine environments. © 2016 John Wiley & Sons Ltd.
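The Monte Carlo design described above can be sketched directly: induce falsely added or missed increments at a given rate and measure how quickly the misdated series decorrelates from the truth. The error-injection scheme below is a simplified assumption of this sketch, not the authors' exact procedure.

```python
# Hedged sketch: dating-error simulation for a growth-increment series.
import numpy as np

def induce_dating_errors(series, rate, rng):
    out = []
    for x in series:
        if rng.random() < rate / 2:
            continue                      # missed increment: year skipped
        out.append(x)
        if rng.random() < rate / 2:
            out.append(x)                 # falsely added increment
    return np.array(out[:len(series)])

rng = np.random.default_rng(0)
truth = rng.normal(size=300)              # 300-year increment-width series
for rate in (0.01, 0.05):
    err = induce_dating_errors(truth, rate, rng)
    n = min(truth.size, err.size)
    print(rate, np.corrcoef(truth[:n], err[:n])[0, 1])
```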
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) describe a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from it may lead to incorrect estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit to the observed intensities. On the other hand, a comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarity, reinforcing the validity of the normal-gamma model. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures representing various experimental designs. Surprisingly, we observe that implementing a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution, together with the associated background correction. The new model proves to be considerably more accurate for Illumina microarrays, but the improvement in modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this more realistic model opens the way for future investigations, in particular into the characteristics of pre-processing strategies.
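A small simulation makes the measurement model concrete: observed intensity = gamma-distributed signal + normally distributed background. Plain background subtraction of such data produces exactly the negative values that motivated model-based correction; all distribution parameters below are arbitrary illustrations.

```python
# Sketch of the normal-gamma model and the negative values produced by
# naive BeadStudio-style background subtraction.
import numpy as np

rng = np.random.default_rng(42)
signal = rng.gamma(shape=0.8, scale=150.0, size=10_000)   # true expression
noise = rng.normal(loc=100.0, scale=15.0, size=10_000)    # background
observed = signal + noise

naive = observed - noise.mean()              # simple background subtraction
print("fraction negative after naive subtraction:", (naive < 0).mean())
```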
Image Reconstruction for a Partially Collimated Whole Body PET Scanner
Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.
2008-01-01
Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731
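The normalization correction for true events amounts to dividing the measured data by a relative sensitivity pattern obtained from a uniform-source scan. The sketch below is purely illustrative; the array shapes and names are assumptions, not the authors' code.

```python
# Minimal sketch: normalization correction of a sinogram for true events.
import numpy as np

def normalize_trues(sinogram, uniform_scan):
    norm = uniform_scan / uniform_scan.mean()     # relative sensitivity
    return sinogram / np.clip(norm, 1e-6, None)   # avoid division by zero

sino = np.random.default_rng(3).poisson(50.0, (128, 96)).astype(float)
flood = np.ones((128, 96))
flood[:, :48] *= 0.8      # toy partial-collimation sensitivity pattern
print(normalize_trues(sino, flood).mean())
```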
Lueken, Ulrike; Straube, Benjamin; Yang, Yunbo; Hahn, Tim; Beesdo-Baum, Katja; Wittchen, Hans-Ulrich; Konrad, Carsten; Ströhle, Andreas; Wittmann, André; Gerlach, Alexander L; Pfleiderer, Bettina; Arolt, Volker; Kircher, Tilo
2015-09-15
Depression is frequent in panic disorder (PD); yet, little is known about its influence on the neural substrates of PD. Difficulties in fear inhibition during safety signal processing have been reported as a pathophysiological feature of PD that is attenuated by depression. We investigated the impact of comorbid depression in PD with agoraphobia (AG) on the neural correlates of fear conditioning and the potential of machine learning to predict comorbidity status on the individual patient level based on neural characteristics. Fifty-nine PD/AG patients including 26 (44%) with a comorbid depressive disorder (PD/AG+DEP) underwent functional magnetic resonance imaging (fMRI). Comorbidity status was predicted using a random undersampling tree ensemble in a leave-one-out cross-validation framework. PD/AG-DEP patients showed altered neural activation during safety signal processing, while +DEP patients exhibited generally decreased dorsolateral prefrontal and insular activation. Comorbidity status was correctly predicted in 79% of patients (sensitivity: 73%; specificity: 85%) based on brain activation during fear conditioning (corrected for potential confounders: accuracy: 73%; sensitivity: 77%; specificity: 70%). No primary depressed patients were available; only medication-free patients were included. Major depression and dysthymia were collapsed (power considerations). Neurofunctional activation during safety signal processing differed between patients with or without comorbid depression, a finding which may explain heterogeneous results across previous studies. These findings demonstrate the relevance of comorbidity when investigating neurofunctional substrates of anxiety disorders. Predicting individual comorbidity status may translate neurofunctional data into clinically relevant information which might aid in planning individualized treatment. The study was registered with the ISRCTN80046034. Copyright © 2015 Elsevier B.V. All rights reserved.
A Comparison of Off-Level Correction Techniques for Airborne Gravity using GRAV-D Re-Flights
NASA Astrophysics Data System (ADS)
Preaux, S. A.; Melachroinos, S.; Diehl, T. M.
2011-12-01
The airborne gravity data collected for the GRAV-D project contain a number of tracks that have been flown multiple times, either by design or due to data collection issues. Where viable data can be retrieved, these re-flights are a valuable resource not only for assessing the quality of the data but also for evaluating the relative effectiveness of various processing techniques. Correcting for the instantaneous misalignment of the gravimeter sensitive axis with local vertical has been a long-standing challenge for stable-platform airborne gravimetry. GRAV-D re-flights are used to compare the effectiveness of existing methods of computing this off-level correction (Valliant 1991, Peters and Brozena 1995, Swain 1996, etc.) and to assess the impact of possible modifications to these methods, including pre-filtering accelerations, using IMU horizontal accelerations in place of those derived from GPS positions, and accurately compensating for GPS lever-arm and attitude effects before computing accelerations from the GPS positions (Melachroinos et al. 2010, B. de Saint-Jean et al. 2005). The resulting corrected gravity profiles are compared to each other and to EGM08 in order to assess the accuracy and precision of each method. Preliminary results indicate that the methods presented in Peters and Brozena 1995 and Valliant 1991 completely correct the off-level error some of the time but only partially correct it in others, while introducing an overall bias to the data of -0.5 to -2 mGal.
Quantitative magnetic resonance spectroscopy at 3T based on the principle of reciprocity.
Zoelch, Niklaus; Hock, Andreas; Henning, Anke
2018-05-01
Quantification of magnetic resonance spectroscopy signals using the phantom replacement method requires an adequate correction of differences between the acquisition of the reference signal in the phantom and the measurement in vivo. Applying the principle of reciprocity, sensitivity differences can be corrected at low field strength by measuring the RF transmitter gain needed to obtain a certain flip angle in the measured volume. However, at higher field strength the transmit sensitivity may differ from the reception sensitivity, which leads to wrongly estimated concentrations. To address this issue, a quantification approach based on the principle of reciprocity for use at 3T is proposed and validated thoroughly. In this approach, the RF transmitter gain is determined automatically using a volume-selective power optimization and complemented with information from relative reception sensitivity maps derived from contrast-minimized images to correct differences in transmission and reception sensitivity. In this way, a reliable measure of the local sensitivity was obtained. The proposed method is used to derive in vivo concentrations of brain metabolites and tissue water in two studies with different coil sets in a total of 40 healthy volunteers. The resulting molar concentrations are compared with results using internal water referencing (IWR) and Electric REference To access In vivo Concentrations (ERETIC). With the proposed method, changes in coil loading and regional sensitivity due to B1 inhomogeneities are successfully corrected, as demonstrated in phantom and in vivo measurements. For the tissue water content, coefficients of variation between 2% and 3.5% were obtained (0.6-1.4% in a single subject). The coefficients of variation of the three major metabolites ranged from 3.4-14.5%. In general, the derived concentrations agree well with values estimated with IWR. Hence, the presented method is a valuable alternative to IWR, without the need for additional hardware such as ERETIC and with potential advantages in diseased tissue. Copyright © 2018 John Wiley & Sons, Ltd.
Technique sensitivity in bonding to enamel and dentin.
Powers, John M; Farah, John W
2010-09-01
Bonding to enamel and dentin has been among the most significant advancements in dentistry in the last five decades; extensive research and product development have resulted in more adhesive options. However, bonding to enamel and dentin still proves to be challenging, and selecting the correct product for a clinical application can be confusing. An incorrect choice can lead to insufficient bond strength. Day-to-day clinical factors, such as the presence of enamel, superficial dentin, or carious dentin, as well as contamination by saliva, blood, or bleaching agents, can cause bonding agents to be technique sensitive: they may fail prematurely if steps are not followed meticulously. This article attempts to simplify the selection process for enamel and dentinal bonding and summarize clinically relevant bonding information that will help produce consistently successful results.
Zγ production at NNLO including anomalous couplings
NASA Astrophysics Data System (ADS)
Campbell, John M.; Neumann, Tobias; Williams, Ciaran
2017-11-01
In this paper we present a next-to-next-to-leading order (NNLO) QCD calculation of the processes pp → l⁺l⁻γ and pp → νν̄γ that we have implemented in MCFM. Our calculation includes QCD corrections at NNLO both for the Standard Model (SM) and in the presence of Zγγ and ZZγ anomalous couplings. We compare our implementation, obtained using the jettiness slicing approach, with a previous SM calculation and find broad agreement. Focusing on the sensitivity of our results to the slicing parameter, we show that with our setup we are able to compute NNLO cross sections with numerical uncertainties of about 0.1%, which is small compared to residual scale uncertainties of a few percent. We study potential improvements using two different jettiness definitions and the inclusion of power corrections. At √s = 13 TeV we present phenomenological results and consider Zγ as a background to H → Zγ production. We find that, with typical cuts, the inclusion of NNLO corrections represents a small effect and weakens the limits on anomalous couplings by about 10%.
NASA Technical Reports Server (NTRS)
Muller, Dagmar; Krasemann, Hajo; Brewin, Robert J. W.; Deschamps, Pierre-Yves; Doerffer, Roland; Fomferra, Norman; Franz, Bryan A.; Grant, Mike G.; Groom, Steve B.; Melin, Frederic;
2015-01-01
The Ocean Colour Climate Change Initiative intends to provide a long-term time series of ocean colour data and investigate the detectable climate impact. A reliable and stable atmospheric correction procedure is the basis for ocean colour products of the necessary high quality. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparing in-situ and satellite-derived water-leaving reflectance spectra is extended by a ranking system. In principle, statistical parameters such as root-mean-square error and bias, together with measures of goodness of fit, are transformed into relative scores, which evaluate the relative quality of the algorithms under study. The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results. Although the presented methodology is intended to be used in an algorithm selection process, this paper focusses on the scope of the methodology rather than the properties of the individual processors.
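The scoring-plus-bootstrap idea can be sketched as follows, with a deliberately simplified score (inverse relative RMSE) and synthetic residuals; the real system combines several statistics, so this is illustration only.

```python
# Hedged sketch: relative scores per processor, bootstrapped over match-ups.
import numpy as np

def scores(residuals_by_proc):
    rmse = {p: np.sqrt((r ** 2).mean()) for p, r in residuals_by_proc.items()}
    best = min(rmse.values())
    return {p: best / v for p, v in rmse.items()}    # 1.0 = best processor

rng = np.random.default_rng(7)
data = {p: rng.normal(0, s, 400) for p, s in
        [("A", 0.9), ("B", 1.0), ("C", 1.3), ("D", 1.1)]}
wins = {p: 0 for p in data}
for _ in range(1000):                                 # bootstrap replicates
    idx = rng.integers(0, 400, 400)                   # resample the database
    s = scores({p: r[idx] for p, r in data.items()})
    wins[max(s, key=s.get)] += 1
print(wins)                                           # ranking stability
```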
NASA Astrophysics Data System (ADS)
Sun, Jiasong; Zhang, Yuzhen; Chen, Qian; Zuo, Chao
2017-02-01
Fourier ptychographic microscopy (FPM) is a newly developed super-resolution technique that employs angularly varying illumination and a phase retrieval algorithm to surpass the diffraction limit of a low numerical aperture (NA) objective lens. In current FPM imaging platforms, accurate knowledge of the LED matrix's position is critical to achieving good recovery quality. Furthermore, given the wide field-of-view (FOV) in FPM, different regions of the FOV have different sensitivities to LED positional misalignment. In this work, we introduce an iterative method to correct position errors based on the simulated annealing (SA) algorithm. To improve the efficiency of this correction process, a large number of iterations is first run on several images with low illumination NAs to estimate the initial values of the global positional misalignment model through non-linear regression. Simulation and experimental results are presented to evaluate the performance of the proposed method, and it is demonstrated that the method can both improve the quality of the recovered object image and relax the position accuracy requirement on the LED elements when aligning FPM imaging platforms.
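A generic simulated-annealing skeleton for the misalignment search is sketched below. The three-parameter pose and quadratic cost are stand-ins; in FPM the cost would be the phase-retrieval recovery error evaluated for a trial LED-array pose.

```python
# Hedged sketch: simulated annealing over a global LED-array pose.
import numpy as np

rng = np.random.default_rng(0)
true_pose = np.array([0.31, -0.22, 1.5])       # dx, dy (mm), rotation (deg)
cost = lambda p: np.sum((p - true_pose) ** 2)  # placeholder misfit

pose = np.zeros(3)
T = 1.0
for step in range(2000):
    trial = pose + rng.normal(0, 0.1, 3) * T
    dE = cost(trial) - cost(pose)
    if dE < 0 or rng.random() < np.exp(-dE / max(T, 1e-9)):
        pose = trial                           # accept downhill or lucky uphill
    T *= 0.998                                 # geometric cooling schedule
print("recovered pose:", np.round(pose, 2))
```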
NASA Astrophysics Data System (ADS)
Ocampo Giraldo, L.; Bolotnikov, A. E.; Camarda, G. S.; De Geronimo, G.; Fried, J.; Gul, R.; Hodges, D.; Hossain, A.; Ünlü, K.; Vernon, E.; Yang, G.; James, R. B.
2018-03-01
We evaluated the sub-pixel position resolution achievable in large-volume CdZnTe pixelated detectors with conventional pixel patterns and for several different pixel sizes: 2.8 mm, 1.72 mm, 1.4 mm and 0.8 mm. Achieving position resolution below the physical dimensions of pixels (sub-pixel resolution) is a practical path for making high-granularity position-sensitive detectors, <100 μm, using a limited number of pixels dictated by the mechanical constraints and multi-channel readout electronics. High position sensitivity is important for improving the imaging capability of CZT gamma cameras. It also allows for making more accurate corrections of response non-uniformities caused by crystal defects, thus enabling use of standard-grade (unselected) and less expensive CZT crystals for producing large-volume position-sensitive CZT detectors feasible for many practical applications. We analyzed the digitized charge signals from a representative 9 pixels and the cathode, generated using a pulsed-laser light beam focused down to 10 μm (650 nm) to scan over a selected 3 × 3 pixel area. We applied our digital pulse processing technique to the time-correlated signals captured from adjacent pixels to achieve and evaluate the capability for sub-pixel position resolution. As an example, we also demonstrated an application of 3D corrections to improve the energy resolution and positional information of the events for the tested detectors.
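One simple route to sub-pixel positions, sketched here as a charge-sharing centroid over the 3×3 neighborhood (the authors use a more elaborate digital pulse-processing analysis of time-correlated signals; this is only an illustration with an assumed pixel pitch):

```python
# Hedged sketch: centroid-based sub-pixel position from 3x3 pixel amplitudes.
import numpy as np

def subpixel_position(amplitudes, pitch=1.72):
    """amplitudes: 3x3 array of induced-charge amplitudes; pitch in mm."""
    offsets = np.array([-1.0, 0.0, 1.0]) * pitch
    total = amplitudes.sum()
    x = (amplitudes.sum(axis=0) * offsets).sum() / total
    y = (amplitudes.sum(axis=1) * offsets).sum() / total
    return x, y   # interaction position relative to the central pixel, mm

event = np.array([[0.02, 0.10, 0.01],
                  [0.05, 1.00, 0.03],
                  [0.01, 0.08, 0.01]])
print(subpixel_position(event))
```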
MR coil sensitivity inhomogeneity correction for plaque characterization in carotid arteries
NASA Astrophysics Data System (ADS)
Salvado, Olivier; Hillenbrand, Claudia; Suri, Jasjit; Wilson, David L.
2004-05-01
We are involved in a comprehensive program to characterize atherosclerotic disease using multiple MR images having different contrast mechanisms (T1W, T2W, PDW, magnetization transfer, etc.) of human carotid and animal-model arteries. We use specially designed intravascular and surface array coils that give a high signal-to-noise ratio but suffer from sensitivity inhomogeneity. With carotid surface coils, the challenges include: (1) a steep bias field with an 80% change; (2) the presence of nearby muscular structures lacking the high-frequency information needed to distinguish bias from anatomical features; (3) many confounding zero-valued voxels due to fat suppression, blood flow cancellation, or air, which are not subject to coil sensitivity; and (4) substantial noise. Bias was corrected using a modification of the adaptive fuzzy c-means method reported by Pham et al. (IEEE TMI, 18:738-752), whereby a bias field modeled as a mechanical membrane was iteratively improved until the cluster means no longer changed. Because our images were noisy, we added a noise reduction filtering step between iterations and used about 5 classes. In a digital phantom having a bias field measured from our MR system, variations across an area comparable to a carotid artery were reduced from 50% to <5% with processing. Human carotid images were qualitatively improved, and large regions of skeletal muscle were rendered relatively flat. Other commonly applied techniques failed to segment the images or introduced strong edge artifacts. Current evaluations include comparisons to bias as measured by a body coil in human MR images.
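A compact stand-in for the modified loop is sketched below: Gaussian smoothing plays the role of the membrane-regularized bias field, a hard c-means update stands in for the fuzzy memberships, and a denoising filter runs between iterations as in the paper. Everything here is a simplified assumption of this sketch.

```python
# Hedged sketch: alternating segmentation / smooth-bias estimation.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def fcm_bias_correct(img, n_classes=5, iters=20):
    bias = np.ones_like(img)
    for _ in range(iters):
        corrected = median_filter(img / bias, size=3)    # noise reduction
        # hard c-means update (fuzzy memberships omitted for brevity)
        centers = np.quantile(corrected, np.linspace(0.1, 0.9, n_classes))
        labels = np.abs(corrected[..., None] - centers).argmin(-1)
        recon = centers[labels]                          # piecewise-flat image
        bias = gaussian_filter(img / np.clip(recon, 1e-6, None), sigma=12)
    return img / bias, bias

rng = np.random.default_rng(1)
truth = rng.choice([50.0, 120.0, 200.0], (128, 128))     # toy tissue classes
coil = np.linspace(1.8, 0.6, 128)[None, :] * np.ones((128, 128))
flat, est = fcm_bias_correct(truth * coil + rng.normal(0, 3, (128, 128)))
print("residual variation:", flat.std() / flat.mean())
```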
Process for manufacturing shell membrane force and deflection sensor
NASA Technical Reports Server (NTRS)
Park, Yong-Lae (Inventor); Moslehi, Behzad (Inventor); Black, Richard James (Inventor); Cutkosky, Mark R. (Inventor); Chau, Kelvin K. (Inventor)
2012-01-01
A sensor for force is formed from an elastomeric cylinder having a region with apertures. The apertures have passageways formed between them, and an optical fiber is introduced into these passageways, where the optical fiber has a grating for measurement of tension positioned in the passageways between apertures. Optionally, a temperature measurement sensor is placed in or around the elastomer for temperature correction, and if required, a copper film may be deposited in the elastomer for reduced sensitivity to spot temperature variations in the elastomer near the sensors.
Spin polarisation of tt̄γγ production at NLO+PS with GoSam interfaced to MadGraph5_aMC@NLO
van Deurzen, Hans; Frederix, Rikkert; Hirschi, Valentin; ...
2016-04-22
Here, we present an interface between the multipurpose Monte Carlo tool MadGraph5_aMC@NLO and the automated amplitude generator GoSam. As a first application of this novel framework, we compute the NLO corrections to pp → tt̄H and pp → tt̄γγ matched to a parton shower. In the phenomenological analyses of these processes, we focus our attention on observables that are sensitive to the polarisation of the top quarks.
Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc
2014-09-15
Semi-volatile organic compounds (SVOCs) are subject to long-range atmospheric transport because of successive transport-deposition-reemission processes. Several experimental data sets available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating the atmospheric concentrations of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the estimated reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted on the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed an EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between the air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions showed lower sensitivity to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared with other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
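An EFAST analysis of this kind can be sketched with the SALib package; the reemission "model", its parameter names, and their ranges below are stand-ins for the coupled soil-atmosphere code, chosen only to show the sample/analyze workflow.

```python
# Hedged sketch: extended FAST sensitivity indices with SALib on a toy model.
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 4,
    "names": ["log_Koa", "f_om", "theta_fc", "T_air"],   # illustrative inputs
    "bounds": [[6.0, 12.0], [0.01, 0.10], [0.1, 0.4], [268.0, 308.0]],
}
X = fast_sampler.sample(problem, 512)

def reemission(x):                        # toy flux, not the actual model
    log_koa, f_om, theta, temp = x
    return np.exp(-(log_koa + np.log10(f_om))) * np.exp(0.08 * (temp - 288)) / theta

Y = np.array([reemission(x) for x in X])
Si = fast.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))  # first-order indices
```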
Error Detection and Correction in Spelling.
ERIC Educational Resources Information Center
Lydiatt, Steve
1984-01-01
Teachers can discover students' means of dealing with spelling as a problem through investigations of their error detection and correction skills. Approaches for measuring sensitivity and bias are described, as are means of developing appropriate instructional activities. (CL)
Werfel, Krystal L.; Krimm, Hannah
2015-01-01
The purpose of this study was to examine the utility of the Spelling Sensitivity Score (SSS) beyond percentage correct scoring in analysing the spellings of children with specific language impairment (SLI). Participants were 31 children with SLI and 28 children with typical language in grades 2 through 4. Spellings of individual words were scored using two methods: (a) percentage correct and (b) SSS. Children with SLI scored lower than children with typical language when spelling was analysed with percentage correct scoring and with SSS scoring. Additionally, SSS scoring highlighted group differences in the nature of spelling errors. Children with SLI were more likely than children with typical language to omit elements and to represent elements with an illegal grapheme in words, whereas children with typical language were more likely than children with SLI to represent all elements with correct letters. PMID:26413194
Applying cognitive acuity theory to the development and scoring of situational judgment tests.
Leeds, J Peter
2017-11-09
The theory of cognitive acuity (TCA) treats the response options within items as signals to be detected and uses psychophysical methods to estimate the respondents' sensitivity to these signals. Such a framework offers new methods to construct and score situational judgment tests (SJT). Leeds (2012) defined cognitive acuity as the capacity to discern correctness and distinguish between correctness differences among simultaneously presented situation-specific response options. In this study, SJT response options were paired in order to offer the respondent a two-option choice. The contrast in correctness valence between the two options determined the magnitude of signal emission, with larger signals portending a higher probability of detection. A logarithmic relation was found between correctness valence contrast (signal stimulus) and its detectability (sensation response). Respondent sensitivity to such signals was measured and found to be related to the criterion variables. The linkage between psychophysics and elemental psychometrics may offer new directions for measurement theory.
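The sensitivity estimate underlying this kind of signal-detection scoring can be illustrated with the standard d′ statistic, contrasting hits (detecting the more-correct option when a contrast is present) with false alarms; the log-linear correction used below is one common convention assumed for this sketch.

```python
# Hedged sketch: d' from hit and false-alarm counts.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # log-linear correction keeps rates away from 0 and 1
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hr) - norm.ppf(far)

print(round(d_prime(80, 20, 30, 70), 2))   # ~1.35 for this toy respondent
```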
NASA Astrophysics Data System (ADS)
Chen, Chun-Chi; Lin, Shih-Hao; Lin, Yi
2014-06-01
This paper proposes a time-domain CMOS smart temperature sensor featuring on-chip curvature correction and one-point calibration support for thermal management systems. Time-domain inverter-based temperature sensors, which offer the advantages of low power and low cost, have been proposed for on-chip thermal monitoring. However, the curvature of the thermal transfer curve is large, which substantially affects accuracy as the temperature range increases. Another problem is that the inverter is sensitive to process variations, making it difficult for such sensors to achieve acceptable accuracy with one-point calibration. To overcome these two problems, a temperature-dependent oscillator with curvature correction is proposed to increase the linearity of the oscillation width, thereby avoiding the drawback of a costly off-chip second-order master curve fitting. For one-point calibration support, an adjustable-gain time amplifier was adopted to eliminate the effect of process variations, with the assistance of a calibration circuit. The proposed circuit occupies a small area of 0.073 mm² and was fabricated in a TSMC 0.35-μm 2P4M digital CMOS process. The linearization of the oscillator and the cancellation of process variations enabled the sensor, which features a fixed resolution of 0.049 °C/LSB, to achieve an inaccuracy of -0.8 °C to 1.2 °C after one-point calibration of 12 test chips from -40 °C to 120 °C. The power consumption was 35 μW at a sample rate of 10 samples/s.
Kunakorn, M; Raksakai, K; Pracharktam, R; Sattaudom, C
1999-03-01
Our experience from 1993 to 1997 in the development and use of IS6110-based PCR for the diagnosis of extrapulmonary tuberculosis in a routine clinical setting revealed that error-correcting processes can improve existing diagnostic methodology. The reamplification method initially used had a sensitivity of 90.91% and a specificity of 93.75%. Concern focused on the false positive results of this method caused by product carryover contamination. The method was changed to single-round PCR with carryover prevention by uracil DNA glycosylase (UDG), resulting in 100% specificity but only 63% sensitivity. Dot blot hybridization was added after the single-round PCR, increasing the sensitivity to 87.50%. However, false positivity resulted from nonspecific dot blot hybridization signals, reducing the specificity to 89.47%. The hybridization step was changed to a Southern blot with a new oligonucleotide probe, giving a sensitivity of 85.71% and raising the specificity to 99.52%. We conclude that a PCR protocol for routine clinical use should include UDG for carryover prevention and hybridization with specific probes to optimize diagnostic sensitivity and specificity in extrapulmonary tuberculosis testing.
Uncooled IR imager with 5-mK NEDT
NASA Astrophysics Data System (ADS)
Amantea, Robert; Knoedler, C. M.; Pantuso, Francis P.; Patel, Vipulkumar; Sauer, Donald J.; Tower, John R.
1997-08-01
The bi-material concept for room-temperature infrared imaging has the potential of reaching an NEΔT approaching the theoretical limit because of its high responsivity and low noise. The approach, which is 100% compatible with silicon IC foundry processing, utilizes a novel combination of surface micromachining and conventional integrated circuits to produce a bi-material thermally sensitive element that controls the position of a capacitive plate coupled to the input of a low-noise MOS amplifier. This approach can achieve the high sensitivity, low weight, and low cost necessary for equipment such as helmet-mounted IR viewers and IR rifle sights. The pixel design has the following benefits: (1) an order of magnitude improvement in NEΔT due to extremely high sensitivity and low noise; (2) low cost due to 100% silicon IC compatibility; (3) high image quality and increased yield due to the ability to perform offset and sensitivity corrections on the imager, pixel-by-pixel; (4) no cryogenic cooler and no high-vacuum processing; and (5) commercial applications such as law enforcement, home security, and transportation safety. Two designs are presented. One is a 50 μm pixel using silicon nitride as the thermal isolation element that can achieve 5 mK NEΔT; the other is a 29 μm pixel using silicon carbide that provides much higher thermal isolation and can achieve 10 mK NEΔT.
Duan, Lingyan; D'hooge, Dagmar R; Spoerk, Martin; Cornillie, Pieter; Cardon, Ludwig
2018-05-29
Highly sensitive conductive polymer composites (CPCs) are designed, employing a facile and low-cost extrusion manufacturing process, for both low- and high-strain sensing in fields such as structural health/damage monitoring and human body movement tracking. The focus is on morphology control for extrusion-processed carbon black (CB)-filled CPCs, utilizing binary and ternary composites based on thermoplastic polyurethane (TPU) and olefin block copolymer (OBC). The relevance of the correct CB amount, of kinetic control through variation of the compounding sequence, and of thermodynamic control induced by annealing is highlighted, considering a wide range of experimental (e.g. static and dynamic resistance/SEM/rheological measurements) and theoretical analyses. High CB mass fractions (20 m%) are needed for OBC (or TPU)-CB binary composites but lead only to intermediate sensitivity, as their fully packed conductive network is difficult to truly disrupt. Annealing is needed to enable a monotonic increase of the relative resistance with strain. With ternary composites, a much higher sensitivity with a clearer monotonic increase results, provided that a low CB mass fraction (10-16 m%) is used and annealing is applied. In particular, with CB first dispersed in OBC and annealing applied, a less compact and hence brittle conductive network (10-12 m% CB) is obtained, allowing high-performance sensing.
NASA Astrophysics Data System (ADS)
Sun, Phillip Z.; Zhou, Iris Y.; Igarashi, Takahiro; Guo, Yingkun; Xiao, Gang; Wu, Renhua
2015-03-01
Chemical exchange saturation transfer (CEST) MRI is sensitive to dilute exchangeable protons and local properties such as pH and temperature, yet its susceptibility to field inhomogeneity limits its in vivo applications. In particular, the CEST measurement varies with RF irradiation power, a dependence that is complex due to the concomitant direct RF saturation (RF spillover) effect. Because volume transmitters provide a relatively homogeneous RF field, they have conventionally been used for CEST imaging despite their elevated specific absorption rate (SAR) and lower sensitivity relative to surface coils. To address this limitation, we developed an efficient B1 inhomogeneity correction algorithm that enables CEST MRI using surface transceiver coils. This builds on recent work showing that the inverse CEST asymmetry analysis (CESTRind) is not susceptible to the confounding RF spillover effect. We postulated that the linear relationship between RF power level and CESTRind can be extended to correct B1 inhomogeneity-induced CEST MRI artifacts. Briefly, we prepared a tissue-like creatine gel pH phantom and collected multiparametric MRI, including relaxation, field map and CEST MRI under multiple RF power levels, using a conventional surface transceiver coil. The raw CEST images showed substantial heterogeneity due to B1 inhomogeneity, with a pH contrast-to-noise ratio (CNR) of 8.8. In comparison, the pH MRI CNR of the field-inhomogeneity-corrected CEST MRI was found to be 17.2, substantially higher than that without correction. To summarize, our study validated an efficient field inhomogeneity correction that enables sensitive CEST MRI with a surface transceiver, promising for in vivo translation.
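The correction idea as stated can be sketched per voxel: fit the inverse CEST metric linearly against the locally delivered RF power (from a B1 map) and evaluate the fit at the nominal power. The array shapes, toy response, and B1 gradient below are assumptions of this sketch, not the study's data.

```python
# Hedged sketch: per-voxel linear fit of an inverse-CEST metric vs. RF power.
import numpy as np

def b1_corrected_cest(cest_ind, local_power, nominal_power):
    """cest_ind, local_power: (n_powers, nx, ny) arrays; returns the metric
    evaluated at the nominal power for every voxel."""
    n, nx, ny = cest_ind.shape
    out = np.empty((nx, ny))
    for i in range(nx):
        for j in range(ny):
            slope, intercept = np.polyfit(local_power[:, i, j],
                                          cest_ind[:, i, j], 1)
            out[i, j] = slope * nominal_power + intercept
    return out

powers = np.array([0.5, 1.0, 1.5, 2.0])                  # nominal levels
b1_map = np.fromfunction(lambda i, j: 0.7 + 0.006 * j, (32, 32))
local = powers[:, None, None] * b1_map[None]             # delivered power
cest = 0.04 * local + 0.01                               # linear toy response
print(b1_corrected_cest(cest, local, 1.0).std())         # ~0 after correction
```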
Impact of Feedback on Three Phases of Performance Monitoring
Appelgren, Alva; Penny, William; Bengtsson, Sara L
2013-01-01
We investigated whether certain phases of performance monitoring show differential sensitivity to external feedback and thus rely on distinct mechanisms. The phases of interest were: the error phase (FE), the phase of the correct response after an error (FEC), and the phase of correct responses following corrects (FCC). We tested accuracy and reaction time (RT) on 12 conditions of a continuous-choice-response task, the 2-back task. External feedback was either presented or not in FE and FEC, and delivered on 0%, 20%, or 100% of FCC trials. The FCC20 condition was matched to FE and FEC in the number of sounds received, so that we could investigate when external feedback was most valuable to the participants. We found that external feedback led to a reduction in accuracy when presented on all correct responses. Moreover, RT was significantly reduced for FCC100, which in turn correlated with the accuracy reduction. Interestingly, the correct response after an error was particularly sensitive to external feedback, since accuracy was reduced when external feedback was presented during this phase but not for FCC20. Notably, error monitoring was not influenced by feedback type. The results are in line with models suggesting that the internal error-monitoring system is sufficient in cognitively demanding tasks where performance is ~80%, as well as theories stipulating that external feedback directs attention away from the task. Our data highlight the first correct response after an error as particularly sensitive to external feedback, suggesting that important consolidation of response strategy takes place here. PMID:24217138
Intensity correction for multichannel hyperpolarized 13C imaging of the heart.
Dominguez-Viqueira, William; Geraghty, Benjamin J; Lau, Justin Y C; Robb, Fraser J; Chen, Albert P; Cunningham, Charles H
2016-02-01
To develop and test an analytic method to correct the signal intensity variation caused by the inhomogeneous reception profile of an eight-channel phased array for hyperpolarized 13C imaging. Fiducial markers visible in anatomical images were attached to the individual coils to provide three-dimensional localization of the receive hardware with respect to the image frame of reference. The coil locations and dimensions were used to numerically model the reception profile using the Biot-Savart law. The accuracy of the coil sensitivity estimation was validated with images derived from a homogeneous 13C phantom. Numerical coil sensitivity estimates were used to perform intensity correction of in vivo hyperpolarized 13C cardiac images in pigs. In comparison to the conventional sum-of-squares reconstruction, improved signal uniformity was observed in the corrected images. The analytical intensity correction scheme was shown to improve the uniformity of multichannel image reconstruction in hyperpolarized [1-13C]pyruvate and 13C-bicarbonate cardiac MRI. The method is independent of the pulse sequence used for 13C data acquisition, is simple to implement, and does not require additional scan time, making it an attractive technique for multichannel hyperpolarized 13C MRI. © 2015 Wiley Periodicals, Inc.
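As a rough illustration of the approach, the sketch below models each coil as a circular loop, evaluates the unit-current field magnitude at the voxel positions with a discretized Biot-Savart sum, and divides the combined image by the resulting sum-of-squares sensitivity map. The loop geometry, discretization, and combination rule are our assumptions for illustration, not the published implementation.

```python
import numpy as np

MU0_OVER_4PI = 1e-7  # T*m/A

def loop_field_magnitude(points, center, normal, radius, n_seg=120):
    """|B| of a unit-current circular loop at `points` (N, 3), from a
    discretized Biot-Savart sum over straight segments."""
    points = np.asarray(points, float)
    center = np.asarray(center, float)
    normal = np.asarray(normal, float)
    normal /= np.linalg.norm(normal)
    a = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(a) < 1e-8:          # normal parallel to x: use y instead
        a = np.cross(normal, [0.0, 1.0, 0.0])
    a /= np.linalg.norm(a)
    b = np.cross(normal, a)
    phi = np.linspace(0.0, 2.0 * np.pi, n_seg + 1)
    ring = center + radius * (np.outer(np.cos(phi), a) + np.outer(np.sin(phi), b))
    mids, dls = 0.5 * (ring[:-1] + ring[1:]), ring[1:] - ring[:-1]
    B = np.zeros((len(points), 3))
    for mid, dl in zip(mids, dls):
        r = points - mid
        B += np.cross(dl, r) / np.linalg.norm(r, axis=1)[:, None] ** 3
    return MU0_OVER_4PI * np.linalg.norm(B, axis=1)

def intensity_correct(image, voxel_xyz, coils):
    """Divide `image` (N voxels) by the sum-of-squares of the modeled coil
    sensitivities; `coils` is a list of (center, normal, radius) tuples."""
    sos = np.zeros(len(voxel_xyz))
    for center, normal, radius in coils:
        sos += loop_field_magnitude(voxel_xyz, center, normal, radius) ** 2
    return image / np.sqrt(sos)
```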
Objective measures of situation awareness in a simulated medical environment
Wright, M; Taekman, J; Endsley, M
2004-01-01
One major limitation in the use of human patient simulators is a lack of objective, validated measures of human performance. Objective measures are necessary if simulators are to be used to evaluate the skills and training of medical practitioners and teams, or to evaluate the impact of new processes or equipment design on overall system performance. Situation awareness (SA) refers to a person's perception and understanding of their dynamic environment. This awareness and comprehension are critical for making correct decisions that ultimately lead to correct actions in medical care settings. An objective measure of SA may be more sensitive and diagnostic than traditional performance measures. This paper reviews a theory of SA and discusses the methods required for developing an objective measure of SA within the context of a simulated medical environment. Analysis and interpretation of SA data for both individual and team performance in health care are also presented. PMID:15465958
Gesturing Gives Children New Ideas About Math
Goldin-Meadow, Susan; Cook, Susan Wagner; Mitchell, Zachary A.
2009-01-01
How does gesturing help children learn? Gesturing might encourage children to extract meaning implicit in their hand movements. If so, children should be sensitive to the particular movements they produce and learn accordingly. Alternatively, all that may matter is that children move their hands. If so, they should learn regardless of which movements they produce. To investigate these alternatives, we manipulated gesturing during a math lesson. We found that children required to produce correct gestures learned more than children required to produce partially correct gestures, who learned more than children required to produce no gestures. This effect was mediated by whether children took information conveyed solely in their gestures and added it to their speech. The findings suggest that body movements are involved not only in processing old ideas, but also in creating new ones. We may be able to lay foundations for new knowledge simply by telling learners how to move their hands. PMID:19222810
NASA Astrophysics Data System (ADS)
Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang
2018-03-01
Based on stress-strength interference theory, a reliability mathematical model was established for a high-temperature, high-pressure multi-stage decompression control valve (HMDCV), and a temperature correction coefficient was introduced to revise the material fatigue limit at high temperature. The reliability of the key high-risk components and the fatigue sensitivity curve of each component were calculated and analyzed by combining the fatigue-life analysis of the control valve with the reliability model. The contribution of each component to fatigue failure of the control valve system was obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure-bearing parts meets the technical requirements, and that the valve body and the sleeve have an obvious influence on system reliability; the stress concentration in key parts of the control valve can be reduced in the design process by improving the structure.
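For readers unfamiliar with stress-strength interference, here is a minimal sketch under the common assumption of independent, normally distributed stress and strength; the temperature correction enters as a derating factor on the fatigue-limit (strength) mean. The coefficient and moment values are illustrative assumptions, not values from the paper.

```python
from math import erf, sqrt

def reliability(mu_strength, sd_strength, mu_stress, sd_stress, k_temp=1.0):
    """P(strength > stress) for independent normal stress and strength;
    k_temp derates the fatigue-limit (strength) mean at temperature."""
    mu_s = k_temp * mu_strength
    z = (mu_s - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z

# e.g. a 15% fatigue-limit derating at operating temperature (assumed value)
print(reliability(600.0, 40.0, 420.0, 35.0, k_temp=0.85))
```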
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.
2014-10-15
Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
Pressure-Sensitive Paint Measurements on Surfaces with Non-Uniform Temperature
NASA Technical Reports Server (NTRS)
Bencic, Timothy J.
1999-01-01
Pressure-sensitive paint (PSP) has become a useful tool to augment conventional pressure taps in measuring the surface pressure distribution of aerodynamic components in wind tunnel testing. While the PSP offers the advantage of a non-intrusive global mapping of the surface pressure, one prominent drawback to the accuracy of this technique is the inherent temperature sensitivity of the coating's luminescent intensity. A typical aerodynamic surface PSP test has relied on the coated surface to be both spatially and temporally isothermal, along with conventional instrumentation for an in situ calibration to generate the highest accuracy pressure mappings. In some tests however, spatial and temporal thermal gradients are generated by the nature of the test as in a blowing jet impinging on a surface. In these cases, the temperature variations on the painted surface must be accounted for in order to yield high accuracy and reliable data. A new temperature correction technique was developed at NASA Lewis to collapse a "family" of PSP calibration curves to a single intensity ratio versus pressure curve. This correction allows a streamlined procedure to be followed whether or not temperature information is used in the data reduction of the PSP. This paper explores the use of conventional instrumentation such as thermocouples and pressure taps along with temperature-sensitive paint (TSP) to correct for the thermal gradients that exist in aeropropulsion PSP tests. Temperature corrected PSP measurements for both a supersonic mixer ejector and jet cavity interaction tests are presented.
Accommodative Lag by Autorefraction and Two Dynamic Retinoscopy Methods
2008-01-01
Purpose To evaluate two clinical procedures, MEM and Nott retinoscopy, for detecting accommodative lags of 1.00 diopter (D) or greater in children, as identified by an open-field autorefractor. Methods 168 children 8 to <12 years old with low myopia, normal visual acuity, and no strabismus participated as part of an ancillary study within the screening process for a randomized trial. Accommodative response to a 3.00 D demand was first assessed by MEM and Nott retinoscopy, viewing binocularly with spherocylindrical refractive error corrected, with testing order randomized and each test performed by a different masked examiner. The response was then determined viewing monocularly with spherical equivalent refractive error corrected, using an open-field autorefractor, which was the gold standard used for eligibility for the clinical trial. Sensitivity and specificity for accommodative lags of 1.00 D or more were calculated for each retinoscopy method compared to the autorefractor. Results 116 (69%) of the 168 children had accommodative lag of 1.00 D or more by autorefraction. MEM identified 66 of the children identified by autorefraction, for a sensitivity of 57% (95% CI = 47% to 66%) and a specificity of 63% (95% CI = 49% to 76%). Nott retinoscopy identified 35 children, for a sensitivity of 30% (95% CI = 22% to 39%) and a specificity of 81% (95% CI = 67% to 90%). Analysis of receiver operating characteristic (ROC) curves constructed for MEM and for Nott retinoscopy failed to reveal alternate cut points that would improve the combination of sensitivity and specificity for identifying accommodative lag ≥ 1.00 D as defined by autorefraction. Conclusions Neither MEM nor Nott retinoscopy provided adequate sensitivity and specificity to identify myopic children with accommodative lag ≥ 1.00 D as determined by autorefraction. A variety of methodological differences between the techniques may contribute to the modest to poor agreement. PMID:19214130
An Accurate Temperature Correction Model for Thermocouple Hygrometers 1
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
Jäkel, Evelyn; den Outer, Peter N; Tax, Rick B; Görts, Peter C; Reinen, Henk A J M
2007-07-10
To establish trends in surface ultraviolet radiation levels, accurate and stable long-term measurements are required. The accuracy of today's measurements has become high enough to reveal even small effects that influence instrument sensitivity. Laboratory measurements of the sensitivity of the entrance optics have shown a decrease of as much as 0.07-0.1% per degree of temperature increase. Since the entrance optics can heat to more than 45 degrees C in Dutch summers, corrections are necessary. A method is developed to estimate the entrance-optics temperature from pyranometer measurements and meteorological data. The method enables us to correct historic data records for which temperature information is not available. The temperature retrieval has an uncertainty of less than 2.5 degrees C, resulting in a 0.3% uncertainty in the correction to be performed. The temperature correction improves the agreement between modeled and measured doses, and the instrument intercomparison performed within the Quality Assurance of Spectral Ultraviolet Measurements in Europe project. The retrieval method is easily transferable to other instruments.
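A minimal sketch of the correction itself: scale each measured UV signal by the linear sensitivity drift per degree of entrance-optics temperature above a reference. The reference temperature and the drift coefficient below are assumptions, chosen within the 0.07-0.1%/deg range quoted in the abstract.

```python
def correct_uv(signal, optics_temp_c, t_ref_c=20.0, drift_per_deg=0.00085):
    """Undo an assumed linear sensitivity loss of `drift_per_deg`
    (fractional) per degree C above `t_ref_c`."""
    sensitivity = 1.0 - drift_per_deg * (optics_temp_c - t_ref_c)
    return signal / sensitivity
```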
NASA Astrophysics Data System (ADS)
Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna
2013-08-01
Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development we established GeoPET as a non-destructive method with unrivalled sensitivity and selectivity, and with adequate spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, like our ClearPET scanner (Raytest), and (b) appropriate correction methods for scatter and attenuation of 511 keV photons in the dense geological material. The latter are far more significant in dense geological material than in human and small-animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS consider all nuclear physical processes involved in the measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.
Energy dependence corrections to MOSFET dosimetric sensitivity.
Cheung, T; Butson, M J; Yu, P K N
2009-03-01
Metal oxide semiconductor field effect transistors (MOSFETs) are dosimeters that are now frequently utilized in radiotherapy treatment applications. An improved MOSFET clinical semiconductor dosimetry system (CSDS), which utilizes improved packaging for the MOSFET device, has been studied for the energy dependence of its sensitivity to x-ray radiation measurement. The energy dependence from 50 kVp to 10 MV x-rays was found to vary by up to a factor of 3.2, with 75 kVp producing the highest sensitivity response. The detector's average life span in high-sensitivity mode is energy related and ranges from approximately 100 Gy for 75 kVp x-rays to approximately 300 Gy at 6 MV x-ray energy. The MOSFET detector has also been studied for sensitivity variations with integrated dose history. It was found to become less sensitive to radiation with age, and the magnitude of this effect is dependent on radiation energy, with lower energies producing a larger sensitivity reduction with integrated dose. The reduction in sensitivity is, however, approximated reproducibly by a slightly nonlinear, second-order polynomial function, allowing corrections to be made to readings to account for this effect and provide more accurate dose assessments both in phantom and in vivo.
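A sketch of the dose-history correction this implies: fit a second-order polynomial of relative sensitivity against accumulated dose from calibration data, then rescale readings back to the fresh-detector sensitivity. The calibration arrays below are illustrative assumptions, not measured values from the paper.

```python
import numpy as np

cal_dose = np.array([0.0, 50.0, 100.0, 200.0, 300.0])   # Gy accumulated
cal_sens = np.array([1.00, 0.97, 0.94, 0.89, 0.85])     # relative sensitivity
coeffs = np.polyfit(cal_dose, cal_sens, 2)              # second-order fit

def corrected_reading(raw_reading, accumulated_dose):
    """Divide the raw reading by the fitted relative sensitivity at the
    detector's current accumulated dose."""
    return raw_reading / np.polyval(coeffs, accumulated_dose)
```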
Peptide de novo sequencing of mixture tandem mass spectra.
Gorshkov, Vladimir; Hotta, Stéphanie Yuki Kolbeck; Verano-Braga, Thiago; Kjeldsen, Frank
2016-09-01
The impact of mixture-spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease identification performance with database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co-isolation, and thus prone to false identifications. The deconvolution approach matched complementary b- and y-ions to each precursor peptide mass, which allowed the creation of virtual spectra containing the sequence-specific fragment ions of each co-isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20-35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, their number was lower than the number of cases where improvement was obtained by mass spectral deconvolution. A tight candidate-peptide score distribution and high sensitivity to the small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
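The complementarity test underlying such a deconvolution can be sketched as follows: for singly charged fragments, a b-ion and its complementary y-ion from the same precursor satisfy m(b) + m(y) = M_neutral + 2·m_proton, so fragment pairs summing to the precursor-defined total (within a tolerance) can be assigned to that precursor. This is a generic illustration of the principle; the mass tolerance and the brute-force pairing are our assumptions.

```python
PROTON = 1.007276  # Da

def pair_complementary_ions(fragment_mzs, neutral_precursor_mass, tol=0.02):
    """Return (b_mz, y_mz) candidate pairs of singly charged fragments whose
    masses sum to M_neutral + 2*m_proton within `tol` Da."""
    target = neutral_precursor_mass + 2 * PROTON
    mzs = sorted(fragment_mzs)
    pairs = []
    for i, mb in enumerate(mzs):
        for my in mzs[i:]:
            if abs((mb + my) - target) <= tol:
                pairs.append((mb, my))
    return pairs
```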
An active co-phasing imaging testbed with segmented mirrors
NASA Astrophysics Data System (ADS)
Zhao, Weirui; Cao, Genrui
2011-06-01
An active co-phasing imaging testbed with highly accurate optical adjustment and control at the nanometer scale was set up to validate algorithms for piston and tip-tilt error sensing and real-time adjustment. A modular design was adopted. The primary mirror was spherical and divided into three sub-mirrors. One of them was fixed and served as the reference segment; the others could each be adjusted relative to the fixed segment in three degrees of freedom (piston, tip, and tilt) using sensitive micro-displacement actuators with a range of 15 mm and a resolution of 3 nm. Two-dimensional dispersed fringe analysis was used to sense the piston error between adjacent segments over a range of 200 μm with a repeatability of 2 nm, and the tip-tilt error was obtained by centroid sensing. Co-phased imaging could be realized by correcting the measured errors with the computer-driven micro-displacement actuators. The process of co-phasing error sensing and correction could be monitored in real time by a monitoring module in the testbed. A FISBA interferometer was introduced to evaluate the co-phasing performance, and finally a total residual surface error of about 50 nm RMS was achieved.
NASA Astrophysics Data System (ADS)
Piao, Lin; Fu, Zuntao
2016-11-01
Cross-correlation between pairs of variables has a multi-time-scale character and can be totally different on different time scales (changing from positive correlation to negative), e.g., the associations between mean air temperature and relative humidity over regions east of the Taihang mountains in China. How to correctly unveil these correlations on different time scales is therefore of great importance, since we do not know in advance whether the correlation varies with scale. Here, we compare two methods, Detrended Cross-Correlation Analysis (DCCA for short) and Pearson correlation, quantifying scale-dependent correlations directly on raw observed records and on artificially generated sequences with known cross-correlation features. The studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, but the Pearson method cannot; 2) the correlation features from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to it. All these features indicate that DCCA-related methods have clear advantages in correctly quantifying scale-dependent correlations arising from different physical processes.
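For concreteness, a compact sketch of DCCA and the derived scale-dependent correlation coefficient (often called ρ_DCCA) follows, using non-overlapping windows and linear detrending; window handling and normalization details vary across implementations and are our assumptions.

```python
import numpy as np

def dcca(x, y, scale):
    """Detrended cross-covariance fluctuation F2_xy at window size `scale`
    (non-overlapping boxes, linear detrending of the integrated profiles)."""
    X = np.cumsum(x - np.mean(x))          # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    n_boxes = len(X) // scale
    t = np.arange(scale)
    cov = []
    for b in range(n_boxes):
        xs = X[b * scale:(b + 1) * scale]
        ys = Y[b * scale:(b + 1) * scale]
        # remove local linear trends in each box
        xr = xs - np.polyval(np.polyfit(t, xs, 1), t)
        yr = ys - np.polyval(np.polyfit(t, ys, 1), t)
        cov.append(np.mean(xr * yr))
    return np.mean(cov)

def dcca_coefficient(x, y, scale):
    """Scale-dependent correlation rho = F2_xy / sqrt(F2_xx * F2_yy),
    which unlike Pearson correlation can change sign with `scale`."""
    return dcca(x, y, scale) / np.sqrt(dcca(x, x, scale) * dcca(y, y, scale))
```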
Comparison of methods for measurement and retrieval of SIF with tower based sensors
NASA Astrophysics Data System (ADS)
Kornfeld, A.; Berry, J. A.
2017-12-01
As the popularity of solar-induced fluorescence (SIF) measurement increases, the number of ways to measure and process the data has also increased, leaving a bewildering array of choices for the practitioner. To help clarify the advantages and disadvantages of several methods, we modified our foreoptic, Rotaprism, to measure spectra using either bi-hemispheric (cosine-correcting diffusers on both upward and downward views) or hemispherical-conical views (only the upward view is cosine corrected). To test the spatial sensitivity of each optic, we recorded data after moving the device relatively short distances (1-2x the sensor's height above the canopy). When using conical measurements, measured SIF varied by as much as 100% across locations, whereas bi-hemispherical measurements were nearly unaffected by the moves. Reflectance indices such as NDVI, PRI, and NIRv were also spatially sensitive for the conical measurements. We also compared retrievals using either the O2A band or the adjacent Fraunhofer band to examine the relative advantages of each retrieval band for full-day retrievals. Finally, we investigated how the choice of retrieval algorithm (SVD, FLD, SFM) affects the computed results. The primary site for this experiment was a California bunchgrass/tallgrass field. Additional data from the Brazilian Amazon will also be used, where appropriate, to support our conclusions.
Fission properties of superheavy nuclei for r-process calculations
NASA Astrophysics Data System (ADS)
Giuliani, Samuel A.; Martínez-Pinedo, Gabriel; Robledo, Luis M.
2018-03-01
We computed a new set of static fission properties suited for r-process calculations. The potential energy surfaces and collective inertias of 3640 nuclei in the superheavy region are obtained from self-consistent mean-field calculations using the Barcelona-Catania-Paris-Madrid energy density functional. The fission path is computed as a function of the quadrupole moment by minimizing the potential energy and exploring octupole and hexadecapole deformations. The spontaneous fission lifetimes are evaluated employing different schemes for the collective inertias and vibrational energy corrections. This allows us to explore the sensitivity of the lifetimes to those quantities together with the collective ground-state energy along the superheavy landscape. We computed neutron-induced stellar reaction rates relevant for r-process nucleosynthesis using the Hauser-Feshbach statistical approach and study the impact of collective inertias. The competition between different reaction channels including neutron-induced rates, spontaneous fission, and α decay is discussed for typical r-process conditions.
Wan, Boyong; Zordan, Christopher A; Lu, Xujin; McGeorge, Gary
2016-10-01
Complete dissolution of the active pharmaceutical ingredient (API) is critical in the manufacturing of liquid-filled soft-gelatin capsules (SGC). Attenuated total reflectance UV spectroscopy (ATR-UV) and Raman spectroscopy have been investigated for in-line monitoring of API dissolution during manufacturing of an SGC product. Calibration models have been developed with both techniques for in-line determination of API potency. Performance of both techniques was evaluated and compared. The ATR-UV methodology was found to be able to monitor the dissolution process and determine the endpoint, but was sensitive to temperature variations. The Raman technique was also capable of effectively monitoring the process and was more robust to the temperature variation and process perturbations by using an excipient peak for internal correction. Different data preprocessing methodologies were explored in an attempt to improve method performance.
Bracken, Robert E.; Brown, Philip J.
2006-01-01
On March 12, 2003, data were gathered at Yuma Proving Grounds, in Arizona, using a Tensor Magnetic Gradiometer System (TMGS). This report shows how these data were processed and explains concepts required for successful TMGS data reduction. Important concepts discussed include extreme attitudinal sensitivity of vector measurements, low attitudinal sensitivity of gradient measurements, leakage of the common-mode field into gradient measurements, consequences of thermal drift, and effects of field curvature. Spatial-data collection procedures and a spin-calibration method are addressed. Discussions of data-reduction procedures include tracking of axial data by mathematically matching transfer functions among the axes, derivation and application of calibration coefficients, calculation of sensor-pair gradients, thermal-drift corrections, and gradient collocation. For presentation, the magnetic tensor at each data station is converted to a scalar quantity, the I2 tensor invariant, which is easily found by calculating the determinant of the tensor. At important processing junctures, the determinants for all stations in the mapped area are shown in shaded relief map-view. Final processed results are compared to a mathematical model to show the validity of the assumptions made during processing and the reasonableness of the ultimate answer obtained.
Novel MRF fluid for ultra-low roughness optical surfaces
NASA Astrophysics Data System (ADS)
Dumas, Paul; McFee, Charles
2014-08-01
Over the past few years there have been an increasing number of applications calling for ultra-low roughness (ULR) surfaces. A critical demand has been driven by EUV optics, EUV photomasks, X-ray, and high-energy laser applications. Achieving ULR results on complex shapes like aspheres and X-ray mirrors is extremely challenging with conventional polishing techniques. To achieve both tight figure and roughness specifications, substrates typically undergo iterative global and local polishing processes. Typically the local polishing process corrects the figure or flatness but cannot achieve the required surface roughness, whereas the global polishing process produces the required roughness but degrades the figure. Magnetorheological finishing (MRF) is a local polishing technique based on a magnetically sensitive fluid that removes material through a shearing mechanism with minimal normal load, thus removing sub-surface damage. The lowest surface roughness produced by current MRF is close to 3 Å RMS. A new ULR MR fluid uses a nano-based cerium as the abrasive in a proprietary aqueous solution, the combination of which reliably produces under 1.5 Å RMS roughness on fused silica as measured by atomic force microscopy. In addition to the highly convergent figure correction achieved with MRF, we show results of our novel MR fluid achieving <1.5 Å RMS roughness on fused silica and other materials.
Effects of Vocabulary Size on Online Lexical Processing by Preschoolers.
Law, Franzo; Edwards, Jan R
This study was designed to investigate the relationship between vocabulary size and the speed and accuracy of lexical processing in preschoolers between 30 and 46 months of age, using an automatic eye-tracking task based on the looking-while-listening paradigm (Fernald, Zangl, Portillo, & Marchman, 2008) and the mispronunciation paradigm (White & Morgan, 2008). Children's eye-gaze patterns were tracked while they looked at two pictures (one familiar object, one unfamiliar object) on a computer screen and simultaneously heard one of three kinds of auditory stimuli: correct pronunciations of the familiar object's name, one-feature mispronunciations of the familiar object's name, or a nonword. The results showed that children with larger expressive vocabularies, relative to children with smaller expressive vocabularies, were more likely to look to the familiar object upon hearing a correct pronunciation and to the unfamiliar object upon hearing a novel word. Results also showed that children with larger expressive vocabularies were more sensitive to mispronunciations; they were more likely to look toward the unfamiliar object rather than the familiar object upon hearing a one-feature mispronunciation of a familiar object name. These results suggest that children with smaller vocabularies, relative to their larger-vocabulary age peers, are at a disadvantage for learning new words as well as for processing familiar words.
CONCH: A Visual Basic program for interactive processing of ion-microprobe analytical data
NASA Astrophysics Data System (ADS)
Nelson, David R.
2006-11-01
A Visual Basic program for flexible, interactive processing of ion-microprobe data acquired for quantitative trace element, 26Al-26Mg, 53Mn-53Cr, 60Fe-60Ni and U-Th-Pb geochronology applications is described. Default but editable run-tables enable software identification of the secondary ion species analyzed and characterization of the standard used. Counts obtained for each species may be displayed in plots against analysis time and edited interactively. Count outliers can be automatically identified via a set of editable count-rejection criteria and displayed for assessment. Standard analyses are distinguished from Unknowns by matching the analysis label with a string specified in the Set-up dialog, and processed separately. A generalized routine writes background-corrected count rates, ratios and uncertainties, plus weighted means and uncertainties for Standards and Unknowns, to a spreadsheet that may be saved as a text-delimited file. Specialized routines process trace-element concentration, 26Al-26Mg, 53Mn-53Cr, 60Fe-60Ni, and Th-U disequilibrium analysis types, and U-Th-Pb isotopic data obtained for zircon, titanite, perovskite, monazite, xenotime and baddeleyite. Correction of measured Pb-isotopic, Pb/U and Pb/Th ratios for the presence of common Pb may be made using measured 204Pb counts, or the 207Pb or 208Pb counts following subtraction of the radiogenic component from these. Common-Pb corrections may be made automatically, using a (user-specified) common-Pb isotopic composition appropriate either for that on the sample surface or for that incorporated within the mineral at the time of its crystallization, depending on whether the 204Pb count rate determined for the Unknown is substantially higher than the average 204Pb count rate for all session standards. Pb/U inter-element fractionation corrections are determined using an interactive loge-loge plot of common-Pb-corrected 206Pb/238U ratios against any nominated fractionation-sensitive species pair (commonly 238U16O+/238U+) for session standards. Also displayed with this plot are the calculated Pb/U and Pb/Th calibration-line regression slopes, y-intercepts, calibration uncertainties, standard 204Pb- and 208Pb-corrected 207Pb/206Pb dates and other parameters useful for assessment of the calibration-line data. Calibrated data for Unknowns may be automatically grouped according to calculated date and displayed in color on interactive Wetherill concordia, Tera-Wasserburg concordia, linearized Gaussian ("probability paper") and Gaussian-summation probability density diagrams.
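The 204Pb-based common-Pb correction mentioned above reduces, in its simplest form, to subtracting the common component implied by the measured 204Pb and an assumed common-Pb composition. A hedged sketch follows; the default 206Pb/204Pb value is a placeholder the user would replace (e.g. from a crustal-Pb evolution model), not a value from the program.

```python
def radiogenic_206(pb206_counts, pb204_counts, common_206_204=18.7):
    """Radiogenic 206Pb = total 206Pb minus 204Pb times the assumed
    common-Pb 206Pb/204Pb ratio (placeholder default)."""
    return pb206_counts - pb204_counts * common_206_204
```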
NASA Astrophysics Data System (ADS)
Boisson, F.; Wimberley, C. J.; Lehnert, W.; Zahra, D.; Pham, T.; Perkins, G.; Hamze, H.; Gregoire, M.-C.; Reilhac, A.
2013-10-01
Monte Carlo-based simulation of positron emission tomography (PET) data plays a key role in the design and optimization of data correction and processing methods. Our first aim was to adapt and configure the PET-SORTEO Monte Carlo simulation program for the geometry of the widely distributed Inveon PET preclinical scanner manufactured by Siemens Preclinical Solutions. The validation was carried out against actual measurements performed on the Inveon PET scanner at the Australian Nuclear Science and Technology Organisation and at the Brain & Mind Research Institute, strictly following the NEMA NU 4-2008 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction and count rates, image quality, and Derenzo phantom studies. Results showed that PET-SORTEO reliably reproduces the performance of this Inveon preclinical system. In addition, imaging studies showed that the PET-SORTEO simulation program provides raw data for the Inveon scanner that can be fully corrected and reconstructed using the same programs as for the actual data. All correction techniques (attenuation, scatter, randoms, dead time, and normalization) can be applied to the simulated data, leading to fully quantitative reconstructed images. In the second part of the study, we demonstrated its ability to generate fast and realistic biological studies. PET-SORTEO is a workable and reliable tool that can be used, in the classical way, to validate and/or optimize a single PET data processing step such as a reconstruction method. However, we demonstrated that by combining a realistic simulated biological study ([11C]raclopride here) involving different condition groups, simulation also allows one to assess and optimize the data correction, reconstruction and processing pipeline as a whole, specifically for each biological study, which is our ultimate intent.
A novel method for fabrication of continuous-relief optical elements
NASA Astrophysics Data System (ADS)
Guo, Xiaowei; Du, Jinglei; Chen, Mingyong; Ma, Yanqin; Zhu, Jianhua; Peng, Qinjun; Guo, Yongkang; Du, Chunlei
2005-08-01
A novel method for the fabrication of continuous micro-optical components is presented in this paper. It employs a computer-controlled spatial light modulator (SLM) as a switchable projection mask and silver-halide sensitized gelatin (SHSG) as the recording material. By etching the SHSG with an enzyme solution, micro-optical components with continuous relief modulation can be generated through special processing procedures. The principles of digital SLM-based lithography and enzyme etching of SHSG are discussed in detail, and microlens arrays, micro axicon-lens arrays and gratings with good profiles were achieved. The method is simple and cheap, and aberrations introduced in the processing procedures can be corrected in situ at the mask-design step, making it a practical way to fabricate continuous-relief profiles in low-volume production.
Frequency Response of Pressure Sensitive Paints
NASA Technical Reports Server (NTRS)
Winslow, Neal A.; Carroll, Bruce F.; Setzer, Fred M.
1996-01-01
An experimental method for measuring the frequency response of pressure-sensitive paints (PSP) is presented. The results lead to the development of a dynamic correction technique for PSP measurements, which is of great importance to the advancement of PSP as a measurement technique. Such a dynamic corrector is most easily designed from the frequency response of the given system. An example of this correction technique is shown. In addition to the experimental data, an analytical model for the frequency response is developed from the one-dimensional mass diffusion equation.
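As a toy illustration of such a corrector, suppose the paint's pressure response is approximated by a first-order lag with time constant tau, a common simplification of a diffusion-dominated response; both the first-order form and tau here are our assumptions, not the paper's model. The inverse filter then restores the attenuated fast fluctuations by adding back the derivative term; in practice the derivative would be low-pass filtered to limit noise amplification.

```python
import numpy as np

def first_order_dynamic_correction(measured, dt, tau):
    """Invert an assumed first-order lag H(s) = 1/(tau*s + 1):
    p(t) ~= y(t) + tau * dy/dt."""
    dydt = np.gradient(measured, dt)   # finite-difference derivative
    return measured + tau * dydt
```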
Development of sensitivity to orthographic errors in children: An event-related potential study.
Heldmann, Marcus; Puppe, Svetlana; Effenberg, Alfred O; Münte, Thomas F
2017-09-01
To study the development of orthographic sensitivity during elementary school, we recorded event-related brain potentials (ERPs) from 2nd and 4th grade children who were exposed to line drawings of objects or animals upon which the correctly or incorrectly spelled name was superimposed. Stimulus-locked ERPs showed a modulation of a frontocentral negativity between 200 and 500 ms, which was larger for the 4th grade children but did not show an effect of spelling correctness. This effect was followed by a pronounced positive shift, which was only seen in the 4th grade children and which showed a modulation by spelling correctness. This effect can be seen as an electrophysiological correlate of orthographic sensitivity and replicates earlier findings in adults. Moreover, response-locked ERPs triggered by the children's button presses indicating orthographic (in)correctness showed a succession of waves, including the frontocentral error-related negativity and a subsequent negativity with a more posterior distribution. This latter negativity was generally larger for the 4th grade children. Only for the 4th grade children was this negativity smaller for false-alarm trials, suggesting a conscious registration of the error in these children. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
"Hook"-calibration of GeneChip-microarrays: theory and algorithm.
Binder, Hans; Preibisch, Stephan
2008-08-29
The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This requires the formulation of an appropriate model describing the basic relationship between probe intensity and specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip, and finally the development of practicable algorithms that judge the quality of a particular hybridization and estimate the expression degree from the intensity values. We present the so-called hook-calibration method, which co-processes the log-difference (delta) and log-sum (sigma) of the perfect match (PM) and mismatch (MM) probe intensities. The MM probes are utilized as an internal reference that is subject to the same hybridization law as the PM, however with modified characteristics. After sequence-specific affinity correction, the method fits the Langmuir adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook curve characterize the particular hybridization in terms of simple geometric parameters, which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM sensitivity gain and the fraction of absent probes. This graphical summary spans a metric system for expression estimates in natural units such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e. it uses the intensities of each selected chip separately. The hook method corrects the raw intensities for non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe spots with bound transcripts, and for the sequence-specific binding of specific transcripts. The obtained chip characteristics, in combination with the sensitivity-corrected probe-intensity values, provide expression estimates scaled in natural units, given by the binding constants of the particular hybridization.
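The delta/sigma coordinates of the hook plot can be sketched directly from paired PM/MM intensities; the log base, sorting, and moving-average smoothing below are our assumptions about pre-processing before the Langmuir fit, not the published algorithm's exact choices.

```python
import numpy as np

def hook_coordinates(pm, mm):
    """Log-difference (delta) and log-sum (sigma) coordinates for paired
    PM/MM intensity arrays, sorted by sigma for plotting/fitting."""
    delta = np.log10(pm) - np.log10(mm)
    sigma = 0.5 * (np.log10(pm) + np.log10(mm))
    order = np.argsort(sigma)
    return sigma[order], delta[order]

def smooth(values, window=101):
    """Simple moving-average smoothing applied before fitting the hook."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")
```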
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to the bias correction scheme, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme, based on a large regional climate model ensemble generated by the distributed weather@home setup [1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias-correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. The present study constitutes a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
Tian, Tze-Feng; Wang, San-Yuan; Kuo, Tien-Chueh; Tan, Cheng-En; Chen, Guan-Yuan; Kuo, Ching-Hua; Chen, Chi-Hsin Sally; Chan, Chang-Chuan; Lin, Olivia A; Tseng, Y Jane
2016-11-01
Two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) is superior for chromatographic separation and provides great sensitivity for complex biological fluid analysis in metabolomics. However, GC×GC/TOF-MS data processing is currently limited to vendor software and typically requires several preprocessing steps. In this work, we implement a web-based platform, which we call GC2MS, to facilitate the application of recent advances in GC×GC/TOF-MS, especially for metabolomics studies. The core processing workflow of GC2MS consists of blob/peak detection, baseline correction, and blob alignment. GC2MS treats GC×GC/TOF-MS data as pictures and clusters the pixels into blobs according to the brightness of each pixel to generate a blob table. GC2MS then aligns the blobs of two GC×GC/TOF-MS data sets according to their distance and similarity. The blob distance and similarity are the Euclidean distance of the first and second retention times of two blobs and the Pearson correlation coefficient of the two mass spectra, respectively. GC2MS also directly corrects the raw data baseline. The analytical performance of GC2MS was evaluated using GC×GC/TOF-MS data sets of Angelica sinensis compounds acquired under different experimental conditions and of human plasma samples. The results show that GC2MS is an easy-to-use tool for detecting peaks and correcting baselines, and GC2MS is able to align GC×GC/TOF-MS data sets acquired under different experimental conditions. GC2MS is freely accessible at http://gc2ms.web.cmdm.tw.
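The matching criterion described, retention-time distance plus spectral correlation, can be sketched compactly; the blob representation (a dict with two retention times and a binned spectrum), the scaling factors, and the assumption that both spectra share the same m/z bins are ours, not GC2MS internals.

```python
import numpy as np

def blob_match_score(blob_a, blob_b, rt_scale=(1.0, 1.0)):
    """Return (distance, similarity) between two blobs: Euclidean distance
    over the first and second retention times, and Pearson correlation of
    the two mass spectra (assumed binned on identical m/z axes)."""
    distance = np.hypot((blob_a["rt1"] - blob_b["rt1"]) / rt_scale[0],
                        (blob_a["rt2"] - blob_b["rt2"]) / rt_scale[1])
    similarity = np.corrcoef(blob_a["spectrum"], blob_b["spectrum"])[0, 1]
    return distance, similarity
```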
Spatiotemporal observation of transport in fractured rocks
NASA Astrophysics Data System (ADS)
Kulenkampff, Johannes; Enzmann, Frieder; Gründig, Marion; Mittmann, Hellmuth; Wolf, Martin
2010-05-01
A number of injection experiments in different rock types have been conducted with positron emission process tomography, using a high-resolution "small-animal" PET scanner (ClearPET by Raytest, Straubenhardt) for the monitoring of transport processes. The fluids are labelled with positron-emitting isotopes, e.g. 18F-, 124I-, or dissolvable complexes like K3[58Co(CN)6], without affecting their physico-chemical properties. The annihilation radiation from individual decaying tracer atoms is detected with high sensitivity, and the tomographic reconstruction of the recorded events yields quantitative 3D images of the tracer distribution. Sequential tomograms during and after tracer injection are used for the spatiotemporal observation of the fluid transport. Raw data are corrected with respect to background radiation (randoms) and Compton scattering, which turns out to be much more significant in rocks than in common biomedical applications. Although in principle these effects are exactly known, we developed and apply simplified and fast correction methods. Deficiencies of these correction algorithms generate some artefacts, which set the lower limit of detectable tracer concentration at the order of 1 kBq/µl, or about 10⁷ atoms/µl, still outranging other methods (e.g. NMR or resistivity tomography) by many orders of magnitude. New 3D visualizations of the process tomograms in fractured rocks show strongly localized and complex flow paths and, in part, unexpected deviations from the fracture structures deduced from µCT images. Such results demonstrate the potential for large discrepancies between µCT-derived parameters like pore volume and specific surface area and the hydraulically effective parameters derived by means of PET process tomography. We conclude that such discrepancies, and the complexity of transport processes in natural heterogeneous porous media, illustrate the limits of parameter determination from model simulations based on structural pore-space models, in particular as long as the simulations are not verified by experimental data.
A method to account for the temperature sensitivity of TCCON total column measurements
NASA Astrophysics Data System (ADS)
Niebling, Sabrina G.; Wunch, Debra; Toon, Geoffrey C.; Wennberg, Paul O.; Feist, Dietrich G.
2014-05-01
The Total Carbon Column Observing Network (TCCON) consists of ground-based Fourier transform spectrometer (FTS) systems around the world. It achieves better than 0.25% precision and accuracy for total column measurements of CO2 [Wunch et al. (2011)]. In recent years, the TCCON data processing and retrieval software (GGG) has been improved to achieve better and better results (e.g. ghost correction, improved a priori profiles, more accurate spectroscopy). However, a small error is also introduced by insufficient knowledge of the true temperature profile in the atmosphere above the individual instruments. This knowledge is crucial for retrieving highly precise gas concentrations. In the current version of the retrieval software, we use six-hourly NCEP reanalysis data to produce one temperature profile at local noon for each measurement day. For mid-latitude sites, which can have a large diurnal variation of temperature in the lowermost kilometers of the atmosphere, this approach can lead to small errors in the final total-column gas concentration. Here, we present and describe a method to account for the temperature sensitivity of the total column measurements. We exploit the fact that H2O is most abundant in the lowermost kilometers of the atmosphere, where the largest diurnal temperature variations occur. We use single H2O absorption lines with different temperature sensitivities to gain information about the temperature variations over the course of the day. This information is used to apply an a posteriori correction to the retrieved total-column gas concentration. In addition, we show that the a posteriori temperature correction is effective by applying it to data from Lamont, Oklahoma, USA (36.6°N, 97.5°W). We chose this site because regular radiosonde launches with a time resolution of six hours provide detailed information on the real temperature in the atmosphere and allow us to test the effectiveness of our correction. References: Wunch, D., Toon, G. C., Blavier, J.-F. L., Washenfelder, R. A., Notholt, J., Connor, B. J., Griffith, D. W. T., Sherlock, V., and Wennberg, P. O.: The Total Carbon Column Observing Network, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369, 2087-2112, 2011.
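To make the idea concrete, here is a first-order sketch of how two H2O columns retrieved from lines of different temperature sensitivity could yield a temperature offset: if each retrieved column scales as col·(1 - s·dT) for a fractional sensitivity s (per K), requiring the two corrected columns to agree gives dT in closed form. The linearization and the function and parameter names are our assumptions; the operational TCCON scheme may differ.

```python
def temperature_offset(col_a, col_b, sens_a, sens_b):
    """Solve col_a*(1 - sens_a*dT) == col_b*(1 - sens_b*dT) for dT (K),
    given columns retrieved from two H2O lines with fractional temperature
    sensitivities sens_a, sens_b (per K). Assumes the denominator is
    well away from zero, i.e. the two lines differ usefully in sensitivity."""
    return (col_a - col_b) / (col_a * sens_a - col_b * sens_b)
```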
Optical control of the Advanced Technology Solar Telescope.
Upton, Robert
2006-08-10
The Advanced Technology Solar Telescope (ATST) is an off-axis Gregorian astronomical telescope design. The ATST is expected to be subject to thermal and gravitational effects that result in misalignments of its mirrors and warping of its primary mirror. These effects require active, closed-loop correction to maintain its as-designed diffraction-limited optical performance. The simulation and modeling of the ATST with a closed-loop correction strategy are presented. The correction strategy is derived from the linear mathematical properties of two Jacobian, or influence, matrices that map the ATST rigid-body (RB) misalignments and primary mirror figure errors to wavefront sensor (WFS) measurements. The two Jacobian matrices also quantify the sensitivities of the ATST to RB and primary mirror figure perturbations. The modeled active correction strategy results in a decrease of the rms wavefront error averaged over the field of view (FOV) from 500 to 19 nm, subject to 10 nm rms WFS noise. This result is obtained utilizing nine WFSs distributed in the FOV with a 300 nm rms astigmatism figure error on the primary mirror. Correction of the ATST RB perturbations is demonstrated for an optimum subset of three WFSs with corrections improving the ATST rms wavefront error from 340 to 17.8 nm. In addition to the active correction of the ATST, an analytically robust sensitivity analysis that can be generally extended to a wider class of optical systems is presented.
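The closed-loop strategy described, built on influence (Jacobian) matrices mapping rigid-body and figure perturbations to WFS measurements, suggests a standard damped least-squares correction step, sketched below. The gain value and the API are illustrative assumptions, not the ATST control software.

```python
import numpy as np

def correction_step(jacobian, wfs_measurements, gain=0.5):
    """One closed-loop step: least-squares inversion of the influence
    matrix to estimate the current perturbation state, returned as a
    (negated, gain-damped) actuator command. Damping limits the
    propagation of WFS noise into the correction."""
    perturbation_estimate, *_ = np.linalg.lstsq(jacobian, wfs_measurements,
                                                rcond=None)
    return -gain * perturbation_estimate
```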
POCS-enhanced correction of motion artifacts in parallel MRI.
Samsonov, Alexey A; Velikina, Julia; Jung, Youngkyoo; Kholmovski, Eugene G; Johnson, Chris R; Block, Walter F
2010-04-01
A new method for correction of MRI motion artifacts induced by corrupted k-space data, acquired by multiple receiver coils such as phased arrays, is presented. In our approach, a projection-onto-convex-sets (POCS)-based method for reconstruction of sensitivity-encoded MRI data (POCSENSE) is employed to identify corrupted k-space samples. After the erroneous data are discarded from the dataset, the artifact-free images are restored from the remaining data using coil sensitivity profiles. The error detection and data restoration are based on the informational redundancy of phased-array data and may be applied to full and reduced datasets. An important advantage of the new POCS-based method is that, in addition to multicoil data redundancy, it can use a priori known properties of the imaged object for improved MR image artifact correction. The use of such information was shown to significantly improve k-space error detection and image artifact correction. The method was validated on data corrupted by simulated and real motion, such as head motion and pulsatile flow.
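A minimal sketch of a POCS-style reconstruction of this kind follows: it alternates a data-consistency projection (re-imposing the trusted, uncorrupted k-space samples) with a coil-consistency projection through the sensitivity profiles. The shapes, the combination rule, and the iteration count are our assumptions; the published POCSENSE algorithm includes additional constraint sets (e.g. object support).

```python
import numpy as np

def pocs_restore(kspace, trusted_mask, sens, n_iter=50):
    """kspace: (nc, ny, nx) multicoil data with corrupted samples already
    discarded (zeroed outside `trusted_mask`); trusted_mask: boolean array
    of the same shape marking samples deemed uncorrupted; sens: (nc, ny, nx)
    coil sensitivity profiles. Returns the restored (ny, nx) image."""
    img = np.zeros(kspace.shape[1:], dtype=complex)
    sos = np.sum(np.abs(sens) ** 2, axis=0) + 1e-12
    for _ in range(n_iter):
        coil_imgs = sens * img                           # object -> coil images
        k = np.fft.fft2(coil_imgs, axes=(-2, -1))
        k = np.where(trusted_mask, kspace, k)            # enforce trusted data
        coil_imgs = np.fft.ifft2(k, axes=(-2, -1))
        img = np.sum(np.conj(sens) * coil_imgs, axis=0) / sos  # SENSE combine
    return img
```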
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letant, S E; .Ortiz, J I; Tammero, L
2007-04-11
We have developed a nucleic acid-based assay that is rapid, sensitive, specific, and can be used for the simultaneous detection of 5 common human respiratory pathogens including influenza A, influenza B, parainfluenza types 1 and 3, respiratory syncytial virus, and adenovirus groups B, C, and E. Typically, diagnosis on an un-extracted clinical sample can be provided in less than 3 hours, including sample collection, preparation, and processing, as well as data analysis. Such a multiplexed panel would enable rapid broad-spectrum pathogen testing on nasal swabs, and therefore allow implementation of infection control measures and timely administration of antiviral therapies. This article presents a summary of the assay performance in terms of sensitivity and specificity. Limits of detection are provided for each targeted respiratory pathogen, and result comparisons are performed on clinical samples, our goal being to compare the sensitivity and specificity of the multiplexed assay to the combination of immunofluorescence and shell vial culture currently implemented at the UCDMC hospital. Overall, the use of the multiplexed RT-PCR assay reduced the rate of false negatives by 4% and the rate of false positives by up to 10%. The assay correctly identified 99.3% of the clinical negatives, 97% of adenovirus, 95% of RSV, 92% of influenza B, and 77% of influenza A without any extraction performed on the clinical samples. The data also showed that extraction will be needed for parainfluenza virus, which was only identified correctly 24% of the time on un-extracted samples.
A new frequency matching technique for FRF-based model updating
NASA Astrophysics Data System (ADS)
Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng
2017-05-01
Frequency Response Function (FRF) residues have been widely used to update finite element models. As raw measurement data, they offer rich information and avoid modal-extraction errors. However, like other sensitivity-based methods, FRF-based identification must confront ill-conditioning, which is especially severe here because the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given measured frequency, directly evaluating the theoretical FRF at that frequency may produce a huge difference between the theoretical and experimental FRFs, which amplifies the effects of measurement errors and damping. Hence, in the solution process, correctly selecting the frequency at which to evaluate the theoretical FRF in every iteration of the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. A primary tool for this selection, based on the correlation of FRFs, is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method that directly finds the frequency minimizing the difference in order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. To keep the simulation realistic, it is assumed that not all degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs each approach requires to correctly update the analytical model is used as the criterion for comparison.
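A minimal reading of the proposed selection rule, written as a hypothetical helper: among the candidate frequencies of the analytical model, pick the one whose theoretical FRF magnitude is closest to the measured value on a log10 (order-of-magnitude) scale.

```python
import numpy as np

def match_frequency(h_exp, cand_freqs, h_theo):
    """Frequency whose theoretical FRF best matches a measured FRF value.

    h_exp      : complex measured FRF at one measurement frequency
    cand_freqs : (n,) candidate frequencies of the analytical model
    h_theo     : (n,) theoretical FRF evaluated at cand_freqs
    """
    gap = np.abs(np.log10(np.abs(h_theo)) - np.log10(np.abs(h_exp)))
    return cand_freqs[np.argmin(gap)]
```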
Kundnani, Vishal K; Zhu, Lisa; Tak, HH; Wong, HK
2010-01-01
Background: Multimodal intraoperative neuromonitoring is recommended during corrective spinal surgery and has been widely used in surgery for spinal deformity with successful outcomes. Despite the increased patient safety that spinal cord monitoring has brought to corrective surgery in many large spine centers, this modality has not yet achieved widespread popularity. We report the analysis of prospectively collected intraoperative neurophysiological monitoring data of 354 consecutive patients undergoing corrective surgery for adolescent idiopathic scoliosis (AIS) to establish the efficacy of multimodal neuromonitoring and to evaluate comparative sensitivity and specificity. Materials and Methods: The study group consisted of 354 patients (female = 309; male = 45) undergoing spinal deformity corrective surgery between 2004 and 2008. Patients were monitored using electrophysiological methods, with somatosensory-evoked potentials and motor-evoked potentials recorded simultaneously. Results: Mean age of patients was 13.6 years (±2.3 years). The operative procedures involved instrumented fusion of the thoracic curve, the lumbar curve, or both. Baseline somatosensory-evoked potentials (SSEP) and neurogenic motor-evoked potentials (NMEP) were recorded successfully in all cases. Thirteen cases showed significant alerts that prompted reversal of the intervention. All 13 cases with significant alerts had detectable NMEP alerts, whereas a significant SSEP alert was detected in 8 cases. Two patients awoke with a new neurological deficit (0.56%) and had significant intraoperative SSEP + NMEP alerts. There were no false positives with SSEP (high specificity), but 5 false-negative SSEP cases (38%) reduced its sensitivity. There were no false negatives with NMEP, but 2 of 13 cases were false positives with NMEP (15%). The specificity of SSEP (100%) is higher than that of NMEP (96%); however, the sensitivity of NMEP (100%) is far better than that of SSEP (51%). Overall, the sensitivity, specificity and positive predictive value of combined multimodality neuromonitoring in this deformity series were 100, 98.5 and 85%, respectively. Conclusion: Neurogenic motor-evoked potential (NMEP) monitoring appears to be superior to conventional SSEP monitoring for identifying evolving spinal cord injury. Used in conjunction, the sensitivity and specificity of combined neuromonitoring may reach up to 100%. Multimodality monitoring with SSEP + NMEP should be the standard of care. PMID:20165679
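The reported figures reduce to standard confusion-matrix arithmetic; the sketch below shows the definitions, with hypothetical counts rather than those of this series.

```python
def diagnostic_stats(tp, fp, tn, fn):
    """Sensitivity, specificity and PPV from alert/outcome counts."""
    sensitivity = tp / (tp + fn)   # fraction of true deficits that alerted
    specificity = tn / (tn + fp)   # fraction of intact cords without alerts
    ppv = tp / (tp + fp)           # fraction of alerts that were real
    return sensitivity, specificity, ppv

# Hypothetical example: 11 true alerts, 2 false alerts, no missed deficits.
sens, spec, ppv = diagnostic_stats(tp=11, fp=2, tn=341, fn=0)
```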
Paloor, S; Aland, T; Mathew, J; Al-Hammadi, N; Hammoud, R
2012-06-01
To report on an initial investigation into the use of optically stimulated luminescent dosimeters (OSLDs) for in-vivo dosimetry for total body irradiation (TBI) treatments. Specifically, we report on the determination of angular dependence, sensitivity correction factors and dose calibration factors. The OSLDs investigated in our work were InLight/OSL nanoDot dosimeters (Landauer Inc.). NanoDots are 5-mm-diameter, 0.2-mm-thick disks of carbon-doped Al2O3, and were read using a Landauer InLight microStar reader and associated software. OSLDs were irradiated under two setup conditions: (a) typical clinical reference conditions (95 cm SSD, 5 cm depth in solid water, 10×10 cm field size), and (b) TBI conditions (520 cm SSD, 5 cm depth in solid water, 40×40 cm field size). The angular dependence was checked for angles ranging ±60 degrees from normal incidence. In order to directly compare the sensitivity correction factors, a common dose was delivered to the OSLDs for the two setups. Pre- and post-irradiation readings were acquired. OSLDs were optically annealed using three techniques: (1) on a film view box, (2) with multiple scans on a flatbed optical scanner, and (3) in natural room light. Under reference conditions, the calculated sensitivity correction factors of the OSLDs had a SD of 2.2% and a range of 5%. Under TBI conditions, the SD increased to 3.4% and the range to 6.0%. The variation in sensitivity correction factors between individual OSLDs across the two measurement conditions was up to 10.3%. Angular dependence of less than 1% was observed. The best bleaching method we found was to keep OSLDs for more than 3 hours on a film viewer, which reduced the normalized response to less than 1%. In order to obtain the most accurate results when using OSLDs for in-vivo dosimetry for TBI treatments, sensitivity correction factors and dose calibration factors should all be determined under clinical TBI conditions. © 2012 American Association of Physicists in Medicine.
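Batch sensitivity correction factors of this kind are commonly defined so that scaling each dosimeter's reading by its factor equalizes a batch given the same dose; the convention below (batch mean over individual reading) is an assumption, since the abstract does not state one.

```python
import numpy as np

def sensitivity_correction_factors(readings):
    """factor_i = batch_mean / reading_i, so reading_i * factor_i is uniform."""
    readings = np.asarray(readings, dtype=float)
    return readings.mean() / readings

counts = [182e3, 175e3, 190e3, 178e3]          # hypothetical reader counts
factors = sensitivity_correction_factors(counts)
rel_sd = factors.std(ddof=1) / factors.mean()  # relative spread of the batch
```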
Does correcting astigmatism with toric lenses improve driving performance?
Cox, Daniel J; Banton, Thomas; Record, Steven; Grabman, Jesse H; Hawkins, Ronald J
2015-04-01
Driving is a vision-based activity of daily living that impacts safety. Because visual disruption can compromise driving safety, contact lens wearers with astigmatism may pose a driving safety risk if they experience residual blur from spherical lenses that do not correct their astigmatism or if they experience blur from toric lenses that rotate excessively. Given that toric lens stabilization systems are continually improving, this preliminary study tested the hypothesis that astigmats wearing toric contact lenses, compared with spherical lenses, would exhibit better overall driving performance and driving-specific visual abilities. A within-subject, single-blind, crossover, randomized design was used to evaluate driving performance in 11 young adults with astigmatism (-0.75 to -1.75 diopters cylinder). Each participant drove a highly immersive, virtual reality driving simulator (210 degrees field of view) with (1) no correction, (2) spherical contact lens correction (ACUVUE MOIST), and (3) toric contact lens correction (ACUVUE MOIST for Astigmatism). Tactical driving skills such as steering, speed management, and braking, as well as operational driving abilities such as visual acuity, contrast sensitivity, and foot and arm reaction time, were quantified. There was a main effect for type of correction on driving performance (p = 0.05). Correction with toric lenses resulted in significantly safer tactical driving performance than no correction (p < 0.05), whereas correction with spherical lenses did not differ in driving safety from no correction (p = 0.118). Operational tests differentiated corrected from uncorrected performance for both spherical (p = 0.008) and toric (p = 0.011) lenses, but they were not sensitive enough to differentiate toric from spherical lens conditions. Given previous research showing that deficits in these tactical skills are predictive of future real-world collisions, these preliminary data suggest that correcting low to moderate astigmatism with toric lenses may be important to driving safety. Their merits relative to spherical lens correction require further investigation.
Radiosondes Corrected for Inaccuracy in RH Measurements
Miloshevich, Larry
2008-01-15
Corrections for inaccuracy in Vaisala radiosonde RH measurements have been applied to ARM SGP radiosonde soundings. The magnitude of the corrections can vary considerably between soundings. The radiosonde measurement accuracy, and therefore the correction magnitude, is a function of atmospheric conditions, mainly T, RH, and dRH/dt (humidity gradient). The corrections are also very sensitive to the RH sensor type, and there are 3 Vaisala sensor types represented in this dataset (RS80-H, RS90, and RS92). Depending on the sensor type and the radiosonde production date, one or more of the following three corrections were applied to the RH data: Temperature-Dependence correction (TD), Contamination-Dry Bias correction (C), Time Lag correction (TL). The estimated absolute accuracy of NIGHTTIME corrected and uncorrected Vaisala RH measurements, as determined by comparison to simultaneous reference-quality measurements from Holger Voemel's (CU/CIRES) cryogenic frostpoint hygrometer (CFH), is given by Miloshevich et al. (2006).
Prenatal Diagnosis of Placenta Accreta: Sonography or Magnetic Resonance Imaging?
Dwyer, Bonnie K.; Belogolovkin, Victoria; Tran, Lan; Rao, Anjali; Carroll, Ian; Barth, Richard; Chitkara, Usha
2009-01-01
Objective The purpose of this study was to compare the accuracy of transabdominal sonography and magnetic resonance imaging (MRI) for prenatal diagnosis of placenta accreta. Methods A historical cohort study was undertaken at 3 institutions identifying women at risk for placenta accreta who had undergone both sonography and MRI prenatally. Sonographic and MRI findings were compared with the final diagnosis as determined at delivery and by pathologic examination. Results Thirty-two patients who had both sonography and MRI prenatally to evaluate for placenta accreta were identified. Of these, 15 had confirmation of placenta accreta at delivery. Sonography correctly identified the presence of placenta accreta in 14 of 15 patients (93% sensitivity; 95% confidence interval [CI], 80%–100%) and the absence of placenta accreta in 12 of 17 patients (71% specificity; 95% CI, 49%–93%). Magnetic resonance imaging correctly identified the presence of placenta accreta in 12 of 15 patients (80% sensitivity; 95% CI, 60%–100%) and the absence of placenta accreta in 11 of 17 patients (65% specificity; 95% CI, 42%–88%). In 7 of 32 cases, sonography and MRI had discordant diagnoses: sonography was correct in 5 cases, and MRI was correct in 2. There was no statistical difference in sensitivity (P = .25) or specificity (P = .5) between sonography and MRI. Conclusions Both sonography and MRI have fairly good sensitivity for prenatal diagnosis of placenta accreta; however, specificity does not appear to be as good as reported in other studies. In the case of inconclusive findings with one imaging modality, the other modality may be useful for clarifying the diagnosis. PMID:18716136
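The quoted intervals match the usual normal-approximation (Wald) confidence interval for a proportion, clipped to [0, 1]; a quick check:

```python
import math

def wald_ci(k, n, z=1.96):
    """Point estimate and 95% normal-approximation CI for a proportion."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Sonography sensitivity in this cohort: 14 of 15 accretas detected.
p, lo, hi = wald_ci(14, 15)   # ~0.93 (0.80-1.00), matching the abstract
```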
Accuracy of dementia diagnosis: a direct comparison between radiologists and a computerized method.
Klöppel, Stefan; Stonnington, Cynthia M; Barnes, Josephine; Chen, Frederick; Chu, Carlton; Good, Catriona D; Mader, Irina; Mitchell, L Anne; Patel, Ameet C; Roberts, Catherine C; Fox, Nick C; Jack, Clifford R; Ashburner, John; Frackowiak, Richard S J
2008-11-01
There has been recent interest in the application of machine learning techniques to neuroimaging-based diagnosis. These methods promise fully automated, standard PC-based clinical decisions, unbiased by variable radiological expertise. We recently used support vector machines (SVMs) to separate sporadic Alzheimer's disease from normal ageing and from fronto-temporal lobar degeneration (FTLD). In this study, we compare the results to those obtained by radiologists. A binary diagnostic classification was made by six radiologists with different levels of experience on the same scans and information that had been previously analysed with SVM. SVMs correctly classified 95% (sensitivity/specificity: 95/95) of sporadic Alzheimer's disease and controls into their respective groups. Radiologists correctly classified 65-95% (median 89%; sensitivity/specificity: 88/90) of scans. SVM correctly classified another set of sporadic Alzheimer's disease in 93% (sensitivity/specificity: 100/86) of cases, whereas radiologists ranged between 80% and 90% (median 83%; sensitivity/specificity: 80/85). SVMs were better at separating patients with sporadic Alzheimer's disease from those with FTLD (SVM 89%; sensitivity/specificity: 83/95; compared to radiological range from 63% to 83%; median 71%; sensitivity/specificity: 64/76). Radiologists were always accurate when they reported a high degree of diagnostic confidence. The results show that well-trained neuroradiologists classify typical Alzheimer's disease-associated scans comparably to SVMs. However, SVMs require no expert knowledge and trained SVMs can readily be exchanged between centres for use in diagnostic classification. These results are encouraging and indicate a role for computerized diagnostic methods in clinical practice.
Macera, Annalisa; Lario, Chiara; Petracchini, Massimo; Gallo, Teresa; Regge, Daniele; Floriani, Irene; Ribero, Dario; Capussotti, Lorenzo; Cirillo, Stefano
2013-03-01
To compare the diagnostic accuracy and sensitivity of Gd-EOB-DTPA-enhanced MRI and diffusion-weighted imaging (DWI), alone and in combination, for detecting colorectal liver metastases in patients who had undergone preoperative chemotherapy. Thirty-two consecutive patients with a total of 166 liver lesions were retrospectively enrolled. Of the lesions, 144 (86.8%) were metastatic at pathology. Three image sets (1, Gd-EOB-DTPA; 2, DWI; 3, combined Gd-EOB-DTPA and DWI) were independently reviewed by two observers. Statistical analysis was performed on a per-lesion basis. Evaluation of image set 1 correctly identified 127/166 lesions (accuracy 76.5%; 95% CI 69.3-82.7) and 106/144 metastases (sensitivity 73.6%, 95% CI 65.6-80.6). Evaluation of image set 2 correctly identified 108/166 lesions (accuracy 65.1%, 95% CI 57.3-72.3) and 87/144 metastases (sensitivity 60.4%, 95% CI 51.9-68.5). Evaluation of image set 3 correctly identified 148/166 lesions (accuracy 89.2%, 95% CI 83.4-93.4) and 131/144 metastases (sensitivity 91%, 95% CI 85.1-95.1). Differences were statistically significant (P < 0.001). Notably, similar results were obtained when analysing only small lesions (<1 cm). The combination of DWI with Gd-EOB-DTPA-enhanced MRI significantly increases diagnostic accuracy and sensitivity in patients with colorectal liver metastases treated with preoperative chemotherapy, and it is particularly effective in the detection of small lesions.
Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters
NASA Technical Reports Server (NTRS)
Smith, Stephen J.
2008-01-01
We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field of view for the fewest number of read-out channels. In this contribution we extend the theoretical correlated energy position optimal filter (CEPOF) algorithm (originally developed for 2-TES continuous-absorber PoSTs) to investigate the practical implementation on multi-pixel single-TES PoSTs, or Hydras. We use numerically simulated data for a nine-absorber device, including realistic detector noise, to demonstrate an iterative scheme that converges on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of the random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that full-width-at-half-maximum energy resolution of < 8 eV, coupled with position sensitivity down to a few hundred eV, should be achievable for a fully optimized device.
The, Bertram; Flivik, Gunnar; Diercks, Ron L; Verdonschot, Nico
2008-03-01
Wear curves from individual patients often show unexplained irregularities or impossible values (negative wear). We postulated that errors in two-dimensional wear measurements are mainly the result of radiographic projection differences. We tested a new method that makes two-dimensional wear measurements of cemented THAs less sensitive to radiographic projection differences. The measurement errors that occur when radiographically projecting a three-dimensional THA were modeled. Based on the model, we developed a method to reduce the errors, thus approximating three-dimensional linear wear values, which are less sensitive to projection differences. An error analysis was performed by virtually simulating 144 wear measurements under varying conditions with and without application of the correction: the mean absolute error was reduced from 1.8 mm (range, 0-4.51 mm) to 0.11 mm (range, 0-0.27 mm). For clinical validation, radiostereometric analysis was performed on 47 patients to determine the true wear at 1, 2, and 5 years. Subsequently, wear was measured on conventional radiographs with and without the correction: the overall occurrence of errors greater than 0.2 mm was reduced from 35% to 15%. Wear measurements are less sensitive to differences in two-dimensional projection of the THA when using the correction method.
Sensitivity to synchronicity of biological motion in normal and amblyopic vision
Luu, Jennifer Y.; Levi, Dennis M.
2017-01-01
Amblyopia is a developmental disorder of spatial vision that results from abnormal early visual experience usually due to the presence of strabismus, anisometropia, or both strabismus and anisometropia. Amblyopia results in a range of visual deficits that cannot be corrected by optics because the deficits reflect neural abnormalities. Biological motion refers to the motion patterns of living organisms, and is normally displayed as points of lights positioned at the major joints of the body. In this experiment, our goal was twofold. We wished to examine whether the human visual system in people with amblyopia retained the higher-level processing capabilities to extract visual information from the synchronized actions of others, therefore retaining the ability to detect biological motion. Specifically, we wanted to determine if the synchronized interaction of two agents performing a dancing routine allowed the amblyopic observer to use the actions of one agent to predict the expected actions of a second agent. We also wished to establish whether synchronicity sensitivity (detection of synchronized versus desynchronized interactions) is impaired in amblyopic observers relative to normal observers. The two aims are differentiated in that the first aim looks at whether synchronized actions result in improved expected action predictions while the second aim quantitatively compares synchronicity sensitivity, or the ratio of desynchronized to synchronized detection sensitivities, to determine if there is a difference between normal and amblyopic observers. Our results show that the ability to detect biological motion requires more samples in both eyes of amblyopes than in normal control observers. The increased sample threshold is not the result of low-level losses but may reflect losses in feature integration due to undersampling in the amblyopic visual system. However, like normal observers, amblyopes are more sensitive to synchronized versus desynchronized interactions, indicating that higher-level processing of biological motion remains intact. We also found no impairment in synchronicity sensitivity in the amblyopic visual system relative to the normal visual system. Since there is no impairment in synchronicity sensitivity in either the nonamblyopic or amblyopic eye of amblyopes, our results suggest that the higher order processing of biological motion is intact. PMID:23474301
Rényi entropy of the totally asymmetric exclusion process
NASA Astrophysics Data System (ADS)
Wood, Anthony J.; Blythe, Richard A.; Evans, Martin R.
2017-11-01
The Rényi entropy is a generalisation of the Shannon entropy that is sensitive to the fine details of a probability distribution. We present results for the Rényi entropy of the totally asymmetric exclusion process (TASEP). We calculate explicitly an entropy whereby the squares of configuration probabilities are summed, using the matrix product formalism to map the problem to one involving a six direction lattice walk in the upper quarter plane. We derive the generating function across the whole phase diagram, using an obstinate kernel method. This gives the leading behaviour of the Rényi entropy and corrections in all phases of the TASEP. The leading behaviour is given by the result for a Bernoulli measure and we conjecture that this holds for all Rényi entropies. Within the maximal current phase the correction to the leading behaviour is logarithmic in the system size. Finally, we remark upon a special property of equilibrium systems whereby discontinuities in the Rényi entropy arise away from phase transitions, which we refer to as secondary transitions. We find no such secondary transition for this nonequilibrium system, supporting the notion that these are specific to equilibrium cases.
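For reference, the quantities involved can be written out (notation assumed here: $P(\mathcal{C})$ is the stationary probability of configuration $\mathcal{C}$). The $q \to 1$ limit recovers the Shannon entropy, and for a Bernoulli product measure of density $\rho$ on $L$ sites the summed-squares entropy is extensive in the system size:

```latex
H_q = \frac{1}{1-q}\,\ln \sum_{\mathcal{C}} P(\mathcal{C})^{q},
\qquad
\lim_{q\to 1} H_q = -\sum_{\mathcal{C}} P(\mathcal{C}) \ln P(\mathcal{C}),
\qquad
H_2^{\mathrm{Bernoulli}} = -L \ln\bigl[\rho^{2} + (1-\rho)^{2}\bigr].
```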
Augmented Central Pain Processing in Vulvodynia
Hampson, Johnson P.; Reed, Barbara D.; Clauw, Daniel J.; Bhavsar, Rupal; Gracely, Richard H.; Haefner, Hope K.; Harris, Richard E.
2013-01-01
Vulvodynia (VVD) is a chronic pain disorder, wherein women display sensitivity to evoked stimuli at the vulva and/or spontaneous vulvar pain. Our previous work suggests generalized hyperalgesia in this population, however little is known about central neurobiological factors that may influence pain in VVD. Here we investigated local (vulvar) and remote (thumb) pressure evoked pain processing in 24 VVD patients compared to 13 age-matched, pain-free healthy controls (HC). As a positive control we also examined thumb pressure pain in 24 fibromyalgia (FM) patients. The VVD and FM patients displayed overlapping insular brain activations that were greater than HC, in response to thumb stimulation (P<0.005 corrected). Compared to HC, VVD participants displayed greater levels of activation during thumb stimulation within the insula, dorsal mid-cingulate, posterior cingulate and thalamus (P<0.005 corrected). Significant differences between VVD subgroups (primary versus secondary and provoked versus unprovoked) were seen within the posterior cingulate with thumb stimulation, and within the precuneus region with vulvar stimulation (provoked versus unprovoked only). The augmented brain activation in VVD patients in response to a stimulus remote from the vulva suggests central neural pathology in this disorder. Moreover, differing central activity between VVD subgroups suggests heterogeneous pathologies within this diagnosis. PMID:23578957
NASA Technical Reports Server (NTRS)
Macmillan, Daniel S.; Han, Daesoo
1989-01-01
The attitude of the Nimbus-7 spacecraft has varied significantly over its lifetime. A summary of the orbital and long-term behavior of the attitude angles and the effects of attitude variations on Scanning Multichannel Microwave Radiometer (SMMR) brightness temperatures is presented. One of the principal effects of these variations is to change the incident angle at which the SMMR views the Earth's surface. The brightness temperatures depend upon the incident-angle sensitivities of both the ocean surface emissivity and the atmospheric path length. Ocean surface emissivity is quite sensitive to incident-angle variation near the SMMR incident angle, which is about 50 degrees. This sensitivity was estimated theoretically for a smooth ocean surface and no atmosphere. A 1-degree increase in the angle of incidence produces a 2.9 °C increase in the retrieved sea surface temperature and a 5.7 m/sec decrease in the retrieved sea surface wind speed. An incident-angle correction is applied to the SMMR radiances before using them in the geophysical parameter retrieval algorithms. The corrected retrieval data are compared with data obtained without applying the correction.
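Taken at face value, the quoted sensitivities imply a simple first-order adjustment of the retrieved products; the sketch below is purely illustrative, since the operational correction is applied to the radiances before retrieval rather than to the retrievals themselves.

```python
def incident_angle_adjust(sst_c, wind_ms, theta_deg, theta_ref=50.0):
    """First-order removal of incident-angle effects using the quoted
    sensitivities (+2.9 degC in SST and -5.7 m/s in wind per degree)."""
    d = theta_deg - theta_ref
    return sst_c - 2.9 * d, wind_ms + 5.7 * d
```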
NASA Astrophysics Data System (ADS)
Guillong, M.; Schmitt, A. K.; Bachmann, O.
2015-04-01
Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) of eight zircon reference materials and synthetic zircon-hafnon end-members indicate that corrections for abundance sensitivity and molecular zirconium sesquioxide ions (Zr2O3+) are critical for reliable determination of 230Th abundances in zircon. Other polyatomic interferences in the mass range 223-233 amu are insignificant. When corrected for abundance sensitivity and interferences, activity ratios of (230Th)/(238U) for the zircon reference materials we used average 1.001 ± 0.010 (1σ error; mean square of weighted deviates MSWD = 1.45; n = 8). This includes the 91500 and Plešovice zircons, which were deemed unsuitable for calibration of (230Th)/(238U) by Ito (2014). Uranium series zircon ages generated by LA-ICP-MS without mitigating (e.g., by high mass resolution) or correcting for abundance sensitivity and molecular interferences on 230Th such as those presented by Ito (2014) are potentially unreliable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krzyżanowska, A.; Deptuch, G. W.; Maj, P.
This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a CMOS 40-nm process, operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates the charge-sharing-related uncertainties, namely, the dependence of the number of registered photons on the discriminator's threshold, set for monochromatic irradiation, and errors in the assignment of an event to a certain pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr. chip. The analog and digital functionalities allowing for proper operation of the C8P1 algorithm are described, namely, an offset correction for two discriminators independently, two-stage gain correction, and different operation modes of the digital blocks. The results of tests of the C8P1 operation are presented for the chip bump-bonded to a silicon sensor and exposed to the 3.5-μm-wide pencil beam of 8-keV photons of synchrotron radiation. It was studied how sensitive the algorithm performance is to the chip settings, as well as to the uniformity of parameters of the analog front-end blocks. The presented results prove that the C8P1 algorithm enables counting all photons hitting the detector in between readout channels and retrieving the actual photon energy.
Modeling bias and variation in the stochastic processes of small RNA sequencing
Etheridge, Alton; Sakhanenko, Nikita; Galas, David
2017-01-01
The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
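The stated mean-variance relation can be checked directly on replicate counts; in this sketch the parameterisation sigma^2 = mu + phi * mu^2 and all numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical replicate means and variances for a few sequences.
mu = np.array([5.0, 20.0, 80.0, 320.0])
var = np.array([9.0, 70.0, 600.0, 9000.0])

# Least-squares fit of (var - mu) = phi * mu^2 through the origin.
x = mu ** 2
phi = float((var - mu) @ x / (x @ x))

# An empirical ligase-bias correction is then a per-sequence scale factor:
# corrected_count = raw_count / bias_factor[sequence]
```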
NASA Astrophysics Data System (ADS)
Ducoté, Julien; Dettoni, Florent; Bouyssou, Régis; Le-Gratiet, Bertrand; Carau, Damien; Dezauzier, Christophe
2015-03-01
Patterning process control for advanced nodes has required major changes over the last few years. Process control needs for critical patterning levels have been extremely aggressive since the 28nm technology node, showing that metrology accuracy/sensitivity must be finely tuned. The introduction of pitch splitting (Litho-Etch-Litho-Etch) at the 14nm FDSOI node requires the development of specific metrologies to support advanced process control (for CD, overlay and focus corrections). The pitch splitting process leads to final line CD uniformities that are a combination of the CD uniformities of the two exposures, while the space CD uniformities depend on both CD and OVL variability. In this paper, investigations of CD and OVL process control of the 64nm minimum pitch at the Metal1 level of 14nm FDSOI technology, within the double patterning process flow (litho, hard mask etch, line etch), are presented. Various measurements with SEMCD tools (Hitachi) and overlay tools (KT for Image Based Overlay - IBO, and ASML for Diffraction Based Overlay - DBO) are compared. Metrology targets are embedded within a block instanced several times within the field to characterize intra-field process variations. Specific SEMCD targets were designed for independent measurement of both line CD (A and B) and space CD (A to B and B to A) for each exposure within a single measurement during the DP flow. Based on those measurements, the correlation between overlay determined with SEMCD and with standard overlay tools can be evaluated. Such correlation at different steps through the DP flow is investigated with respect to metrology type. Process correction models are evaluated with respect to measurement type and intra-field sampling.
ERIC Educational Resources Information Center
Khelifi, Rachid; Sparrow, Laurent; Casalis, Severine
2012-01-01
This study aimed at examining sensitivity to lateral linguistic and nonlinguistic information in third and fifth grade readers. A word identification task with a threshold was used, and targets were displayed foveally with or without distractors. Sensitivity to lateral information was inferred from the deterioration of the rate of correct word…
The impact of missing trauma data on predicting massive transfusion
Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.
2013-01-01
INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24h after hospital admission. Subjects who received ≥ 10 RBC units within 24h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, the upper-lower bound ranges of correct classification per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models in the presence of missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided results similar to complete case analysis in this study. PMID:23778514
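One way to realize such bounds is to enumerate, per subject, the extreme completions of the missing predictors and track the best and worst achievable agreement with the outcome; the sketch below assumes binary predictors and is not necessarily the authors' exact procedure.

```python
import itertools
import numpy as np

def correct_classification_bounds(rows, y, predict):
    """Lower/upper bounds on percent correctly classified.

    rows    : list of float arrays, np.nan marking missing (binary) predictors
    y       : 0/1 outcomes (e.g., massive transfusion yes/no)
    predict : function mapping a complete predictor vector to 0 or 1
    """
    best = worst = 0
    for x, yi in zip(rows, y):
        miss = np.isnan(x)
        hits = []
        for fill in itertools.product([0.0, 1.0], repeat=int(miss.sum())):
            z = x.copy()
            z[miss] = fill
            hits.append(predict(z) == yi)
        best += max(hits)     # most favorable completion
        worst += min(hits)    # least favorable completion
    n = len(y)
    return 100.0 * worst / n, 100.0 * best / n
```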
Zeng, Tao; Mao, Wen; Lu, Qing
2016-05-25
Scalp-recorded event-related potentials are known to be sensitive to particular aspects of sentence processing. The N400 component is widely recognized as an effect closely related to lexical-semantic processing. The absence of an N400 effect in participants performing tasks in Indo-European languages has been considered evidence that failed syntactic category processing appears to block lexical-semantic integration and that syntactic structure building is a prerequisite of semantic analysis. An event-related potential experiment was designed to investigate whether such syntactic primacy can be considered to apply equally to Chinese sentence processing. Besides correct middles, sentences with either single semantic or single syntactic violation as well as double syntactic and semantic anomaly were used in the present research. Results showed that both purely semantic and combined violation induced a broad negativity in the time window 300-500 ms, indicating the independence of lexical-semantic integration. These findings provided solid evidence that lexical-semantic parsing plays a crucial role in Chinese sentence comprehension.
Extending 3D Near-Cloud Corrections from Shorter to Longer Wavelengths
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Evans, K. Frank; Varnai, Tamas; Guoyong, Wen
2014-01-01
Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.
PROMO – Real-time Prospective Motion Correction in MRI using Image-based Tracking
White, Nathan; Roddey, Cooper; Shankaranarayanan, Ajit; Han, Eric; Rettmann, Dan; Santos, Juan; Kuperman, Josh; Dale, Anders
2010-01-01
Artifacts caused by patient motion during scanning remain a serious problem in most MRI applications. The prospective motion correction technique attempts to address this problem at its source by keeping the measurement coordinate system fixed with respect to the patient throughout the entire scan process. In this study, a new image-based approach for prospective motion correction is described, which utilizes three orthogonal 2D spiral navigator acquisitions (SP-Navs) along with a flexible image-based tracking method based on the Extended Kalman Filter (EKF) algorithm for online motion measurement. The SP-Nav/EKF framework offers the advantages of image-domain tracking within patient-specific regions-of-interest and reduced sensitivity to off-resonance-induced corruption of rigid-body motion estimates. The performance of the method was tested using offline computer simulations and online in vivo head motion experiments. In vivo validation results covering a broad range of staged head motions indicate a steady-state error of the SP-Nav/EKF motion estimates of less than 10 % of the motion magnitude, even for large compound motions that included rotations over 15 degrees. A preliminary in vivo application in 3D inversion recovery spoiled gradient echo (IR-SPGR) and 3D fast spin echo (FSE) sequences demonstrates the effectiveness of the SP-Nav/EKF framework for correcting 3D rigid-body head motion artifacts prospectively in high-resolution 3D MRI scans. PMID:20027635
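The tracking core is an extended Kalman filter. A generic single predict/update step is sketched below; the actual PROMO state vector and its process/measurement models are not spelled out in the abstract, so `f`, `h`, and the Jacobians here are placeholders.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One generic EKF cycle: predict with the motion model, then update
    with a new (e.g., navigator-derived) measurement.

    x, P : prior state estimate and covariance
    z    : new measurement vector
    f, h : process and measurement functions
    F, H : Jacobians of f and h at the current estimate
    Q, R : process and measurement noise covariances
    """
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new
```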
Improving Terminology Mapping in Clinical Text with Context-Sensitive Spelling Correction.
Dziadek, Juliusz; Henriksson, Aron; Duneld, Martin
2017-01-01
The mapping of unstructured clinical text to an ontology facilitates meaningful secondary use of health records but is non-trivial due to lexical variation and the abundance of misspellings in hurriedly produced notes. Here, we apply several spelling correction methods to Swedish medical text and evaluate their impact on SNOMED CT mapping; first in a controlled evaluation using medical literature text with induced errors, followed by a partial evaluation on clinical notes. It is shown that the best-performing method is context-sensitive, taking into account trigram frequencies and utilizing a corpus-based dictionary.
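A context-sensitive ranking in this spirit can be sketched with trigram counts and a unigram fallback; the Swedish tokens and counts below are hypothetical.

```python
def best_correction(left_context, candidates, trigram_freq, unigram_freq):
    """Rank spelling candidates by the corpus frequency of the trigram they
    complete, breaking ties with plain unigram frequency."""
    def score(word):
        return (trigram_freq.get((*left_context, word), 0),
                unigram_freq.get(word, 0))
    return max(candidates, key=score)

trigrams = {("akut", "hjart", "infarkt"): 42}   # hypothetical counts
unigrams = {"infarkt": 310, "infrakt": 0}
best = best_correction(("akut", "hjart"), ["infarkt", "infrakt"],
                       trigrams, unigrams)       # -> "infarkt"
```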
X-ray fluorescence holography studies for a Cu3Au crystal
NASA Astrophysics Data System (ADS)
Dąbrowski, K. M.; Dul, D. T.; Jaworska-Gołąb, T.; Rysz, J.; Korecki, P.
2015-12-01
In this work we show that performing a numerical correction for beam attenuation and indirect excitation allows one to fully restore element sensitivity in the three-dimensional reconstruction of the atomic structure. This is exemplified by a comparison of atomic images reconstructed from holograms measured for ordered and disordered phases of a Cu3Au crystal that clearly show sensitivity to changes in occupancy of the atomic sites. Moreover, the numerical correction, which is based on quantitative methods of X-ray fluorescence spectroscopy, was extended to take into account the influence of a disturbed overlayer in the sample.
Auditory feedback blocks memory benefits of cueing during sleep
Schreiner, Thomas; Lehmann, Mick; Rasch, Björn
2015-01-01
It is now widely accepted that re-exposure to memory cues during sleep reactivates memories and can improve later recall. However, the underlying mechanisms are still unknown. As reactivation during wakefulness renders memories sensitive to updating, it remains an intriguing question whether reactivated memories during sleep also become susceptible to incorporating further information after the cue. Here we show that the memory benefits of cueing Dutch vocabulary during sleep are in fact completely blocked when memory cues are directly followed by either correct or conflicting auditory feedback, or a pure tone. In addition, immediate (but not delayed) auditory stimulation abolishes the characteristic increases in oscillatory theta and spindle activity typically associated with successful reactivation during sleep as revealed by high-density electroencephalography. We conclude that plastic processes associated with theta and spindle oscillations occurring during a sensitive period immediately after the cue are necessary for stabilizing reactivated memory traces during sleep. PMID:26507814
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, Nathaniel; Gu, Jiayin; Liu, Zhen
2016-03-09
Here, we study angular observables in the $e^{+}e^{-}\to ZH\to \ell^{+}\ell^{-}b\bar{b}$ channel at future circular $e^{+}e^{-}$ colliders such as CEPC and FCC-ee. Taking into account the impact of realistic cut acceptance and detector effects, we forecast the precision of six angular asymmetries at CEPC (FCC-ee) with center-of-mass energy $\sqrt{s}=240$ GeV and 5 (30) ab$^{-1}$ integrated luminosity. We then determine the projected sensitivity to a range of operators relevant for the Higgs-strahlung process in the dimension-6 Higgs EFT. Our results show that angular observables provide complementary sensitivity to rate measurements when constraining various tensor structures arising from new physics. We further find that angular asymmetries provide a novel means of both probing BSM corrections to the HZγ coupling and constraining the "blind spot" in indirect limits on supersymmetric scalar top partners.
Research on ground-based LWIR hyperspectral imaging remote gas detection
NASA Astrophysics Data System (ADS)
Yang, Zhixiong; Yu, Chunchao; Zheng, Weijian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong
2015-10-01
Recent progress in ground-based long-wave infrared remote sensing is presented, with a detailed description of windowed spatially and temporally modulated Fourier-transform imaging spectroscopy. The prototype forms interference fringes with a corner-cube spatial-modulation Michelson interferometer, using a cooled long-wave infrared photovoltaic staring FPA (focal plane array) detector. LWIR hyperspectral imaging is achieved by processing the data cube through collection, reorganization, correction, apodization, FFT, etc. Noise equivalent sensor response (NESR), the sensitivity index of the CHIPED-1 LWIR hyperspectral imaging prototype, can reach 5.6×10⁻⁸ W/(cm⁻¹·sr·cm²) at a single sampling. Hyperspectral imaging is used in the field of organic gas (VOC) infrared detection. Relative to broadband infrared imaging, it offers advantages such as high sensitivity, strong anti-interference ability, and the capacity to identify gas species.
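The interferogram-to-spectrum step in that chain is textbook Fourier spectroscopy; a minimal sketch, with Hanning apodization as an assumed choice:

```python
import numpy as np

def interferogram_to_spectrum(igram):
    """DC removal, apodization, FFT: one pixel's interferogram to spectrum."""
    x = igram - igram.mean()          # remove the DC (mean) level
    x = x * np.hanning(x.size)        # apodize to suppress ringing
    return np.abs(np.fft.rfft(x))     # one-sided magnitude spectrum
```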
NASA Astrophysics Data System (ADS)
Ade, N.; Nam, T. L.; Mhlanga, S. H.
2013-05-01
Although the near-tissue equivalence of diamond allows the direct measurement of dose for clinical applications without the need for energy corrections, it is often cited that diamond detectors require pre-irradiation, a procedure necessary to stabilize the response or sensitivity of a diamond detector before dose measurements. In addition, it has been pointed out that relative dose measured with a diamond detector requires a dose rate dependence correction, and that the angular dependence of a detector could be due to its mechanical design or to the intrinsic angular sensitivity of the detection process. While the cause of the instability of response has not been meticulously investigated, the issue of dose rate dependence correction is uncertain, as some studies ignored it but reported good results. The aims of this study were therefore to investigate (1) the major cause of the unstable response of diamond detectors requiring pre-irradiation; (2) the influence of dose rate dependence correction on relative dose measurements; and (3) the angular dependence of the diamond detectors. The study was conducted with low-energy X-rays and electron therapy beams on HPHT and CVD synthesized diamonds. Ionization chambers were used for comparative measurements. Through systematic investigations, the major cause of the unstable response of diamond detectors requiring the recommended pre-irradiation step was isolated and attributed to the presence and effects of ambient light. The variation in a detector's response between measurements in light and dark conditions could be as high as 63% for a CVD diamond. Dose rate dependence parameters (Δ values) of 0.950 and 1.035 were found for the HPHT and CVD diamond detectors, respectively. Without corrections for dose rate dependence, the relative differences between depth-doses measured with the diamond detectors and a Markus chamber for exposures to 7 and 14 MeV electron beams were within 2.5%. A dose rate dependence correction using the Δ values obtained seemed to worsen the performance of the HPHT sample (by up to about 3.3%) but had a marginal effect on the performance of the CVD sample. In addition, the angular response of the CVD diamond detector was shown to be comparable with that of a cylindrical chamber. This study concludes that once the responses of the diamond detectors have been stabilised and they are properly shielded from ambient light, pre-irradiation prior to each measurement is not required. Also, relative doses measured with the diamond detectors do not require dose rate dependence corrections, as the required correction is only marginal and may have no dosimetric significance.
Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.
2017-01-01
Superparamagnetic Relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of the SPION using Superconducting Quantum Interference Device (SQUID) sensors. Substantial research has been carried out in developing the SQUID hardware and in improving the properties of the SPION. However, little research has been done on the pre-processing of sensor signals and post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bound SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources, without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity of detecting and localizing sources from SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions for a poor-SNR condition similar to the SPMR detection sensitivity on the order of 1000 cells. We believe such algorithms will help establish industrial standards for SPMR when applying the technique in pre-clinical and clinical settings. PMID:28072579
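The multi-start idea can be illustrated on the decay-fitting step: restart a nonlinear fit from many random initial guesses and keep the lowest-residual solution. The exponential-plus-offset model below is an assumption for illustration; SPMR decay models vary.

```python
import numpy as np
from scipy.optimize import curve_fit

def multistart_decay_fit(t, b, n_starts=20, seed=0):
    """Fit b(t) ~ a*exp(-t/tau) + c, keeping the best of n_starts restarts."""
    model = lambda t, a, tau, c: a * np.exp(-t / tau) + c
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(n_starts):
        p0 = [b.max() * rng.uniform(0.5, 2.0), rng.uniform(0.1, 10.0), 0.0]
        try:
            p, _ = curve_fit(model, t, b, p0=p0, maxfev=2000)
        except RuntimeError:          # a start that fails to converge
            continue
        cost = float(((model(t, *p) - b) ** 2).sum())
        if cost < best_cost:
            best, best_cost = p, cost
    return best
```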
Post - SM4 Flux Calibration of the STIS Echelle Modes
NASA Astrophysics Data System (ADS)
Bostroem, Azalee; Aloisi, A.; Bohlin, R. C.; Proffitt, C. R.; Osten, R. A.; Lennon, D.
2010-07-01
Like all STIS spectroscopic modes, STIS echelle modes show a wavelength dependent decline in detector sensitivity with time. The echelle sensitivity is further affected by a time-dependent shift in the blaze function. To better correct the effects of the echelle sensitivity loss and the blaze function changes, we derive new baselines for echelle sensitivities from post-HST Servicing Mission 4 observations of the standard star G191-B2B. We present how these baseline sensitivities compare to pre-failure trends.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor that increases with the code distance.
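A minimal stand-in for the estimation/prediction step, using scikit-learn's Gaussian process regressor on synthetic drifting error-rate data; the kernel choice and all numbers are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical per-round error-rate estimates derived from syndrome data.
t = np.arange(50, dtype=float).reshape(-1, 1)              # QEC rounds
rng = np.random.default_rng(1)
p_obs = 1e-3 + 2e-4 * np.sin(t[:, 0] / 8) + rng.normal(0, 5e-5, 50)

# Smooth the past estimates and predict the next round's error rate.
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-8,
                                              noise_level_bounds=(1e-12, 1e-4))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, p_obs)
p_next, p_std = gp.predict(np.array([[55.0]]), return_std=True)
```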
Computed Tomography Window Blending: Feasibility in Thoracic Trauma.
Mandell, Jacob C; Wortman, Jeremy R; Rocha, Tatiana C; Folio, Les R; Andriole, Katherine P; Khurana, Bharti
2018-02-07
This study aims to demonstrate the feasibility of processing computed tomography (CT) images with a custom window blending algorithm that combines soft-tissue, bone, and lung window settings into a single image; to compare the time for interpretation of chest CT for thoracic trauma with window blending and conventional window settings; and to assess diagnostic performance of both techniques. Adobe Photoshop was scripted to process axial DICOM images from retrospective contrast-enhanced chest CTs performed for trauma with a window-blending algorithm. Two emergency radiologists independently interpreted the axial images from 103 chest CTs with both blended and conventional windows. Interpretation time and diagnostic performance were compared with Wilcoxon signed-rank test and McNemar test, respectively. Agreement with Nexus CT Chest injury severity was assessed with the weighted kappa statistic. A total of 13,295 images were processed without error. Interpretation was faster with window blending, resulting in a 20.3% time saving (P < .001), with no difference in diagnostic performance, within the power of the study to detect a difference in sensitivity of 5% as determined by post hoc power analysis. The sensitivity of the window-blended cases was 82.7%, compared to 81.6% for conventional windows. The specificity of the window-blended cases was 93.1%, compared to 90.5% for conventional windows. All injuries of major clinical significance (per Nexus CT Chest criteria) were correctly identified in all reading sessions, and all negative cases were correctly classified. All readers demonstrated near-perfect agreement with injury severity classification with both window settings. In this pilot study utilizing retrospective data, window blending allows faster preliminary interpretation of axial chest CT performed for trauma, with no significant difference in diagnostic performance compared to conventional window settings. Future studies would be required to assess the utility of window blending in clinical practice. Copyright © 2018 The Association of University Radiologists. All rights reserved.
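A window-blending transform of this kind can be sketched in a few lines; the window settings below are common defaults and the plain weighted average is an assumption, not the authors' Photoshop script.

```python
import numpy as np

def apply_window(hu, center, width):
    """Map HU values through a display window (center/width) onto [0, 1]."""
    lo = center - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)

def blend_windows(hu, weights=(0.4, 0.3, 0.3)):
    """Combine soft-tissue, bone and lung windows into one display image."""
    soft = apply_window(hu, center=50, width=400)
    bone = apply_window(hu, center=500, width=2000)
    lung = apply_window(hu, center=-600, width=1500)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return w[0] * soft + w[1] * bone + w[2] * lung
```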
Using bivariate signal analysis to characterize the epileptic focus: the benefit of surrogates.
Andrzejak, R G; Chicharro, D; Lehnertz, K; Mormann, F
2011-04-01
The disease epilepsy is related to hypersynchronous activity of networks of neurons. While acute epileptic seizures are the most extreme manifestation of this hypersynchronous activity, an elevated level of interdependence of neuronal dynamics is thought to persist also during the seizure-free interval. In multichannel recordings from brain areas involved in the epileptic process, this interdependence can be reflected in an increased linear cross correlation but also in signal properties of higher order. Bivariate time series analysis comprises a variety of approaches, each with different degrees of sensitivity and specificity for interdependencies reflected in lower- or higher-order properties of pairs of simultaneously recorded signals. Here we investigate which approach is best suited to detect putatively elevated interdependence levels in signals recorded from brain areas involved in the epileptic process. For this purpose, we use the linear cross correlation that is sensitive to lower-order signatures of interdependence, a nonlinear interdependence measure that integrates both lower- and higher-order properties, and a surrogate-corrected nonlinear interdependence measure that aims to specifically characterize higher-order properties. We analyze intracranial electroencephalographic recordings of the seizure-free interval from 29 patients with an epileptic focus located in the medial temporal lobe. Our results show that all three approaches detect higher levels of interdependence for signals recorded from the brain hemisphere containing the epileptic focus as compared to signals recorded from the opposite hemisphere. For the linear cross correlation, however, these differences are not significant. For the nonlinear interdependence measure, results are significant but only of moderate accuracy with regard to the discriminative power for the focal and nonfocal hemispheres. The highest significance and accuracy is obtained for the surrogate-corrected nonlinear interdependence measure.
Global trends in ocean phytoplankton: a new assessment using revised ocean colour data.
Gregg, Watson W; Rousseaux, Cécile S; Franz, Bryan A
2017-01-01
A recent revision of the NASA global ocean colour record shows changes in global ocean chlorophyll trends. This new 18-year time series now includes three global satellite sensors, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS-Aqua), and the Visible Infrared Imaging Radiometer Suite (VIIRS). The major changes are radiometric drift correction, a new chlorophyll algorithm, and the addition of a new sensor, VIIRS. The new satellite data record shows no significant trend in global annual median chlorophyll from 1998 to 2015, in contrast to a statistically significant negative trend from 1998 to 2012 in the previous version. When revised satellite data are assimilated into a global ocean biogeochemical model, no trend is observed in global annual median chlorophyll. This is consistent with previous findings for the 1998-2012 time period using the previous processing version and only two sensors (SeaWiFS and MODIS). Detecting trends in ocean chlorophyll with satellites is sensitive to data processing options and radiometric drift correction. The assimilation of these data, however, reduces sensitivity to algorithms and radiometry, as well as to the addition of a new sensor. This suggests the assimilation model has skill in detecting trends in global ocean colour. Using the assimilation model, spatial distributions of significant trends for the 18-year record (1998-2015) show recent decadal changes. Most notable are the North and Equatorial Indian Ocean basins, which exhibit a striking decline in chlorophyll. It is exemplified by declines in diatoms and chlorophytes, which in the model are large- and intermediate-size phytoplankton. This decline is partially compensated by significant increases in cyanobacteria, which represent very small phytoplankton. This suggests the beginning of a shift in phytoplankton composition in these tropical and subtropical Indian basins.
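A trend test of the kind described, a linear fit to annual median chlorophyll with a significance threshold, can be sketched as follows; the data here are synthetic stand-ins, not the SeaWiFS/MODIS/VIIRS record.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1998, 2016)                   # 18-year record
chl = 0.14 + rng.normal(0, 0.005, years.size)   # synthetic annual medians, mg m^-3

res = stats.linregress(years, chl)
print(f"trend = {res.slope:+.5f} mg m^-3 yr^-1, p = {res.pvalue:.3f}")
if res.pvalue >= 0.05:
    print("no statistically significant trend")
```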
NASA Astrophysics Data System (ADS)
Chen, Y.; Wang, J.; Wang, H. H.; Yang, L.; Chen, W.; Xu, Y. T.
2016-08-01
The double-fed induction generator (DFIG) is sensitive to grid disturbances, so with the rapid growth in installed DFIG capacity, the security and stability of both the grid and the DFIG itself are under threat. It is therefore important to study the dynamic response of the DFIG when a voltage-drop fault occurs in the power system. In this paper, mathematical models and the control strategies for the mechanical and electrical response processes are first introduced. Analysis of the response process then shows that the dynamic response characteristics are related to the voltage-drop level, the operating status of the DFIG, and the control strategy adopted on the rotor side. Finally, the conclusions are validated by simulations of the mechanical and electrical response processes for different voltage-drop levels and different DFIG output levels on the DIgSILENT/PowerFactory software platform.
Satellite on-board processing for earth resources data
NASA Technical Reports Server (NTRS)
Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.
1975-01-01
Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and for data bulk reduction are presented along with a candidate data format. The computational requirements needed to implement the data analysis algorithms are included, along with a review of computer architectures and organizations. Computer architectures capable of handling the algorithm computational requirements are suggested, and the environmental effects of an on-board processor are discussed. By relating performance parameters to the system requirements of each user application, the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs, by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy to detect strong imposed trends. However, these were considerably lower in the weak or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences the results of growth-trend studies. We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analysis. Finally, we recommend SCI and RCS, as these methods showed the highest reliability to detect long-term growth trends.
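Of the four GDMs, basal area correction is the simplest to state: cumulative ring widths give a radius series, and squaring it converts diameter growth into basal-area growth. A minimal sketch, assuming circular stems and ignoring bark:

```python
import numpy as np

def basal_area_increments(ring_widths_mm):
    """Convert a ring-width series (pith to bark) into annual basal-area
    increments, the transformation used by the BAC method."""
    radius = np.cumsum(ring_widths_mm)        # cumulative radius, mm
    ba = np.pi * radius ** 2                  # basal area, mm^2
    return np.diff(ba, prepend=0.0)           # year-to-year BA increment

widths = np.array([2.1, 1.9, 1.8, 1.7, 1.6])  # hypothetical ring widths, mm
print(basal_area_increments(widths))
```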
Towards Compensation Correctness in Interactive Systems
NASA Astrophysics Data System (ADS)
Vaz, Cátia; Ferreira, Carla
One fundamental idea of service-oriented computing is that applications should be developed by composing already available services. Due to the long-running nature of service interactions, a main challenge in service composition is ensuring correctness of failure recovery. In this paper, we use a process calculus suitable for modelling long-running transactions with a recovery mechanism based on compensations. Within this setting, we discuss and formally state correctness criteria for compensable process compositions, assuming that each process is correct with respect to failure recovery. Under our theory, we formally interpret self-healing compositions, which can detect and recover from failures, as correct compositions of compensable processes.
Fabrication development for ODS-superalloy, air-cooled turbine blades
NASA Technical Reports Server (NTRS)
Moracz, D. J.
1984-01-01
MA-6000 is a gamma-prime- and oxide-dispersion-strengthened superalloy made by mechanical alloying. At the initiation of this program, MA-6000 was available as an experimental alloy only and did not go into production until late in the program. The objective of this program was to develop a thermal-mechanical-processing approach which would yield the necessary elongated grain structure and desirable mechanical properties after conventional press forging. Forging evaluations were performed to select optimum thermal-mechanical-processing conditions. These forging evaluations indicated that MA-6000 was extremely sensitive to die chilling. In order to conventionally hot forge the alloy, an adherent cladding, either the original extrusion can or a thick plating, was required to prevent cracking of the workpiece. Die design must reflect the cladding requirement. MA-6000 was found to be sensitive to the forging temperature. The correct temperature required to obtain the proper grain structure after recrystallization was found to be between 1010 and 1065 C (1850 and 1950 F). The deformation level did not affect subsequent recrystallization; however, sharp transition areas in tooling designs should be avoided in forming a blade shape because of the potential for grain structure discontinuities. Starting material to be used for forging should be processed so that it is capable of being zone annealed to a coarse elongated grain structure as bar stock. This conclusion means that standard processed bar materials can be used.
NASA Astrophysics Data System (ADS)
Tsuchiya, Yuichiro; Kodera, Yoshie
2006-03-01
In the picture archiving and communication system (PACS) environment, it is important that all images be stored in the correct location. However, if information such as the patient's name or identification number has been entered incorrectly, it is difficult to notice the error. The present study was performed to develop a system for automatic patient collation in dynamic radiographic examinations based on kinetic analysis, and to evaluate the performance of the system. Dynamic chest radiographs during respiration were obtained using a modified flat-panel detector system. The computer algorithm developed in this study consisted of two main procedures: kinetic map image processing and collation processing. Kinetic map processing is a new algorithm for visualizing movement in dynamic radiography; direction classification of optical flows and an intensity-density transformation technique were performed. Collation processing consisted of analysis with an artificial neural network (ANN) and discrimination based on Mahalanobis' generalized distance; these procedures were performed to evaluate the similarity of image combinations from the same person. Finally, we investigated the performance of our system using eight healthy volunteers' radiographs. The performance was expressed as sensitivity and specificity, which were both 100%. This result indicates that our system has excellent performance for patient recognition. Our system will be useful in PACS management for dynamic chest radiography.
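The direction-classification step of the kinetic map can be sketched with a standard dense optical-flow routine; this is a hypothetical reconstruction assuming OpenCV's Farnebäck flow, not the authors' implementation, and it omits the intensity-density transformation and the ANN collation stage.

```python
import numpy as np
import cv2

def kinetic_map(frames, n_bins=4):
    """Classify optical-flow directions between consecutive frames and
    accumulate them into a per-pixel map of the dominant motion direction."""
    h, w = frames[0].shape
    hist = np.zeros((n_bins, h, w))
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        angle = np.arctan2(flow[..., 1], flow[..., 0])          # -pi..pi
        mag = np.hypot(flow[..., 0], flow[..., 1])
        bins = ((angle + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        for b in range(n_bins):
            hist[b][bins == b] += mag[bins == b]
    return hist.argmax(axis=0)   # dominant direction per pixel

# frames: 2-D uint8 grayscale images (synthetic stand-ins here)
rng = np.random.default_rng(5)
frames = [(rng.random((128, 128)) * 255).astype(np.uint8) for _ in range(4)]
kmap = kinetic_map(frames)
```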
Maury, Carl Peter J
2018-05-01
A crucial stage in the origin of life was the emergence of the first molecular entity that was able to replicate, transmit information, and evolve on the early Earth. The amyloid world hypothesis posits that in the pre-RNA era, information processing was based on catalytic amyloids. The self-assembly of short peptides into β-sheet amyloid conformers leads to extraordinary structural stability and novel multifunctionality that cannot be achieved by the corresponding nonaggregated peptides. The new functions include self-replication, catalytic activities, and information transfer. The environmentally sensitive template-assisted replication cycles generate a variety of amyloid polymorphs on which evolutionary forces can act, and the fibrillar assemblies can serve as scaffolds for the amyloids themselves and for ribonucleotides, proteins, and lipids. The role of amyloid in the putative transition process from an amyloid world to an amyloid-RNA-protein world is not limited to scaffolding and protection: the interactions between amyloid, RNA, and protein are both complex and cooperative, and the amyloid assemblages can function as protometabolic entities catalyzing the formation of simple metabolite precursors. The emergence of a pristine amyloid-based input-sensitive, chiroselective, and error-correcting information-processing system, and the evolution of mutualistic networks were, arguably, of essential importance in the dynamic processes that led to increased complexity, organization, compartmentalization, and, eventually, the origin of life.
Calibration of gyro G-sensitivity coefficients with FOG monitoring on precision centrifuge
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Yang, Yanqiang; Li, Baoguo; Liu, Ming
2017-07-01
The advantages of mechanical gyros, such as high precision, endurance and reliability, make them widely used as the core parts of inertial navigation systems (INS) utilized in the fields of aeronautics, astronautics and underground exploration. In a high-g environment, the accuracy of gyros is degraded. Therefore, the calibration and compensation of the gyro G-sensitivity coefficients are essential when the INS operates in a high-g environment. A precision centrifuge with a counter-rotating platform is the typical equipment for calibrating the gyro, as it can generate large centripetal acceleration and keep the angular rate close to zero; however, its performance is seriously restricted by the angular perturbation in the high-speed rotating process. To reduce the dependence on the precision of the centrifuge and counter-rotating platform, an effective calibration method for the gyro G-sensitivity coefficients under fiber-optic gyroscope (FOG) monitoring is proposed herein. The FOG can efficiently compensate for spindle error and improve the anti-interference ability. Harmonic analysis is performed for data processing. Simulations show that the gyro G-sensitivity coefficients can be efficiently estimated to 99% of the true value and compensated using a lookup table or fitting method. Repeated tests indicate that the G-sensitivity coefficients can be correctly calibrated when the angular rate accuracy of the precision centrifuge is as low as 0.01%. Verification tests are performed to demonstrate that the attitude errors can be decreased from 0.36° to 0.08° in 200 s. The proposed measuring technology is generally applicable in engineering, as it can reduce the accuracy requirements for the centrifuge and the environment.
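The harmonic-analysis step can be illustrated generically: fit a constant plus one harmonic at the centrifuge rotation rate by least squares, and read the g-dependent terms off the fitted coefficients. The sketch below uses synthetic data and an assumed rotation rate; it is not the specific gyro error model of the paper.

```python
import numpy as np

def harmonic_fit(t, y, omega):
    """Least-squares fit of y ~ a0 + a1*cos(omega t) + b1*sin(omega t),
    the harmonic-analysis step used to separate g-dependent terms."""
    X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef                                  # a0, a1, b1

rng = np.random.default_rng(4)
omega = 2 * np.pi * 1.0                          # assumed centrifuge rate, rad/s
t = np.linspace(0, 10, 2000)
y = 0.02 + 0.5 * np.cos(omega * t) + rng.normal(0, 0.05, t.size)
print(harmonic_fit(t, y, omega))                 # ~ [0.02, 0.5, 0.0]
```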
Pedrotti, Emilio; Mastropasqua, Rodolfo; Bonetto, Jacopo; Demasi, Christian; Aiello, Francesco; Nucci, Carlo; Mariotti, Cesare; Marchini, Giorgio
2017-07-17
The aim of the current study was to compare the quality of vision, contrast sensitivity and patient satisfaction with a biaspheric, segmented, rotationally asymmetric IOL (Lentis Comfort LS-313 MF 15-Oculentis GmbH, Berlin, Germany) as opposed to those of a monofocal IOL. This prospective single-blind comparative study included two groups of patients affected by bilateral senile cataract who underwent lens extraction and IOL implantation. The first group received a bilateral implantation of a monofocal IOL, and the second group received a bilateral implantation of the Comfort IOL. Twelve months after surgery, uncorrected and corrected visual acuity at different distances (30, 50, 70 cm and 4 m), the defocus curve and contrast sensitivity were assessed. Patient satisfaction and spectacle independence were evaluated by means of the NEI RQL-42 questionnaire. No significant differences were found between the groups in terms of near vision. The group of patients implanted with a Comfort IOL obtained the best results at intermediate distances (50 and 70 cm; P < .001). Both groups showed excellent uncorrected distance visual acuity (4 m). No statistically significant differences were found in terms of corrected near, intermediate and distance visual acuity. Concerning contrast sensitivity, no statistically significant differences between the groups were observed at any cycles per degree. The NEI RQL-42 questionnaire showed statistically significant differences between the groups for the "near vision" (P = .015), "dependence on correction" (P = .048) and "suboptimal correction" (P < .001) subscales. Our findings indicated that the Comfort IOL +1.5 D provides good intermediate spectacle independence together with a high quality of vision, with few subjective symptoms and contrast sensitivity similar to that obtained with a monofocal IOL.
Ahmed, Lubna
2018-03-01
The ability to correctly interpret facial expressions is key to effective social interactions. People are well rehearsed and generally very efficient at correctly categorizing expressions. However, does their ability to do so depend on how cognitively loaded they are at the time? Using repeated-measures designs, we assessed the sensitivity of facial expression categorization to cognitive resource availability by measuring people's expression categorization performance during concurrent low and high cognitive load situations. In Experiment 1, participants categorized the 6 basic upright facial expressions in a 6-automated-facial-coding response paradigm while maintaining low or high loading information in working memory (N = 40; 60 observations per load condition). In Experiment 2, they did so for both upright and inverted faces (N = 46; 60 observations per load and inversion condition). In both experiments, expression categorization for upright faces was worse during high versus low load. Categorization rates actually improved with increased load for the inverted faces. The opposing effects of cognitive load on upright and inverted expressions are explained in terms of a cognitive load-related dispersion in the attentional window. Overall, the findings indicate that expression categorization is sensitive to cognitive resource availability and moreover suggest that, in this paradigm, it is the perceptual processing stage of expression categorization that is affected by cognitive load.
Pharmacological rescue of trafficking-impaired ATP-sensitive potassium channels
Martin, Gregory M.; Chen, Pei-Chun; Devaraneni, Prasanna; Shyng, Show-Ling
2013-01-01
ATP-sensitive potassium (KATP) channels link cell metabolism to membrane excitability and are involved in a wide range of physiological processes including hormone secretion, control of vascular tone, and protection of cardiac and neuronal cells against ischemic injuries. In pancreatic β-cells, KATP channels play a key role in glucose-stimulated insulin secretion, and gain or loss of channel function results in neonatal diabetes or congenital hyperinsulinism, respectively. The β-cell KATP channel is formed by co-assembly of four Kir6.2 inwardly rectifying potassium channel subunits encoded by KCNJ11 and four sulfonylurea receptor 1 subunits encoded by ABCC8. Many mutations in ABCC8 or KCNJ11 cause loss of channel function, and thus congenital hyperinsulinism, by hampering channel biogenesis and hence trafficking to the cell surface. The trafficking defects caused by a subset of these mutations can be corrected by sulfonylureas, KATP channel antagonists that have long been used to treat type 2 diabetes. More recently, carbamazepine, an anticonvulsant that is thought to target primarily voltage-gated sodium channels, has been shown to correct KATP channel trafficking defects. This article reviews studies to date aimed at understanding the mechanisms by which mutations impair channel biogenesis and trafficking and the mechanisms by which pharmacological ligands overcome channel trafficking defects. Insight into channel structure-function relationships and therapeutic implications from these studies are discussed.
NASA Astrophysics Data System (ADS)
Zhou, Xiaohu; Neubauer, Franz; Zhao, Dong; Xu, Shichao
2015-01-01
High-precision geometric correction of airborne hyperspectral remote sensing images is a difficult problem, and conventional correction methods based on selecting ground control points are not suitable for airborne hyperspectral imagery. An inertial measurement unit combined with a differential global positioning system (IMU/DGPS) is introduced to correct synchronously scanned Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing images. Attitude parameters synchronized with OMIS II were first obtained from the IMU/DGPS. Second, coordinate conversion and flight-attitude parameter calculations were conducted. Third, according to the imaging principle of OMIS II, a mathematical correction was applied and the corrected image pixels were resampled, yielding improved image processing results.
The neural basis of monitoring goal progress
Benn, Yael; Webb, Thomas L.; Chang, Betty P. I.; Sun, Yu-Hsuan; Wilkinson, Iain D.; Farrow, Tom F. D.
2014-01-01
The neural basis of progress monitoring has received relatively little attention compared to other sub-processes that are involved in goal-directed behavior such as motor control and response inhibition. Studies of error-monitoring have identified the dorsal anterior cingulate cortex (dACC) as a structure that is sensitive to conflict detection, and triggers corrective action. However, monitoring goal progress involves monitoring correct as well as erroneous events over a period of time. In the present research, 20 healthy participants underwent functional magnetic resonance imaging (fMRI) while playing a game that involved monitoring progress toward either a numerical or a visuo-spatial target. The findings confirmed the role of the dACC in detecting situations in which the current state may conflict with the desired state, but also revealed activations in the frontal and parietal regions, pointing to the involvement of processes such as attention and working memory (WM) in monitoring progress over time. In addition, activation of the cuneus was associated with monitoring progress toward a specific target presented in the visual modality. This is the first time that activation in this region has been linked to higher-order processing of goal-relevant information, rather than low-level anticipation of visual stimuli. Taken together, these findings identify the neural substrates involved in monitoring progress over time, and how these extend beyond activations observed in conflict and error monitoring.
AI and workflow automation: The prototype electronic purchase request system
NASA Technical Reports Server (NTRS)
Compton, Michael M.; Wolfe, Shawn R.
1994-01-01
Automating 'paper' workflow processes with electronic forms and email can dramatically improve the efficiency of those processes. However, applications that involve complex forms that are used for a variety of purposes or that require numerous and varied approvals often require additional software tools to ensure that (1) the electronic form is correctly and completely filled out, and (2) the form is routed to the proper individuals and organizations for approval. The prototype electronic purchase request (PEPR) system, which has been in pilot use at NASA Ames Research Center since December 1993, seamlessly links a commercial electronic forms package and a CLIPS-based knowledge system that first ensures that electronic forms are correct and complete, and then generates an 'electronic routing slip' that is used to route the form to the people who must sign it. The PEPR validation module is context-sensitive, and can apply different validation rules at each step in the approval process. The PEPR system is form-independent, and has been applied to several different types of forms. The system employs a version of CLIPS that has been extended to support AppleScript, a recently released scripting language for the Macintosh. This 'scriptability' provides both a transparent, flexible interface between the two programs and a means by which a single copy of the knowledge base can be utilized by numerous remote users.
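A Python analogue of such context-sensitive validation (the actual system is CLIPS-based) might register a different rule set per approval step; the form fields and rules below are hypothetical.

```python
def validate(form, step, rules):
    """Apply the validation rules registered for one approval step;
    returns a list of problems (empty list = form passes this step)."""
    return [msg for check, msg in rules.get(step, []) if not check(form)]

# Hypothetical rules: different checks apply at different routing steps
rules = {
    "submit":  [(lambda f: f.get("amount", 0) > 0, "amount must be positive"),
                (lambda f: bool(f.get("vendor")), "vendor is required")],
    "approve": [(lambda f: f.get("amount", 0) < 10000 or f.get("justification"),
                 "large purchases need a justification")],
}

form = {"amount": 12000, "vendor": "Acme"}
print(validate(form, "approve", rules))  # ['large purchases need a justification']
```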
Han, Zheng; Wu, Xiaohui; Roelle, Sarah; Chen, Chuheng; Schiemann, William P; Lu, Zheng-Rong
2018-01-08
In the original version of this Article, the penultimate sentence of the Abstract incorrectly read 'The dose of the contrast agent for effective molecular MRI is only slightly lower than that of ZD2-Cy5.5 (0.5 µmol kg -1 ) in fluorescence imaging.' The correct version states 'higher' in place of 'lower'. This error has been corrected in both the PDF and HTML versions of the Article.
NASA Astrophysics Data System (ADS)
Jalabert, Eva; Mercier, Flavien
2018-07-01
DORIS measurements rely on precise knowledge of the embedded oscillator, called the Ultra Stable Oscillator (DORIS USO). The intense radiation in the South Atlantic Anomaly (SAA) perturbs the USO behavior by causing rapid frequency variations when the satellite is flying through the SAA. These variations are not taken into account in standard DORIS processing, since the USO is modelled as a third-degree polynomial over 7-10 days. Therefore, there are systematic measurement errors when the satellite passes through the SAA. In standard GNSS processing, the clock is directly estimated at each epoch. On Sentinel-3A, the GPS receiver and the DORIS receiver use the same USO. It is thus possible to estimate the behavior of the USO using GPS measurements. This estimated USO behavior can be used in the DORIS processing, instead of the third-degree polynomial, hence allowing an estimation of the orbit sensitivity to these USO anomalies. This study shows two main results. First, the SAA effect on the DORIS USO is well observed using GPS measurements. Second, the USO behavior observed with GPS can be used to mitigate the SAA effect. Indeed, when used in Sentinel-3A processing, the resulting DORIS orbit shows improved phase measurements and station positioning for stations inside the SAA (Arequipa and Cachoeira). The phase measurement residuals are improved by up to 10 cm, and station vertical positioning (i.e. on the estimated Up component in the North-East-Up station frame) is improved by up to a few centimeters. However, the orbit itself is not sensitive to the correction because only two stations (out of almost 60) are SAA-sensitive on Sentinel-3A.
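The modelling difference can be sketched numerically: fit the third-degree polynomial that standard DORIS processing assumes to a clock series containing SAA-pass jumps, and the residuals show what the polynomial cannot absorb. The series below is synthetic; the jump size and arc length are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 7, 1000)                       # days over a 7-day arc
clock = 1e-7 * t + 5e-9 * t**2 + rng.normal(0, 1e-10, t.size)
clock[(t % 1) < 0.05] += 3e-9                     # hypothetical SAA-pass jumps

coef = np.polynomial.polynomial.polyfit(t, clock, deg=3)
model = np.polynomial.polynomial.polyval(t, coef)
residuals = clock - model                         # errors left by the polynomial
print(f"RMS residual: {np.std(residuals):.2e} s")
```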
Windmann, Sabine; Hill, Holger
2014-10-01
Performance on tasks requiring discrimination of at least two stimuli can be viewed either from an objective perspective (referring to actual stimulus differences), or from a subjective perspective (corresponding to participants' responses). Using event-related potentials recorded during an old/new recognition memory test involving emotionally laden and neutral words studied either blockwise or randomly intermixed, we show here how the objective perspective (old versus new items) yields late effects of blockwise emotional item presentation at parietal sites that the subjective perspective fails to find, whereas the subjective perspective ("old" versus "new" responses) is more sensitive to early effects of emotion at anterior sites than the objective perspective. Our results demonstrate the potential advantage of dissociating the subjective and the objective perspective onto task performance (in addition to analyzing trials with correct responses), especially for investigations of illusions and information processing biases, in behavioral and cognitive neuroscience studies.
A model of the 0.4-GHz scatterometer. [used for agriculture soil moisture program
NASA Technical Reports Server (NTRS)
Wu, S. T.
1978-01-01
The 0.4 GHz aircraft scatterometer system used for the agricultural soil moisture estimation program is analyzed for the antenna pattern, the signal flow in the receiver data channels, and the errors in the signal outputs. The operational principle, system sensitivity, data handling, and resolution cell length requirements are also described. The backscattering characteristics of the agricultural scenes are contained in the form of the functional dependence of the backscattering coefficient on the incidence angle. The substantial gains of the cross-polarization term of the horizontal and vertical antennas have profound effects on the cross-polarized backscattered signals. If these signals are not corrected properly, large errors could result in the estimate of the cross-polarized backscattering coefficient. It is also necessary to correct the variations of the aircraft parameters during data processing to minimize the error in the σ° estimation. Recommendations are made to improve the overall performance of the scatterometer system.
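The cross-polarization correction amounts to inverting a small gain-mixing system: each measured channel is a weighted sum of the true co- and cross-polarized backscatter. A minimal sketch with hypothetical gain values:

```python
import numpy as np

# Hypothetical 2x2 gain-mixing matrix: rows = measured (co, cross) channels,
# columns = true (co-pol, cross-pol) backscatter contributions.
G = np.array([[1.00, 0.08],
              [0.12, 1.00]])

measured = np.array([0.53, 0.21])    # hypothetical channel powers
sigma_true = np.linalg.solve(G, measured)
print(sigma_true)                    # corrected co- and cross-pol estimates
```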
A multihospital medication allergy audit: a means to quality assurance.
Hoffmann, R P; Ellerbrock, M C; Lovett, J E
1982-04-01
Seventeen community hospitals within the 16 divisions of the Sisters of Mercy Health Corporation cooperatively participated in a medication allergy audit program. Initial and follow-up audits were conducted at each hospital to determine whether allergy information for penicillin- or aspirin-sensitive patients was appropriately communicated to the pharmacist. A total of 483 patient records were reviewed during each audit, corresponding to 12% of each hospital's average patient census. In the initial audit, the overall acceptance rate for the combined hospitals was 62.3%. Following the first audit, each hospital undertook corrective follow-up measures in an attempt to improve its results. In the second audit, the overall acceptance rate improved significantly, to 78.9%. It is concluded that this auditing process, followed by corrective measures, was an effective mechanism for improving the communication of patient allergy information and is a means to quality assurance. Future audits will be necessary to determine whether the beneficial effects produced will be sustained or improved.
Method and system for providing work machine multi-functional user interface
Hoff, Brian D [Peoria, IL; Akasam, Sivaprasad [Peoria, IL; Baker, Thomas M [Peoria, IL
2007-07-10
A method is performed to provide a multi-functional user interface on a work machine for displaying suggested corrective action. The process includes receiving status information associated with the work machine and analyzing the status information to determine an abnormal condition. The process also includes displaying a warning message on the display device indicating the abnormal condition and determining one or more corrective actions to handle the abnormal condition. Further, the process includes determining an appropriate corrective action among the one or more corrective actions and displaying a recommendation message on the display device reflecting the appropriate corrective action. The process may also include displaying a list including the remaining one or more corrective actions on the display device to provide alternative actions to an operator.
Towards process-informed bias correction of climate change simulations
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Shepherd, Theodore G.; Widmann, Martin; Zappa, Giuseppe; Walton, Daniel; Gutiérrez, José M.; Hagemann, Stefan; Richter, Ingo; Soares, Pedro M. M.; Hall, Alex; Mearns, Linda O.
2017-11-01
Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
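For reference, the operational post-processing the abstract refers to is typically some form of quantile mapping; a minimal empirical version is sketched below on synthetic data. The point of the paper is precisely that such a correction, applied naively, cannot fix circulation errors and can distort the change signal.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: replace each future model value by the
    observed value at the same quantile of the historical model CDF."""
    q = np.linspace(0, 1, 101)
    m_q = np.quantile(model_hist, q)
    o_q = np.quantile(obs_hist, q)
    ranks = np.interp(model_future, m_q, q)   # quantile of each future value
    return np.interp(ranks, q, o_q)           # map onto observed distribution

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 2.0, 5000)               # synthetic observations
mod = rng.gamma(2.0, 2.5, 5000)               # biased model, historical period
fut = rng.gamma(2.2, 2.5, 5000)               # model, future period
corrected = quantile_map(mod, obs, fut)
```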
LANDSAT-D data format control book. Volume 6: (Products)
NASA Technical Reports Server (NTRS)
Kabat, F.
1981-01-01
Four basic product types are generated from the raw thematic mapper (TM) and multispectral scanner (MSS) payload data by the NASA GSFC LANDSAT 4 data management system: (1) unprocessed data (raw sensor data); (2) partially processed data, which consists of radiometrically corrected sensor data with geometric correction information appended; (3) fully processed data, which consists of radiometrically and geometrically corrected sensor data; and (4) inventory data which consists of summary information about product types 2 and 3. High density digital recorder formatting and the radiometric correction process are described. Geometric correction information is included.
Real-Time Microfluidic Blood-Counting System for PET and SPECT Preclinical Pharmacokinetic Studies.
Convert, Laurence; Lebel, Réjean; Gascon, Suzanne; Fontaine, Réjean; Pratte, Jean-François; Charette, Paul; Aimez, Vincent; Lecomte, Roger
2016-09-01
Small-animal nuclear imaging modalities have become essential tools in the development process of new drugs, diagnostic procedures, and therapies. Quantification of metabolic or physiologic parameters is based on pharmacokinetic modeling of radiotracer biodistribution, which requires the blood input function in addition to tissue images. Such measurements are challenging in small animals because of their small blood volume. In this work, we propose a microfluidic counting system to monitor rodent blood radioactivity in real time, with high efficiency and small detection volume (∼1 μL). A microfluidic channel is built directly above unpackaged p-i-n photodiodes to detect β-particles with maximum efficiency. The device is embedded in a compact system comprising dedicated electronics, shielding, and a pumping unit controlled by custom firmware to enable measurements next to small-animal scanners. Data corrections required to use the input function in pharmacokinetic models were established using calibrated solutions of the most common PET and SPECT radiotracers. Sensitivity, dead time, propagation delay, dispersion, background sensitivity, and the effect of sample temperature were characterized. The system was tested for pharmacokinetic studies in mice by quantifying myocardial perfusion and oxygen consumption with (11)C-acetate (PET) and by measuring the arterial input function using (99m)TcO4(-) (SPECT). Sensitivity for PET isotopes reached 20%-47%, a 2- to 10-fold improvement relative to conventional catheter-based geometries. Furthermore, the system detected (99m)Tc-based SPECT tracers with an efficiency of 4%, an outcome not possible through a catheter. Correction for dead time was found to be unnecessary for small-animal experiments, whereas propagation delay and dispersion within the microfluidic channel were accurately corrected. Background activity and sample temperature were shown to have no influence on measurements. Finally, the system was successfully used in animal studies. A fully operational microfluidic blood-counting system for preclinical pharmacokinetic studies was developed. Microfluidics enabled reliable and high-efficiency measurement of the blood concentration of most common PET and SPECT radiotracers with high temporal resolution in small blood volume.
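The delay and dispersion corrections can be sketched with the common single-exponential dispersion model, under which the true input function is recovered as c(t) + τ·dc/dt after shifting for transit delay; the model choice and parameter values below are assumptions for illustration, not the authors' calibrated corrections.

```python
import numpy as np

def correct_input_function(t, c_meas, delay, tau):
    """Shift for propagation delay, then invert single-exponential
    dispersion via c_true(t) = c(t) + tau * dc/dt."""
    t_shifted = t - delay                   # undo transit delay, s
    dcdt = np.gradient(c_meas, t)
    c_corr = c_meas + tau * dcdt            # dispersion deconvolution
    return t_shifted, c_corr

t = np.linspace(0, 60, 601)                 # s
c = np.exp(-((t - 20) / 6.0) ** 2)          # synthetic measured bolus
t_c, c_true = correct_input_function(t, c, delay=4.0, tau=2.5)
```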
ERIC Educational Resources Information Center
Avery, Derek R.; Steingard, David S.
2008-01-01
Due to mounting pressures to avoid offending individuals on the basis of demographic group membership, political correctness has begun to restrict student participation in our diversity courses. This restriction diminishes what can be learned from class dialogue, an important component of diversity instruction. This article offers a model of…
Sakaguchi, Hitoshi; Miyazawa, Masaaki; Yoshida, Yukiko; Ito, Yuichi; Suzuki, Hiroyuki
2007-02-01
Preservatives are important components in many products, but have a history of purported allergy. Several assays [e.g., guinea pig maximization test (GPMT), local lymph node assay (LLNA)] are used to evaluate the allergy potential of preservatives. We recently developed the human Cell Line Activation Test (h-CLAT), an in vitro skin sensitization test using human THP-1 cells. This test evaluates the augmentation of CD86 and CD54 expression, which are key events in the sensitization process, as an indicator of allergy following treatment with a test chemical. Earlier, we found that a sub-toxic concentration was needed for the up-regulation of surface marker expression. In this study, we further evaluate the capability of h-CLAT to predict allergy potential using eight preservatives. Cytotoxicity was determined using propidium iodide with flow cytometry analysis, and five doses producing 95, 85, 75, 65, and 50% cell viability were selected. If a material did not show any cytotoxicity at the highest technical dose (HTD), five doses were set using serial 1.3 dilutions of the HTD. The test materials used were six known allergic preservatives (e.g., methylchloroisothiazolinone/methylisothiazolinone, formaldehyde) and two non-allergic preservatives (methylparaben and 4-hydroxybenzoic acid). All allergic preservatives augmented CD86 and/or CD54 expression, indicating that h-CLAT correctly identified the allergens. No augmentation was observed with the non-allergic preservatives, which were also correctly identified by h-CLAT. In addition, we report two threshold concentrations that may be used to categorize skin sensitization potency, analogous to the LLNA estimated concentration that yields a three-fold stimulation (EC3) value. These values are the estimated concentrations that give a relative fluorescence intensity (RFI) of 150 for CD86 and an RFI of 200 for CD54. These data suggest that h-CLAT, using THP-1 cells, may be able to predict the allergy potential of preservatives and possibly classify the potency of an allergen.
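A sketch of the RFI computation and the threshold-concentration estimate follows; the RFI definition (isotype-corrected, vehicle-normalized, in percent) and the linear interpolation between bracketing doses are assumptions consistent with common h-CLAT practice, and all numbers are hypothetical.

```python
def rfi(mfi_chem, iso_chem, mfi_veh, iso_veh):
    """Relative fluorescence intensity, in percent, of a surface marker
    after chemical treatment (isotype-corrected, vehicle-normalized)."""
    return 100.0 * (mfi_chem - iso_chem) / (mfi_veh - iso_veh)

def ec_threshold(doses, rfis, threshold):
    """Linearly interpolate the concentration at which the RFI first
    crosses the threshold (e.g. 150 for CD86, 200 for CD54)."""
    for (d0, r0), (d1, r1) in zip(zip(doses, rfis), zip(doses[1:], rfis[1:])):
        if r0 < threshold <= r1:
            return d0 + (threshold - r0) * (d1 - d0) / (r1 - r0)
    return None   # threshold never crossed

doses = [5, 10, 20, 40, 80]              # hypothetical concentrations, ug/mL
rfis = [105, 120, 140, 180, 240]         # hypothetical CD86 RFIs
print(ec_threshold(doses, rfis, 150))    # ~ EC150 estimate (25.0)
```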
Ohnuma, Kazuhiko; Kayanuma, Hiroyuki; Lawu, Tjundewo; Negishi, Kazuno; Yamaguchi, Takefumi; Noda, Toru
2011-01-01
Correcting spherical and chromatic aberrations in vitro in human eyes provides substantial visual acuity and contrast sensitivity improvements. We found the same improvement in the retinal images using a model eye with/without correction of longitudinal chromatic aberrations (LCAs) and spherical aberrations (SAs). The model eye included an intraocular lens (IOL) and an artificial cornea with human ocular LCAs and average human SAs. The optotypes were illuminated using a D65 light source, and the images were obtained using a two-dimensional luminance colorimeter. The contrast improvement from the SA correction was higher than that from the LCA correction, indicating the benefit of an aspheric achromatic IOL.
Quantitative mass spectrometry methods for pharmaceutical analysis
Loos, Glenn; Van Schepdael, Ann
2016-01-01
Quantitative pharmaceutical analysis is nowadays frequently executed using mass spectrometry. Electrospray ionization coupled to a (hybrid) triple quadrupole mass spectrometer is generally used in combination with solid-phase extraction and liquid chromatography. Furthermore, isotopically labelled standards are often used to correct for ion suppression. The challenges in producing sensitive but reliable quantitative data depend on the instrumentation, sample preparation and hyphenated techniques. In this contribution, different approaches to enhance the ionization efficiencies using modified source geometries and improved ion guidance are provided. Furthermore, possibilities to minimize, assess and correct for matrix interferences caused by co-eluting substances are described. With the focus on pharmaceuticals in the environment and bioanalysis, different separation techniques, trends in liquid chromatography and sample preparation methods to minimize matrix effects and increase sensitivity are discussed. Although highly sensitive methods are generally aimed for to provide automated multi-residue analysis, (less sensitive) miniaturized set-ups have a great potential due to their ability for in-field usage. This article is part of the themed issue ‘Quantitative mass spectrometry’.
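The internal-standard correction reduces to a ratio calculation: ion suppression affects the analyte and the co-eluting labelled standard alike, so their area ratio is robust. A minimal sketch with hypothetical values:

```python
def concentration(area_analyte, area_is, conc_is, response_factor=1.0):
    """Quantify via an isotopically labelled internal standard: the
    analyte/IS peak-area ratio cancels ion suppression common to both."""
    return response_factor * conc_is * area_analyte / area_is

# Hypothetical peak areas from an LC-MS/MS run, IS spiked at 50 ng/mL
print(concentration(area_analyte=8.4e5, area_is=4.2e5, conc_is=50.0))  # ng/mL
```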
Martin-Esteban, A; Slowikowski, B; Grobecker, K H
2004-06-17
Solid sampling-electrothermal vaporisation-inductively coupled plasma-mass spectrometry (SS-ETV-ICP-MS) is an attractive technique for the direct simultaneous determination of trace elements in solid samples, and especially in long-term studies (i.e. assessment of the homogeneity of reference materials). However, during these studies a downward drift in the instrument sensitivity has been observed, likely due to deposits on the sampling and skimmer cones and on the ion lens of the mass spectrometer. Accordingly, in this paper, several means of correcting and/or suppressing sensitivity drift are proposed and evaluated for the monitoring of Cd, Cu, Hg, Mn, Pb, Sb, Se, Sn, Tl, U and V in different reference materials of inorganic and organic (biological) origin. From those studies, the combination of using the argon dimer as an internal standard with a modification of the ETV-ICP connection tube seems to be the best means of maintaining stable sensitivity during at least 60 consecutive ETV runs.
Actinic Flux Calculations: A Model Sensitivity Study
NASA Technical Reports Server (NTRS)
Krotkov, Nickolay A.; Flittner, D.; Ahmad, Z.; Herman, J. R.; Einaudi, Franco (Technical Monitor)
2000-01-01
calculate direct and diffuse surface irradiance and actinic flux (downwelling (2π) and total (4π)) for the reference model. Sensitivity analysis has shown that the accuracy of the radiative transfer flux calculations for a unit ETS (i.e. atmospheric transmittance) together with a numerical interpolation technique for the constituents' vertical profiles is better than 1% for SZA less than 70° and wavelengths longer than 310 nm. The differences increase for shorter wavelengths and larger SZA, due to the differences in pseudo-spherical correction techniques and vertical discretization among the codes. Our sensitivity study includes variation of ozone cross-sections, ETS spectra and the effects of wavelength shifts between vacuum and air scales. We also investigate the effects of aerosols on the spectral flux components in the UV and visible spectral regions. The "aerosol correction factors" (ACFs) were calculated at discrete wavelengths and different SZAs for each flux component (direct, diffuse, reflected) and prescribed IPMMI aerosol parameters. Finally, the sensitivity study was extended to the calculation of selected photolysis rate coefficients.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.
Quantum Error Correction for Metrology
NASA Astrophysics Data System (ADS)
Sushkov, Alex; Kessler, Eric; Lovchinsky, Igor; Lukin, Mikhail
2014-05-01
The question of the best achievable sensitivity in a quantum measurement is of great experimental relevance, and has seen a lot of attention in recent years. Recent studies [e.g., Nat. Phys. 7, 406 (2011), Nat. Comms. 3, 1063 (2012)] suggest that in most generic scenarios any potential quantum gain (e.g. through the use of entangled states) vanishes in the presence of environmental noise. To overcome these limitations, we propose and analyze a new approach to improve quantum metrology based on quantum error correction (QEC). We identify the conditions under which QEC allows one to improve the signal-to-noise ratio in quantum-limited measurements, and we demonstrate that it enables, in certain situations, Heisenberg-limited sensitivity. We discuss specific applications to nanoscale sensing using nitrogen-vacancy centers in diamond in which QEC can significantly improve the measurement sensitivity and bandwidth under realistic experimental conditions.
Position sensitivity in large spectroscopic LaBr3:Ce crystals for Doppler broadening correction
NASA Astrophysics Data System (ADS)
Blasi, N.; Giaz, A.; Boiano, C.; Brambilla, S.; Camera, F.; Million, B.; Riboldi, S.
2016-12-01
The position sensitivity of a large LaBr3:Ce crystal was investigated with the aim of correcting for the Doppler broadening in nuclear physics experiments. The crystal was cylindrical, 3 in × 3 in (7.62 cm × 7.62 cm), with diffusive surfaces, as typically used in basic nuclear physics research to measure medium- or high-energy gamma rays (0.5 MeV
International Ultraviolet Explorer Final Archive
NASA Technical Reports Server (NTRS)
1997-01-01
CSC processed IUE images through the Final Archive Data Processing System. Raw images were obtained from both NDADS and the IUEGTC optical disk platters for processing on the Alpha cluster, and from the IUEGTC optical disk platters for DECstation processing. Input parameters were obtained from the IUE database. Backup tapes of data to send to VILSPA were routinely made on the Alpha cluster. IPC handled more than 263 requests for priority NEWSIPS processing during the contract. Staff members also answered various questions and requests for information and sent copies of IUE documents to requesters. CSC implemented new processing capabilities into the NEWSIPS processing systems as they became available. In addition, steps were taken to improve efficiency and throughput whenever possible. The node TORTE was reconfigured as the I/O server for Alpha processing in May. The number of Alpha nodes used for the NEWSIPS processing queue was increased to a maximum of six in measured fashion in order to understand the dependence of throughput on the number of nodes and to be able to recognize when a point of diminishing returns was reached. With Project approval, generation of the VD FITS files was dropped in July. This action not only saved processing time but, even more significantly, also reduced the archive storage media requirements, and the time required to perform the archiving, drastically. The throughput of images verified through CDIVS and processed through NEWSIPS for the contract period is summarized below. The number of images of a given dispersion type and camera that were processed in any given month reflects several factors, including the availability of the required NEWSIPS software system, the availability of the corresponding required calibrations (e.g., the LWR high-dispersion ripple correction and absolute calibration), and the occurrence of reprocessing efforts such as that conducted to incorporate the updated SWP sensitivity-degradation correction in May.
Cabrera, Daniel; Thomas, Jonathan F; Wiswell, Jeffrey L; Walston, James M; Anderson, Joel R; Hess, Erik P; Bellolio, M Fernanda
2015-09-01
Current cognitive science describes decision-making using dual-process theory, in which System 1 is intuitive and System 2 is hypothetico-deductive. We aim to compare the performance of these systems in determining patient acuity, disposition and diagnosis. Prospective observational study of emergency physicians assessing patients in the emergency department of an academic center. Physicians were provided the patient's chief complaint and vital signs and allowed to observe the patient briefly. They were then asked to predict acuity, final disposition (home, intensive care unit (ICU), non-ICU bed) and diagnosis. A patient was classified as sick by the investigators using previously published objective criteria. We obtained 662 observations from 289 patients. For acuity, the observers had a sensitivity of 73.9% (95% CI [67.7-79.5%]), specificity 83.3% (95% CI [79.5-86.7%]), positive predictive value 70.3% (95% CI [64.1-75.9%]) and negative predictive value 85.7% (95% CI [82.0-88.9%]). For final disposition, the observers made a correct prediction in 80.8% (95% CI [76.1-85.0%]) of the cases. For ICU admission, emergency physicians had a sensitivity of 33.9% (95% CI [22.1-47.4%]) and a specificity of 96.9% (95% CI [94.0-98.7%]). The correct diagnosis was made 54% of the time with the limited data available. System 1 decision-making based on limited information had a sensitivity close to 80% for acuity and disposition prediction, but the performance was lower for predicting ICU admission and diagnosis. System 1 decision-making appears insufficient for final decisions in these domains but likely provides a cognitive framework for System 2 decision-making.
Spittle, Alicia J; Boyd, Roslyn N; Inder, Terrie E; Doyle, Lex W
2009-02-01
The objective of this study was to compare the predictive value of qualitative MRI of brain structure at term and general movements assessments at 1 and 3 months' corrected age for motor outcome at 1 year's corrected age in very preterm infants. Eighty-six very preterm infants (<30 weeks' gestation) underwent MRI at term-equivalent age, were evaluated for white matter abnormality, and had general movements assessed at 1 and 3 months' corrected age. Motor outcome at 1 year's corrected age was evaluated with the Alberta Infant Motor Scale, the Neuro-Sensory Motor Development Assessment, and the diagnosis of cerebral palsy by the child's pediatrician. At 1 year of age, the Alberta Infant Motor Scale categorized 30 (35%) infants as suspicious/abnormal; the Neuro-Sensory Motor Development Assessment categorized 16 (18%) infants with mild-to-severe motor dysfunction, and 5 (6%) infants were classified with cerebral palsy. White matter abnormality at term and general movements at 1 and 3 months significantly correlated with Alberta Infant Motor Scale and Neuro-Sensory Motor Development Assessment scores at 1 year. White matter abnormality and general movements at 3 months were the only assessments that correlated with cerebral palsy. All assessments had 100% sensitivity in predicting cerebral palsy. White matter abnormality demonstrated the greatest accuracy in predicting combined motor outcomes, with excellent levels of specificity (>90%); however, the sensitivity was low. On the other hand, general movements assessments at 1 month had the highest sensitivity (>80%); however, the overall accuracy was relatively low. Neuroimaging (MRI) and functional (general movements) examinations have important complementary roles in predicting motor development of very preterm infants.
78 FR 8104 - First Phase of the Forest Planning Process for the Bio-Region; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-05
... DEPARTMENT OF AGRICULTURE Forest Service First Phase of the Forest Planning Process for the Bio-Region; Correction AGENCY: USDA, Forest Service. ACTION: Notice; correction. SUMMARY: The Department of... rule entitled First Phase of the Forest Planning Process for the Bio-Region. The document contained...
Lennox, Pamela H; Umedaly, Hamed S; Grant, Raymer P; White, S Adrian; Fitzmaurice, Brett G; Evans, Kenneth G
2006-10-01
The purpose of this study was to assess the validity of using a pulsatile, pressure waveform transduced from the epidural space through an epidural needle or catheter to confirm correct placement for maximal analgesia and to compare 3 different types of catheters' ability to transduce a waveform. A single-center, prospective, randomized trial. A tertiary-referral hospital. Eighty-one patients undergoing posterolateral thoracotomy who required a thoracic epidural catheter for postoperative pain management. Each epidural needle and each epidural catheter was transduced to determine if there was a pulsatile waveform exhibited. Sensitivity of the pulsatile waveform transduced through an epidural needle to identify correct placement of the epidural needle and the sensitivity of each catheter type to identify placement were compared. In 79 of 81 cases (97.5%), the waveform transduced directly through the epidural needle had a pulsatile characteristic as determined by blinded observers. In a total of 53 of 81 epidural catheters (65.4%), the transduced waveform displayed pulsations. Twenty-four of 27 catheters in group S-P/Sims Portex (Smiths Medical MD, Inc, St Paul, MN) (88.9%) transduced a pulsatile tracing from the epidural space, a significantly greater percentage than in the other 2 groups (p = 0.02). The technique of transducing the pressure waveform from the epidural needle inserted in the epidural space is a sensitive and reliable alternative to other techniques for confirmation of correct epidural catheter placement. The technique is simple, sensitive, and inexpensive and uses equipment available in any operating room.
Improving the analysis of slug tests
McElwee, C.D.
2002-01-01
This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters.
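The sensitivity analysis can be sketched generically with central finite differences on any response model; the damped-oscillation model below is a toy stand-in, not the four-parameter nonlinear model of the paper.

```python
import numpy as np

def damped_head(t, K, beta, A, V0):
    """Toy oscillatory slug-test head response (illustration only)."""
    omega, decay = 2.0 * np.pi * K, beta + 0.1 * A
    return np.exp(-decay * t) * (np.cos(omega * t) + V0 * np.sin(omega * t))

def sensitivity(t, params, name, h=1e-5):
    """Central-difference sensitivity of the head response to one parameter."""
    up, dn = dict(params), dict(params)
    up[name] += h
    dn[name] -= h
    return (damped_head(t, **up) - damped_head(t, **dn)) / (2 * h)

t = np.linspace(0, 10, 200)
p = dict(K=0.5, beta=0.2, A=0.1, V0=0.0)
s_K = sensitivity(t, p, "K")      # typically the largest in magnitude
```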
Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi
2014-02-01
We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
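A minimal NumPy sketch of the scheme as described in the abstract, assuming the frames are 2-D intensity arrays and that bulk motion is well approximated by an integer pixel shift: a 5 x 5 axial/lateral search produces the 25 candidate angiograms mentioned above, and the candidate with the minimal pixel sum is kept as the BTM-corrected image. Function name and array sizes are illustrative; the authors' GPU implementation is not reproduced here.

```python
import numpy as np

def btm_corrected_angiogram(frame1, frame2, max_shift=2):
    """Squared-difference angiogram from two sequential structural OCT
    frames: one candidate per integer axial/lateral shift of frame2
    (5x5 search -> 25 candidates for max_shift=2); the candidate with
    the minimal sum of pixel values is the bulk-motion-corrected one."""
    best = None
    for dz in range(-max_shift, max_shift + 1):        # axial shift
        for dx in range(-max_shift, max_shift + 1):    # lateral shift
            # np.roll wraps at the edges, which is acceptable for a sketch
            cand = (frame1 - np.roll(frame2, (dz, dx), axis=(0, 1))) ** 2
            if best is None or cand.sum() < best.sum():
                best = cand
    return best

# toy frames: static structure plus a simulated one-pixel bulk motion
rng = np.random.default_rng(0)
structure = rng.random((64, 64))
frame_a = structure + 0.01 * rng.standard_normal((64, 64))
frame_b = np.roll(structure, (1, 0), axis=(0, 1)) + 0.01 * rng.standard_normal((64, 64))
angio = btm_corrected_angiogram(frame_a, frame_b)
```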
Design and Test of a 65nm CMOS Front-End with Zero Dead Time for Next Generation Pixel Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaioni, L.; Braga, D.; Christian, D.
This work is concerned with the experimental characterization of a synchronous analog processor with zero dead time developed in a 65 nm CMOS technology, conceived for pixel detectors at the HL-LHC experiment upgrades. It includes a low-noise, fast charge-sensitive amplifier with a detector leakage compensation circuit, and a compact, single-ended comparator able to correctly process hits belonging to two consecutive bunch crossing periods. A 2-bit Flash ADC is exploited for digital conversion immediately after the preamplifier. A description of the circuits integrated in the front-end processor and the initial characterization results are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, Orville T.; Olsen, Khris B.; Thomas, May-Lin P.
2008-05-01
A method for the separation and determination of total and isotopic uranium and plutonium by ICP-MS was developed for IAEA samples on cellulose-based media. Preparation of the IAEA samples involved a series of redox chemistries and separations using TRU® resin (Eichrom). The sample introduction system, an APEX nebulizer (Elemental Scientific, Inc), provided enhanced nebulization for a several-fold increase in sensitivity and reduction in background. Application of mass bias (ALPHA) correction factors greatly improved the precision of the data. By combining the enhancements of chemical separation, instrumentation and data processing, detection levels for uranium and plutonium approached high attogram levels.
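As a sketch of how a mass-bias (alpha) correction factor of this kind is typically applied, assuming the common linear law R_true = R_meas x (1 + alpha * Δm) per atomic mass unit; the isotope ratios below are illustrative values, not data from the study.

```python
def mass_bias_alpha(measured_ratio, true_ratio, delta_m):
    """Per-amu mass-bias factor from a standard of certified isotope ratio
    (linear law: R_true = R_meas * (1 + alpha * delta_m))."""
    return (true_ratio / measured_ratio - 1.0) / delta_m

def correct_ratio(measured_ratio, alpha, delta_m):
    """Apply the linear mass-bias correction to a sample ratio."""
    return measured_ratio * (1.0 + alpha * delta_m)

# calibrate on a certified 235U/238U standard (delta_m = 235 - 238 = -3),
# then correct a sample ratio measured in the same session
alpha = mass_bias_alpha(measured_ratio=0.007200, true_ratio=0.007253, delta_m=-3)
print(correct_ratio(0.007150, alpha, delta_m=-3))
```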
Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery
NASA Technical Reports Server (NTRS)
Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana
1989-01-01
A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.
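For reference, over a uniform Lambertian surface the measured signal is commonly modeled as rho_TOA = t_gas * [rho_atm + T_down * T_up * rho_s / (1 - s * rho_s)], and the correction inverts this for the surface reflectance rho_s. Below is a minimal sketch of that inversion; the paper's algorithm obtains rho_atm, the transmittances, and the spherical albedo s from tabulated radiative-transfer computations, whereas the numbers here are purely illustrative.

```python
def surface_reflectance(rho_toa, rho_atm, T_down, T_up, s_alb, t_gas=1.0):
    """Invert the standard uniform-surface equation
       rho_toa = t_gas * (rho_atm + T_down*T_up*rho_s / (1 - s_alb*rho_s))
    for the surface reflectance rho_s."""
    y = (rho_toa / t_gas - rho_atm) / (T_down * T_up)
    return y / (1.0 + s_alb * y)

# illustrative mid-visible values for a clear, rural-aerosol atmosphere
print(surface_reflectance(rho_toa=0.12, rho_atm=0.06,
                          T_down=0.85, T_up=0.88, s_alb=0.10))
```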
Algorithm for atmospheric corrections of aircraft and satellite imagery
NASA Technical Reports Server (NTRS)
Fraser, R. S.; Ferrare, R. A.; Kaufman, Y. J.; Markham, B. L.; Mattoo, S.
1992-01-01
A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 microns. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.
Identification of Terrestrial Reflectance From Remote Sensing
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel; Nolf, Scott R.; Stacy, Kathryn (Technical Monitor)
2000-01-01
Correcting for atmospheric effects is an essential part of surface-reflectance recovery from radiance measurements. Model-based atmospheric correction techniques enable an accurate identification and classification of terrestrial reflectances from multi-spectral imagery. Successful and efficient removal of atmospheric effects from remote-sensing data is a key factor in the success of Earth observation missions. This report assesses the performance, robustness and sensitivity of two atmospheric-correction and reflectance-recovery techniques as part of an end-to-end simulation of hyper-spectral acquisition, identification and classification.
LANDSAT-D program. Volume 2: Ground segment
NASA Technical Reports Server (NTRS)
1984-01-01
Raw digital data, as received from the LANDSAT spacecraft, cannot generate images that meet specifications. Radiometric corrections must be made to compensate for aging and for differences in sensitivity among the instrument sensors. Geometric corrections must be made to compensate for off-nadir look angle, and to calculate spacecraft drift from its prescribed path. Corrections must also be made for look-angle jitter caused by vibrations induced by spacecraft equipment. The major components of the LANDSAT ground segment and their functions are discussed.
Uwemedimo, Omolara T; Lewis, Todd P; Essien, Elsie A; Chan, Grace J; Nsona, Humphreys; Kruk, Margaret E; Leslie, Hannah H
2018-01-01
Pneumonia remains the leading cause of child mortality in sub-Saharan Africa. The Integrated Management of Childhood Illness (IMCI) strategy was developed to standardise care in low-income and middle-income countries for major childhood illnesses and can effectively improve healthcare worker performance. Suboptimal clinical evaluation can result in missed diagnoses and excess morbidity and mortality. We estimate the sensitivity of pneumonia diagnosis and investigate its determinants among children in Malawi. Data were obtained from the 2013-2014 Service Provision Assessment survey, a census of health facilities in Malawi that included direct observation of care and re-examination of children by trained observers. We calculated sensitivity of pneumonia diagnosis and used multilevel log-binomial regression to assess factors associated with diagnostic sensitivity. 3136 clinical visits for children 2-59 months old were observed at 742 health facilities. Healthcare workers completed an average of 30% (SD 13%) of IMCI guidelines in each encounter. 573 children met the IMCI criteria for pneumonia; 118 (21%) were correctly diagnosed. Advanced practice clinicians were more likely than other providers to diagnose pneumonia correctly (adjusted relative risk 2.00, 95% CI 1.21 to 3.29). Clinical quality was strongly associated with correct diagnosis: sensitivity was 23% in providers at the 75th percentile for guideline adherence compared with 14% for those at the 25th percentile. Contextual factors, facility structural readiness, and training or supervision were not associated with sensitivity. Care quality for Malawian children is poor, with low guideline adherence and missed diagnosis for four of five children with pneumonia. Better sensitivity is associated with provider type and higher adherence to IMCI. Existing interventions such as training and supportive supervision are associated with higher guideline adherence, but are insufficient to meaningfully improve sensitivity. Innovative and scalable quality improvement interventions are needed to strengthen health systems and reduce avoidable child mortality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banik, Subrata; Ravichandran, Lalitha; Brabec, Jiri
2015-03-21
As a further development of the previously introduced a posteriori Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011)] and [Brabec et al., J. Chem. Phys. 136, 124102 (2012)], we suggest an iterative form of the USS correction by means of correcting effective Hamiltonian matrix elements. We also formulate USS corrections via the left Bloch equations. The convergence of the USS corrections with excitation level towards the FCI limit is also investigated. Various forms of the USS and simplified diagonal USSD corrections at the SD and SD(T) levels are numerically assessed on several model systems and on the ozone and tetramethyleneethane molecules. It is shown that the iterative USS correction can successfully replace the previously developed a posteriori BWCC size-extensivity correction, while it is not sensitive to intruder states and performs well also in other cases where the a posteriori one fails, e.g., for the asymmetric vibration mode of ozone.
Thompson, Kelly; Zhang, Jianying; Zhang, Chunlong
2011-08-01
Effluents from sewage treatment plants (STPs) are known to contain residual micro-contaminants including endocrine disrupting chemicals (EDCs) despite the utilization of various removal processes. Temperature alters the efficacy of removal processes; however, experimental measurements of EDC removal at various temperatures are limited. Extrapolation of EDC behavior over a wide temperature range is possible using available physicochemical property data followed by correction for temperature dependency. A level II fugacity-based STP model was employed by inputting parameters obtained from the literature and estimated by the US EPA's Estimation Programs Interface (EPI), including EPI's BIOWIN for temperature-dependent biodegradation half-lives. EDC removals in a three-stage activated sludge system were modeled under various temperatures and hydraulic retention times (HRTs) for representative compounds of various properties. Sensitivity analysis indicates that temperature plays a significant role in the model outcomes. Increasing temperature considerably enhances the removal of β-estradiol, ethinylestradiol, bisphenol, phenol, and tetrachloroethylene, but not testosterone, which has the highest biodegradation rate. The shortcomings of BIOWIN were mitigated by correcting the highly temperature-dependent biodegradation rates using the Arrhenius equation. The model predicts well the effects of operating temperature and HRTs on removal via volatilization, adsorption, and biodegradation. The model also reveals that an impractically long HRT is needed to achieve high EDC removal. The STP model along with temperature corrections is able to provide useful insight into the different patterns of STP performance, and useful operational considerations relevant to EDC removal at low winter temperatures. Copyright © 2011 Elsevier Ltd. All rights reserved.
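A minimal sketch of the Arrhenius-type temperature correction applied to a first-order biodegradation rate constant; the activation energy and the reference half-life below are assumptions for illustration, not values from the paper.

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def k_at_temperature(k_ref, T, T_ref=298.15, Ea=65e3):
    """Arrhenius correction of a first-order rate constant from T_ref to T;
    Ea (J/mol) is an assumed activation energy."""
    return k_ref * math.exp(-Ea / R_GAS * (1.0 / T - 1.0 / T_ref))

# a compound with an assumed 10-day half-life at 25 C degrades far more
# slowly at a 10 C winter operating temperature
k_25 = math.log(2) / 10.0                  # d^-1
k_10 = k_at_temperature(k_25, T=283.15)
print(math.log(2) / k_10)                  # half-life at 10 C, roughly 40 days
```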
Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher
2008-09-15
We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.
Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System
NASA Astrophysics Data System (ADS)
Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.
2018-02-01
We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.
White matter correlates of anxiety sensitivity in panic disorder.
Kim, Min-Kyoung; Kim, Borah; Kiu Choi, Tai; Lee, Sang-Hyuk
2017-01-01
Anxiety sensitivity (AS) refers to a fear of anxiety-related sensations and is a dispositional variable especially elevated in patients with panic disorder (PD). Although several functional imaging studies of AS in patients with PD have suggested the presence of altered neural activity in paralimbic areas such as the insula, no study has investigated white matter (WM) alterations in patients with PD in relation to AS. The objective of this study was to investigate the WM correlates of AS in patients with PD. One-hundred and twelve right-handed patients with PD and 48 healthy control (HC) subjects were enrolled in this study. The Anxiety Sensitivity Inventory-Revised (ASI-R), the Panic Disorder Severity Scale (PDSS), the Albany Panic and Phobia Questionnaire (APPQ), the Beck Anxiety Inventory (BAI), and the Beck Depression Inventory (BDI) were administered. Tract-based spatial statistics were used for diffusion tensor magnetic resonance imaging analysis. Among the patients with PD, the ASI-R total scores were significantly correlated with the fractional anisotropy values of the WM regions near the insula, the splenium of the corpus callosum, the tapetum, the fornix/stria terminalis, the posterior limb of the internal capsule, the retrolenticular part of the internal capsule, the posterior thalamic radiation, the sagittal striatum, and the posterior corona radiata located in temporo-parieto-limbic regions and are involved in interoceptive processing (p<0.01; threshold-free cluster enhancement [TFCE]-corrected). These WM regions were also significantly correlated with the APPQ interoceptive avoidance subscale and BDI scores in patients with PD (p<0.01, TFCE-corrected). Correlation analysis among the HC subjects revealed no significant findings. There has been no comparative study on the structural neural correlates of AS in PD. The current study suggests that the WM correlates of AS in patients with PD may be associated with the insula and the adjacent temporo-parieto-limbic WM regions, which may play important roles in interoceptive processing in the brain and in depression in PD. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.
Regional cloud permitting model simulations of cloud populations observed during the 2011 ARM Madden Julian Oscillation Investigation Experiment/Dynamics of Madden-Julian Experiment (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterization of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rate from large and deep convective cores. Sensitivity runs involving variation of parameters that affect rain drop or ice particle size distribution (a more aggressive break-up process, etc.) generally reduce the bias in rain-rate and boundary layer temperature statistics as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while it is worsened when run at 4 km grid spacing as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud permitting model simulations.
Liu, Tongran; Xiao, Tong; Shi, Jiannong
2013-02-13
Response inhibition and preattentive processing are two important cognitive abilities for child development, and the current study adopted both behavioral and electrophysiological protocols to examine whether young children's response inhibition correlated with their preattentive processing. A Go/Nogo task was used to explore young children's response inhibition performance and an Oddball task with event-related potential recordings was used to measure their preattentive processing. The behavioral results showed that girls had significantly lower commission error rates, indicating that girls had stronger inhibitory control abilities than boys. Girls also achieved higher d' scores in the Go/Nogo task, which indicated that they were more sensitive to the stimulus signals than boys. Although the electrophysiological results of preattentive processing did not show any sex differences, the correlation patterns between children's response inhibition and preattentive processing differed between these two groups: the neural response speed of preattentive processing (mismatch negativity peak latency) negatively correlated with girls' commission error rates and positively correlated with boys' correct hit rates. The current findings support the idea that preattentive processing correlates with human inhibitory control performance, and further show that girls' better inhibition responses might be due to the influence of their preattentive processing.
Sensitivity of atmospheric correction to loading and model of the aerosol
NASA Astrophysics Data System (ADS)
Bassani, Cristiana; Braga, Federica; Bresciani, Mariano; Giardino, Claudia; Adamo, Maria; Ananasso, Cristina; Alberotanza, Luigi
2013-04-01
The physically-based atmospheric correction requires knowledge of the atmospheric conditions during remote data acquisition [Guanter et al., 2007; Gao et al., 2009; Kotchenova et al., 2009; Bassani et al., 2010]. The propagation of solar radiation in the atmospheric window of the visible and near-infrared spectral domain depends on aerosol scattering. The effects of solar beam extinction are related to the aerosol loading, through the aerosol optical thickness at 550 nm (AOT) parameter [Kaufman et al., 1997; Vermote et al., 1997; Kotchenova et al., 2008; Kokhanovsky et al., 2010], and also to the aerosol model. Recently, the atmospheric correction of hyperspectral data has been shown to be sensitive to the micro-physical and optical characteristics of the aerosol, as reported in [Bassani et al., 2012]. Within the framework of the CLAM-PHYM (Coasts and Lake Assessment and Monitoring by PRISMA HYperspectral Mission) project, funded by the Italian Space Agency (ASI), the role of the aerosol model in the accuracy of the atmospheric correction of hyperspectral images acquired over water targets is investigated. In this work, the results of the atmospheric correction of HICO (Hyperspectral Imager for the Coastal Ocean) images acquired over the Northern Adriatic Sea in the Mediterranean are presented. The atmospheric correction has been performed by an algorithm specifically developed for the HICO sensor. The algorithm is based on the equation presented in [Vermote et al., 1997; Bassani et al., 2010], using the last generation of the Second Simulation of a Satellite Signal in the Solar Spectrum (6S) radiative transfer code [Kotchenova et al., 2008; Vermote et al., 2009]. The sensitivity analysis of the atmospheric correction of HICO data is performed with respect to the aerosol optical and micro-physical properties used to define the aerosol model. In particular, a variable mixture of the four basic components (dust-like, oceanic, water-soluble, and soot) has been considered. The water reflectance obtained from the atmospheric correction with variable model and fixed loading of the aerosol has been compared. The results highlight the need to define the aerosol characteristics, loading and model, to simulate the radiative field in the atmosphere for an accurate atmospheric correction of hyperspectral data, improving the accuracy of the retrieved surface reflectance over water, a dark target. In conclusion, the aerosol model plays a crucial role in an accurate physically-based atmospheric correction of hyperspectral data over water. Currently, the PRISMA mission provides valuable opportunities to study aerosols and their radiative effects on hyperspectral data. Bibliography: Guanter, L.; Estellés, V.; Moreno, J. Spectral calibration and atmospheric correction of ultra-fine spectral and spatial resolution remote sensing data. Application to CASI-1500 data. Remote Sens. Environ. 2007, 109, 54-65. Gao, B.-C.; Montes, M.J.; Davis, C.O.; Goetz, A.F.H. Atmospheric correction algorithms for hyperspectral remote sensing data of land and ocean. Remote Sens. Environ. 2009, 113, S17-S24. Kotchenova, S. Atmospheric correction for the monitoring of land surfaces. J. Geophys. Res. 2009, 113, D23. Bassani, C.; Cavalli, R.M.; Pignatti, S. Aerosol optical retrieval and surface reflectance from airborne remote sensing data over land. Sensors 2010, 10, 6421-6438. Kaufman, Y.J.; Tanré, D.; Gordon, H.R.; Nakajima, T.; Lenoble, J.; Frouin, R.; Grassl, H.; Herman, B.M.; King, M.; Teillet, P.M. Operational remote sensing of tropospheric aerosol over land from EOS moderate resolution imaging spectroradiometer. J. Geophys. Res. 1997, 102(D14), 17051-17067. Vermote, E.F.; Tanré, D.; Deuzé, J.L.; Herman, M.; Morcrette, J.J. Second simulation of the satellite signal in the solar spectrum, 6S: An overview. IEEE Trans. Geosci. Remote Sens. 1997, 35, 675-686. Kotchenova, S.Y.; Vermote, E.F.; Levy, R.; Lyapustin, A. Radiative transfer codes for atmospheric correction and aerosol retrieval: Intercomparison study. Appl. Optics 2008, 47, 2215-2226. Kokhanovsky, A.A.; Deuzé, J.L.; Diner, D.J.; Dubovik, O.; Ducos, F.; Emde, C.; Garay, M.J.; Grainger, R.G.; Heckel, A.; Herman, M.; Katsev, I.L.; Keller, J.; Levy, R.; North, P.R.J.; Prikhach, A.S.; Rozanov, V.V.; Sayer, A.M.; Ota, Y.; Tanré, D.; Thomas, G.E.; Zege, E.P. The inter-comparison of major satellite aerosol retrieval algorithms using simulated intensity and polarization characteristics of reflected light. Atmos. Meas. Tech. 2010, 3, 909-932. Bassani, C.; Cavalli, R.M.; Antonelli, P. Influence of aerosol and surface reflectance variability on hyperspectral observed radiance. Atmos. Meas. Tech. 2012, 5, 1193-1203. Vermote, E.F.; Kotchenova, S. Atmospheric correction for the monitoring of land surfaces. J. Geophys. Res. 2009, 113, D23.
Damage Identification of Piles Based on Vibration Characteristics
Zhang, Xiaozhong; Yao, Wenjuan; Chen, Bo; Liu, Dewen
2014-01-01
A method of damage identification for piles was established using vibration characteristics. The approach focuses on the application of element strain energy and sensitive modes. A damage identification equation for piles was deduced from the structural vibration equation. The equation contains three major factors: the change rate of element modal strain energy, the damage factor of the pile, and the sensitivity factor of modal damage. First, the sensitive modes for damage identification were selected using the sensitivity factor of modal damage. Subsequently, indexes for early warning of pile damage were established by applying the change rate of strain energy. Wavelet-transform analysis was then applied to pile damage identification. The identification of small damage in piles was fully achieved, including the location and the extent of damage. In the process of identifying the extent of damage, the damage identification equation was applied repeatedly. Finally, a stadium project was used as an example to demonstrate the effectiveness of the proposed method. The correctness and practicability of the proposed method were verified by comparing the results of damage identification with those of a low strain test. The research provides a new way to identify damage in piles. PMID:25506062
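A small sketch of the core quantity, the change rate of element modal strain energy, on a toy 3-DOF spring chain with a unit mass matrix. This illustrates the generic index MSEC_i = (phi_d' K_i phi_d - phi_u' K_i phi_u) / (phi_u' K_i phi_u), not the authors' full pile formulation.

```python
import numpy as np

def spring_K(n, a, b=None, k=1.0):
    """Full-size stiffness matrix of a spring between DOF a and DOF b
    (b=None means the spring is anchored to ground)."""
    K = np.zeros((n, n))
    K[a, a] += k
    if b is not None:
        K[b, b] += k
        K[a, b] -= k
        K[b, a] -= k
    return K

def first_mode(K):
    """Lowest mode shape of K, assuming a unit mass matrix."""
    _, V = np.linalg.eigh(K)
    return V[:, 0]

n = 3
elems = [spring_K(n, 0), spring_K(n, 1, 0), spring_K(n, 2, 1)]  # element matrices K_i
phi_u = first_mode(sum(elems))                 # undamaged mode shape

damaged = [e.copy() for e in elems]
damaged[1] *= 0.9                              # 10% stiffness loss in element 1
phi_d = first_mode(sum(damaged))               # damaged mode shape

mse_u = np.array([phi_u @ K @ phi_u for K in elems])
mse_d = np.array([phi_d @ K @ phi_d for K in elems])
print((mse_d - mse_u) / mse_u)   # change rates; the damaged element tends to stand out
```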
Optical devices in highly myopic eyes with low vision: a prospective study.
Scassa, C; Cupo, G; Bruno, M; Iervolino, R; Capozzi, S; Tempesta, C; Giusti, C
2012-01-01
To compare, in relation to the cause of visual impairment, the possibility of rehabilitation, the corrective systems already in use and the finally prescribed optical devices in highly myopic patients with low vision. Some considerations about the rehabilitation of these subjects, especially in relation to their different pathologies, have also been made. 25 highly myopic subjects were enrolled. We evaluated both visual acuity and retinal sensitivity by Scanning Laser Ophthalmoscope (SLO) microperimetry. 20 patients (80%) were rehabilitated by means of monocular optical devices while five patients (20%) were rehabilitated binocularly. We found a good correlation between visual acuity and retinal sensitivity only when the macular pathology did not induce large areas of chorioretinal atrophy that cause lack of stabilization of the preferential retinal locus. In fact, the best results in reading and performing daily visual tasks were obtained by maximizing the residual vision in patients with retinal sensitivity greater than 10 dB. A well circumscribed area of absolute scotoma with a defined new retinal fixation locus could be considered as a positive predictive factor for the final rehabilitation process. A more careful evaluation of visual acuity, retinal sensitivity and preferential fixation locus is necessary in order to prescribe the best optical devices to patients with low vision, thus reducing the impact of the disability on their daily life.
Effects of Meteorological Data Quality on Snowpack Modeling
NASA Astrophysics Data System (ADS)
Havens, S.; Marks, D. G.; Robertson, M.; Hedrick, A. R.; Johnson, M.
2017-12-01
Detailed quality control of meteorological inputs is the most time-intensive component of running the distributed, physically-based iSnobal snow model, and the effect of data quality of the inputs on the model is unknown. The iSnobal model has been run operationally since WY2013, and is currently run in several basins in Idaho and California. The largest amount of user input during modeling is for the quality control of precipitation, temperature, relative humidity, solar radiation, wind speed and wind direction inputs. Precipitation inputs require detailed user input and are crucial to correctly model the snowpack mass. This research applies a range of quality control methods to meteorological input, from raw input with minimal cleaning, to complete user-applied quality control. The meteorological input cleaning generally falls into two categories. The first is global minimum/maximum and missing value correction that could be corrected and/or interpolated with automated processing. The second category is quality control for inputs that are not globally erroneous, yet are still unreasonable and generally indicate malfunctioning measurement equipment, such as temperature or relative humidity that remains constant, or does not correlate with daily trends observed at nearby stations. This research will determine how sensitive model outputs are to different levels of quality control and guide future operational applications.
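A sketch of the first, automatable QC category described above (global min/max bounds plus short-gap interpolation), with a simple stuck-sensor flag standing in for the second category. The thresholds and the pandas-based interface are assumptions for illustration, not part of the iSnobal toolchain.

```python
import numpy as np
import pandas as pd

def basic_qc(series, lo, hi, max_run=12):
    """First-category QC: mask globally impossible values and linearly
    interpolate short gaps; then flag suspicious constant runs that
    often indicate a stuck sensor (second-category candidate)."""
    s = series.where((series >= lo) & (series <= hi))   # global min/max
    s = s.interpolate(limit=max_run)                    # fill short gaps only
    stuck = s.diff().eq(0).rolling(max_run).sum().eq(max_run)
    return s.mask(stuck)

# hourly air temperature with a spike and a frozen stretch
idx = pd.date_range("2017-01-01", periods=48, freq="h")
t = pd.Series(np.sin(np.arange(48) / 6.0) * 5.0, index=idx)
t.iloc[10] = 999.0            # impossible value, caught by the bounds
t.iloc[20:35] = 2.0           # stuck sensor, caught by the run test
print(basic_qc(t, lo=-40, hi=45).isna().sum())
```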
Li, G; Welander, U; Yoshiura, K; Shi, X-Q; McDavid, W D
2003-11-01
Two digital image processing methods, correction for X-ray attenuation and correction for attenuation and visual response, have been developed. The aim of the present study was to compare digital radiographs before and after correction for attenuation and correction for attenuation and visual response by means of a perceptibility curve test. Radiographs were exposed of an aluminium test object containing holes ranging from 0.03 mm to 0.30 mm with increments of 0.03 mm. Fourteen radiographs were exposed with the Dixi system (Planmeca Oy, Helsinki, Finland) and twelve radiographs were exposed with the F1 iOX system (Fimet Oy, Monninkylä, Finland) from low to high exposures covering the full exposure ranges of the systems. Radiographs obtained from the Dixi and F1 iOX systems were 12 bit and 8 bit images, respectively. Original radiographs were then processed for correction for attenuation and correction for attenuation and visual response. Thus, two series of radiographs were created. Ten viewers evaluated all the radiographs in the same random order under the same viewing conditions. The object detail having the lowest perceptible contrast was recorded for each observer. Perceptibility curves were plotted according to the mean of observer data. The perceptibility curves for processed radiographs obtained with the F1 iOX system are higher than those for originals in the exposure range up to the peak, where the curves are basically the same. For radiographs exposed with the Dixi system, perceptibility curves for processed radiographs are higher than those for originals for all exposures. Perceptibility curves show that for 8 bit radiographs obtained from the F1 iOX system, the contrast threshold was increased in processed radiographs up to the peak, while for 12 bit radiographs obtained with the Dixi system, the contrast threshold was increased in processed radiographs for all exposures. When comparisons were made between radiographs corrected for attenuation and corrected for attenuation and visual response, basically no differences were found. Radiographs processed for correction for attenuation and correction for attenuation and visual response may improve perception, especially for 12 bit originals.
Optical microphone with fiber Bragg grating and signal processing techniques
NASA Astrophysics Data System (ADS)
Tosi, Daniele; Olivero, Massimo; Perrone, Guido
2008-06-01
In this paper, we discuss the realization of an optical microphone array using fiber Bragg gratings as sensing elements. The wavelength shift induced by acoustic waves perturbing the sensing Bragg grating is transduced into an intensity modulation. The interrogation unit is based on a fixed-wavelength laser source and, as receiver, a photodetector with proper amplification; the system has been implemented using devices for standard optical communications, achieving a low-cost interrogator. One of the advantages of the proposed approach is that no voltage-to-strain calibration is required for tracking dynamic shifts. The optical sensor is complemented by signal processing tools, including a data-dependent frequency estimator and adaptive filters, in order to improve the frequency-domain analysis and mitigate the effects of disturbances. The feasibility and performance of the optical system have been tested by measuring the output of a loudspeaker. With this configuration, the sensor is capable of correctly detecting sounds up to 3 kHz, with a frequency response that exhibits a top sensitivity within the range 200-500 Hz; single-frequency input sounds inducing an axial strain higher than ~10 nε are correctly detected. The repeatability range is ~0.1%. The sensor has also been applied to the detection of pulsed stimuli generated by a metronome.
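The abstract mentions adaptive filters among the signal processing tools; a generic LMS noise canceller of the kind often used for such disturbance mitigation is sketched below. The tap count, step size, and the 50 Hz pickup scenario are illustrative assumptions, not the authors' design.

```python
import numpy as np

def lms_filter(primary, reference, taps=32, mu=0.01):
    """Generic LMS adaptive noise canceller: predicts the disturbance in
    `primary` from a correlated `reference` signal and subtracts it."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]
        e = primary[n] - w @ x        # error = cleaned output sample
        w += 2 * mu * e * x           # LMS weight update
        out[n] = e
    return out

# a 440 Hz tone buried in 50 Hz pickup, with the 50 Hz line as reference
fs = 10_000
t = np.arange(2 * fs) / fs
sig = np.sin(2 * np.pi * 440 * t) + 0.8 * np.sin(2 * np.pi * 50 * t)
ref = np.sin(2 * np.pi * 50 * t + 0.3)
clean = lms_filter(sig, ref)
```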
Common Neural Mechanisms Underlying Reversal Learning by Reward and Punishment
Xue, Gui; Xue, Feng; Droutman, Vita; Lu, Zhong-Lin; Bechara, Antoine; Read, Stephen
2013-01-01
Impairments in flexible goal-directed decisions, often examined by reversal learning, are associated with behavioral abnormalities characterized by impulsiveness and disinhibition. Although the lateral orbital frontal cortex (OFC) has been consistently implicated in reversal learning, it is still unclear whether this region is involved in negative feedback processing, behavioral control, or both, and whether reward and punishment might have different effects on lateral OFC involvement. Using a relatively large sample (N = 47), and a categorical learning task with either monetary reward or moderate electric shock as feedback, we found overlapping activations in the right lateral OFC (and adjacent insula) for reward and punishment reversal learning when comparing correct reversal trials with correct acquisition trials, whereas we found overlapping activations in the right dorsolateral prefrontal cortex (DLPFC) when negative feedback signaled contingency change. The right lateral OFC and DLPFC also showed greater sensitivity to punishment than did their left homologues, indicating an asymmetry in how punishment is processed. We propose that the right lateral OFC and anterior insula are important for transforming affective feedback to behavioral adjustment, whereas the right DLPFC is involved in higher level attention control. These results provide insight into the neural mechanisms of reversal learning and behavioral flexibility, which can be leveraged to understand risky behaviors among vulnerable populations. PMID:24349211
Precise predictions for V+jets dark matter backgrounds
NASA Astrophysics Data System (ADS)
Lindert, J. M.; Pozzorini, S.; Boughezal, R.; Campbell, J. M.; Denner, A.; Dittmaier, S.; Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, N.; Huss, A.; Kallweit, S.; Maierhöfer, P.; Mangano, M. L.; Morgan, T. A.; Mück, A.; Petriello, F.; Salam, G. P.; Schönherr, M.; Williams, C.
2017-12-01
High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require precise control of the Z(νν̄)+jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(ℓ⁺ℓ⁻)+jet, W(ℓν)+jet and γ+jet production, and extrapolating to the Z(νν̄)+jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V+jets processes, including throughout NNLO QCD corrections and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V+jet processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄)+jet background is at the few percent level up to the TeV range.
Schwarzkopp, Tina; Mayr, Ulrich; Jost, Kerstin
2016-01-01
We examined whether a reduced ability to ignore irrelevant information is responsible for the age-related decline of working-memory (WM) functions. By means of event-related brain potentials we will show that filtering is not out of service in older adults but shifted to a later processing stage. Participants performed a visual short-term memory task (change-detection task) in which targets were presented along with distractors. To allow early selection, a cue was presented in advance of each display, indicating where the targets were to appear. Despite this relatively easy selection criterion, older adults’ filtering was delayed as indicated by the amplitude pattern of the contralateral delay activity. Importantly, WM-equated younger adults did not show a delay indicating that the delay is specific to older adults and not a general phenomenon that comes with low WM capacity. Moreover, the analysis of early visual potentials revealed qualitatively different perceptual/attentional processing between the age groups. Young adults exhibited stronger distractor sensitivity that in turn facilitated filtering. Older adults, in contrast, seemed to initially store distractors and to suppress them after the fact. These early-selection versus late-correction modes suggest an age-related shift in the strategy to control the contents of WM. PMID:27253867
Common neural mechanisms underlying reversal learning by reward and punishment.
Xue, Gui; Xue, Feng; Droutman, Vita; Lu, Zhong-Lin; Bechara, Antoine; Read, Stephen
2013-01-01
Impairments in flexible goal-directed decisions, often examined by reversal learning, are associated with behavioral abnormalities characterized by impulsiveness and disinhibition. Although the lateral orbital frontal cortex (OFC) has been consistently implicated in reversal learning, it is still unclear whether this region is involved in negative feedback processing, behavioral control, or both, and whether reward and punishment might have different effects on lateral OFC involvement. Using a relatively large sample (N = 47), and a categorical learning task with either monetary reward or moderate electric shock as feedback, we found overlapping activations in the right lateral OFC (and adjacent insula) for reward and punishment reversal learning when comparing correct reversal trials with correct acquisition trials, whereas we found overlapping activations in the right dorsolateral prefrontal cortex (DLPFC) when negative feedback signaled contingency change. The right lateral OFC and DLPFC also showed greater sensitivity to punishment than did their left homologues, indicating an asymmetry in how punishment is processed. We propose that the right lateral OFC and anterior insula are important for transforming affective feedback to behavioral adjustment, whereas the right DLPFC is involved in higher level attention control. These results provide insight into the neural mechanisms of reversal learning and behavioral flexibility, which can be leveraged to understand risky behaviors among vulnerable populations.
Cognitive processing load across a wide range of listening conditions: insights from pupillometry.
Zekveld, Adriana A; Kramer, Sophia E
2014-03-01
The pupil response to speech masked by interfering speech was assessed across an intelligibility range from 0% to 99% correct. In total, 37 participants aged between 18 and 36 years and with normal hearing were included. Pupil dilation was largest at intermediate intelligibility levels, smaller at high intelligibility, and slightly smaller at very difficult levels. Participants who reported that they often gave up listening at low intelligibility levels had smaller pupil dilations in these conditions. Participants who were good at reading masked text had relatively large pupil dilation when intelligibility was low. We conclude that the pupil response is sensitive to processing load, and possibly reflects cognitive overload in difficult conditions. It seems affected by methodological aspects and individual abilities, but does not reflect subjective ratings. Copyright © 2014 Society for Psychophysiological Research.
Price, performance, and the FDA approval process: the example of home HIV testing.
Paltiel, A David; Pollack, Harold A
2010-01-01
The Food and Drug Administration (FDA) is considering approval of an over-the-counter, rapid HIV test for home use. To support its decision, the FDA seeks evidence of the test's performance. It has asked the manufacturer to conduct field studies of the test's sensitivity and specificity when employed by untrained users. In this article, the authors argue that additional information should be sought to evaluate the prevalence of undetected HIV in the end-user population. The analytic framework produces the elementary but counterintuitive finding that the performance of the home HIV test, measured in terms of its ability to correctly detect the presence and absence of HIV infection among the people who purchase it, depends critically on the manufacturer's retail price. This finding has profound implications for the FDA's approval process.
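The authors' point is the standard Bayesian one: for fixed sensitivity and specificity, the positive predictive value swings strongly with the prevalence of undetected HIV among purchasers, which price helps determine. A worked sketch (the test characteristics below are illustrative, not the actual device's):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value among the people who actually buy the test."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

# the same test looks very different as price shifts who buys it
for prev in (0.001, 0.01, 0.05):  # prevalence of undetected HIV in purchasers
    print(prev, round(ppv(0.92, 0.998, prev), 3))
# roughly 0.32, 0.82, and 0.96 respectively: identical lab performance,
# very different field performance
```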
Atmospheric refraction correction for Ka-band blind pointing on the DSS-13 beam waveguide antenna
NASA Technical Reports Server (NTRS)
Perez-Borroto, I. M.; Alvarez, L. S.
1992-01-01
An analysis of the atmospheric refraction corrections at the DSS-13 34-m diameter beam waveguide (BWG) antenna for the period Jul. - Dec. 1990 is presented. The current Deep Space Network (DSN) atmospheric refraction model and its sensitivity with respect to sensor accuracy are reviewed. Refraction corrections based on actual atmospheric parameters are compared with the DSS-13 station default corrections for the six-month period. Average blind-pointing improvement during the worst month would have amounted to 5 mdeg at 10 deg elevation using actual surface weather values. This would have resulted in an average gain improvement of 1.1 dB.
MR-assisted PET Motion Correction for Neurological Studies in an Integrated MR-PET Scanner
Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B.; Michel, Christian J.; El Fakhri, Georges; Schmand, Matthias; Sorensen, A. Gregory
2011-01-01
Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MR data can be used for motion tracking. In this work, a novel data processing and rigid-body motion correction (MC) algorithm for the MR-compatible BrainPET prototype scanner is described and proof-of-principle phantom and human studies are presented. Methods: To account for motion, the PET prompts and randoms coincidences as well as the sensitivity data are processed in line-of-response (LOR) space according to the MR-derived motion estimates. After sinogram space rebinning, the corrected data are summed and the motion-corrected PET volume is reconstructed from these sinograms and the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed and motion estimates were obtained using two high temporal resolution MR-based motion tracking techniques. Results: After accounting for the physical mismatch between the two scanners, perfectly co-registered MR and PET volumes are reproducibly obtained. The MR output gates inserted into the PET list-mode data allow the temporal correlation of the two data sets within 0.2 s. The Hoffman phantom volume reconstructed by processing the PET data in LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in image space, demonstrating the quantitative accuracy of the novel MC algorithm. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 seconds and 20 ms, respectively. Substantially improved PET images with excellent delineation of specific brain structures were obtained after applying the MC using these MR-based estimates. Conclusion: A novel MR-based MC algorithm was developed for the integrated MR-PET scanner. High temporal resolution MR-derived motion estimates (obtained while simultaneously acquiring anatomical or functional MR data) can be used for PET MC. An MR-based MC has the potential to improve PET as a quantitative method, increasing its reliability and reproducibility, which could benefit a large number of neurological applications. PMID:21189415
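As an illustration of the geometric core of such LOR-space processing, the sketch below builds a rigid-body transform from six MR-derived motion parameters and maps both endpoints of a line of response back to the reference position. The conventions (rotation order, units) and values are assumptions, and the scanner-specific rebinning and sensitivity handling are omitted.

```python
import numpy as np

def rigid_transform(motion_6dof):
    """4x4 homogeneous matrix from three rotations (rad, applied in
    x-y-z order) and three translations (mm), as an MR navigator
    might report them."""
    rx, ry, rz, tx, ty, tz = motion_6dof
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = (tx, ty, tz)
    return T

def move_lor(p1, p2, T):
    """Map both endpoints of a line of response into the reference head
    position; the moved LOR would then be rebinned into sinogram space."""
    q1 = T @ np.append(p1, 1.0)
    q2 = T @ np.append(p2, 1.0)
    return q1[:3], q2[:3]

T = rigid_transform([0.01, 0.0, 0.02, 1.5, -0.8, 0.0])
print(move_lor(np.array([300.0, 0.0, 10.0]), np.array([-300.0, 5.0, 12.0]), T))
```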
Ward, Ryan D; Odum, Amy L
2008-01-01
The present experiment developed a methodology for assessing sensitivity of conditional-discrimination performance to within-session variation of reinforcer frequency. Four pigeons responded under a multiple schedule of matching-to-sample components in which the ratio of reinforcers for correct S1 and S2 responses was varied across components within session. Initially, five components, each arranging a different reinforcer-frequency ratio (from 1∶9 to 9∶1), were presented randomly within a session. Under this condition, sensitivity to reinforcer frequency was low. Sensitivity failed to improve after extended exposure to this condition, and under a condition in which only three reinforcer-frequency ratios were varied within session. In a later condition, three reinforcer-frequency ratios were varied within session, but the reinforcer-frequency ratio in effect was differentially signaled within each component. Under this condition, values of sensitivity were similar to those traditionally obtained when reinforcer-frequency ratios for correct responses are varied across conditions. The effects of signaled vs. unsignaled reinforcer-frequency ratios were replicated in two subsequent conditions. The present procedure could provide a practical alternative to parametric variation of reinforcer frequency across conditions and may be useful in characterizing the effects of a variety of manipulations on steady-state sensitivity to reinforcer frequency. PMID:19070338
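Sensitivity in this literature is usually estimated as the slope of the generalized matching relation log(B1/B2) = a * log(R1/R2) + log b, where a is sensitivity and b is bias. A minimal sketch with hypothetical choice data (not the pigeons' actual results):

```python
import numpy as np

# log behavior ratios vs log programmed reinforcer ratios across the five
# components; the slope of the fitted line is the sensitivity estimate a
log_r = np.log10([1 / 9, 1 / 3, 1.0, 3.0, 9.0])       # programmed ratios
log_b = np.array([-0.55, -0.20, 0.02, 0.24, 0.58])    # hypothetical data
a, log_b0 = np.polyfit(log_r, log_b, 1)
print(f"sensitivity a = {a:.2f}, bias log b = {log_b0:.2f}")
```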
Status of the DKIST system for solar adaptive optics
NASA Astrophysics Data System (ADS)
Johnson, Luke C.; Cummings, Keith; Drobilek, Mark; Johansson, Erik; Marino, Jose; Richards, Kit; Rimmele, Thomas; Sekulic, Predrag; Wöger, Friedrich
2016-07-01
When the Daniel K. Inouye Solar Telescope (DKIST) achieves first light in 2019, it will deliver the highest spatial resolution images of the solar atmosphere ever recorded. Additionally, the DKIST will observe the Sun with unprecedented polarimetric sensitivity and spectral resolution, spurring a leap forward in our understanding of the physical processes occurring on the Sun. The DKIST wavefront correction system will provide active alignment control and jitter compensation for all six of the DKIST science instruments. Five of the instruments will also be fed by a conventional adaptive optics (AO) system, which corrects for high frequency jitter and atmospheric wavefront disturbances. The AO system is built around an extended-source correlating Shack-Hartmann wavefront sensor, a Physik Instrumente fast tip-tilt mirror (FTTM) and a Xinetics 1600-actuator deformable mirror (DM), which are controlled by an FPGA-based real-time system running at 1975 Hz. It is designed to achieve on-axis Strehl of 0.3 at 500 nm in median seeing (r0 = 7 cm) and Strehl of 0.6 at 630 nm in excellent seeing (r0 = 20 cm). The DKIST wavefront correction team has completed the design phase and is well into the fabrication phase. The FTTM and DM have both been delivered to the DKIST laboratory in Boulder, CO. The real-time controller has been completed and is able to read out the camera and deliver commands to the DM with a total latency of approximately 750 μs. All optics and optomechanics, including many high-precision custom optics, mounts, and stages, are completed or nearing the end of the fabrication process and will soon undergo rigorous acceptance testing. Before installing the wavefront correction system at the telescope, it will be assembled as a testbed in the laboratory. In the lab, performance tests beginning with component-level testing and continuing to full system testing will ensure that the wavefront correction system meets all performance requirements. Further work in the lab will focus on fine-tuning our alignment and calibration procedures so that installation and alignment on the summit will proceed as efficiently as possible.
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
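The ECD/EIF contrast can be illustrated on a much simpler estimator than the QTL likelihood: for the sample mean, the effect of deleting a case has a closed form that matches exact deletion, showing how an EIF-style shortcut flags the same outlier without any refitting. A sketch (the planted outlier and sample size are arbitrary):

```python
import numpy as np

def ecd(x, estimator):
    """Exact case deletion: refit with each observation left out (accurate, slow)."""
    full = estimator(x)
    return np.array([estimator(np.delete(x, i)) - full for i in range(len(x))])

def eif_mean(x):
    """Closed-form influence for the sample mean: deleting x_i shifts the
    mean by (mean - x_i) / (n - 1), so no refits are needed (fast)."""
    return (x.mean() - x) / (len(x) - 1)

rng = np.random.default_rng(1)
x = rng.normal(size=50)
x[0] = 8.0                                   # planted outlier
print(np.argmax(np.abs(ecd(x, np.mean))))    # index 0 flagged by ECD
print(np.argmax(np.abs(eif_mean(x))))        # index 0 flagged by EIF
```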
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-12
... impact of eliminating the correction window from the electronic grant application submission process on... process a temporary error correction window to ensure a smooth and successful transition for applicants. This window provides applicants a period of time beyond the grant application due date to correct any...
A Temperature-Based Gain Calibration Technique for Precision Radiometry
NASA Astrophysics Data System (ADS)
Parashare, Chaitali Ravindra
Detecting extremely weak signals in radio astronomy demands high sensitivity and stability of the receivers. The gain of a typical radio astronomy receiver is extremely large, and therefore, even very small gain instabilities can dominate the received noise power and degrade the instrument sensitivity. Hence, receiver stabilization is of prime importance. Gain variations occur mainly due to ambient temperature fluctuations. We take a new approach to receiver stabilization, which makes use of active temperature monitoring and corrects for the gain fluctuations in post processing. This approach is purely passive and does not include noise injection or switching for calibration. This system is to be used for the Precision Array for Probing the Epoch of Reionization (PAPER), which is being developed to detect the extremely faint neutral hydrogen (HI) signature of the Epoch of Reionization (EoR). The epoch of reionization refers to the period in the history of the Universe when the first stars and galaxies started to form. When there are N antenna elements in the case of a large scale array, all elements may not be subjected to the same environmental conditions at a given time. Hence, we expect to mitigate the gain variations by monitoring the physical temperature of each element of the array. This stabilization approach will also benefit experiments like EDGES (Experiment to Detect the Global EoR Signature) and DARE (Dark Ages Radio Explorer), which involve a direct measurement of the global 21 cm signal using a single antenna element and hence, require an extremely stable system. This dissertation focuses on the development and evaluation of a calibration technique that compensates for the gain variations caused due to temperature fluctuations of the RF components. It carefully examines the temperature dependence of the components in the receiver chain. The results from the first-order field instrument, called a Gainometer (GoM), highlight the issue with the cable temperature which varies significantly with different climatic conditions. The model used to correct for gain variations is presented. We describe the measurements performed to verify the model. RFI is a major issue at low frequencies, which makes these kind of measurements extremely challenging. We discuss the careful measures required to mitigate the errors due to the unwanted interference. In the case of the laboratory measurements, the model follows closely with the measured power, and shows an improvement in the gain stability by a factor of ˜ 46, when the corrections are applied. The gain stability (rms to mean) improves from 1 part in 32 to 1 part in 1500. The field measurements suggest that correcting for cable temperature variations is challenging. The improvement in the gain stability is by a factor of ˜ 4.3, when the RF front end components are situated out in the field. The results are analyzed using the statistical methods such as the standard error of the mean, the run test, skewness, and kurtosis. These tests demonstrate the normal distribution of the process when the corrections are applied and confirm an effective gain bias removal. The results obtained from the sky observation using a single antenna element are compared before and after applying the corrections. Several days data verify that the power fluctuations are significantly reduced after the gain corrections are applied.
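A toy version of the post-processing correction described in this dissertation: regress measured power on the monitored physical temperature, remove the fitted temperature term, and compare the mean-to-rms gain stability before and after. The -0.05 dB/°C coefficient and noise level below are assumptions for illustration, not measured Gainometer values.

```python
import numpy as np

def gain_corrected(power_db, temp, temp_ref=25.0):
    """Remove temperature-driven gain drift by regressing measured power
    (dB) on physical temperature and referencing to temp_ref."""
    slope, _ = np.polyfit(temp, power_db, 1)
    return power_db - slope * (temp - temp_ref)

rng = np.random.default_rng(2)
temp = 25.0 + 5.0 * np.sin(np.linspace(0.0, 6.28, 1000))    # diurnal swing
power_db = 60.0 - 0.05 * (temp - 25.0) + 0.002 * rng.standard_normal(1000)

raw_lin = 10 ** (power_db / 10)
cor_lin = 10 ** (gain_corrected(power_db, temp) / 10)
for p in (raw_lin, cor_lin):
    print(p.mean() / p.std())   # mean-to-rms stability, before vs after
# the corrected series is roughly two orders of magnitude more stable,
# analogous to the 1-part-in-32 to 1-part-in-1500 laboratory result
```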
Power corrections in the N-jettiness subtraction scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boughezal, Radja; Liu, Xiaohui; Petriello, Frank
2017-03-30
We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $q\bar{q}$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.
Cadavid, Sara; Beato, Maria Soledad
2016-01-01
Memory researchers have long been captivated by the nature of memory distortions and have made efforts to identify the neural correlates of true and false memories. However, the underlying mechanisms of avoiding false memories by correctly rejecting related lures remain underexplored. In this study, we employed a variant of the Deese/Roediger-McDermott paradigm to explore neural signatures of committing and avoiding false memories. ERPs were obtained for True recognition, False recognition, Correct rejection of new items, and, more importantly, Correct rejection of related lures. With these ERP data, early-frontal, left-parietal, and late right-frontal old/new effects (associated with familiarity, recollection, and monitoring processes, respectively) were analysed. Results indicated that there were similar patterns for True and False recognition in all three old/new effects analysed in our study. Also, False recognition and Correct rejection of related lures seemed to share common underlying familiarity-based processes. The ERP similarities between False recognition and Correct rejection of related lures disappeared when recollection processes were examined, because only False recognition presented a parietal old/new effect. This finding supported the view that actual false recollections underlie false memories, providing evidence consistent with previous behavioural research and with most ERP and neuroimaging studies. Later, with the onset of monitoring processes, False recognition and Correct rejection of related lures waveforms presented, again, clearly dissociated patterns. Specifically, False recognition and True recognition showed more positive-going patterns than the Correct rejection of related lures signal and the Correct rejection of new items signature. Since False recognition and Correct rejection of related lures triggered familiarity-recognition processes, our results suggest that deciding which items were studied is based more on recollection processes, which are later supported by monitoring processes. Results are discussed in terms of the Activation-Monitoring Framework and Fuzzy-Trace Theory, the most prominent explanatory theories of false memory arising from the Deese/Roediger-McDermott paradigm. PMID:27711125
NASA Technical Reports Server (NTRS)
Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.
1994-01-01
This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both sun data and a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates and convergence times of as little as 400 sec.
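A minimal sketch of the filtering idea, reduced to a single attitude angle: the state is augmented with a rate correction modeled as a first-order Markov process, which the filter estimates alongside the attitude. All dynamics, noise levels, and the measurement model are illustrative assumptions, not the SAMPEX RTSF itself.

```python
import numpy as np

dt, tau = 1.0, 50.0              # time step (s), Markov time constant (s)
q_theta, q_b = 1e-8, 1e-6        # process noise variances
r_meas = (0.5 * np.pi / 180)**2  # measurement noise variance (rad^2)

# State x = [theta, b]; theta_k+1 = theta_k + (omega_model + b_k) * dt
F = np.array([[1.0, dt],
              [0.0, np.exp(-dt / tau)]])   # first-order Markov decay on b
H = np.array([[1.0, 0.0]])                 # we measure the attitude angle only
Q = np.diag([q_theta, q_b])

x = np.zeros(2)                                        # initial estimate
P = np.diag([np.deg2rad(90)**2, np.deg2rad(0.5)**2])   # large a priori errors

rng = np.random.default_rng(1)
omega_model = np.deg2rad(0.1)   # modeled rate (stand-in for Euler-equation output)
true_bias = np.deg2rad(0.02)    # true unmodeled rate error
theta_true = 0.0

for k in range(600):
    theta_true += (omega_model + true_bias) * dt
    z = theta_true + rng.normal(0, np.sqrt(r_meas))    # attitude measurement
    # Predict with the modeled rate plus the current correction estimate.
    x = F @ x + np.array([omega_model * dt, 0.0])
    P = F @ P @ F.T + Q
    # Update with the attitude measurement.
    K = P @ H.T / (H @ P @ H.T + r_meas)
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated rate correction: {np.rad2deg(x[1]):.4f} deg/s "
      f"(true {np.rad2deg(true_bias):.4f})")
```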
SU-F-T-180: Evaluation of a Scintillating Screen Detector for Proton Beam QA and Acceptance Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghebremedhin, A; Taber, M; Koss, P
2016-06-15
Purpose: To test the performance of a commercial scintillating screen detector for acceptance testing and Quality Assurance of a proton pencil beam scanning system. Method: The detector (Lexitek DRD 400) has a 40cm × 40cm field and uses a thin scintillator imaged onto a 16-bit scientific CCD with ∼0.5mm resolution. A grid target and LED illuminators are provided for spatial calibration and relative gain correction. The detector mounts to the nozzle with micron precision. Tools are provided for image processing and analysis of single or multiple Gaussian spots. Results: The bias and gain of the detector were studied to measure repeatability and accuracy. Gain measurements were taken with the LED illuminators to measure repeatability and variation of the lens-CCD pair as a function of f-stop. Overall system gain was measured with a passive scattering (broad) beam whose shape is calibrated with EDR film placed in front of the scintillator. To create a large uniform field, overlapping small fields were recorded with the detector translated laterally and stitched together to cover the full field. Due to the long exposures required to capture multiple synchrotron spills and the very high detector sensitivity, borated polyethylene shielding was added to reduce direct radiation events hitting the CCD. Measurements with a micro ion chamber were compared to the detector's spot profile. Software was developed to process arrays of Gaussian spots and to correct for radiation events. Conclusion: The detector background has a fixed bias and a small component linear in time, and is easily corrected. The gain correction method was validated with 2% accuracy. The detector spot profile matches the micro ion chamber data over 4 orders of magnitude. The multiple spot analyses can be easily used with plan data for measuring pencil beam uniformity and for regular QA comparison.
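The Gaussian spot analysis mentioned above can be illustrated with a single-spot fit. The sketch below fits a 2D Gaussian plus a constant detector bias to a synthetic image with scipy; the image, pixel scale, and parameter names are assumptions for illustration, not the vendor's actual tools.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy, bias):
    """2D Gaussian spot on a constant background, flattened for curve_fit."""
    x, y = xy
    return (amp * np.exp(-0.5 * (((x - x0) / sx)**2 + ((y - y0) / sy)**2))
            + bias).ravel()

# Synthetic ~0.5 mm/pixel image of one pencil-beam spot plus bias and noise.
yy, xx = np.mgrid[0:80, 0:80]
rng = np.random.default_rng(2)
img = gauss2d((xx, yy), 1000, 41.3, 38.7, 6.0, 6.5, 50).reshape(80, 80)
img = img + rng.normal(0, 5, img.shape)

p0 = (img.max() - img.min(), 40, 40, 5, 5, img.min())   # initial guess
popt, _ = curve_fit(gauss2d, (xx, yy), img.ravel(), p0=p0)
amp, x0, y0, sx, sy, bias = popt
print(f"spot center ({x0:.2f}, {y0:.2f}) px, sigma ({sx:.2f}, {sy:.2f}) px, "
      f"bias {bias:.1f} counts")
```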
Geoscientific process monitoring with positron emission tomography (GeoPET)
NASA Astrophysics Data System (ADS)
Kulenkampff, Johannes; Gründig, Marion; Zakhnini, Abdelhamid; Lippmann-Pipke, Johanna
2016-08-01
Transport processes in geomaterials can be observed with input-output experiments, which yield no direct information on the impact of heterogeneities, or they can be assessed by model simulations based on structural imaging using µ-CT. Positron emission tomography (PET) provides an alternative experimental observation method which directly and quantitatively yields the spatio-temporal distribution of tracer concentration. Process observation with PET benefits from its extremely high sensitivity together with a resolution that is acceptable in relation to standard drill core sizes. We strongly recommend applying high-resolution PET scanners in order to achieve a resolution on the order of 1 mm. We discuss the particularities of PET applications in geoscientific experiments (GeoPET), which essentially are due to high material density. Although PET is rather insensitive to matrix effects, mass attenuation and Compton scattering have to be corrected thoroughly in order to derive quantitative values. Examples of process monitoring of advection and diffusion processes with GeoPET illustrate the procedure and the experimental conditions, as well as the benefits and limits of the method.
MGmapper: Reference based mapping and taxonomy annotation of metagenomics sequence reads.
Petersen, Thomas Nordahl; Lukjancenko, Oksana; Thomsen, Martin Christen Frølund; Maddalena Sperotto, Maria; Lund, Ole; Møller Aarestrup, Frank; Sicheritz-Pontén, Thomas
2017-01-01
An increasing number of species and gene identification studies rely on next generation sequence analysis of either single isolate or metagenomics samples. Several methods are available to perform taxonomic annotations, and a previous metagenomics benchmark study has shown that a vast number of false positive species annotations are a problem unless thresholds or post-processing are applied to differentiate between correct and false annotations. MGmapper is a package to process raw next generation sequence data and perform reference based sequence assignment, followed by a post-processing analysis to produce reliable taxonomy annotation at species and strain level resolution. An in-vitro bacterial mock community sample comprised of 8 genera, 11 species and 12 strains was previously used to benchmark metagenomics classification methods. After applying a post-processing filter, we obtained 100% correct taxonomy assignments at species and genus level. A sensitivity and precision of 75% were obtained for strain-level annotations. A comparison between MGmapper and Kraken at species level shows that MGmapper assigns taxonomy using 84.8% of the sequence reads, compared to 70.5% for Kraken, and both methods identified all species with no false positives. Extensive read count statistics are provided in plain text and Excel sheets for both rejected and accepted taxonomy annotations. The use of custom databases is possible for the command-line version of MGmapper, and the complete pipeline is freely available as a Bitbucket package (https://bitbucket.org/genomicepidemiology/mgmapper). A web version (https://cge.cbs.dtu.dk/services/MGmapper) provides the basic functionality for analysis of small fastq datasets. PMID:28467460
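The abstract does not spell out the post-processing filter's criteria, but a hypothetical read-count filter of the same flavor might look like the sketch below; the thresholds and function names are invented for illustration and are not MGmapper's actual rules.

```python
def filter_annotations(read_counts, min_reads=100, min_fraction=0.001):
    """Keep a species annotation only if enough reads map to it, both in
    absolute count and as a fraction of all mapped reads.

    read_counts: dict mapping species name -> mapped read count.
    """
    total = sum(read_counts.values())
    accepted, rejected = {}, {}
    for species, n in read_counts.items():
        if n >= min_reads and n / total >= min_fraction:
            accepted[species] = n
        else:
            rejected[species] = n   # reported separately, as in the paper
    return accepted, rejected

counts = {"Escherichia coli": 52000, "Staphylococcus aureus": 31000,
          "Spurious hit": 40}
accepted, rejected = filter_annotations(counts)
print(sorted(accepted), "| rejected:", sorted(rejected))
```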
The purpose of this SOP is to define the procedure to provide a standard method for correcting electronic data errors. The procedure defines (1) when electronic data may be corrected and by whom, (2) the process of correcting the data, and (3) the process of documenting the corr...
Nassiri, Nader; Sheibani, Kourosh; Azimi, Abbas; Khosravi, Farinaz Mahmoodi; Heravian, Javad; Yekta, Abasali; Moghaddam, Hadi Ostadi; Nassiri, Saman; Yasseri, Mehdi; Nassiri, Nariman
2015-10-01
To compare refractive outcomes, contrast sensitivity, higher-order aberrations (HOAs), and patient satisfaction after photorefractive keratectomy for correction of moderate myopia with two methods: tissue saving versus wavefront optimized. In this prospective, comparative study, 152 eyes (80 patients) with moderate myopia with and without astigmatism were randomly divided into two groups: the tissue-saving group (Technolas 217z Zyoptix laser; Bausch & Lomb, Rochester, NY) (76 eyes of 39 patients) or the wavefront-optimized group (WaveLight Allegretto Wave Eye-Q laser; Alcon Laboratories, Inc., Fort Worth, TX) (76 eyes of 41 patients). Preoperative and 3-month postoperative refractive outcomes, contrast sensitivity, HOAs, and patient satisfaction were compared between the two groups. The mean spherical equivalent was -4.50 ± 1.02 diopters. No statistically significant differences were detected between the groups in terms of uncorrected and corrected distance visual acuity and spherical equivalent preoperatively and 3 months postoperatively. No statistically significant differences were seen in the amount of preoperative to postoperative contrast sensitivity changes between the two groups in photopic and mesopic conditions. HOAs and Q factor increased in both groups postoperatively (P = .001), with the tissue-saving method causing more increases in HOAs (P = .007) and Q factor (P = .039). Patient satisfaction was comparable between both groups. Both platforms were effective in correcting moderate myopia with or without astigmatism. No difference in refractive outcome, contrast sensitivity changes, and patient satisfaction between the groups was observed. Postoperatively, the tissue-saving method caused a higher increase in HOAs and Q factor compared to the wavefront-optimized method, which could be due to larger optical zone sizes in the tissue-saving group. Copyright 2015, SLACK Incorporated.
Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.
2011-01-01
Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI found corresponding GM deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660
ERIC Educational Resources Information Center
Stefanou, Charis; Revesz, Andrea
2015-01-01
This article reports on a classroom-based study that investigated the effectiveness of direct written corrective feedback in relation to learner differences in grammatical sensitivity and knowledge of metalanguage. The study employed a pretest-posttest-delayed posttest design with two treatment sessions. Eighty-nine Greek English as a foreign…
Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1997-01-01
Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps, however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction for the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.
Li, Yihe; Li, Bofeng; Gao, Yang
2015-11-30
With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is, however, unrealistic, since the estimated atmospheric corrections obtained from the network data are random and, furthermore, the interpolated corrections diverge from the true corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and to analyzing their effects on PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of the estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400
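The core idea, treating interpolated corrections as pseudo-observations with an estimated variance rather than as fixed values, can be sketched as a one-parameter weighted least-squares problem. All variances and numbers below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Unknown: residual zenith tropospheric delay T (meters).
# Observation 1: carrier-phase-derived delay measurement, variance sig_phase^2.
# Observation 2: network-interpolated correction, variance sig_corr^2, which
#   combines the reference-station estimation error and the interpolation
#   (distance-dependent) discrepancy, as in the paper.
z_phase, sig_phase = 0.132, 0.010
z_interp, sig_est, sig_interp = 0.121, 0.005, 0.008
sig_corr = np.hypot(sig_est, sig_interp)

A = np.array([[1.0], [1.0]])                 # both observe T directly
z = np.array([z_phase, z_interp])
W = np.diag([1 / sig_phase**2, 1 / sig_corr**2])

# Weighted least-squares solution and its formal variance.
N = A.T @ W @ A
T_hat = np.linalg.solve(N, A.T @ W @ z)[0]
var_T = np.linalg.inv(N)[0, 0]
print(f"T = {T_hat:.4f} m, sigma = {np.sqrt(var_T) * 1000:.1f} mm")

# Treating the correction as deterministic is the sig_corr -> 0 limit,
# which overweights it and yields overly optimistic formal errors.
```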
NASA Astrophysics Data System (ADS)
Chegwidden, O.; Nijssen, B.; Pytlak, E.
2017-12-01
Any model simulation has errors, including errors in meteorological data, process understanding, model structure, and model parameters. These errors may express themselves as bias, timing lags, and differences in sensitivity between the model and the physical world. The evaluation and handling of these errors can greatly affect the legitimacy, validity and usefulness of the resulting scientific product. In this presentation we will discuss a case study of handling and communicating model errors during the development of a hydrologic climate change dataset for the Pacific Northwestern United States. The dataset was the result of a four-year collaboration between the University of Washington, Oregon State University, the Bonneville Power Administration, the United States Army Corps of Engineers and the Bureau of Reclamation. Along the way, the partnership facilitated the discovery of multiple systematic errors in the streamflow dataset. Through an iterative review process, some of those errors could be resolved. For the errors that remained, honest communication of the shortcomings promoted the dataset's legitimacy. Thoroughly explaining errors also improved ways in which the dataset would be used in follow-on impact studies. Finally, we will discuss the development of the "streamflow bias-correction" step often applied to climate change datasets that will be used in impact modeling contexts. We will describe the development of a series of bias-correction techniques through close collaboration among universities and stakeholders. Through that process, both universities and stakeholders learned about the others' expectations and workflows. This mutual learning process allowed for the development of methods that accommodated the stakeholders' specific engineering requirements. The iterative revision process also produced a functional and actionable dataset while preserving its scientific merit. We will describe how encountering earlier techniques' pitfalls allowed us to develop improved methods for scientists and practitioners alike.
Finite coupling corrections to holographic predictions for hot QCD
Waeber, Sebastian; Schafer, Andreas; Vuorinen, Aleksi; ...
2015-11-13
Finite ’t Hooft coupling corrections to multiple physical observables in strongly coupled N=4 supersymmetric Yang-Mills plasma are examined, in an attempt to assess the stability of the expansion in inverse powers of the ’t Hooft coupling λ. Observables considered include thermodynamic quantities, transport coefficients, and quasinormal mode frequencies. Because large-λ expansions for quasinormal mode frequencies are notably less well behaved than the expansions of other quantities, we find that a partial resummation of higher-order corrections can significantly reduce the sensitivity of the results to the value of λ.
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu
2017-04-01
Different CMIP exercises show that simulations of current and future temperature and precipitation are complex, with a high degree of uncertainty. For example, the African monsoon system is not correctly simulated, and most of the CMIP5 models underestimate precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require bias correction before their output can be used in impact studies. Several bias-correction methods have been developed over the years, increasingly relying on more complex statistical methods. The aim of this work is to show the value of the CDFt (Cumulative Distribution Function transform; Michelangeli et al., 2009) method in reducing the bias of data from 29 CMIP5 GCMs over Africa, and to assess the impact of bias-corrected data on crop yield predictions by the end of the 21st century. In this work, we apply the CDFt to daily data covering the period from 1950 to 2099 (Historical and RCP8.5) and correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and are compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDFt corrects the mean state of the variables well and preserves their trends, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared with the outputs obtained with the method used in ISIMIP, showing that the quality of the correction is strongly related to the reference data. Second, we validate the bias-correction method with agronomic simulations (SARRA-H model; Kouressy et al., 2008) by comparison with FAO crop yield estimates over West Africa. Impact simulations show that the crop model is sensitive to input data. They also show decreasing crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy M, Dingkuhn M, Vaksmann M and Heinemann A B 2008: Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agric. Forest Meteorol., http://dx.doi.org/10.1016/j.agrformet.2007.09.009
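A simplified empirical quantile-mapping sketch conveys the CDF-based correction idea; note the actual CDFt method additionally transports the reference CDF to the projection period, so only the calibration-period mapping is shown here, on synthetic gamma-distributed stand-ins for precipitation.

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=3.0, size=5000)     # reference (e.g. WATCH)
model = rng.gamma(shape=2.0, scale=4.5, size=5000)   # biased GCM output

def quantile_map(x, model_cal, obs_cal):
    """Map model values to observed quantiles via empirical CDFs."""
    # Empirical CDF of the model in the calibration period...
    probs = np.searchsorted(np.sort(model_cal), x, side="right") / len(model_cal)
    probs = np.clip(probs, 0.0, 1.0)
    # ...inverted through the observed quantile function.
    return np.quantile(obs_cal, probs)

corrected = quantile_map(model, model, obs)
print(f"means  obs {obs.mean():.2f}  model {model.mean():.2f}  "
      f"corrected {corrected.mean():.2f}")
```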
Early effects of duloxetine on emotion recognition in healthy volunteers
Bamford, Susan; Penton-Voak, Ian; Pinkney, Verity; Baldwin, David S; Munafò, Marcus R; Garner, Matthew
2015-01-01
The serotonin-noradrenaline reuptake inhibitor (SNRI) duloxetine is an effective treatment for major depression and generalised anxiety disorder. Neuropsychological models of antidepressant drug action suggest therapeutic effects might be mediated by the early correction of maladaptive biases in emotion processing, including the recognition of emotional expressions. Sub-chronic administration of duloxetine (for two weeks) produces adaptive changes in neural circuitry implicated in emotion processing; however, its effects on emotional expression recognition are unknown. Forty healthy participants were randomised to receive either 14 days of duloxetine (60 mg/day, titrated from 30 mg after three days) or matched placebo (with sham titration) in a double-blind, between-groups, repeated-measures design. On day 0 and day 14 participants completed a computerised emotional expression recognition task that measured sensitivity to the six primary emotions. Thirty-eight participants (19 per group) completed their course of tablets and were included in the analysis. Results provide evidence that duloxetine, compared to placebo, may reduce the accurate recognition of sadness. Drug effects were driven by changes in participants’ ability to correctly detect subtle expressions of sadness, with greater change observed in the placebo relative to the duloxetine group. These effects occurred in the absence of changes in mood. Our preliminary findings require replication, but complement recent evidence that sadness recognition is a therapeutic target in major depression, and a mechanism through which SNRIs could resolve negative biases in emotion processing to achieve therapeutic effects. PMID:25759400
Kim, Yoon-Chul; Nielsen, Jon-Fredrik; Nayak, Krishna S
2008-01-01
To develop a method that automatically corrects ghosting artifacts due to echo-misalignment in interleaved gradient-echo echo-planar imaging (EPI) in arbitrary oblique or double-oblique scan planes. An automatic ghosting correction technique was developed based on an alternating EPI acquisition and the phased-array ghost elimination (PAGE) reconstruction method. The direction of k-space traversal is alternated at every temporal frame, enabling lower temporal-resolution ghost-free coil sensitivity maps to be dynamically estimated. The proposed method was compared with conventional one-dimensional (1D) phase correction in axial, oblique, and double-oblique scan planes in phantom and cardiac in vivo studies. The proposed method was also used in conjunction with two-fold acceleration. The proposed method with nonaccelerated acquisition provided excellent suppression of ghosting artifacts in all scan planes, and was substantially more effective than conventional 1D phase correction in oblique and double-oblique scan planes. The feasibility of real-time reconstruction using the proposed technique was demonstrated in a scan protocol with 3.1-mm spatial and 60-msec temporal resolution. The proposed technique with nonaccelerated acquisition provides excellent ghost suppression in arbitrary scan orientations without a calibration scan, and can be useful for real-time interactive imaging, in which scan planes are frequently changed with arbitrary oblique orientations.
Layout optimization of DRAM cells using rigorous simulation model for NTD
NASA Astrophysics Data System (ADS)
Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe
2014-03-01
DRAM chip space is mainly determined by the size of the memory cell array patterns, which consist of periodic memory cell features and edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational lithography techniques such as source mask optimization (SMO), to find the optimal off-axis illumination, and optical proximity correction (OPC) combined with model-based SRAF placement are applied to print patterns on target. For 20nm memory cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, which allows only two spectral beams to interfere. We will show how to analytically derive the only valid geometrically limited source. Another consequence of the two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes an array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to the non-periodic periphery, so it combines the most critical pitch and the highest susceptibility to defocus. These challenges make layout correction a complex optimization task, demanding a layout optimization that finds a solution with optimal process stability, taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF) and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features. The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally due to negative resist profile taper angles that perturb bottom-CD characterization by scanning electron microscope (SEM) measurements. The strong resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We have discussed the need for a rigorous mask optimization process for DRAM contact cell layout, yielding mask layouts that are optimal in process performance, mask manufacturability and accuracy. In this paper, we have shown the step-by-step process from analytical illumination source derivation, through NTD- and application-tailored model calibration, to layout optimization such as OPC and SRAF placement. Finally, the work has been verified with simulation and experimental results on wafer.
DNA damage and gene therapy of xeroderma pigmentosum, a human DNA repair-deficient disease.
Dupuy, Aurélie; Sarasin, Alain
2015-06-01
Xeroderma pigmentosum (XP) is a genetic disease characterized by hypersensitivity to ultraviolet radiation and a very high risk of skin cancer induction on exposed body sites. This syndrome is caused by germline mutations in nucleotide excision repair genes. No cure is available for these patients except complete protection from all types of UV radiation. We reviewed the various techniques to complement or correct the genetic defect in XP cells. In particular, we developed the correction of XP-C skin cells using the fidelity of the homologous recombination pathway during repair of a double-strand break (DSB) in the presence of wild-type XPC sequences. We used engineered nucleases (meganuclease or TALE nuclease) to induce a DSB located 90 bp from the mutation to be corrected. Expression of a specific TALE nuclease in the presence of a repair matrix containing a long stretch of homologous wild-type XPC sequence allowed successful gene correction of the original TG deletion found in numerous North African XP patients. Some engineered nucleases are sensitive to epigenetic modifications, such as cytosine methylation. In the case of methylated sequences to be corrected, modified nucleases or demethylation of the whole genome should be envisaged. Overall, we showed that a specifically designed TALE nuclease allowed us to correct a 2 bp deletion in the XPC gene, leading to patient cells that are proficient in DNA repair and show normal UV sensitivity. The corrected gene is still in the same position in the human genome and under the regulation of its physiological promoter. This result is a first step toward gene therapy in XP patients. Copyright © 2014 Elsevier B.V. All rights reserved.
IATA for skin sensitization potential – 1 out of 2 or 2 out of 3? ...
To meet EU regulatory requirements and to avoid or minimize animal testing, there is a need for non-animal methods to assess skin sensitization potential. Given the complexity of the skin sensitization endpoint, there is an expectation that integrated testing and assessment approaches (IATA) will need to be developed which rely on assays representing key events in the pathway. Three non-animal assays have been formally validated: the direct peptide reactivity assay (DPRA), the KeratinoSens™ assay and the h-CLAT assay. At the same time, there have been many efforts to develop IATA, with the “2 out of 3” approach attracting much attention, whereby a chemical is classified on the basis of the majority outcome. A set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performances of the 3 individual non-animal assays, their binary combinations and the “2 out of 3” approach. The analysis revealed that the most predictive approach was to use both the DPRA and h-CLAT: 1. Perform DPRA – if positive, classify as a sensitizer; 2. If negative, perform h-CLAT – a positive outcome denotes a sensitizer, a negative, a non-sensitizer. With this approach, 83% (LLNA) and 93% (human) of the non-sensitizer predictions were correct, in contrast to the “2 out of 3” approach, which had 69% (LLNA) and 79% (human) of non-sensitizer predictions correct. The views expressed are those of the authors and do not ne
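The two combination strategies compared in the abstract reduce to simple boolean rules. A purely illustrative sketch, with assay outcomes encoded as booleans (True = positive):

```python
def sequential_dpra_hclat(dpra: bool, hclat: bool) -> bool:
    """Classify as sensitizer if DPRA is positive; otherwise defer to h-CLAT."""
    return True if dpra else hclat

def two_out_of_three(dpra: bool, keratinosens: bool, hclat: bool) -> bool:
    """Majority vote across the three assays ('2 out of 3' approach)."""
    return sum([dpra, keratinosens, hclat]) >= 2

# Example chemical: DPRA negative, KeratinoSens positive, h-CLAT negative.
print(sequential_dpra_hclat(dpra=False, hclat=False))   # -> non-sensitizer
print(two_out_of_three(False, True, False))             # -> non-sensitizer
```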
NASA Astrophysics Data System (ADS)
Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.; Chang, Ping
2017-10-01
We alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using the Cloud-Aerosol and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments are conducted with 1x, 5x, and 10x the present-day model estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) CALIPSO-corrected BC vertical distribution. The globally averaged top of the atmosphere radiative flux perturbation of CC experiments is ˜8-50% smaller compared to uncorrected (UC) BC experiments largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with the increase in BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in BC vertical distribution.
Pattern-projected schlieren imaging method using a diffractive optics element
NASA Astrophysics Data System (ADS)
Min, Gihyeon; Lee, Byung-Tak; Kim, Nac Woo; Lee, Munseob
2018-04-01
We propose a novel schlieren imaging method by projecting a random dot pattern, which is generated in a light source module that includes a diffractive optical element. All apparatuses are located on the source side, which leads to one-body sensor applications. This pattern is distorted by the deflections of schlieren objects such that the displacement vectors of the random dots in the pixels can be obtained using a particle image velocimetry (PIV) algorithm. The air turbulences induced by a burning candle, boiling pot, heater, and gas torch were successfully imaged, and it was shown that imaging up to a size of 0.7 m × 0.57 m is possible. An algorithm to correct the non-uniform sensitivity according to the position of a schlieren object was analytically derived. This algorithm was applied to schlieren images of lenses. Comparing the corrected versions to the original schlieren images, we showed that the sensitivity was made uniform, with an average correction factor of 14.15.
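The displacement estimation step can be sketched as a standard PIV-style cross-correlation of an interrogation window: correlate the reference dot pattern against the distorted image and take the correlation peak. The synthetic pattern, window size, and shift below are assumed choices for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(4)
ref = rng.random((64, 64))                 # reference random-dot window
shift = (3, -2)                            # true (row, col) displacement
distorted = np.roll(ref, shift, axis=(0, 1))

# Cross-correlation via FFT convolution with a flipped, mean-removed template.
corr = fftconvolve(distorted - distorted.mean(),
                   (ref - ref.mean())[::-1, ::-1], mode="same")
peak = np.unravel_index(np.argmax(corr), corr.shape)
est = (peak[0] - ref.shape[0] // 2, peak[1] - ref.shape[1] // 2)
print(f"estimated displacement {est}, true {shift}")
```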
Surface determination through atomically resolved secondary-electron imaging
Ciston, J.; Brown, H. G.; D'Alfonso, A. J.; Koirala, P.; Ophus, C.; Lin, Y.; Suzuki, Y.; Inada, H.; Zhu, Y.; Allen, L. J.; Marks, L. D.
2015-01-01
Unique determination of the atomic structure of technologically relevant surfaces is often limited by both a need for homogeneous crystals and ambiguity of registration between the surface and bulk. Atomically resolved secondary-electron imaging is extremely sensitive to this registration and is compatible with faceted nanomaterials, but has not been previously utilized for surface structure determination. Here we report a detailed experimental atomic-resolution secondary-electron microscopy analysis of the c(6 × 2) reconstruction on strontium titanate (001) coupled with careful simulation of secondary-electron images, density functional theory calculations and surface monolayer-sensitive aberration-corrected plan-view high-resolution transmission electron microscopy. Our work reveals several unexpected findings, including an amended registry of the surface on the bulk and strontium atoms with unusual seven-fold coordination within a typically high surface coverage of square pyramidal TiO5 units. Dielectric screening is found to play a critical role in attenuating secondary-electron generation processes from valence orbitals. PMID:26082275
NASA Technical Reports Server (NTRS)
Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.
2013-01-01
We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
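A minimal sketch of the one-step retrieval idea: model each measured spectrum as a linear combination of principal components (derived from SO2-free scenes) plus the SO2 Jacobian, and read the column amount off a least-squares fit. The PCs, Jacobian, and spectrum below are synthetic stand-ins, not OMI data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_wl, n_pc = 120, 8

# Principal components from SO2-free radiances (orthonormal columns here).
pcs, _ = np.linalg.qr(rng.standard_normal((n_wl, n_pc)))

# SO2 Jacobian d(radiance)/d(column) from a radiative transfer model (mocked
# as a Gaussian absorption feature).
jac = np.exp(-0.5 * ((np.arange(n_wl) - 30) / 8.0) ** 2)

# Simulated measurement: background variability + SO2 signal + noise.
true_vcd = 1.7   # SO2 vertical column (arbitrary units)
spectrum = pcs @ rng.standard_normal(n_pc) + true_vcd * jac \
           + 0.01 * rng.standard_normal(n_wl)

# One-step fit: design matrix = [PCs | Jacobian]; last coefficient is the VCD.
A = np.column_stack([pcs, jac])
coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
print(f"retrieved SO2 column: {coef[-1]:.3f} (true {true_vcd})")
```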
NASA Astrophysics Data System (ADS)
Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki
2011-08-01
A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives erroneous estimates of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. The method is validated by both numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.
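A sketch of a Monte-Carlo-based estimator in this spirit: simulate the biased measurement distribution over a grid of true retardations, tabulate the mean curve, and invert it at the sample mean. The additive-noise, folded-phase measurement model below is a simplifying assumption, not the paper's full PS-OCT noise model.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_measured(true_ret, snr_lin, n=20000):
    """Measured retardation: noisy phase folded into [0, pi/2]."""
    noise = rng.standard_normal(n) / snr_lin
    meas = true_ret + noise
    return np.abs(((meas + np.pi / 2) % np.pi) - np.pi / 2)

snr = 3.0
grid = np.linspace(0.01, np.pi / 2 - 0.01, 100)          # true retardations
# Mean of the (biased) measured distribution at each true value;
# monotonically increasing over this grid at this noise level.
mean_curve = np.array([simulate_measured(r, snr).mean() for r in grid])

def corrected_estimate(sample):
    """Invert the Monte-Carlo mean curve at the sample mean."""
    return np.interp(sample.mean(), mean_curve, grid)

sample = simulate_measured(0.15, snr, n=500)   # near zero, where bias is worst
print(f"naive mean {sample.mean():.3f}, corrected "
      f"{corrected_estimate(sample):.3f}, true 0.150")
```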
Chang, Hing-Chiu; Hui, Edward S; Chiu, Pui-Wai; Liu, Xiaoxi; Chen, Nan-Kuei
2018-05-01
A three-dimensional (3D) multiplexed sensitivity encoding and reconstruction (3D-MUSER) algorithm is proposed to reduce aliasing artifacts and signal corruption caused by inter-shot 3D phase variations in 3D diffusion-weighted echo planar imaging (DW-EPI). 3D-MUSER extends the original framework of multiplexed sensitivity encoding (MUSE) to a hybrid k-space-based reconstruction, thereby enabling the correction of inter-shot 3D phase variations. A 3D single-shot EPI navigator echo was used to measure inter-shot 3D phase variations. The performance of 3D-MUSER was evaluated by analyses of the point-spread function (PSF), signal-to-noise ratio (SNR), and artifact levels. The efficacy of phase correction using 3D-MUSER for different slab thicknesses and b-values was investigated. Simulations showed that 3D-MUSER could eliminate artifacts due to through-slab phase variation and reduce noise amplification due to SENSE reconstruction. All aliasing artifacts and signal corruption in 3D interleaved DW-EPI acquired with different slab thicknesses and b-values were reduced by our new algorithm. A near-whole-brain single-slab 3D DTI with 1.3-mm isotropic voxels acquired at 1.5T was successfully demonstrated. 3D phase correction for 3D interleaved DW-EPI data is made possible by 3D-MUSER, thereby improving feasible slab thickness and maximum feasible b-value. Magn Reson Med 79:2702-2712, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III
2005-11-01
Model-based optical proximity correction (OPC) has become an indispensable tool for achieving wafer pattern to design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understand and manipulate the final pattern correction into a more process-robust configuration. The study first examines and validates the process of generating an off-target model, then examines the quality of the off-target model. Once the off-target model is proven, it is used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.
Johnson, Karen; Toto, Tami; Jensen, Michael
2011-05-03
For the Ka ARM Zenith Radar (KAZR) data stream kazrmd.b1 (md = moderate sensitivity), the processing produces a significant-detection mask, corrects reflectivity for gaseous attenuation, and dealiases the mean Doppler velocity.
Johnson, Karen; Toto, Tami; Jensen, Michael
2011-01-18
For the Ka ARM Zenith Radar (KAZR) data stream kazrhi.b1 (hi = high sensitivity), the processing produces a significant-detection mask, corrects reflectivity for gaseous attenuation, and dealiases the mean Doppler velocity.
Johnson, Karen; Toto, Tami; Jensen, Michael
2011-01-18
For the Ka ARM Zenith Radar (KAZR) data stream kazrge.b1 (ge = general sensitivity), the processing produces a significant-detection mask, corrects reflectivity for gaseous attenuation, and dealiases the mean Doppler velocity.
Larsen, Lesli H; Angquist, Lars; Vimaleswaran, Karani S; Hager, Jörg; Viguerie, Nathalie; Loos, Ruth J F; Handjieva-Darlenska, Teodora; Jebb, Susan A; Kunesova, Marie; Larsen, Thomas M; Martinez, J Alfredo; Papadaki, Angeliki; Pfeiffer, Andreas F H; van Baak, Marleen A; Sørensen, Thorkild Ia; Holst, Claus; Langin, Dominique; Astrup, Arne; Saris, Wim H M
2012-05-01
Differences in the interindividual response to dietary intervention could be modified by genetic variation in nutrient-sensitive genes. This study examined single nucleotide polymorphisms (SNPs) in presumed nutrient-sensitive candidate genes for obesity and obesity-related diseases for main and dietary interaction effects on weight, waist circumference, and fat mass regain over 6 mo. In total, 742 participants who had lost ≥ 8% of their initial body weight were randomly assigned to follow 1 of 5 different ad libitum diets with different glycemic indexes and contents of dietary protein. The SNP main and SNP-diet interaction effects were analyzed by using linear regression models, corrected for multiple testing by using Bonferroni correction and evaluated by using quantile-quantile (Q-Q) plots. After correction for multiple testing, none of the SNPs were significantly associated with weight, waist circumference, or fat mass regain. Q-Q plots showed that ALOX5AP rs4769873 showed a higher observed than predicted P value for the association with less waist circumference regain over 6 mo (-3.1 cm/allele; 95% CI: -4.6, -1.6; P/Bonferroni-corrected P = 0.000039/0.076), independently of diet. Additional associations were identified by using Q-Q plots for SNPs in ALOX5AP, TNF, and KCNJ11 for main effects; in LPL and TUB for glycemic index interaction effects on waist circumference regain; in GHRL, CCK, MLXIPL, and LEPR on weight; in PPARC1A, PCK2, ALOX5AP, PYY, and ADRB3 on waist circumference; and in PPARD, FABP1, PLAUR, and LPIN1 on fat mass regain for dietary protein interaction. The observed effects of SNP-diet interactions on weight, waist, and fat mass regain suggest that genetic variation in nutrient-sensitive genes can modify the response to diet. This trial was registered at clinicaltrials.gov as NCT00390637.
Conformational Antibody Binding to a Native, Cell-Free Expressed GPCR in Block Copolymer Membranes
de Hoog, Hans-Peter M.; Lin JieRong, Esther M.; Banerjee, Sourabh; Décaillot, Fabien M.; Nallani, Madhavan
2014-01-01
G-protein coupled receptors (GPCRs) play a key role in physiological processes and are attractive drug targets. Their biophysical characterization is, however, highly challenging because of their innate instability outside a stabilizing membrane and the difficulty of finding a suitable expression system. We here show the cell-free expression of a GPCR, CXCR4, and its direct embedding in diblock copolymer membranes. The polymer-stabilized CXCR4 is readily immobilized onto biosensor chips for label-free binding analysis. Kinetic characterization using a conformationally sensitive antibody shows the receptor to exist in the correctly folded conformation, showing binding behaviour that is commensurate with heterologously expressed CXCR4. PMID:25329156
Multiplex Detection of Toxigenic Penicillium Species.
Rodríguez, Alicia; Córdoba, Juan J; Rodríguez, Mar; Andrade, María J
2017-01-01
Multiplex PCR-based methods for simultaneous detection and quantification of different mycotoxin-producing Penicillia are useful tools in food safety programs. These rapid and sensitive techniques allow corrective actions to be taken during food processing or storage to avoid the accumulation of mycotoxins. In this chapter, three multiplex PCR-based methods to detect at least patulin- and ochratoxin A-producing Penicillia are detailed. Two of them are multiplex real-time PCR assays suitable for monitoring and quantifying toxigenic Penicillium using the nonspecific dye SYBR Green or specific hydrolysis probes (TaqMan). All of them successfully use the same target genes, involved in the biosynthesis of these mycotoxins, for designing primers and/or probes.
Pictogram Evaluation and Authoring Collaboration Environment
Kim, Hyeoneui; Tamayo, Dorothy; Muhkin, Michael; Kim, Jaemin; Lam, Julius; Ohno-Machado, Lucila; Aronoff-Spencer, Eliah
2012-01-01
Studies have shown benefits of using pictograms in health communication, such as improved recall and comprehension of health instructions. Pictograms are culturally sensitive and thus need to be rigorously validated to ensure they convey the intended meaning correctly to the targeted population. The infeasibility of manually creating pictograms and the lack of robust means to store and validate them are potential barriers to the wider adoption of pictograms in health communication. To address these challenges, we created an open-access web-based tool, PEACE (Pictogram Evaluation and Authoring Collaboration Environment), as part of the SHINE (Sustainable Health Informatics and Networking Environment) initiatives. We report the development process and the preliminary evaluation results of PEACE in this paper. PMID:24199088
Prediction of recirculation zones in isothermal coaxial jet flows relevant to combustors
NASA Technical Reports Server (NTRS)
Nallasamy, M.
1987-01-01
The characteristics of the recirculation zones in confined coaxial turbulent jets are investigated numerically employing the k-ε turbulence model. The geometrical arrangement corresponds to the experimental study of Owen (AIAA J. 1976), and the investigation is undertaken to provide information for isothermal flow relevant to combustor flows. For the first time, the shape, size, and location of the recirculation zones for the above experimental configuration are correctly predicted. The processes leading to the observed results are explained. Detailed comparisons of the predictions with measurements are made. It is shown that the recirculation zones are very sensitive to the central jet exit configuration and the velocity ratio of the jets.
[Pathophysiological aspects of wound healing in normal and diabetic foot].
Maksimova, N V; Lyundup, A V; Lubimov, R O; Melnichenko, G A; Nikolenko, V N
2014-01-01
The main cause of delayed ulcer healing in patients with diabetic foot is considered to be direct mechanical damage when walking, due to sensitivity reduced by neuropathy, together with hyperglycemia, infection and peripheral artery disease. These factors determine the standard approaches to the treatment of the diabetic foot, which include offloading, glycemic control, debridement of ulcers, antibiotic therapy and revascularization. Recently, however, disturbances in the skin healing process in diabetes have been recognized as an additional factor affecting the time to healing in patients with diabetic foot. Improved understanding and correction of the cellular, molecular and biochemical abnormalities in the chronic wound, in combination with standard care, affords new grounds for solving the problem of ulcer healing in diabetes.
Irving, Greg; Holden, John; Stevens, Richard; McManus, Richard J
2016-11-03
To determine the diagnostic accuracy of different methods of blood pressure (BP) measurement compared with reference standards for the diagnosis of hypertension in patients with obesity and a large arm circumference. Systematic review with meta-analysis, using hierarchical summary receiver operating characteristic models, with Bland-Altman analyses where individual patient data were available. Methodological quality was appraised using Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS2) criteria. MEDLINE, EMBASE, Cochrane, DARE, Medion and Trip databases were searched. Cross-sectional, randomised and cohort studies of diagnostic test accuracy that compared any non-invasive BP test (upper arm, forearm, wrist, finger) with an appropriate reference standard (invasive BP, correctly fitting upper arm cuff, ambulatory BP monitoring) in primary care were included. 4037 potentially relevant papers were identified. 20 studies involving 26 different comparisons met the inclusion criteria. Individual patient data were available from 4 studies. No studies satisfied all QUADAS2 criteria. Compared with the reference test of invasive BP, a correctly fitting upper arm BP cuff had a sensitivity of 0.87 (0.79 to 0.93) and a specificity of 0.85 (0.64 to 0.95); insufficient evidence was available for other comparisons to invasive BP. Compared with the reference test of a correctly fitting upper arm cuff, BP measurement at the wrist had a sensitivity of 0.92 (0.64 to 0.99) and a specificity of 0.92 (0.85 to 0.87). Measurement with an incorrectly fitting standard cuff had a sensitivity of 0.73 (0.67 to 0.78) and a specificity of 0.76 (0.69 to 0.82). Measurement at the forearm had a sensitivity of 0.84 (0.71 to 0.92) and a specificity of 0.75 (0.66 to 0.83). Bland-Altman analysis of individual patient data from 3 studies comparing wrist and upper arm BP showed a mean difference of 0.46 mm Hg for systolic BP measurement and 2.2 mm Hg for diastolic BP measurement. BP measurement with a correctly fitting upper arm cuff is sufficiently sensitive and specific to diagnose hypertension in patients with obesity and a large upper arm circumference. If a correctly fitting upper arm cuff cannot be applied, an incorrectly fitting standard-size cuff should not be used, and BP measurement at the wrist should be considered. PMID:27810973
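The Bland-Altman analysis used above reduces to the mean difference (bias) between paired methods and its 95% limits of agreement; a minimal sketch with hypothetical paired readings:

    import numpy as np

    def bland_altman(method_a, method_b):
        # Bias (mean difference) and 95% limits of agreement for paired readings.
        diff = np.asarray(method_a, float) - np.asarray(method_b, float)
        bias = diff.mean()
        half = 1.96 * diff.std(ddof=1)
        return bias, bias - half, bias + half

    # Hypothetical paired systolic readings (mm Hg): wrist vs. upper arm cuff.
    wrist = [142.0, 128.0, 156.0, 134.0, 120.0]
    arm = [140.0, 130.0, 153.0, 135.0, 118.0]
    print(bland_altman(wrist, arm))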
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.
2011-12-01
A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.
Simulations of Arctic mixed-phase clouds in forecasts with CAM3 and AM2 for M-PACE
NASA Astrophysics Data System (ADS)
Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Liu, Xiaohong; Ghan, Steven
2008-02-01
Simulations of mixed-phase clouds in forecasts with the NCAR Atmosphere Model version 3 (CAM3) and the GFDL Atmospheric Model version 2 (AM2) for the Mixed-Phase Arctic Cloud Experiment (M-PACE) are performed using analysis data from numerical weather prediction centers. CAM3 significantly underestimates the observed boundary layer mixed-phase cloud fraction and cannot realistically simulate the variations of liquid water fraction with temperature and cloud height due to its oversimplified cloud microphysical scheme. In contrast, AM2 reasonably reproduces the observed boundary layer cloud fraction while its clouds contain much less cloud condensate than CAM3 and the observations. The simulation of the boundary layer mixed-phase clouds and their microphysical properties is considerably improved in CAM3 when a new physically based cloud microphysical scheme is used (CAM3LIU). The new scheme also leads to an improved simulation of the surface and top of the atmosphere longwave radiative fluxes. Sensitivity tests show that these results are not sensitive to the analysis data used for model initialization. Increasing model horizontal resolution helps capture the subgrid-scale features in Arctic frontal clouds but does not help improve the simulation of the single-layer boundary layer clouds. AM2 simulated cloud fraction and LWP are sensitive to the change in cloud ice number concentrations used in the Wegener-Bergeron-Findeisen process while CAM3LIU only shows moderate sensitivity in its cloud fields to this change. This paper shows that the Wegener-Bergeron-Findeisen process is important for these models to correctly simulate the observed features of mixed-phase clouds.
The AOLI low-order non-linear curvature wavefront sensor: laboratory and on-sky results
NASA Astrophysics Data System (ADS)
Crass, Jonathan; King, David; MacKay, Craig
2014-08-01
Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions. Typically these systems use Shack-Hartmann wavefront sensors (SHWFS) to distribute incoming light from a reference object between a large number of sub-apertures. Guyon et al. evaluated the sensitivity of several different wavefront sensing techniques and proposed the non-linear Curvature Wavefront Sensor (nlCWFS), which offers improved sensitivity across a range of orders of distortion. On large ground-based telescopes this can provide nearly 100% sky coverage using natural guide stars. We present work being undertaken on nlCWFS development for the Adaptive Optics Lucky Imager (AOLI) project. The wavefront sensor is being developed as part of a low-order adaptive optics system for use in a dedicated instrument providing an AO-corrected beam to a Lucky Imaging based science detector. The nlCWFS provides a total of four reference images on two photon-counting EMCCDs for use in the wavefront reconstruction process. We present results from both laboratory work using a calibration system and the first on-sky data obtained with the nlCWFS at the 4.2 metre William Herschel Telescope, La Palma. In addition, we describe the updated optical design of the wavefront sensor, strategies for minimising intrinsic effects and methods to maximise sensitivity using photon-counting detectors. We discuss on-going work to develop the high-speed reconstruction algorithm required for the nlCWFS technique. This includes strategies to implement the technique on graphics processing units (GPUs) and to minimise computing overheads to obtain a prior for rapid convergence of the wavefront reconstruction. Finally, we evaluate the sensitivity of the wavefront sensor based upon both data and low-photon-count strategies.
Wang, Menghua
2007-03-20
In the remote sensing of ocean near-surface properties, it is essential to derive accurate water-leaving radiance spectra through the process of atmospheric correction. The atmospheric correction algorithm for the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) uses two near-infrared (NIR) bands at 765 and 865 nm (748 and 869 nm for MODIS) for retrieval of aerosol properties, with the assumption of a black ocean at the NIR wavelengths. Modifications are implemented to account for some of the NIR ocean contributions for productive but not very turbid waters. For turbid waters in coastal regions, however, the ocean can have significant contributions in the NIR, leading to significant errors in the satellite-derived ocean water-leaving radiances. At the shortwave infrared (SWIR) wavelengths (approximately >1000 nm), water has significantly larger absorption than at the NIR bands. Thus the black-ocean assumption at the SWIR bands is generally valid for turbid waters. In addition, for future sensors, it is also useful to include the UV bands to better quantify ocean organic and inorganic materials, as well as to help in atmospheric correction. Simulations are carried out to evaluate the performance of atmospheric correction for nonabsorbing and weakly absorbing aerosols using the NIR bands and various combinations of the SWIR bands for deriving the water-leaving radiances at the UV (340 nm) and visible wavelengths. Simulations show that atmospheric correction using the SWIR bands can generally produce results comparable to atmospheric correction using the NIR bands. In particular, the water-leaving radiance at the UV band (340 nm) can also be derived accurately. Results from a sensitivity study of the required sensor noise-equivalent reflectance NEΔρ [or signal-to-noise ratio (SNR)] for the NIR and SWIR bands are provided and discussed.
NASA Astrophysics Data System (ADS)
Thamm, Thomas; Geh, Bernd; Djordjevic Kaufmann, Marija; Seltmann, Rolf; Bitensky, Alla; Sczyrba, Martin; Samy, Aravind Narayana
2018-03-01
In the current paper, we concentrate on the well-known CDC technique from Carl Zeiss for improving the CD distribution on the wafer by improving the reticle CDU, and on its impact on hotspots and the litho process window. The CDC technique uses an ultra-short-pulse laser technology which generates micro-level Shade-In-Elements (also known as "Pixels") in the mask quartz bulk material. These scatter centers are able to selectively attenuate certain areas of the reticle at higher resolution than other methods and thus improve the CD uniformity. In a first section, we compare the CDC technique with scanner dose correction schemes. It becomes obvious that the CDC technique has unique advantages with respect to spatial resolution and intra-field flexibility over scanner correction schemes; however, given the scanner's across-wafer flexibility, the two methods are complementary rather than competing. In a second section we show that a reference-feature-based correction scheme can be used to improve the CDU of a full chip with multiple different features that have different MEEF and dose sensitivities. In detail we discuss the impact of forward-scattered light originating from the CDC pixels on the illumination source and the related proximity signature. We show that the impact on proximity is small compared to the CDU benefit of the CDC technique. We then show to what extent the reduced variability across the reticle results in a better common electrical process window of a whole chip design across the whole reticle field on the wafer. Finally, we discuss electrical verification results comparing masks with purposely made bad CDU that were repaired by the CDC technique against inherently good "golden" masks on a complex logic device. No yield difference is observed between the repaired bad masks and the masks with good CDU.
NASA Astrophysics Data System (ADS)
Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.
2015-06-01
Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and its use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally best in correcting flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
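Of the bias correction methods compared above, linear scaling (LS) is the simplest: simulated values are adjusted so that monthly means match the observations, multiplicatively for precipitation and additively for temperature. A minimal sketch under those assumptions (variable names are illustrative):

    import numpy as np

    def linear_scaling(obs, sim, months, additive=False):
        # Per-calendar-month LS: scale simulated precipitation by the ratio of
        # observed to simulated monthly means, or shift simulated temperature
        # by the monthly mean bias.
        obs, sim, months = (np.asarray(a, float) for a in (obs, sim, months))
        corrected = sim.copy()
        for m in range(1, 13):
            mask = months == m
            if not mask.any():
                continue
            if additive:
                corrected[mask] += obs[mask].mean() - sim[mask].mean()
            else:
                corrected[mask] *= obs[mask].mean() / max(sim[mask].mean(), 1e-9)
        return corrected

    # Hypothetical daily series for two January and two July days.
    months = [1, 1, 7, 7]
    obs_p = [2.0, 0.0, 6.0, 4.0]     # observed precipitation, mm/day
    rcm_p = [1.0, 1.0, 2.0, 2.0]     # raw RCM precipitation, mm/day
    print(linear_scaling(obs_p, rcm_p, months))   # -> [1. 1. 5. 5.]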
Methane from the Tropospheric Emission Spectrometer (TES)
NASA Technical Reports Server (NTRS)
Payne, Vivienne; Worden, John; Kulawik, Susan; Frankenberg, Christian; Bowman, Kevin; Wecht, Kevin
2012-01-01
TES V5 CH4 captures latitudinal gradients, regional variability and interannual variation in the free troposphere. V5 joint retrievals offer improved sensitivity to lower troposphere. Time series extends from 2004 to present. V5 reprocessing in progress. Upper tropospheric bias. Mitigated by N2O correction. Appears largely spatially uniform, so can be corrected. How to relate free-tropospheric values to surface emissions.
Sabesan, Ramkumar; Barbot, Antoine; Yoon, Geunyoung
2017-03-01
Highly aberrated keratoconic (KC) eyes do not elicit the expected visual advantage from customized optical corrections. This is attributed to neural insensitivity arising from chronic visual experience with poor retinal image quality, dominated by low spatial frequencies. The goal of this study was to investigate whether targeted perceptual learning with adaptive optics (AO) can stimulate neural plasticity in these highly aberrated eyes. The worse eye of 2 KC subjects was trained in a contrast threshold test under AO correction. Prior to training, tumbling-E visual acuity and contrast sensitivity at 4, 8, 12, 16, 20, 24 and 28 c/deg were measured in both the trained and untrained eyes of each subject with their routine prescription and with AO correction for a 6 mm pupil. The high spatial frequency requiring 50% contrast for detection with AO correction was picked as the training frequency. Subjects trained on a contrast detection test with AO correction for 1 h on 5 consecutive days. During each training session, threshold contrast at the training frequency was measured with AO. Pre-training measures were repeated after the 5 training sessions in both eyes (i.e., post-training). After training, contrast sensitivity under AO correction improved on average across spatial frequency by a factor of 1.91 (range: 1.77-2.04) and 1.75 (1.22-2.34) for the two subjects. This improvement in contrast sensitivity transferred to visual acuity, with the two subjects improving by 1.5 and 1.3 lines, respectively, with AO following training. One of the two subjects showed an interocular transfer of training and an improvement in performance with their routine prescription post-training. This training-induced visual benefit demonstrates the potential of AO as a tool for neural rehabilitation in patients with abnormal corneas. Moreover, it reveals a sufficient degree of neural plasticity in normally developed adults who have a long history of abnormal visual experience due to optical imperfections.
Approximation of the Newton Step by a Defect Correction Process
NASA Technical Reports Server (NTRS)
Arian, E.; Batterman, A.; Sachs, E. W.
1999-01-01
In this paper, an optimal control problem governed by a partial differential equation is considered. The Newton step for this system can be computed by solving a coupled system of equations. To do this efficiently with an iterative defect correction process, a modifying operator is introduced into the system. This operator is motivated by local mode analysis. The operator can be used also for preconditioning in Generalized Minimum Residual (GMRES). We give a detailed convergence analysis for the defect correction process and show the derivation of the modifying operator. Numerical tests are done on the small disturbance shape optimization problem in two dimensions for the defect correction process and for GMRES.
Ozone Correction for AM0 Calibrated Solar Cells for the Aircraft Method
NASA Technical Reports Server (NTRS)
Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Rieke, William J.; Blankenship, Kurt S.
2002-01-01
The aircraft solar cell calibration method has provided cells calibrated to space conditions for 37 years. However, it is susceptible to systematic errors due to ozone concentrations in the stratosphere. The present correction procedure applies a 1 percent increase to the measured Isc values. High band-gap cells are more sensitive to the ozone-absorbed wavelengths (0.4 to 0.8 microns), so it becomes important to reassess the correction technique. This paper evaluates the ozone correction to be 1 + O3 × Fo, where O3 is the total ozone along the optical path, and Fo is 29.8 × 10^-6/DU for a silicon solar cell, 42.6 × 10^-6/DU for a GaAs cell and 57.2 × 10^-6/DU for an InGaP cell. These correction factors work best to correct data points obtained during the flight rather than as a correction to the final result.
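The correction quoted above is a simple multiplicative factor on the measured short-circuit current; a sketch using the per-technology coefficients from the abstract (the example reading is hypothetical):

    # Ozone correction for the aircraft method: Isc_corrected = Isc * (1 + O3 * Fo),
    # with O3 the total ozone along the optical path in Dobson units (DU).
    FO_PER_DU = {
        "Si": 29.8e-6,      # silicon
        "GaAs": 42.6e-6,
        "InGaP": 57.2e-6,
    }

    def correct_isc(isc_measured, ozone_du, cell_type):
        # Apply the per-technology ozone correction factor.
        return isc_measured * (1.0 + ozone_du * FO_PER_DU[cell_type])

    # Hypothetical GaAs cell reading under 300 DU of ozone along the path.
    print(correct_isc(0.1503, 300.0, "GaAs"))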
Unabated global surface temperature warming: evaluating the evidence
NASA Astrophysics Data System (ADS)
Karl, T. R.; Arguez, A.
2015-12-01
New insights related to time-dependent bias corrections in global surface temperatures have led to higher rates of warming over the past few decades than previously reported in the IPCC Fifth Assessment Report (2014). Record high global temperatures in the past few years have also contributed to larger trends. The combination of these factors and new analyses of the rate of temperature change show unabated global warming since at least the mid-twentieth century. New time-dependent bias corrections account for: (1) differences in temperatures measured from ships and drifting buoys; (2) improved corrections to ship-measured temperatures; and (3) the larger rates of warming in polar regions (particularly the Arctic). Since 1951, the period over which IPCC (2014) attributes over half of the observed global warming to human causes, it is shown that there has been a remarkably robust and sustained warming, punctuated by inter-annual and decadal variability. This finding is confirmed through simple trend analysis and Empirical Mode Decomposition (EMD). Trend analysis, however, especially for decadal trends, is sensitive to selection bias in beginning and ending dates; EMD has no such selection bias. Additionally, EMD can highlight both short- and long-term processes affecting the global temperature time series, since it addresses both non-linear and non-stationary processes. For the new NOAA global temperature data set, our analyses do not support the notion of a hiatus or slowing of long-term global warming. However, sub-decadal periods of little (or no) warming and of rapid warming can also be found, clearly showing the impact of inter-annual and decadal variability that has previously been attributed to both natural and human-induced non-greenhouse forcings.
Change Processes in Organization.
ERIC Educational Resources Information Center
1998
This document contains four papers from a symposium on change processes in organizations. "Mid-stream Corrections: Decisions Leaders Make during Organizational Change Processes" (David W. Frantz) analyzes three organizational leaders to determine whether and how they take corrective actions or adapt their decision-making processes when…
Bowmaker, J K; Dartnall, H J; Mollon, J D
1980-01-01
1. Microspectrophotometric measurements reveal four classes of photoreceptor in the retina of the cynomolgus monkey, Macaca fascicularis, which is known to possess colour vision similar to that of a normal human trichromat. 2. Although the eyes were removed in bright illumination, the densities of pigment were comparable to those we have measured in dark-adapted rhesus retinae. 3. The mean wavelengths of peak sensitivity (lambda max) for the four classes of photoreceptor were 415, 500, 535 and 567 nm. 4. The band widths of the absorbance spectra decreased linearly as the wave-number of peak sensitivity decreased. 5. If, by assuming a reasonable value for the axial density of the rod outer segment and correcting for lens absorption, a spectral sensitivity for human vision is reconstructed from the P500 pigment, it is found to be systematically broader than the CIE scotopic sensitivity function. 6. Given explicit assumptions, it is possible from the P535 and P567 pigments to reconstruct human psychophysical sensitivities that resemble the pi 4 and pi 5 mechanisms of W. S. Stiles. 7. Although the P415 pigment has a lambda max much shorter than that of the psychophysically measured blue mechanisms, the two spectral-sensitivity functions are brought into proximity when the microspectrophotometric data are corrected for absorption by the optic media. PMID:6767023
Do 'literate' pigeons (Columba livia) show mirror-word generalization?
Scarf, Damian; Corballis, Michael C; Güntürkün, Onur; Colombo, Michael
2017-09-01
Many children pass through a mirror stage in reading, where they write individual letters or digits in mirror and find it difficult to correctly utilize letters that are mirror images of one another (e.g., b and d). This phenomenon is thought to reflect the fact that the brain does not naturally discriminate left from right. Indeed, it has been argued that reading acquisition involves the inhibition of this default process. In the current study, we tested the ability of literate pigeons, which had learned to discriminate between 30 and 62 words from 7832 nonwords, to discriminate between words and their mirror counterparts. Subjects were sensitive to the left-right orientation of the individual letters, but not the order of letters within a word. This finding may reflect the fact that, in the absence of human-unique top-down processes, the inhibition of mirror generalization may be limited.
Probability effects on stimulus evaluation and response processes
NASA Technical Reports Server (NTRS)
Gehring, W. J.; Gratton, G.; Coles, M. G.; Donchin, E.
1992-01-01
This study investigated the effects of probability information on response preparation and stimulus evaluation. Eight subjects responded with one hand to the target letter H and with the other to the target letter S. The target letter was surrounded by noise letters that were either the same as or different from the target letter. In 2 conditions, the targets were preceded by a warning stimulus unrelated to the target letter. In 2 other conditions, a warning letter predicted that the same letter or the opposite letter would appear as the imperative stimulus with .80 probability. Correct reaction times were faster and error rates were lower when imperative stimuli confirmed the predictions of the warning stimulus. Probability information affected (a) the preparation of motor responses during the foreperiod, (b) the development of expectancies for a particular target letter, and (c) a process sensitive to the identities of letter stimuli but not to their locations.
NASA Technical Reports Server (NTRS)
Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver
2012-01-01
Large-aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring the individual segments into the fine-phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process on the advanced wavefront sensing and correction testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated; then, a few carefully designed algorithm validation experiments are implemented and the corresponding data analysis results are shown. Finally, fiducial calibration using the Range-Gate-Metrology technique is carried out, and an algorithm accuracy of <10 nm (<1%) is demonstrated.
The Snapshot A Star SurveY (SASSY)
NASA Astrophysics Data System (ADS)
Garani, Jasmine I.; Nielsen, Eric; Marchis, Franck; Liu, Michael C.; Macintosh, Bruce; Rajan, Abhijith; De Rosa, Robert J.; Jinfei Wang, Jason; Esposito, Thomas M.; Best, William M. J.; Bowler, Brendan; Dupuy, Trent; Ruffio, Jean-Baptiste
2018-01-01
The Snapshot A Star Survey (SASSY) is an adaptive optics survey conducted using NIRC2 on the Keck II telescope to search for young, self-luminous planets and brown dwarfs (M > 5 MJup) around high-mass stars (M > 1.5 M⊙). We present the results of a custom data reduction pipeline developed for the coronagraphic observations of our 200 target stars. Our data analysis method includes basic near-infrared data processing (flat-field correction, bad pixel removal, distortion correction) as well as PSF subtraction through a Reference Differential Imaging algorithm based on a library of PSFs derived from the observations using the pyKLIP routine. We present pipeline results for a few stars from the survey, with analysis of candidate companions. SASSY is sensitive to companions 600,000 times fainter than the host star within the inner few arcseconds, allowing us to detect companions with masses ~8 MJup at age 110 Myr. This work was supported by the Leadership Alliance's Summer Research Early Identification Program at Stanford University, the NSF REU program at the SETI Institute and NASA grant NNX14AJ80G.
Gómez Palacios, Angel; Gómez Zábala, Jesús; Gutiérrez, María Teresa; Expósito, Amaya; Barrios, Borja; Zorraquino, Angel; Taibo, Miguel Angel; Iturburu, Ignacio
2006-12-01
1. To assess the sensitivity of scintigraphy using methoxy isobutyl isonitrile (MIBI). 2. To compare its resolution with that of ultrasound (US) and computerized axial tomography (CAT). 3. To use its diagnostic reliability to determine whether selective approaches can be used to treat hyperparathyroidism (HPT). A study of 76 patients who underwent surgery for HPT between 1996 and 2005 was performed. MIBI scintigraphy and cervical US were used for whole-body scanning in all patients; CAT was used in 47 patients. Intraoperative and postoperative biopsies were used for final evaluation of the tests, after visualization and surgical extirpation. The results of scintigraphy were positive in 65 patients (85.52%). The diagnosis was correct in all of the single images. Multiple images were due to hyperplasia and parathyroid adenomas with thyroid disease (5.2%). Three images, incorrectly classified as negative (3.94%), were positive. The sensitivity of US was 63% and allowed detection of three MIBI-negative adenomas (4%). CAT was less sensitive (55%), but detected a further three MIBI-negative adenomas (4%). 1. The sensitivity of MIBI reached 89.46%. In the absence of thyroid nodules, MIBI diagnosed 100% of single lesions. Pathological thyroid processes produced false-positive results (5.2%) and there were diagnostic errors (4%). 2. MIBI scintigraphy was more sensitive than US and CAT. 3. Positive, single image scintigraphy allows a selective cervical approach. US and CAT may help to save a further 8% of patients (with negative scintigraphy).
Tension fracture of laminates for transport fuselage. Part 1: Material screening
NASA Technical Reports Server (NTRS)
Walker, T. H.; Avery, W. B.; Ilcewicz, L. B.; Poe, C. C., Jr.; Harris, C. E.
1992-01-01
Transport fuselage structures are designed to contain pressure following a large penetrating damage event. Applications of composites to fuselage structures require a database and supporting analysis on tension damage tolerance. Tests with 430 fracture specimens were used to accomplish the following: (1) identify critical material and laminate variables affecting notch sensitivity; (2) evaluate composite failure criteria; and (3) recommend a screening test method. Variables studied included fiber type, matrix toughness, lamination manufacturing process, and intraply hybridization. The laminates found to have the lowest notch sensitivity were manufactured using automated tow placement. This suggests a possible relationship between the stress distribution and repeatable levels of material inhomogeneity that are larger than found in traditional tape laminates. Laminates with the highest notch sensitivity consisted of toughened matrix materials that were resistant to a splitting phenomenon that reduces stress concentrations in major load-bearing plies. Parameters for conventional fracture criteria were found to increase with crack length for the smallest notch sizes studied. Most material and laminate combinations followed less than a square-root singularity for the largest crack sizes studied. Specimen geometry, notch type, and notch size were evaluated in developing a screening test procedure. Traditional methods of correcting for specimen finite width were found to be lacking. Results indicate that a range of notch sizes must be tested to determine notch sensitivity. Data for a single small notch size (0.25 in. diameter) were found to give no indication of the sensitivity of a particular material and laminate layup to larger notch sizes.
Sensitivity of blackbody effective emissivity to wavelength and temperature: By genetic algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ejigu, E. K.; Liedberg, H. G.
A variable-temperature blackbody (VTBB) is used to calibrate an infrared radiation thermometer (pyrometer). The effective emissivity (ε_eff) of a VTBB depends on temperature and wavelength, in addition to the geometry of the VTBB. In the calibration process the effective emissivity is often assumed to be constant within the wavelength and temperature range. There are practical situations where the sensitivity of the effective emissivity needs to be known and a correction has to be applied. We present a method using a genetic algorithm to investigate the sensitivity of the effective emissivity to wavelength and temperature variation. Two MATLAB® programs are generated: the first to model the radiance temperature calculation and the second to connect the model to the genetic algorithm optimization toolbox. The effective emissivity parameter is taken as a chromosome and optimized at each wavelength and temperature point. The difference between the contact temperature (read from a platinum resistance thermometer or liquid-in-glass thermometer) and the radiance temperature (calculated from the ε_eff values) is used as an objective function, from which merit values are calculated and best-fit ε_eff values selected. The best-fit ε_eff values obtained as a solution show how sensitive they are to temperature and wavelength variation. Uncertainty components that arise from wavelength and temperature variation are determined based on the sensitivity analysis. Numerical examples are considered for illustration.
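As a toy illustration of the optimization described above: under the Wien approximation (our modeling assumption here, not necessarily the authors' radiance model), a pyrometer viewing a cavity of effective emissivity ε_eff at true temperature T reads a radiance temperature T_rad with 1/T_rad = 1/T − (λ/c2)·ln(ε_eff), and the best-fit ε_eff minimizes the contact-versus-radiance temperature difference. In the sketch below, a dense grid search stands in for the genetic algorithm, and all numbers are hypothetical.

    import numpy as np

    C2 = 1.4388e-2   # second radiation constant, m*K

    def radiance_temp(t_true, eps_eff, wavelength):
        # Wien-approximation radiance temperature seen by a pyrometer
        # viewing a cavity of effective emissivity eps_eff.
        return 1.0 / (1.0 / t_true - (wavelength / C2) * np.log(eps_eff))

    def fit_eps_eff(t_contact, t_radiance, wavelength):
        # Dense grid search standing in for the GA chromosome optimization:
        # minimize |radiance temperature predicted from the contact
        # temperature - measured radiance temperature|.
        eps_grid = np.linspace(0.90, 1.0, 100001)
        err = np.abs(radiance_temp(t_contact, eps_grid, wavelength) - t_radiance)
        return eps_grid[np.argmin(err)]

    # Hypothetical calibration point: 1.6 um pyrometer, 500 K blackbody.
    print(fit_eps_eff(500.0, 499.7, 1.6e-6))   # ~0.989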
Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Pinheiro da Silva, Joyce; Wertzner, Haydée Fiszbein
2017-05-22
The purpose of the study was to determine the sensitivity and specificity of, and to establish cutoff points for, the severity index Percentage of Consonants Correct - Revised (PCC-R) in Brazilian Portuguese-speaking children with and without speech sound disorders. Participants were 72 children between 5:00 and 7:11 years old: 36 children without speech and language complaints and 36 children with speech sound disorders. The PCC-R was applied to the figure naming and word imitation tasks that are part of the ABFW Child Language Test, and the results were statistically analyzed. A ROC curve analysis was performed, and the sensitivity and specificity values of the index were verified. The group of children without speech sound disorders presented greater PCC-R values in both tasks, regardless of the gender of the participants. The cutoff value observed for the picture naming task was 93.4%, with a sensitivity of 0.89 and a specificity of 0.94 (age independent). For the word imitation task, results were age-dependent: for the age group ≤6:5 years old, the cutoff value was 91.0% (sensitivity of 0.77 and specificity of 0.94), and for the age group >6:5 years old, the cutoff value was 93.9% (sensitivity of 0.93 and specificity of 0.94). Given the high sensitivity and specificity of the PCC-R, we can conclude that the index was effective in discriminating and identifying children with and without speech sound disorders.
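Cutoff selection on a ROC curve of the kind used above is commonly done by maximizing Youden's J (sensitivity + specificity − 1); the abstract does not state the criterion used, so the following sketch with hypothetical scores is only illustrative:

    import numpy as np

    def best_cutoff(scores, disordered):
        # Scan candidate cutoffs (lower PCC-R suggests disorder) and return
        # the (cutoff, sensitivity, specificity) maximizing Youden's J.
        scores = np.asarray(scores, float)
        disordered = np.asarray(disordered, bool)
        best, best_j = None, -1.0
        for c in np.unique(scores):
            positive = scores <= c
            sens = (positive & disordered).sum() / disordered.sum()
            spec = (~positive & ~disordered).sum() / (~disordered).sum()
            if sens + spec - 1.0 > best_j:
                best, best_j = (c, sens, spec), sens + spec - 1.0
        return best

    # Hypothetical PCC-R scores (%) with disorder labels (1 = disordered).
    pccr = [88.0, 90.5, 92.8, 91.0, 95.1, 97.3, 99.0, 96.4]
    label = [1, 1, 1, 1, 0, 0, 0, 0]
    print(best_cutoff(pccr, label))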
On the Validity and Sensitivity of the Phonics Screening Check: Erratum and Further Analysis
ERIC Educational Resources Information Center
Gilchrist, James M.; Snowling, Margaret J.
2018-01-01
Duff, Mengoni, Bailey and Snowling ("Journal of Research in Reading," 38: 109-123; 2015) evaluated the sensitivity and specificity of the phonics screening check against two reference standards. This report aims to correct a minor data error in the original article and to present further analysis of the data. The methods used are…
Code of Federal Regulations, 2010 CFR
2010-07-01
33 CFR 52.43: Requests for further information; submissions of classified, privileged, and sensitive information. Navigation and Navigable Waters; Coast Guard, Department of Homeland Security; Personnel; Board for Correction of Military Records of the Coast Guard. Submissions by...
ERIC Educational Resources Information Center
Crichton, Hazel; Templeton, Brian; Valdera, Francisco
2017-01-01
Anxiety about "performing" in a foreign language in front of classmates may inhibit learners' contributions in the modern languages class through fear of embarrassment over possible error production. The issue of "face", perceived social standing in the eyes of others, presents a sensitive matter for young adolescents…
Identification and topographic localization of metallic foreign bodies by metal detector.
Muensterer, Oliver J; Joppich, Ingolf
2004-08-01
Exact localization of ingested metal objects is necessary to guide therapy. This study prospectively evaluates the accuracy of foreign body (FB) identification and localization by metal detector (MTD) in a systematic topographic fashion. Patients who presented after an alleged or witnessed metal FB ingestion were scanned with an MTD. In case of a positive signal, the location was recorded in a topographic diagram, and radiographs were obtained. The diagnostic accuracy of the MTD scan for FB identification and topographic localization was determined by χ2 analysis, and concordance was calculated by the McNemar test and expressed as kappa. A total of 70 MTD examinations were performed on 65 patients (age 6 months to 16 years); 5 patients were scanned twice on different days. The majority had swallowed coins and button batteries (n = 41). Of these, 29 items were correctly identified, and 11 of 12 were correctly ruled out (coins and button batteries: sensitivity, 100% [95% CI 95% to 100%]; specificity, 91.7% [95% CI 76% to 100%], kappa = 0.94). When all metallic objects were included, 41 of 46 were correctly identified, and 22 of 24 were correctly ruled out (sensitivity, 89.1% [95% CI 80% to 98%]; specificity, 91.7% [95% CI 81% to 100%], kappa = 0.78). Five miscellaneous objects were not identified (sensitivity for items other than coins and button batteries, 71% [95% CI 49% to 92%], kappa = 0.56). Localization by MTD was correct in 30 of 41 identified objects (73%). The error rates of junior and senior pediatric surgery residents did not differ significantly (P = .82). Ingested coins and button batteries can be safely and accurately found by metal detector. For these indications, the MTD is a radiation-free diagnostic alternative to conventional radiographs. Other items, however, cannot be ruled out reliably by MTD. In these cases, radiographic imaging is still indicated.
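Sensitivity and specificity with confidence intervals of the kind quoted above follow directly from the 2x2 counts; the paper does not state which interval method it used, so the Wilson score interval below is one common choice, shown purely as an illustration:

    import math

    def wilson_ci(successes, total, z=1.96):
        # Wilson score 95% confidence interval for a proportion.
        p = successes / total
        denom = 1.0 + z**2 / total
        center = (p + z**2 / (2 * total)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
        return p, center - half, center + half

    # Coins and button batteries: 29/29 identified; 11/12 correctly ruled out.
    print(wilson_ci(29, 29))   # sensitivity and its interval
    print(wilson_ci(11, 12))   # specificity and its interval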
Millar, Peter R; Balota, David A; Maddox, Geoffrey B; Duchek, Janet M; Aschenbrenner, Andrew J; Fagan, Anne M; Benzinger, Tammie L S; Morris, John C
2017-10-01
Recollection and familiarity are independent processes that contribute to memory performance. Recollection is dependent on attentional control, which has been shown to be disrupted in early stage Alzheimer's disease (AD), whereas familiarity is independent of attention. The present longitudinal study examines the sensitivity of recollection estimates based on Jacoby's (1991) process dissociation procedure to AD-related biomarkers in a large sample of well-characterized cognitively normal middle-aged and older adults (N = 519) and the extent to which recollection discriminates these individuals from individuals with very mild symptomatic AD (N = 64). Participants studied word pairs (e.g., knee bone), then completed a primed, explicit, cued fragment-completion memory task (e.g., knee b_n_). Primes were either congruent with the correct response (e.g., bone), incongruent (e.g., bend), or neutral (e.g., &). This design allowed for the estimation of independent contributions of recollection and familiarity processes, using the process dissociation procedure. Recollection, but not familiarity, was impaired in healthy aging and in very mild AD. Recollection discriminated cognitively normal individuals from the earliest detectable stage of symptomatic AD above and beyond standard psychometric tests. In cognitively normal individuals, baseline CSF measures indicative of AD pathology were related to lower initial recollection and less practice-related improvement in recollection over time. Finally, presence of amyloid plaques, as imaged by PIB-PET, was also related to less improvement in recollection over time. These findings suggest that attention-demanding memory processes, such as recollection, may be particularly sensitive to both symptomatic and preclinical AD pathology.
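Jacoby's (1991) estimates can be recovered from accuracy in the congruent and incongruent prime conditions: correct completion on a congruent trial occurs through recollection or prime-driven familiarity, P(correct|congruent) = R + (1 − R)F, while an error on an incongruent trial occurs when familiarity operates without recollection, P(error|incongruent) = (1 − R)F. A sketch of the standard algebra (our reconstruction of the procedure, with hypothetical rates):

    def process_dissociation(p_correct_congruent, p_error_incongruent):
        # Jacoby (1991): P(correct|congruent)  = R + (1 - R) * F
        #                P(error|incongruent) = (1 - R) * F
        # so R = P(correct|congruent) - P(error|incongruent)
        # and F = P(error|incongruent) / (1 - R).
        r = p_correct_congruent - p_error_incongruent
        f = p_error_incongruent / (1.0 - r) if r < 1.0 else float("nan")
        return r, f

    # Hypothetical fragment-completion rates for one participant.
    print(process_dissociation(0.85, 0.20))   # -> (0.65, ~0.571)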
Bolte, John F B
2016-09-01
Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities or (groups of) people, such that epidemiological studies are better capable of finding potential weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to: mechanical errors; design of hardware and software filters; anisotropy; and influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition, a good protocol on how to wear the exposimeter, a sufficiently small sampling interval and a sufficiently long measurement duration will minimize biases. Corrections to biases are possible for: non-detects through the detection limit, erroneous manufacturer calibration and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, are: linearity in response and resolution. Corrections difficult to perform after measurements are for: modulation/duty cycle sensitivity; out-of-band response (aka cross talk); temperature and humidity sensitivity. Corrections not possible to perform after measurements are for: multiple signals detected in one band; flatness of response within a frequency band; anisotropy to waves of different elevation angle. An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors overestimated exposure to signals with bursts, such as uplink signals from mobile phones and WiFi appliances. Further, the possible corrections for biases have not been fully applied. The main finding is that if the biases are not corrected for, the actual exposure will on average be underestimated.
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirements for high-accuracy, high-speed processing of wide-swath, high-resolution optical satellite imagery in emergency situations, in both ground and on-board processing systems, this paper proposes a ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated from the coordinate mapping relationship, which is established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between panchromatic and multispectral images is well achieved and that image distortion caused by satellite jitter is corrected efficiently.
NASA Astrophysics Data System (ADS)
Alappattu, Denny P.; Wang, Qing; Yamaguchi, Ryan; Lind, Richard J.; Reynolds, Mike; Christman, Adam J.
2017-08-01
The sea surface temperature (SST) relevant to air-sea interaction studies is the temperature immediately adjacent to the air, referred to as the skin SST. Generally, SST measurements from ships and buoys are taken at depths varying from several centimeters to 5 m below the surface. These measurements, known as bulk SST, can differ from the skin SST by up to O(1°C). Shipboard bulk and skin SST measurements were made during the Coupled Air-Sea Processes and Electromagnetic ducting Research east coast field campaign (CASPER-East). An Infrared SST Autonomous Radiometer (ISAR) recorded skin SST, while R/V Sharp's Surface Mapping System (SMS) provided bulk SST from 1 m water depth. Since the ISAR is sensitive to sea spray and rain, skin SST data were missing in these conditions. The SMS measurement, however, is less affected by adverse weather and provided continuous bulk SST measurements. It is therefore desirable to correct the bulk SST to obtain a good representation of the skin SST, which is the objective of this research. The bulk-skin SST difference was examined with respect to meteorological factors associated with the cool skin and diurnal warm layers. Strong influences of wind speed, diurnal effects, and net longwave radiation flux on the temperature difference were noticed. A three-step scheme was established to correct first for the wind effect, then for diurnal variability, and then for the dependency on net longwave radiation flux. The scheme was tested and compared to existing correction schemes, and is able to effectively compensate for multiple factors acting to modify bulk SST measurements over the range of conditions experienced during CASPER-East.
Geometric correction methods for Timepix based large area detectors
NASA Astrophysics Data System (ADS)
Zemlicka, J.; Dudak, J.; Karch, J.; Krejci, F.
2017-01-01
X-ray micro-radiography with hybrid pixel detectors provides a versatile tool for object inspection in various fields of science. It has proven itself especially suitable for samples with low intrinsic attenuation contrast (e.g., soft tissue in biology, plastics in material sciences, thin paint layers in cultural heritage, etc.). The limited size of a single Medipix-type detector (1.96 cm2) was recently overcome by the construction of the large-area detectors WidePIX, assembled from Timepix chips equipped with edgeless silicon sensors. The largest device built so far consists of 100 chips and provides a fully sensitive area of 14.3 × 14.3 cm2 without any physical gaps between sensors. The pixel resolution of this device is 2560 × 2560 pixels (6.5 Mpix). The unique modular detector layout requires special processing of the acquired data to avoid image distortions. It is necessary to apply several geometric compensations after the standard correction methods typical for this type of pixel detector (i.e., flat-field and beam-hardening corrections). The proposed geometric compensations cover both design features and assembly misalignments of the individual chip rows of large-area detectors based on Timepix assemblies: the former deal with the larger border pixels of the individual edgeless sensors and their behaviour, while the latter grapple with shifts, tilts and steps between detector rows. The real position of all pixels is defined in a Cartesian coordinate system and, together with a non-binary reliability mask, is used for the final image interpolation. The results of geometric corrections for test wire phantoms and paleobotanic material are presented in this article.
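Of the standard corrections mentioned above, flat-fielding is the simplest: each pixel is normalized by an open-beam reference frame. A minimal sketch (the chip-row geometric remapping specific to WidePIX is beyond this illustration):

    import numpy as np

    def flat_field_correct(raw, open_beam, dark=None):
        # Normalize a radiograph by an open-beam frame; optional dark-frame
        # subtraction. Pixels with no usable flat signal become NaN.
        raw = np.asarray(raw, float)
        flat = np.asarray(open_beam, float)
        if dark is not None:
            raw = raw - dark
            flat = flat - dark
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(flat > 0, raw / flat, np.nan)

    # Hypothetical 4x4 frames standing in for 2560x2560 WidePIX data.
    rng = np.random.default_rng(0)
    beam = rng.normal(1000.0, 30.0, (4, 4))
    sample = 0.6 * beam              # 40% attenuation everywhere
    print(flat_field_correct(sample, beam))   # ~0.6 everywhere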
Effects of blur and repeated testing on sensitivity estimates with frequency doubling perimetry.
Artes, Paul H; Nicolela, Marcelo T; McCormick, Terry A; LeBlanc, Raymond P; Chauhan, Balwantray C
2003-02-01
To investigate the effect of blur and repeated testing on sensitivity with frequency doubling technology (FDT) perimetry. One eye of 12 patients with glaucoma (mean deviation [MD] mean, -2.5 dB, range +0.5 to -4.3 dB) and 11 normal control subjects underwent six consecutive tests with the FDT N30 threshold program in each of two sessions. In session 1, blur was induced by trial lenses (-6.00, -3.00, 0.00, +3.00, and +6.00 D, in random order). In session 2, only the effects of repeated testing were evaluated. The MD and pattern standard deviation (PSD) indices were evaluated as functions of blur and of test order. By correcting the data of session 1 for the reduction of sensitivity with repeated testing (session 2), the effect of blur on FDT sensitivities was established, and its clinical consequences evaluated on total- and pattern-deviation probability maps. FDT sensitivities decreased with blur (by <0.5 dB/D) and with repeated testing (by approximately 2 dB between the first and sixth tests). Blur and repeated testing independently led to larger numbers of locations with significant total and pattern deviation. Sensitivity reductions were similar in normal control subjects and patients with glaucoma, at central and peripheral test locations and at locations with high and low sensitivities. However, patients with glaucoma showed larger deterioration in the total-deviation-probability maps. To optimize the performance of the device, refractive errors should be corrected and immediate retesting avoided. Further research is needed to establish the cause of sensitivity loss with repeated FDT testing.
Comparison of BiLinearly Interpolated Subpixel Sensitivity Mapping and Pixel-Level Decorrelation
NASA Astrophysics Data System (ADS)
Challener, Ryan C.; Harrington, Joseph; Cubillos, Patricio; Foster, Andrew S.; Deming, Drake; WASP Consortium
2016-10-01
Exoplanet eclipse signals are weaker than the systematics present in the Spitzer Space Telescope's Infrared Array Camera (IRAC), and thus the correction method can significantly impact a measurement. BiLinearly Interpolated Subpixel Sensitivity (BLISS) mapping calculates the sensitivity of the detector on a subpixel grid and corrects the photometry for any sensitivity variations. Pixel-Level Decorrelation (PLD) removes the sensitivity variations by considering the relative intensities of the pixels around the source. We applied both methods to WASP-29b, a Saturn-sized planet with a mass of 0.24 ± 0.02 Jupiter masses and a radius of 0.84 ± 0.06 Jupiter radii, which we observed during eclipse twice at 3.6 µm and once at 4.5 µm with IRAC aboard Spitzer in 2010 and 2011 (programs 60003 and 70084, respectively). We compared the results of BLISS and PLD and commented on each method's ability to remove time-correlated noise. WASP-29b exhibits a strong detection at 3.6 µm and no detection at 4.5 µm. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
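Since the abstract contrasts BLISS with PLD, a compact sketch of PLD's core regression may help; the pixel grid, the added linear time ramp and all variable names are illustrative assumptions (in the spirit of Deming et al. 2015), not the analysis code used for WASP-29b.

```python
# Illustrative core of Pixel-Level Decorrelation: regress the aperture flux
# onto the fractional pixel time series so that pointing-induced sensitivity
# variations are absorbed by the per-pixel weights.
import numpy as np

def pld_fit(pix, flux, time):
    """pix: (ntime, npix) pixel values around the target; flux, time: (ntime,)."""
    phat = pix / pix.sum(axis=1, keepdims=True)   # normalized pixel fractions
    # Design matrix: pixel fractions plus an assumed linear baseline in time.
    A = np.column_stack([phat, time, np.ones_like(time)])
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    systematics = phat @ coef[:phat.shape[1]]
    return coef, flux - systematics               # weights, corrected series
```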
Monosodium glutamate-sensitive hypothalamic neurons contribute to the control of bone mass
NASA Technical Reports Server (NTRS)
Elefteriou, Florent; Takeda, Shu; Liu, Xiuyun; Armstrong, Dawna; Karsenty, Gerard
2003-01-01
Using chemical lesioning, we previously identified hypothalamic neurons that are required for leptin's antiosteogenic function. In the course of these studies we observed that destruction of neurons sensitive to monosodium glutamate (MSG) in the arcuate nuclei did not affect bone mass. However, MSG treatment leads to hypogonadism, a condition that induces bone loss. The normal bone mass of MSG-treated mice therefore suggested that MSG-sensitive neurons may be implicated in the control of bone mass. To test this hypothesis, we assessed bone resorption and bone formation parameters in MSG-treated mice. We show here that MSG-treated mice display the expected increase in bone resorption and that their normal bone mass is due to a concomitant increase in bone formation. Correction of MSG-induced hypogonadism by physiological doses of estradiol corrected the abnormal bone resorptive activity in MSG-treated mice and uncovered their high bone mass phenotype. Because neuropeptide Y (NPY) is highly expressed in MSG-sensitive neurons, we tested whether NPY regulates bone formation. Surprisingly, NPY-deficient mice had a normal bone mass. This study reveals that distinct populations of hypothalamic neurons are involved in the control of bone mass and demonstrates that MSG-sensitive neurons control bone formation in a leptin-independent manner. It also indicates that NPY deficiency does not affect bone mass.
Torossi, T; Fan, J-Y; Sauter-Etter, K; Roth, J; Ziak, M
2006-08-01
Endomannosidase provides an alternative glucose-trimming pathway in the Golgi apparatus. However, it is unknown whether the action of endomannosidase depends on the conformation of the substrate. We have investigated the processing by endomannosidase of the oligosaccharides of alpha1-antitrypsin and of its disease-causing misfolded Z and Hong Kong variants. Oligosaccharides of wild-type and misfolded alpha1-antitrypsin expressed in castanospermine-treated hepatocytes or in glucosidase II-deficient Phar 2.7 cells were selectively processed by endomannosidase and subsequently converted to complex-type oligosaccharides, as indicated by Endo H resistance and PNGase F sensitivity. Overexpression of endomannosidase in castanospermine-treated hepatocytes resulted in processing of all oligosaccharides of wild-type and variant alpha1-antitrypsin. Thus, endomannosidase does not discriminate between folding states of the substrate and provides a back-up mechanism for completion of N-glycosylation of glucosylated glycoproteins that have escaped the endoplasmic reticulum. For exported misfolded glycoproteins, this provides a pathway for the formation of mature oligosaccharides important for their proper trafficking and correct functioning.
A novel approach of ensuring layout regularity correct by construction in advanced technologies
NASA Astrophysics Data System (ADS)
Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic
2017-03-01
In advanced technology nodes, layout regularity has become a mandatory prerequisite for creating robust designs that are less sensitive to manufacturing process variations, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full-custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on the fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. The regularity index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed at the 28 nm and 40 nm technology nodes for memory IP and is being extended to other IPs (IO, standard cell). We have quantified the gain in layout regularity achieved with the deployed method, in terms of printability and electrical characteristics, by process-variation (PV) band simulation analysis, and have achieved up to a 5 nm reduction in the PV band.
Study and comparison of different sensitivity models for a two-plane Compton camera.
Muñoz, Enrique; Barrio, John; Bernabéu, José; Etxebeste, Ane; Lacasta, Carlos; Llosá, Gabriela; Ros, Ana; Roser, Jorge; Oliver, Josep F
2018-06-25
Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage over the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with ²²Na sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage over the scatterer.
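To make the role of the sensitivity matrix concrete, here is a minimal MLEM iteration sketch; the system matrix T, the data vector y and the simple per-voxel sensitivity s_j = Σ_i T_ij are generic assumptions, not the authors' detailed physical model.

```python
# Minimal MLEM iteration showing where a sensitivity matrix enters
# iterative Compton image reconstruction.
import numpy as np

def mlem(T, y, s, n_iter=50):
    """T: (nbins, nvoxels) system matrix; y: measured counts per bin;
    s: per-voxel sensitivity (e.g. column sums of T in the simplest model)."""
    lam = np.ones(T.shape[1])                 # flat initial image
    for _ in range(n_iter):
        proj = T @ lam                        # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
        lam *= (T.T @ ratio) / s              # sensitivity-normalized update
    return lam
```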
Britton, Jr., Charles L.; Wintenberg, Alan L.
1993-01-01
A radiation detection method and system for continuously correcting the quantization of detected charge during pulse pile-up conditions. Charge pulses from a radiation detector responsive to the energy of detected radiation events are converted, by means of a charge-sensitive preamplifier, to voltage pulses of predetermined shape whose peak amplitudes are proportional to the quantity of charge of each corresponding detected event. These peak amplitudes are sampled and stored sequentially in accordance with their respective times of occurrence. Based on the stored peak amplitudes and times of occurrence, a correction factor is generated which represents the fraction of a previous pulse's influence on the following pulse's peak amplitude. This correction factor is subtracted from the following pulse's amplitude in a summing amplifier, whose output then represents the corrected charge quantity measurement.
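The correction described above can be sketched in a few lines; the exponential tail shape and the time constant below are assumed stand-ins for the patent's "predetermined shape", chosen only to show the subtraction step.

```python
# Sketch of the described pile-up correction: the tail of the previous shaped
# pulse, evaluated at the arrival time of the current one, is subtracted from
# the current peak amplitude.
import math

def correct_amplitudes(peaks, times, tau=2.0e-6):
    """peaks: sampled peak amplitudes; times: their times of occurrence (s);
    tau: assumed decay constant of the shaped pulse's tail."""
    corrected = [peaks[0]]
    for k in range(1, len(peaks)):
        dt = times[k] - times[k - 1]
        residue = corrected[-1] * math.exp(-dt / tau)  # prior pulse's tail
        corrected.append(peaks[k] - residue)
    return corrected
```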
Ineichen, Christian; Sigrist, Hannes; Spinelli, Simona; Lesch, Klaus-Peter; Sautter, Eva; Seifritz, Erich; Pryce, Christopher R
2012-11-01
Valid animal models of psychopathology need to include behavioural readouts informed by human findings. In the probabilistic reversal learning (PRL) task, human subjects are confronted with serial reversal of the contingency between two operant stimuli and reward/punishment and, superimposed on this, a low probability (0.2) of punished correct responses/rewarded incorrect responses. In depression, reward-stay and reversals completed are unaffected, but response-shift following punished correct response trials, referred to as negative feedback sensitivity (NFS), is increased. The aims of this study were to: establish an operant spatial PRL test appropriate for mice; obtain evidence for the processes mediating reward-stay and punishment-shift responding; and assess effects thereon of genetically- and pharmacologically-altered serotonin (5-HT) function. The study was conducted with wildtype (WT) and heterozygous mutant (HET) mice from a 5-HT transporter (5-HTT) null mutant strain. Mice were mildly food deprived; reward was a sugar pellet and punishment was a 5-s time-out. Mice exhibited high motivation and adaptive reversal performance. Increased probability of punished correct response (PCR) trials per session (pPCR = 0.1, 0.2 or 0.3) led to a monotonic decrease in reward-stay and reversals completed, suggesting accurate reward prediction. NFS differed from chance level at pPCR = 0.1, suggesting accurate punishment prediction, whereas NFS was at chance level at pPCR = 0.2-0.3. At pPCR = 0.1, HET mice exhibited lower NFS than WT mice. The 5-HTT blocker escitalopram was studied acutely at pPCR = 0.2: a low dose (0.5-1.5 mg/kg) resulted in decreased NFS, increased reward-stay and increased reversals completed, with similar effects in WT and HET mice. This study demonstrates that testing PRL in mice can provide evidence on the regulation of reward and punishment processing that is, albeit within certain limits, of relevance to human emotional-cognitive processing, its dysfunction and treatment. Copyright © 2012 Elsevier Ltd. All rights reserved.
Deficits in the pitch sensitivity of cochlear-implanted children speaking English or Mandarin
Deroche, Mickael L. D.; Lu, Hui-Ping; Limb, Charles J.; Lin, Yung-Song; Chatterjee, Monita
2014-01-01
Sensitivity to complex pitch is notoriously poor in adults with cochlear implants (CIs), but it is unclear whether this is true for children with CIs. Many are implanted today at a very young age, and factors related to brain plasticity (age at implantation, duration of CI experience, and speaking a tonal language) might have strong influences on pitch sensitivity. School-aged children who spoke English or Mandarin and either had normal hearing (NH) or wore a CI with their clinically assigned, envelope-based coding strategies participated. Percent correct was measured in three-interval, three-alternative forced-choice tasks for the discrimination of the fundamental frequency (F0) of broadband harmonic complexes and for the discrimination of the sinusoidal amplitude modulation rate (AMR) of broadband noise, with reference frequencies at 100 and 200 Hz to focus on voice pitch processing. Data were fitted using a maximum-likelihood technique. CI children displayed higher thresholds and shallower slopes than NH children in F0 discrimination, regardless of linguistic background. Thresholds and slopes were more similar between NH and CI children in AMR discrimination. Once the effect of chronological age was extracted from the variance, the aforementioned factors related to brain plasticity did not contribute significantly to the CI children's sensitivity to pitch. Unless different strategies attempt to encode fine-structure information, potential benefits of plasticity may be missed. PMID:25249932
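The abstract mentions a maximum-likelihood fit of percent-correct data; a generic sketch of such a fit for a three-alternative task (guess rate 1/3) is given below. The logistic form, the parameterization and the starting values are assumptions, not the function used in the study.

```python
# Hedged sketch of a maximum-likelihood psychometric fit for 3AFC data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def fit_3afc(levels, n_correct, n_trials):
    """levels, n_correct, n_trials: numpy arrays, one entry per stimulus level."""
    def nll(params):
        thresh, slope = params
        # Logistic rising from the 1/3 guess rate to 1.
        p = 1/3 + (2/3) / (1 + np.exp(-(levels - thresh) / slope))
        return -binom.logpmf(n_correct, n_trials, p).sum()
    res = minimize(nll, x0=[np.median(levels), np.ptp(levels) / 4],
                   method="Nelder-Mead")
    return res.x  # fitted threshold and slope
```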
Olafsen, Kåre S; Rønning, John A; Handegård, Bjørn Helge; Ulvund, Stein Erik; Dahl, Lauritz Bredrup; Kaaresen, Per Ivar
2012-02-01
Temperamental regulatory competence and social communication in term and preterm infants at 12 months corrected age were studied in a randomized controlled intervention trial aimed at enhancing maternal sensitive responsiveness. Surviving infants <2000 g from a geographically defined area were randomized to an intervention group (n = 71) or a control group (n = 69), and compared with term infants (n = 74). The intervention was a modified version of the "Mother-Infant Transaction Program". Regulatory competence was measured with the Infant Behavior Questionnaire, and social communication with the Early Social Communication Scales. Preterm intervention infants with low regulatory competence had higher responding to joint attention than preterm control infants. A sensitizing intervention may moderate the association between temperament and social communication, and thus allow an alternative functional outlet for preterm infants low in regulatory competence. The finding may have implications for conceptualizations of the role of early sensitizing interventions in promoting important developmental outcomes for premature infants. Copyright © 2011 Elsevier Inc. All rights reserved.
Takeyoshi, Masahiro; Iida, Kenji; Shiraishi, Keiji; Hoshuyama, Satsuki
2005-01-01
The murine local lymph node assay (LLNA) is currently recognized as a stand-alone sensitization test for determining the sensitizing potential of chemicals, and it has the advantage of yielding a quantitative endpoint that can be used to predict the sensitization potency of chemicals. The EC3 has been proposed as a parameter for classifying chemicals according to sensitization potency. We previously developed a non-radioisotopic endpoint for the LLNA based on 5-bromo-2'-deoxyuridine (BrdU) incorporation (non-RI LLNA), and we propose a new procedure to predict the sensitization potency of chemicals based on comparisons with known human contact allergens. Nine chemicals (diphenylcyclopropenone, p-phenylenediamine, glutaraldehyde, cinnamic aldehyde, citral, eugenol, isopropyl myristate, propylene glycol and hexane), categorized as human contact allergen classes 1-5, were tested by the non-RI LLNA with the following reference allergens: 2,4-dinitrochlorobenzene (DNCB) as a class 1 human contact allergen, isoeugenol as a class 2 human contact allergen and alpha-hexylcinnamic aldehyde (HCA) as a class 3 human contact allergen. Nearly all of the nine test chemicals were assigned to their correct allergen class. The results suggested that the new non-RI LLNA procedure can provide correct sensitization potency data. Sensitization potency data are useful for evaluating the sensitization risk to humans of exposure to new chemical products. Accordingly, this approach would be an effective modification of the LLNA with regard to its experimental design. Moreover, this procedure can also be applied to the standard LLNA with radioisotopes and to other modifications of the LLNA. Copyright 2005 John Wiley & Sons, Ltd.
Roberts, David W; Patlewicz, Grace
2018-01-01
There is an expectation that, to meet regulatory requirements and avoid or minimize animal testing, integrated approaches to testing and assessment will be needed that rely on assays representing key events (KEs) in the skin sensitization adverse outcome pathway. Three non-animal assays have been formally validated and adopted by regulators: the direct peptide reactivity assay (DPRA), the KeratinoSens™ assay and the human cell line activation test (h-CLAT). There have been many efforts to develop integrated approaches to testing and assessment, with the "two out of three" approach attracting much attention. Here a set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performances of the three individual non-animal assays, their binary combinations and the "two out of three" approach in predicting skin sensitization potential. The most predictive approach was to use both the DPRA and the h-CLAT as follows: (1) perform the DPRA - if positive, classify as sensitizing; and (2) if negative, perform the h-CLAT - a positive outcome denotes a sensitizer, a negative outcome a non-sensitizer. With this approach, 85% (local lymph node assay) and 93% (human) of non-sensitizer predictions were correct, whereas the "two out of three" approach had 69% (local lymph node assay) and 79% (human) of non-sensitizer predictions correct. The findings are consistent with the argument, supported by published quantitative mechanistic models, that only the first KE needs to be modeled. All three assays model this KE to an extent. The value of using more than one assay depends on how the different assays compensate for each other's technical limitations. Copyright © 2017 John Wiley & Sons, Ltd.
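For clarity, the best-performing sequence reported above can be written as a two-step rule; the sketch below is a toy encoding of that stated logic, not software from the study.

```python
# Toy encoding of the reported DPRA-first sequential testing strategy.
def classify_sensitizer(dpra_positive, hclat_positive=None):
    """Return True for a predicted sensitizer, False for a non-sensitizer."""
    if dpra_positive:
        return True                 # step 1: DPRA positive -> sensitizer
    return bool(hclat_positive)     # step 2: h-CLAT decides DPRA-negatives
```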
Burns, Douglas A.; McHale, M.R.; Driscoll, C.T.; Roy, K.M.
2006-01-01
In light of recent reductions in sulphur (S) and nitrogen (N) emissions mandated by Title IV of the Clean Air Act Amendments of 1990, temporal trends and trend coherence in precipitation (1984-2001 and 1992-2001) and surface water chemistry (1992-2001) were determined in two of the most acid-sensitive regions of North America, i.e. the Catskill and Adirondack Mountains of New York. Precipitation chemistry data from six sites located near these regions showed decreasing sulphate (SO42-), nitrate (NO3-), and base cation (CB) concentrations and increasing pH during 1984-2001, but few significant trends during 1992-2001. Data from five Catskill streams and 12 Adirondack lakes showed decreasing trends in SO42- concentrations at all sites, and decreasing trends in NO3-, CB, and H+ concentrations and increasing trends in dissolved organic carbon at most sites. In contrast, acid-neutralizing capacity (ANC) increased significantly at only about half the Adirondack lakes and in one of the Catskill streams. Flow correction prior to trend analysis did not change any trend directions and had little effect on SO42- trends, but it caused several significant non-flow-corrected trends in NO3- and ANC to become non-significant, suggesting that trend results for flow-sensitive constituents are affected by flow-related climate variation. SO42- concentrations showed high temporal coherence in precipitation, surface waters, and in precipitation-surface water comparisons, reflecting a strong link between S emissions, precipitation SO42- concentrations, and the processes that affect S cycling within these regions. NO3- and H+ concentrations and ANC generally showed weak coherence, especially in surface waters and in precipitation-surface water comparisons, indicating that variation in local-scale processes driven by factors such as climate is affecting trends in acid-base chemistry in these two regions. Copyright © 2005 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Beaudoin, Yanick; Desbiens, André; Gagnon, Eric; Landry, René
2018-01-01
The navigation system of a satellite launcher is of paramount importance: in order to correct the trajectory of the launcher, the position, velocity and attitude must be known with the best possible precision. In this paper, the observability of four navigation solutions is investigated. The first is the INS/GPS couple. Then, attitude reference sensors, such as magnetometers, are added to the INS/GPS solution. The authors have already demonstrated that the reference trajectory can be used to improve navigation performance; this approach is added to the two previously mentioned navigation systems. For each navigation solution, the observability is analyzed with different sensor error models. First, sensor biases are neglected. Then, sensor biases are modelled as random walks and as first-order Markov processes. The observability is tested with the rank and condition number of the observability matrix, the time evolution of the covariance matrix, and sensitivity to measurement outlier tests. The covariance matrix is exploited to evaluate the correlation between states in order to detect structural unobservability problems. Finally, when an unobservable subspace is detected, the result is verified with a theoretical analysis of the navigation equations. The results show that evaluating only the observability of a model does not guarantee the ability of the aiding sensors to correct the INS estimates within the mission time. Analysis of the time evolution of the covariance matrix can be a powerful tool to detect this situation; however, in some cases the problem is only revealed by a sensitivity to measurement outlier test. None of the tested solutions provides GPS position bias observability. For the considered mission, modelling the sensor biases as random walks or Markov processes gives equivalent results. Relying on the reference trajectory can improve the precision of the roll estimates, but, in the context of a satellite launcher, the roll estimation error and gyroscope bias are only observable if attitude reference sensors are present.
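The rank and condition-number tests mentioned above are easy to sketch for a linear time-invariant system; the matrices A and C below are stand-ins, not the authors' INS/GPS error model, and real launcher dynamics are time-varying.

```python
# Observability checks for a discrete LTI system x_{k+1} = A x_k, z_k = C x_k.
import numpy as np

def observability_tests(A, C):
    """Return (rank, condition number) of the observability matrix."""
    n = A.shape[0]
    blocks, M = [], C.copy()
    for _ in range(n):              # O = [C; CA; CA^2; ...; CA^(n-1)]
        blocks.append(M)
        M = M @ A
    O = np.vstack(blocks)
    return np.linalg.matrix_rank(O), np.linalg.cond(O)
```

A full rank indicates observability in principle; a large condition number flags states that are only weakly observable, which is the kind of situation the paper probes further with covariance evolution and outlier-sensitivity tests.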
Corrective response times in a coordinated eye-head-arm countermanding task.
Tao, Gordon; Khan, Aarlenne Z; Blohm, Gunnar
2018-06-01
Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop-signal reaction time (SSRT); when motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple-effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distributions of RTs, SSRTs, and CRTs. The best-ranked model (according to three information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement's cancellation signal has reached a decision threshold in an accelerated fashion.
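A toy simulation in the spirit of the LATER race framing above may help readers unfamiliar with the model: each process is a linear rise to threshold with a normally distributed rate, so latency is threshold over rate. The parameters are invented for illustration, not fitted values from the study.

```python
# Toy LATER-style latency samples: rate ~ Normal(mu, sigma), RT = theta / rate.
import numpy as np

rng = np.random.default_rng(0)

def later_samples(mu, sigma, theta=1.0, n=10000):
    rate = rng.normal(mu, sigma, n)
    rate = rate[rate > 0]            # keep only rises that reach threshold
    return theta / rate              # latencies

go = later_samples(mu=4.0, sigma=1.0)     # initiation process (RT-like)
stop = later_samples(mu=6.0, sigma=1.5)   # inhibition process (SSRT-like)
print(go.mean(), stop.mean())
```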
An equation of state for partially ionized plasmas: The Coulomb contribution to the free energy
NASA Astrophysics Data System (ADS)
Kilcrease, D. P.; Colgan, J.; Hakel, P.; Fontes, C. J.; Sherrill, M. E.
2015-09-01
We have previously developed an equation-of-state (EOS) model called ChemEOS (Hakel and Kilcrease, Atomic Processes in Plasmas, Eds. J. Cohen et al., AIP, 2004) for a plasma of interacting ions, atoms and electrons. It is based on a chemical picture of the plasma and is derived from an expression for the Helmholtz free energy of the interacting species. All other equilibrium thermodynamic quantities are then obtained by minimizing this free energy subject to constraints, thus leading to a thermodynamically consistent EOS. The contribution to this free energy from the Coulomb interactions among the particles is treated using the method of Chabrier and Potekhin (Phys. Rev. E 58, 4941 (1998)), which we have adapted for partially ionized plasmas. This treatment is further examined and is found to give rise to unphysical behavior for various elements at certain values of the density and temperature where the Coulomb coupling becomes significant and the atoms are partially ionized. We examine the source of this unphysical behavior and suggest corrections that produce acceptable results. The sensitivity of the thermodynamic properties and frequency-dependent opacity of iron is examined with and without these corrections. The corrected EOS is used to determine the fractional ion populations and level populations for a new generation of OPLIB low-Z opacity tables currently being prepared at Los Alamos National Laboratory with the ATOMIC code.
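A schematic of the constrained free-energy minimization described above may help; the notation below (species populations n_i, the particular constraint forms) is assumed for illustration and is not taken verbatim from the ChemEOS papers.

```latex
% Chemical-picture EOS as a constrained minimization (notation assumed):
\begin{align}
  F(\{n_i\};V,T) &= F_{\mathrm{id}} + F_{\mathrm{int}} + F_{\mathrm{Coulomb}},\\
  \{n_i^{*}\} &= \arg\min_{\{n_i\}} F
  \quad \text{s.t.} \quad \sum_i Z_i\, n_i = n_e,
  \qquad \sum_i n_i^{\mathrm{nuc}} = n_{\mathrm{tot}},
\end{align}
% with all other thermodynamic quantities obtained as derivatives of F
% evaluated at the constrained minimum.
```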
1994-12-01
INTRODUCTION. In simplified terms, acquisition of a training … should be familiar: best value source selection, processes and metrics, and continuous improvement … proposed processes and metrics are placed in the contract … continuous improvement, MIL-STD-1379D, the systems approach to training, concurrent … 5 Continuous Process Improvement … 6 Training … identification and correction of errors are critical to software product correctness and quality. Correcting …
Chaudhry, Waseem; Hussain, Nasir; Ahlberg, Alan W; Croft, Lori B; Fernandez, Antonio B; Parker, Mathew W; Swales, Heather H; Slomka, Piotr J; Henzlova, Milena J; Duvall, W Lane
2017-06-01
A stress-first myocardial perfusion imaging (MPI) protocol saves time, is cost effective, and decreases radiation exposure. A limitation of this protocol is the requirement for physician review of the stress images to determine the need for rest images. This hurdle could be eliminated if an experienced technologist and/or automated computer quantification could make this determination. Images from consecutive patients who were undergoing a stress-first MPI with attenuation correction at two tertiary care medical centers were prospectively reviewed independently by a technologist and cardiologist blinded to clinical and stress test data. Their decision on the need for rest imaging along with automated computer quantification of perfusion results was compared with the clinical reference standard of an assessment of perfusion images by a board-certified nuclear cardiologist that included clinical and stress test data. A total of 250 patients (mean age 61 years and 55% female) who underwent a stress-first MPI were studied. According to the clinical reference standard, 42 (16.8%) and 208 (83.2%) stress-first images were interpreted as "needing" and "not needing" rest images, respectively. The technologists correctly classified 229 (91.6%) stress-first images as either "needing" (n = 28) or "not needing" (n = 201) rest images. Their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 66.7%, 96.6%, 80.0%, and 93.5%, respectively. An automated stress TPD score ≥1.2 was associated with optimal sensitivity and specificity and correctly classified 179 (71.6%) stress-first images as either "needing" (n = 31) or "not needing" (n = 148) rest images. Its sensitivity, specificity, PPV, and NPV were 73.8%, 71.2%, 34.1%, and 93.1%, respectively. In a model whereby the computer or technologist could correct for the other's incorrect classification, 242 (96.8%) stress-first images were correctly classified. The composite sensitivity, specificity, PPV, and NPV were 83.3%, 99.5%, 97.2%, and 96.7%, respectively. Technologists and automated quantification software had a high degree of agreement with the clinical reference standard for determining the need for rest images in a stress-first imaging protocol. Utilizing an experienced technologist and automated systems to screen stress-first images could expand the use of stress-first MPI to sites where the cardiologist is not immediately available for interpretation.
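The technologist figures quoted above can be cross-checked directly from the reported counts (42 "needing" and 208 "not needing" rest images, of which 28 and 201 were correctly classified); the short script below reproduces the published sensitivity, specificity, PPV and NPV.

```python
# Cross-check of the technologist metrics from the reported counts.
tp, fn = 28, 42 - 28        # "needing" cases caught / missed
tn, fp = 201, 208 - 201     # "not needing" cases caught / missed

sensitivity = tp / (tp + fn)    # 28/42  = 0.667
specificity = tn / (tn + fp)    # 201/208 = 0.966
ppv = tp / (tp + fp)            # 28/35  = 0.800
npv = tn / (tn + fn)            # 201/215 = 0.935
print(sensitivity, specificity, ppv, npv)
```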
OPC care-area feedforwarding to MPC
NASA Astrophysics Data System (ADS)
Dillon, Brian; Peng, Yi-Hsing; Hamaji, Masakazu; Tsunoda, Dai; Muramatsu, Tomoyuki; Ohara, Shuichiro; Zou, Yi; Arnoux, Vincent; Baron, Stanislas; Zhang, Xiaolong
2016-10-01
Demand for mask process correction (MPC) is growing at leading-edge process nodes. MPC was originally intended to correct CD linearity for narrow assist features that are difficult to resolve on a photomask without any correction, but it has been extended to main features as process nodes have shrunk. As past papers have observed, MPC improves photomask fidelity. Using advanced shape and dose corrections could give further improvements, especially at line-ends and corners. However, there is a dilemma in using such advanced corrections at full-mask level, because they increase data volume and run time. In addition, write time on variable shaped beam (VSB) writers also increases as the number of shots increases. An optical proximity correction (OPC) care-area defines the circuit design locations that require high mask fidelity under mask writing process variations such as energy fluctuation. It is useful for MPC to switch its correction strategy and permit the use of advanced mask correction techniques in those local care-areas where they provide maximum wafer benefit. Using mask correction techniques tailored to the localized post-OPC design can keep data volume, run time, and write time at the desired levels. ASML Brion and NCS have jointly developed a method to feedforward the care-area information from Tachyon LMC to NDE-MPC to provide real benefit for improving both mask writing and wafer printing quality. This paper explains the details of OPC care-area feedforwarding to MPC between ASML Brion and NCS, and shows the results. In addition, improvements in mask and wafer simulations are also shown. The results indicate that the worst process variation (PV) bands are reduced by up to 37% for a 10 nm tech node metal case.
31 CFR 375.23 - How does the securities delivery process work?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false How does the securities delivery process work? 375.23 Section 375.23 Money and Finance: Treasury Regulations Relating to Money and Finance... transfer the correct book-entry Treasury securities in the correct par amount against the correct...
31 CFR 375.23 - How does the securities delivery process work?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false How does the securities delivery process work? 375.23 Section 375.23 Money and Finance: Treasury Regulations Relating to Money and Finance... transfer the correct book-entry Treasury securities in the correct par amount against the correct...
31 CFR 375.23 - How does the securities delivery process work?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 31 Money and Finance:Treasury 2 2012-07-01 2012-07-01 false How does the securities delivery process work? 375.23 Section 375.23 Money and Finance: Treasury Regulations Relating to Money and Finance... transfer the correct book-entry Treasury securities in the correct par amount against the correct...
31 CFR 375.23 - How does the securities delivery process work?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance:Treasury 2 2013-07-01 2013-07-01 false How does the securities delivery process work? 375.23 Section 375.23 Money and Finance: Treasury Regulations Relating to Money and Finance... transfer the correct book-entry Treasury securities in the correct par amount against the correct...
Identifying QT prolongation from ECG impressions using a general-purpose Natural Language Processor
Denny, Joshua C.; Miller, Randolph A.; Waitman, Lemuel Russell; Arrieta, Mark; Peterson, Joshua F.
2009-01-01
Objective: Typically detected via electrocardiograms (ECGs), QT interval prolongation is a known risk factor for sudden cardiac death. Since medications can promote or exacerbate the condition, detection of QT interval prolongation is important for clinical decision support. We investigated the accuracy of natural language processing (NLP) for identifying QT prolongation from cardiologist-generated, free-text ECG impressions compared to corrected QT (QTc) thresholds reported by ECG machines. Methods: After integrating negation detection into a locally developed natural language processor, the KnowledgeMap concept identifier, we evaluated NLP-based detection of QT prolongation compared to the calculated QTc on a set of 44,318 ECGs obtained from hospitalized patients. We also created a string query using regular expressions to identify QT prolongation. We calculated the sensitivity and specificity of the methods using manual physician review of the cardiologist-generated reports as the gold standard. To investigate causes of "false positive" calculated QTc, we manually reviewed randomly selected ECGs with a long calculated QTc but no mention of QT prolongation. Separately, we validated the performance of the negation detection algorithm on 5,000 manually categorized ECG phrases for any medical concept (not limited to QT prolongation) prior to developing the NLP query for QT prolongation. Results: The NLP query for QT prolongation correctly identified 2,364 of 2,373 ECGs with QT prolongation, with a sensitivity of 0.996 and a positive predictive value of 1.000. There were no false positives. The regular expression query had a sensitivity of 0.999 and a positive predictive value of 0.982. In contrast, the positive predictive value of common QTc thresholds derived from ECG machines was 0.07-0.25, with corresponding sensitivities of 0.994-0.046. The negation detection algorithm had a recall of 0.973 and a precision of 0.982 for 10,490 concepts found within ECG impressions. Conclusions: NLP and regular expression queries of cardiologists' ECG interpretations can more effectively identify QT prolongation than the automated QTc intervals reported by ECG machines. Future clinical decision support could employ NLP queries to detect QTc prolongation and other reported ECG abnormalities. PMID:18938105
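A hypothetical regular-expression query in the spirit of the string-matching baseline described above is shown below; the pattern and the crude negation window are illustrative inventions, not the published KnowledgeMap query or its negation algorithm.

```python
# Hypothetical regex-based detector for QT prolongation in ECG impressions.
import re

QT_PATTERN = re.compile(
    r"\b(?:prolonged\s+qt|qtc?\s+(?:interval\s+)?prolongation|long\s+qt)\b",
    re.IGNORECASE)
# Negating cue within ~30 characters before the match, with no sentence break.
NEGATION = re.compile(r"\b(?:no|without|resolved)\b[^.]{0,30}$", re.IGNORECASE)

def flags_qt_prolongation(impression: str) -> bool:
    m = QT_PATTERN.search(impression)
    if not m:
        return False
    return not NEGATION.search(impression[:m.start()])

# Example: flags_qt_prolongation("No QT prolongation.") -> False
```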
Goupell, Matthew J
2015-03-01
Bilateral cochlear implant (CI) listeners can perform binaural tasks, but they are typically worse than normal-hearing (NH) listeners. To understand why this difference occurs and the mechanisms involved in processing dynamic binaural differences, interaural envelope correlation change discrimination sensitivity was measured in real and simulated CI users. In experiment 1, 11 CI (eight late deafened, three early deafened) and eight NH listeners were tested in an envelope correlation change discrimination task. Just noticeable differences (JNDs) were best for a matched place-of-stimulation and increased for an increasing mismatch. In experiment 2, attempts at intracranially centering stimuli did not produce lower JNDs. In experiment 3, the percentage of correct identifications of antiphasic carrier pulse trains modulated by correlated envelopes was measured as a function of mismatch and pulse rate. Sensitivity decreased for increasing mismatch and increasing pulse rate. The experiments led to two conclusions. First, envelope correlation change discrimination necessitates place-of-stimulation matched inputs. However, it is unclear if previous experience with acoustic hearing is necessary for envelope correlation change discrimination. Second, NH listeners presented with CI simulations demonstrated better performance than real CI listeners. If the simulations are realistic representations of electrical stimuli, real CI listeners appear to have difficulty processing interaural information in modulated signals.
A novel method of forceps biopsy improves the diagnosis of proximal biliary malignancies.
Kulaksiz, Hasan; Strnad, Pavel; Römpp, Achim; von Figura, Guido; Barth, Thomas; Esposito, Irene; Schirmacher, Peter; Henne-Bruns, Doris; Adler, Guido; Stiehl, Adolf
2011-02-01
Tissue specimen collection represents a cornerstone in the diagnosis of proximal biliary tract malignancies, offering high specificity but only limited sensitivity. To improve the tumor detection rate, we developed a new method of forceps biopsy and compared it prospectively with endoscopic transpapillary brush cytology. 43 patients with proximal biliary stenoses suspicious for malignancy who were undergoing endoscopic retrograde cholangiography were prospectively recruited and subjected to both biopsy [using a double-balloon enteroscopy (DBE) forceps under the guidance of a pusher and guiding catheter with guidewire] and transpapillary brush cytology. The cytological/histological findings were compared with the final clinical diagnosis. 35 of the 43 patients had a malignant disease (33 cholangiocarcinomas, 1 hepatocellular carcinoma, 1 gallbladder carcinoma). The sensitivity of cytology and biopsy in these patients was 49% and 69%, respectively. The DBE forceps method allowed pinpoint biopsy of the biliary stenoses. Both methods had 100% specificity and, when combined, 80% of malignant processes were detected. All patients with non-malignant conditions were correctly assigned by both methods. No clinically relevant complications were observed. The combination of forceps biopsy and transpapillary brush cytology is safe and offers superior detection rates compared with either method alone, and therefore represents a promising approach in the evaluation of proximal biliary tract processes.
Correction of ultrasonic wave aberration with a time delay and amplitude filter.
Måsøy, Svein-Erik; Johansen, Tonni F; Angelsen, Bjørn
2003-04-01
Two-dimensional simulations of propagation through two different heterogeneous human body wall models were performed to analyze different correction filters for ultrasonic wave aberration due to forward wave propagation. Each model produces most of the characteristic aberration effects, such as phase aberration, relatively strong amplitude aberration, and waveform deformation. Simulations of wave propagation from a point source in the focus (60 mm) of a 20 mm transducer through the body wall models were performed; the center frequency of the pulse was 2.5 MHz. Corrections of the aberrations introduced by the two body wall models were evaluated with reference to the corrections obtained with the optimal filter: a generalized frequency-dependent phase and amplitude correction filter [Angelsen, Ultrasonic Imaging (Emantec, Norway, 2000), Vol. II]. Two correction filters were applied: a time delay filter, and a time delay and amplitude filter. Results showed that correction with a time delay filter produced a substantial reduction of the aberration in both cases. A time delay and amplitude correction filter performed even better in both cases, and gave correction close to the ideal situation (no aberration). The results also indicated that the effect of the correction was very sensitive to the accuracy of the estimate of the arrival time fluctuations, i.e., of the time delay correction filter.
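A minimal sketch of a receive-side time-delay-and-amplitude correction is given below; the per-element delays and amplitude screen would come from an aberration estimate, and the integer-sample shifting via np.roll is a deliberate simplification of the frequency-dependent filter discussed above.

```python
# Apply per-element delay and amplitude corrections to sampled RF traces.
import numpy as np

def apply_correction(signals, delays_s, amps, fs):
    """signals: (nelem, nsamp) sampled element traces;
    delays_s: per-element arrival-time estimates (s);
    amps: per-element amplitude screen; fs: sampling rate (Hz)."""
    out = np.empty_like(signals, dtype=float)
    for m, (d, a) in enumerate(zip(delays_s, amps)):
        shift = int(round(d * fs))
        # Advance/retard each element trace and equalize its amplitude.
        out[m] = np.roll(signals[m], -shift) / a
    return out.sum(axis=0)      # corrected beamformed sum
```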
High Sensitive Precise 3D Accelerometer for Solar System Exploration with Unmanned Spacecrafts
NASA Astrophysics Data System (ADS)
Savenko, Y. V.; Demyanenko, P. O.; Zinkovskiy, Y. F.
Several space and geophysical tasks require high-sensitivity precision accelerometers with sensitivities on the order of 10⁻¹³ g. These tasks include inertial navigation on Earth and in space; gravimetry near the Earth and in space; geology; geophysics; and seismology. Accelerometers (gravimeters and gradiometers) with the required sensitivity are not currently available; the best accelerometers in the world fall short by 4-5 orders of magnitude. We have developed a new class of fiber-optic sensors (FOS) with light-pulse modulation. These sensors have extremely high threshold sensitivity and a wide dynamic range (up to 10 orders of magnitude), and can serve as the basis for measurement units for physical quantities, such as 3D high-sensitivity precision accelerometers for linear accelerations that meet the most demanding requirements. The operating principle of the FOS combines naturally with digital signal processing, which makes it possible to reduce the accelerometer hardware by using an ordinary airborne or spaceborne computer; to correct the influence of inherent design and technological shortcomings of the FOS on the measured results; to neutralize the influence of abnormal situations arising during FOS operation; and to reduce the influence of internal and external destabilizing factors, such as fluctuations of the ambient temperature and instability of the pendulum cycle frequency of the accelerometer's sensitive element. We carried out a quantitative estimate of the precision achievable with analogue FOS within fiber-optic measuring devices (FOMD), for an elementary FOMD with an analogue FOS built on the modern component base of fiber optics (FO), under the following assumptions: absolute parameter stability of the devices in the FOS measuring path; unity transmission band of the registration path; and the maximum possible radiated power coupled into the optical fiber (OF). Even under these idealized assumptions, the calculated limiting minimum measurement inaccuracy for an analogue FOS was ~10⁻⁴%; practically achievable values are worse by a further 2-3 orders of magnitude. The reason for the poor precision of measuring devices based on analogue FOS is the metrologically poor quality of the optical radiation stream serving as the carrier and receptor of the information: a high level of photon noise and a low overall intensity. The former reflects the discreteness of the flow of high-energy photons and is a consequence of the latter, i.e., of the small absolute power coupled into the OF from available radiation sources (RS). Work on improving FO components is under way, and RS that can couple sufficient power into standard OF will certainly be created. However, simply increasing the optical power in the FOS measuring path cannot radically solve the problem of measurement precision: photon noise power grows in proportion to the square root of the optical power, so a 1000-fold increase in power promises only a ~30-fold increase in precision, while coupling larger powers into the OF (~1 W for standard silica OF) causes nonlinear effects that destroy the operating principle of an analogue FOS. Thus, we must conclude that, at present, it is impossible to build analogue-FOS measuring devices that compete with traditional (electrical) instruments in measurement precision. At the same time, the advantages of FO as a basis for measuring devices demand that ways be found to solve these problems.
Analysis of the sensitivity problem of conventional (analogue) FOS has led us to conclude that the principles of information-signal formation in the FOS, and of its subsequent electronic processing, must be revised. To radically increase the measurement accuracy achievable with FOS, analogue modulation of the optical flow must be abandoned in favour of discrete modulation, thereby introducing into the optical flow new, non-optical parameters that serve as the carriers of the information. This preserves all the advantages of FOS (the optical flow remains the information carrier), but the measurement-accuracy problem is no longer tied to measuring the low-power intensity of the optical flow: it is transferred from the domain of optical measurements to another, non-optical domain, where this problem either does not arise or has already been solved. A new class of FOS with pulse modulation of the radiation-flow intensity has been developed at the Department of Design and Production of Radioelectronic Systems of the National Technical University of Ukraine "Kiev Polytechnic Institute". These pulse FOS (PFOS) differ advantageously from conventional analogue FOS in their high threshold sensitivity and wide dynamic range of measured values. As an example, the design and performance of the proposed 3D accelerometer are described. The high precision of accelerometer measurements with PFOS is provided by the following: the possibility of high-precision measurement of time intervals, which serve as the informative parameters of the PFOS output pulse signal; the possibility of creating a high-quality quartz oscillating system to serve as the PFOS sensitive element; the insensitivity of the accelerometer's metrological performance to any parameter instabilities (time, temperature, etc.) of the optical and electrical elements in the PFOS measuring path; digital processing of the PFOS signal, which practically excludes processing errors; the fundamental insensitivity of PFOS to electromagnetic noise of any nature and intensity; and the possibility of directly correcting the measurement results during processing to account for and exclude the undesirable influence of any destabilizing factors acting on the PFOS. The developed PFOS-based 3D accelerometer for ultra-low accelerations has unique technical characteristics, confirming our conclusions about the potentially high metrological capabilities of pulse FOS. Its calculated performance is as follows: threshold sensitivity 10⁻⁹…10⁻¹³ g (the threshold is set by the customer, which determines the sizes of the sensor and the electronic processing unit); dynamic range 10⁷…10⁹; frequency range 0…10 Hz; mass 50 g; dimensions 120 mm length, 20 mm diameter. Besides its use as an accelerometer proper, it can serve as the basis for strapdown inertial systems (SIS) for spacecraft, with flight control carried out according to the spacecraft's flight program without reference to external objects. Such SIS allow: direct monitoring of changes of orbital parameters or flight track caused by extremely weak but long-acting external forces (braking by residual planetary atmosphere, solar wind pressure, etc.) acting on the spacecraft; and verification of corrections to the orbital parameters (spacecraft track) made by firing a low-power spaceborne engine. The developed accelerometer can also be used as a high-sensitivity gravimeter for geophysical investigations and geological exploration, wherever extremely small deviations of the terrestrial gravity value must be measured. The high sensitivity of the described accelerometers makes it possible to build, on their basis, gradiometers for a real system for investigating the heterogeneity of a planet's gravity field from spacecraft orbit. This opens the way to the practical solution of a number of important problems of planetary physics.
Improved determination of particulate absorption from combined filter pad and PSICAM measurements.
Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David
2016-10-31
Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample-by-sample basis. This regression approach provides significantly better agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct-transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM), because the linear regression correction compensates for the T-method's sensitivity to scattering errors. This approach produces accurate filter pad particulate absorption data at wavelengths in the blue/UV and in the NIR, where sensitivity issues with PSICAM measurements limit performance. The combination of filter pad absorption and PSICAM measurements is therefore recommended for generating full-spectral, best-quality particulate absorption data, as it enables correction of multiple error sources across both measurements.
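A minimal sketch of the regression step described above follows: taking the PSICAM spectrum as reference, the filter-pad spectrum is modeled as a_fp(λ) = β·a_psicam(λ) + o, and β and o fall out of an ordinary least-squares fit. The variable names and the plain OLS choice are assumptions for illustration, not the authors' exact fitting procedure.

```python
# Regress a filter-pad spectrum against a PSICAM reference spectrum to
# recover the pathlength amplification factor (slope) and scattering offset.
import numpy as np

def regress_beta_offset(a_fp, a_psicam):
    """a_fp, a_psicam: absorption spectra on a common wavelength grid."""
    A = np.column_stack([a_psicam, np.ones_like(a_psicam)])
    (beta, offset), *_ = np.linalg.lstsq(A, a_fp, rcond=None)
    a_corrected = (a_fp - offset) / beta   # filter-pad data on the PSICAM scale
    return beta, offset, a_corrected
```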
Satellite-based emission constraint for nitrogen oxides: Capability and uncertainty
NASA Astrophysics Data System (ADS)
Lin, J.; McElroy, M. B.; Boersma, F.; Nielsen, C.; Zhao, Y.; Lei, Y.; Liu, Y.; Zhang, Q.; Liu, Z.; Liu, H.; Mao, J.; Zhuang, G.; Roozendael, M.; Martin, R.; Wang, P.; Spurr, R. J.; Sneep, M.; Stammes, P.; Clemer, K.; Irie, H.
2013-12-01
Vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) retrieved from satellite remote sensing have been employed widely to constrain emissions of nitrogen oxides (NOx). A major strength of satellite-based emission constraint is analysis of emission trends and variability, while a crucial limitation is errors both in satellite NO2 data and in model simulations relating NOx emissions to NO2 columns. Through a series of studies, we have explored these aspects over China. We separate anthropogenic from natural sources of NOx by exploiting their different seasonality. We infer trends of NOx emissions in recent years and effects of a variety of socioeconomic events at different spatiotemporal scales including the general economic growth, global financial crisis, Chinese New Year, and Beijing Olympics. We further investigate the impact of growing NOx emissions on particulate matter (PM) pollution in China. As part of recent developments, we identify and correct errors in both satellite NO2 retrieval and model simulation that ultimately affect NOx emission constraint. We improve the treatments of aerosol optical effects, clouds and surface reflectance in the NO2 retrieval process, using as reference ground-based MAX-DOAS measurements to evaluate the improved retrieval results. We analyze the sensitivity of simulated NO2 to errors in the model representation of major meteorological and chemical processes with a subsequent correction of model bias. Future studies will implement these improvements to re-constrain NOx emissions.
Soils Project Risk-Based Corrective Action Evaluation Process with ROTC 1 and ROTC 2, Revision 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Patrick; Sloop, Christina
2012-04-01
This document formally defines and clarifies the NDEP-approved process that the NNSA/NSO Soils Activity uses to fulfill the requirements of the FFACO and state regulations. This process is used to establish final action levels (FALs) in accordance with the risk-based corrective action (RBCA) process stipulated in Chapter 445 of the Nevada Administrative Code (NAC), as described in ASTM International (ASTM) Method E1739-95 (NAC, 2008; ASTM, 1995). It is designed to provide a set of consistent standards for chemical and radiological corrective actions.
Induced maturation of human immunodeficiency virus.
Mattei, Simone; Anders, Maria; Konvalinka, Jan; Kräusslich, Hans-Georg; Briggs, John A G; Müller, Barbara
2014-12-01
HIV-1 assembles at the plasma membrane of virus-producing cells as an immature, noninfectious particle. Processing of the Gag and Gag-Pol polyproteins by the viral protease (PR) activates the viral enzymes and results in dramatic structural rearrangements within the virion--termed maturation--that are a prerequisite for infectivity. Despite its fundamental importance for viral replication, little is currently known about the regulation of proteolysis and about the dynamics and structural intermediates of maturation. This is due mainly to the fact that HIV-1 release and maturation occur asynchronously, both at the level of individual cells and at the level of particle release from a single cell. Here, we report a method to synchronize HIV-1 proteolysis in vitro based on protease inhibitor (PI) washout from purified immature virions, thereby temporally uncoupling virus assembly and maturation. Drug washout resulted in the induction of proteolysis, with cleavage efficiencies correlating with the off-rate of the respective PR-PI complex. Proteolysis of Gag was nearly complete and yielded the correct products with an optimal half-life (t1/2) of ~5 h, but viral infectivity was not recovered. Failure to gain infectivity following PI washout may be explained by the observed formation of aberrant viral capsids and/or by pronounced defects in processing of the reverse transcriptase (RT) heterodimer associated with a lack of RT activity. Based on our results, we hypothesize that both the polyprotein processing dynamics and the tight temporal coupling of immature particle assembly and PR activation are essential for correct polyprotein processing and morphological maturation, and thus for HIV-1 infectivity. Cleavage of the Gag and Gag-Pol HIV-1 polyproteins into their functional subunits by the viral protease activates the viral enzymes and causes major structural rearrangements essential for HIV-1 infectivity. This proteolytic maturation occurs concomitantly with virus release, and investigation of its dynamics is hampered by the fact that virus populations in tissue culture contain particles at all stages of assembly and maturation. Here, we developed an inhibitor washout strategy to synchronize activation of protease in wild-type virus. We demonstrated that nearly complete Gag processing and resolution of the immature virus architecture are accomplished under optimized conditions. Nevertheless, most of the resulting particles displayed irregular morphologies, Gag-Pol processing was not faithfully reconstituted, and infectivity was not recovered. These data show that HIV-1 maturation is sensitive to the dynamics of processing and also that a tight temporal link between virus assembly and PR activation is required for correct polyprotein processing. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
NASA Astrophysics Data System (ADS)
Park, Sang Seo; Jung, Yeonjin; Lee, Yun Gon
2016-07-01
Radiative transfer model simulations were used to investigate erythemal ultraviolet (EUV) correction factors, separating the UV-A and UV-B spectral ranges. The correction factor was defined as the ratio of the EUV obtained when the amounts and characteristics of the extinction and scattering materials are changed to that of a reference case. The EUV correction factors (CFEUV) for UV-A [CFEUV(A)] and UV-B [CFEUV(B)] were affected by changes in the total ozone, the optical depths of aerosol and cloud, and the solar zenith angle. The differences between CFEUV(A) and CFEUV(B) were also estimated as a function of solar zenith angle, the optical depths of aerosol and cloud, and total ozone. The differences between CFEUV(A) and CFEUV(B) ranged from -5.0% to 25.0% for aerosols, and from -9.5% to 2.0% for clouds, in all simulations for different solar zenith angles and optical depths of aerosol and cloud. The rate of decline of CFEUV per unit optical depth differed by up to 20% between UV-A and UV-B for the same aerosol and cloud conditions. For total ozone, the variation in CFEUV(A) was negligible compared with that in CFEUV(B) because of the effective spectral range of the ozone absorption band. In addition, the sensitivity of the CFEUVs to changes in surface conditions (i.e., surface albedo and surface altitude) was also estimated with the model. For changes in surface albedo, the sensitivity of the CFEUVs was 2.9%-4.1% per 0.1 albedo change, depending on the amount of aerosol or cloud. For changes in surface altitude, the sensitivity of CFEUV(B) was twice that of CFEUV(A), because the Rayleigh optical depth increases significantly at shorter wavelengths.
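For readers who prefer a formula, one way to write the band-resolved correction factor described above is the following; the notation (spectral irradiance E, erythemal weighting w_ery) is assumed for illustration, not quoted from the paper.

```latex
% Band-resolved erythemal UV correction factor (notation assumed):
\[
  \mathrm{CF}_{\mathrm{EUV}}(X) =
  \frac{\left.\int_{X} E(\lambda)\, w_{\mathrm{ery}}(\lambda)\, d\lambda \right|_{\mathrm{perturbed}}}
       {\left.\int_{X} E(\lambda)\, w_{\mathrm{ery}}(\lambda)\, d\lambda \right|_{\mathrm{reference}}},
  \qquad X \in \{\mathrm{UV\text{-}A},\ \mathrm{UV\text{-}B}\},
\]
% where E is the spectral irradiance and w_ery the erythemal action spectrum,
% and "perturbed" denotes changed aerosol, cloud, ozone or surface conditions.
```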
ERIC Educational Resources Information Center
Werfel, Krystal L.; Krimm, Hannah
2015-01-01
The purpose of this study was to examine the utility of the Spelling Sensitivity Score (SSS) beyond percentage correct scoring in analyzing the spellings of children with specific language impairment (SLI). Participants were 31 children with SLI and 28 children with typical language in grades 2-4. Spellings of individual words were scored using…
Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.
Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu
2017-06-30
For the first time, full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp → μ⁺νμ e⁺νe jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections by using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.
An overview of the thematic mapper geometric correction system
NASA Technical Reports Server (NTRS)
Beyer, E. P.
1983-01-01
Geometric accuracy specifications for LANDSAT 4 are reviewed and the processing concepts which form the basis of NASA's thematic mapper geometric correction system are summarized for both the flight and ground segments. The flight segment includes the thematic mapper instrument, attitude measurement devices, attitude control, and ephemeris processing. For geometric correction the ground segment uses mirror scan correction data, payload correction data, and control point information to determine where TM detector samples fall on output map projection systems. Then the raw imagery is reformatted and resampled to produce image samples on a selected output projection grid system.
Improve homology search sensitivity of PacBio data by correcting frameshifts.
Du, Nan; Sun, Yanni
2016-09-01
Single-molecule, real-time (SMRT) sequencing developed by Pacific Biosciences produces longer reads than second-generation sequencing technologies such as Illumina. The long read length enables PacBio sequencing to close gaps in genome assembly, reveal structural variations, and identify gene isoforms with higher accuracy in transcriptomic sequencing. However, PacBio data has a high sequencing error rate, and most of the errors are insertion or deletion errors. During alignment-based homology search, insertion or deletion errors in genes will cause frameshifts and may only lead to marginal alignment scores and short alignments. As a result, it is hard to distinguish true alignments from random alignments, and the ambiguity will incur errors in structural and functional annotation. Existing frameshift correction tools are designed for data with much lower error rates and are not optimized for PacBio data. As an increasing number of groups are using SMRT sequencing, there is an urgent need for dedicated homology search tools for PacBio data. In this work, we introduce Frame-Pro, a profile homology search tool for PacBio reads. Our tool corrects sequencing errors and also outputs the profile alignments of the corrected sequences against characterized protein families. We applied our tool to both simulated and real PacBio data. The results showed that our method enables more sensitive homology search, especially for PacBio data sets of low sequencing coverage. In addition, we can correct more errors compared with a popular error correction tool that does not rely on hybrid sequencing. The source code is freely available at https://sourceforge.net/projects/frame-pro/ Contact: yannisun@msu.edu.
Rapid diagnostic tests for malaria at sites of varying transmission intensity in Uganda.
Hopkins, Heidi; Bebell, Lisa; Kambale, Wilson; Dokomajilar, Christian; Rosenthal, Philip J; Dorsey, Grant
2008-02-15
In Africa, fever is often treated presumptively as malaria, resulting in misdiagnosis and the overuse of antimalarial drugs. Rapid diagnostic tests (RDTs) for malaria may allow improved fever management. We compared RDTs based on histidine-rich protein 2 (HRP2) and RDTs based on Plasmodium lactate dehydrogenase (pLDH) with expert microscopy and PCR-corrected microscopy for 7000 patients at sites of varying malaria transmission intensity across Uganda. When all sites were considered, the sensitivity of the HRP2-based test was 97% when compared with microscopy and 98% when corrected by PCR; the sensitivity of the pLDH-based test was 88% when compared with microscopy and 77% when corrected by PCR. The specificity of the HRP2-based test was 71% when compared with microscopy and 88% when corrected by PCR; the specificity of the pLDH-based test was 92% when compared with microscopy and >98% when corrected by PCR. Based on Plasmodium falciparum PCR-corrected microscopy, the positive predictive value (PPV) of the HRP2-based test was high (93%) at all but the site with the lowest transmission rate; the pLDH-based test and expert microscopy offered excellent PPVs (98%) for all sites. The negative predictive value (NPV) of the HRP2-based test was consistently high (>97%); in contrast, the NPV for the pLDH-based test dropped significantly (from 98% to 66%) as transmission intensity increased, and the NPV for expert microscopy decreased significantly (99% to 54%) because of increasing failure to detect subpatent parasitemia. Based on the high PPV and NPV, HRP2-based RDTs are likely to be the best diagnostic choice for areas with medium-to-high malaria transmission rates in Africa.
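For reference, the four reported quantities follow directly from a 2×2 table of test results against the reference standard. A minimal sketch (the counts below are invented for illustration and are not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix.

    Index test (e.g. an RDT) tabulated against a reference standard
    (e.g. PCR-corrected microscopy)."""
    return {
        "sensitivity": tp / (tp + fn),  # P(test+ | disease+)
        "specificity": tn / (tn + fp),  # P(test- | disease-)
        "ppv":         tp / (tp + fp),  # P(disease+ | test+)
        "npv":         tn / (tn + fn),  # P(disease- | test-)
    }

# illustrative counts only -- not the study's data
print(diagnostic_metrics(tp=480, fp=60, fn=10, tn=450))
```

Because PPV and NPV depend on disease prevalence, they shift with transmission intensity even when sensitivity and specificity are stable, which is the pattern reported above.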
Adib-Moghaddam, Soheil; Soleyman-Jahi, Saeed; Salmanian, Bahram; Omidvari, Amir-Houshang; Adili-Aghdam, Fatemeh; Noorizadeh, Farsad; Eslani, Medi
2016-11-01
To evaluate the long-term quantitative and qualitative optical outcomes of 1-step transepithelial photorefractive keratectomy (PRK) to correct myopia and astigmatism. Bina Eye Hospital, Tehran, Iran. Prospective interventional case series. Eyes with myopia with or without astigmatism were evaluated. One-step transepithelial PRK was performed with an aberration-free aspheric optimized profile and the Amaris 500 laser. Eighteen-month follow-up results for refraction, visual acuities, vector analysis, higher-order aberrations, contrast sensitivity, postoperative pain, and haze grade were assessed. The study enrolled 146 eyes (74 patients). At the end of follow-up, 93.84% of eyes had an uncorrected distance visual acuity of 20/20 or better and 97.94% of eyes were within ±0.5 diopter of the targeted spherical refraction. On vector analysis, the mean correction index value was close to 1 and the mean index of success and magnitude of error values were close to 0. The achieved correction vector was on an axis counterclockwise to the axis of the intended correction. Photopic and mesopic contrast sensitivities and ocular and corneal spherical, cylindrical, and corneal coma aberrations significantly improved (all P < .001). A slight amount of trefoil aberration was induced (P < .001, ocular aberration; P < .01, corneal aberration). No eye lost more than 1 line of corrected distance visual acuity. No eye had a haze grade of 2+ degrees or higher throughout the follow-up. Eighteen-month results indicate the efficacy and safety of transepithelial PRK to correct myopia and astigmatism. It improved refraction and quality of vision.
Reconstructing ice-age palaeoclimates: Quantifying low-CO2 effects on plants
NASA Astrophysics Data System (ADS)
Prentice, I. C.; Cleator, S. F.; Huang, Y. H.; Harrison, S. P.; Roulstone, I.
2017-02-01
We present a novel method to quantify the ecophysiological effects of changes in CO2 concentration during the reconstruction of climate changes from fossil pollen assemblages. The method does not depend on any particular vegetation model. Instead, it makes use of general equations from ecophysiology and hydrology that link moisture index (MI) to transpiration and the ratio of leaf-internal to ambient CO2 (χ). Statistically reconstructed MI values are corrected post facto for effects of CO2 concentration. The correction is based on the principle that e, the rate of water loss per unit carbon gain, should be inversely related to effective moisture availability as sensed by plants. The method involves solving a non-linear equation that relates e to MI, temperature and CO2 concentration via the Fu-Zhang relation between evapotranspiration and MI, Monteith's empirical relationship between vapour pressure deficit and evapotranspiration, and recently developed theory that predicts the response of χ to vapour pressure deficit and temperature. The solution to this equation provides a correction term for MI. The numerical value of the correction depends on the reconstructed MI. It is slightly sensitive to temperature, but primarily sensitive to CO2 concentration. Under low LGM CO2 concentration the correction is always positive, implying that LGM climate was wetter than it would seem from vegetation composition. A statistical reconstruction of last glacial maximum (LGM, 21±1 kyr BP) palaeoclimates, based on a new compilation of modern and LGM pollen assemblage data from Australia, is used to illustrate the method in practice. Applying the correction brings pollen-reconstructed LGM moisture availability in southeastern Australia better into line with palaeohydrological estimates of LGM climate.
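The correction step reduces to a scalar root-finding problem: find the moisture index at past CO2 that gives the same water cost per unit carbon gain as the statistically reconstructed value at modern CO2. The sketch below shows that numerical structure with SciPy's Brent solver; the function e_per_carbon is an illustrative placeholder, not the Fu-Zhang/Monteith/χ-theory formulation used by the authors.

```python
import numpy as np
from scipy.optimize import brentq

def e_per_carbon(mi, temp_c, co2_ppm):
    """Placeholder for e(MI, T, CO2), the water cost per unit carbon gain.
    Decreasing in MI and CO2, mildly increasing in T (illustrative only)."""
    return (1.0 / (0.1 + mi)) * (400.0 / co2_ppm) * (1.0 + 0.01 * temp_c)

def corrected_mi(mi_reconstructed, temp_c, co2_past, co2_modern=400.0):
    """Find MI* such that plants at past CO2 experience the same effective
    moisture (same e) as the reconstructed MI implies at modern CO2."""
    target = e_per_carbon(mi_reconstructed, temp_c, co2_modern)
    f = lambda mi: e_per_carbon(mi, temp_c, co2_past) - target
    return brentq(f, 1e-6, 10.0)  # bracket assumed to contain the root

mi_lgm = corrected_mi(mi_reconstructed=0.5, temp_c=10.0, co2_past=190.0)
print(f"corrected MI: {mi_lgm:.3f}")  # > 0.5: wetter than vegetation implies
```

With a placeholder of this shape, lower past CO2 always yields a positive correction, consistent with the LGM behaviour described above.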
Estimation of Methane Emissions from Slurry Pits below Pig and Cattle Confinements
Petersen, Søren O.; Olsen, Anne B.; Elsgaard, Lars; Triolo, Jin Mi; Sommer, Sven G.
2016-01-01
Quantifying in-house emissions of methane (CH4) from liquid manure (slurry) is difficult due to high background emissions from enteric processes, yet of great importance for correct estimation of CH4 emissions from manure management and effects of treatment technologies such as anaerobic digestion. In this study CH4 production rates were determined in 20 pig slurry and 11 cattle slurry samples collected beneath slatted floors on six representative farms; rates were determined within 24 h at temperatures close to the temperature in slurry pits at the time of collection. Methane production rates in pig and cattle slurry differed significantly at 0.030 and 0.011 kg CH4 kg-1 VS (volatile solids). Current estimates of CH4 emissions from pig and cattle manure management correspond to 0.032 and 0.015 kg CH4 kg-1, respectively, indicating that slurry pits under animal confinements are a significant source. Fractions of degradable volatile solids (VSd, kg kg-1 VS) were estimated using an aerobic biodegradability assay and total organic C analyses. The VSd in pig and cattle slurry averaged 0.51 and 0.33 kg kg-1 VS, and it was estimated that on average 43 and 28% of VSd in fresh excreta from pigs and cattle, respectively, had been lost at the time of sampling. An empirical model of CH4 emissions from slurry was reparameterised based on experimental results. A sensitivity analysis indicated that predicted CH4 emissions were highly sensitive to uncertainties in the value of lnA of the Arrhenius equation, but much less sensitive to uncertainties in VSd or slurry temperature. A model application indicated that losses of carbon in VS as CO2 may be much greater than losses as CH4. Implications of these results for the correct estimation of CH4 emissions from manure management, and for the mitigation potential of treatments such as anaerobic digestion, are discussed. PMID:27529692
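The sensitivity ranking reported above can be illustrated with an Arrhenius-type emission model, F = VSd · exp(lnA − Ea/(R·T)): because lnA sits directly in the exponent, a unit uncertainty in lnA scales the prediction by a factor of e, dwarfing comparable uncertainties in VSd or temperature. The parameter values below are illustrative, not the paper's fitted values.

```python
import numpy as np

R = 8.314  # gas constant, J mol-1 K-1

def ch4_flux(vs_d, temp_k, ln_a=31.0, e_a=112_700.0):
    """Arrhenius-type CH4 production rate (arbitrary units).
    ln_a and e_a are illustrative placeholders, not the paper's fit."""
    return vs_d * np.exp(ln_a - e_a / (R * temp_k))

base = ch4_flux(vs_d=0.4, temp_k=288.0)
perturbed = {
    "lnA + 1":   ch4_flux(0.4, 288.0, ln_a=32.0),
    "VSd + 10%": ch4_flux(0.44, 288.0),
    "T + 1 K":   ch4_flux(0.4, 289.0),
}
for name, f in perturbed.items():
    print(f"{name:10s} -> flux x {f / base:.2f}")  # ~2.72, 1.10, ~1.18
```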
Edge Detection Method Based on Neural Networks for COMS MI Images
NASA Astrophysics Data System (ADS)
Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee
2016-12-01
Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometric correction process, various techniques for edge detection can be applied. It is essential to have a precise and correctly edged image in this process, since its matching with the reference is directly related to the accuracy of the ground station output images. An edge detection method based on neural networks is applied in the ground processing of MI images to obtain sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.
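As a point of comparison, the conventional Sobel baseline mentioned above takes only a few lines; the neural-network detector and the landmark matching are beyond this sketch, and the threshold value is an assumption.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image, threshold=0.2):
    """Gradient-magnitude edge map with the Sobel operator,
    normalised to [0, 1] and binarised at `threshold` (illustrative value)."""
    gx = ndimage.sobel(image, axis=1, mode="reflect")
    gy = ndimage.sobel(image, axis=0, mode="reflect")
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    return mag > threshold

# toy image: a bright square on a dark background
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = sobel_edges(img)
print(edges.sum(), "edge pixels")
```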
Design and fabrication of a freeform phase plate for high-order ocular aberration correction
NASA Astrophysics Data System (ADS)
Yi, Allen Y.; Raasch, Thomas W.
2005-11-01
In recent years it has become possible to measure, and in some instances to correct, the high-order aberrations of human eyes. We have investigated the correction of wavefront error of human eyes by using phase plates designed to compensate for that error. The wavefront aberrations of the four eyes of two subjects were experimentally determined, and compensating phase plates were machined with an ultraprecision diamond-turning machine equipped with four independent axes. A slow-tool servo freeform trajectory was developed for the machine tool path. The machined phase-correction plates were measured and compared with the original design values to validate the process. The position of the phase plate relative to the pupil is discussed. The practical utility of this mode of aberration correction was investigated with visual acuity testing. The results are consistent with the potential benefit of aberration correction but also underscore the critical positioning requirements of this mode of correction. This process is described in detail from optical measurements, through machining process design and development, to final results.
Conditions for extreme sensitivity of protein diffusion in membranes to cell environments
Tserkovnyak, Yaroslav; Nelson, David R.
2006-01-01
We study protein diffusion in multicomponent lipid membranes close to a rigid substrate separated by a layer of viscous fluid. The large-distance, long-time asymptotics for Brownian motion are calculated by using a nonlinear stochastic Navier–Stokes equation including the effect of friction with the substrate. The advective nonlinearity, neglected in previous treatments, gives only a small correction to the renormalized viscosity and diffusion coefficient at room temperature. We find, however, that in realistic multicomponent lipid mixtures, close to a critical point for phase separation, protein diffusion acquires a strong power-law dependence on temperature and the distance to the substrate H, making it much more sensitive to cell environment, unlike the logarithmic dependence on H and very small thermal correction away from the critical point. PMID:17008402
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.
2017-09-13
Here, we alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments are conducted with 1x, 5x, and 10x the present-day model-estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) the CALIPSO-corrected BC vertical distribution. The globally averaged top-of-the-atmosphere radiative flux perturbation of the CC experiments is ~8–50% smaller compared to the uncorrected (UC) BC experiments, largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with the increase in BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in BC vertical distribution.
NASA Technical Reports Server (NTRS)
Fulton, C. L.; Harris, R. L., Jr.
1980-01-01
Factors that can affect oculometer measurements of pupil diameter are: horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in the distance from eye to camera; illumination intensity of light on the eye; and the counting sensitivity of the scan lines and output voltage used to measure diameter. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle, similar to the cosine function predicted by theory; this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.
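A correction of this kind, fitting the parabolic azimuth dependence on an artificial-eye calibration run and then inverting it, might look like the sketch below; the coefficients and the synthetic falloff are made up, not the report's calibration values.

```python
import numpy as np

def fit_azimuth_correction(theta_deg, measured_mm):
    """Least-squares fit of measured pupil diameter vs. azimuth angle with a
    parabola, d(theta) ~ d0 * (1 + k * theta**2)."""
    A = np.column_stack([np.ones_like(theta_deg), theta_deg ** 2])
    coef, *_ = np.linalg.lstsq(A, measured_mm, rcond=None)
    d0, c = coef
    return d0, c / d0  # (true-diameter estimate, curvature k)

def correct_diameter(measured_mm, theta_deg, k):
    """Invert the parabolic viewing-angle dependence."""
    return measured_mm / (1.0 + k * theta_deg ** 2)

# synthetic calibration run on an artificial eye of 5 mm diameter
theta = np.linspace(-30, 30, 13)
meas = 5.0 * (1.0 - 4.5e-5 * theta ** 2)  # cosine-like falloff, made up
d0, k = fit_azimuth_correction(theta, meas)
print(correct_diameter(meas, theta, k))   # ~5.0 mm at all angles
```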
The performance of the MROI fast tip-tilt correction system
NASA Astrophysics Data System (ADS)
Young, John; Buscher, David; Fisher, Martin; Haniff, Christopher; Rea, Alexander; Seneta, Eugene; Sun, Xiaowei; Wilson, Donald; Farris, Allen; Olivares, Andres
2014-07-01
The fast tip-tilt (FTT) correction system for the Magdalena Ridge Observatory Interferometer (MROI) is being developed by the University of Cambridge. The design incorporates an EMCCD camera protected by a thermal enclosure, optical mounts with passive thermal compensation, and control software running under Xenomai real-time Linux. The complete FTT system is now undergoing laboratory testing prior to being installed on the first MROI unit telescope in the fall of 2014. We are following a twin-track approach to testing the closed-loop performance: tracking tip-tilt perturbations introduced by an actuated flat mirror in the laboratory, and undertaking end-to-end simulations that incorporate realistic higher-order atmospheric perturbations. We report test results that demonstrate (a) the high stability of the entire opto-mechanical system, realised with a completely passive design; and (b) the fast tip-tilt correction performance and limiting sensitivity. Our preliminary results in both areas are close to those needed to realise the ambitious stability and sensitivity goals of the MROI, which aims to match the performance of current natural guide star adaptive optics systems.
Brady's Geothermal Field - Analysis of Pressure Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, David
*This submission provides corrections to GDR Submissions 844 and 845.* Poroelastic Tomography (PoroTomo) by Adjoint Inverse Modeling of Data from Hydrology. The 3 *csv files containing pressure data are the corrected versions of the pressure dataset found in Submission 844. The dataset has been corrected in the sense that the atmospheric pressure has been subtracted from the total pressure measured in the well. Also, the transducers used at wells 56A-1 and SP-2 are sensitive to surface temperature fluctuations. These temperature effects have been removed from the corrected datasets. The 4th *csv file contains the corrected version of the pumping data found in Submission 845. The data has been corrected in the sense that data from several wells used during the PoroTomo deployment pumping tests, which were not included in the original dataset, has been added. In addition, several other minor changes have been made to the pumping records due to flow rate instrument calibration issues that were discovered.
A university-state-corporation partnership for providing correctional mental health services.
Appelbaum, Kenneth L; Manning, Thomas D; Noonan, John D
2002-02-01
In September 1998 the University of Massachusetts Medical School, in partnership with a private vendor of correctional health care, began providing mental health services and other services to the Massachusetts Department of Correction. The experience with this partnership demonstrates that the involvement of a medical school with a correctional system has advantages for both. The correctional program benefits from enhanced quality of services, assistance with the recruitment and retention of skilled professionals, and expansion of training and continuing education programs. The medical school benefits by building its revenue base while providing a needed public service and through opportunities to extend its research and training activities. Successful collaboration requires that the medical school have an appreciation of security needs, a sensitivity to fiscal issues, and a readiness to work with inmates who have severe mental disorders and disruptive behavior. Correctional administrators, for their part, must support adequate treatment resources and must collaborate in the resolution of tensions between security and health care needs.
Hong, Young-Joo; Makita, Shuichi; Sugiyama, Satoshi; Yasuno, Yoshiaki
2014-01-01
Polarization mode dispersion (PMD) degrades the performance of Jones-matrix-based polarization-sensitive multifunctional optical coherence tomography (JM-OCT). The problem is especially acute for optically buffered JM-OCT, because the long fiber in the optical buffering module induces a large amount of PMD. This paper aims at presenting a method to correct the effect of PMD in JM-OCT. We first mathematically model the PMD in JM-OCT and then derive a method to correct the PMD. This method is a combination of simple hardware modification and subsequent software correction. The hardware modification is the introduction of two polarizers, which transform the PMD into a global complex modulation of the Jones matrix. Subsequently, the software correction demodulates the global modulation. The method is validated with an experimentally obtained point spread function with a mirror sample, as well as by in vivo measurement of a human retina. PMID:25657888
Correcting for batch effects in case-control microbiome studies
Gibbons, Sean M.; Duvallet, Claire
2018-01-01
High-throughput data generation platforms, like mass-spectrometry, microarrays, and second-generation sequencing are susceptible to batch effects due to run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure where features (i.e. bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study prior to pooling data across studies. We look at how this percentile-normalization method compares to traditional meta-analysis methods for combining independent p-values and to limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile-normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses. PMID:29684016
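The core of the percentile-normalization idea fits in a few lines: each case sample's feature value is replaced by its percentile within the control distribution of the same study. A minimal sketch (the published procedure also handles zero-inflation and within-study pooling details not shown here):

```python
import numpy as np

def percentile_normalize(cases, controls):
    """Convert each feature (column) of `cases` to its percentile within the
    study's control distribution.

    cases:    (n_cases, n_features) array, e.g. taxon relative abundances
    controls: (n_controls, n_features) array from the same study
    Returns percentiles in [0, 100], using midpoint ranks for ties."""
    less = (controls[None, :, :] < cases[:, None, :]).sum(axis=1)
    equal = (controls[None, :, :] == cases[:, None, :]).sum(axis=1)
    return 100.0 * (less + 0.5 * equal) / controls.shape[0]

rng = np.random.default_rng(0)
ctrl = rng.lognormal(size=(30, 5))
case = rng.lognormal(mean=0.5, size=(20, 5))          # shifted case samples
print(percentile_normalize(case, ctrl).mean(axis=0))  # > 50 on average
```

Because the output is a rank-based quantity per study, percentile-normalized features can be pooled across studies without modeling each platform's batch effect explicitly.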
Ozone Correction for AM0 Calibrated Solar Cells for the Aircraft Method
NASA Technical Reports Server (NTRS)
Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Lyons, Valerie J. (Technical Monitor)
2002-01-01
The aircraft solar cell calibration method has provided cells calibrated to space conditions for 37 years. However, it is susceptible to systematic errors due to ozone concentration in the stratosphere. The present correction procedure applies a 1% increase to the measured Isc values. High band-gap cells are more sensitive to ozone-absorbed wavelengths, so it has become important to reassess the correction technique. This paper evaluates the ozone correction factor as 1 + [O3]·FO, where FO is 29.5×10⁻⁶ per Dobson unit (d.u.) for a silicon solar cell and 42.2×10⁻⁶/d.u. for a GaAs cell. Results will be presented for high band-gap cells. A comparison with flight data indicates that this method of correcting for the ozone density improves the uncertainty of AM0 Isc to 0.5%.
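Applied as stated, the correction scales a measured short-circuit current up to its AM0 value using the ozone column at the time of flight. A minimal sketch (the usage, units and example numbers are an interpretation of the quoted formula, not flight-procedure code):

```python
def ozone_corrected_isc(isc_measured, ozone_du, cell="si"):
    """Scale an aircraft Isc measurement to AM0 with the linear ozone factor
    quoted above: Isc_AM0 = Isc * (1 + O3 * Fo)."""
    fo = {"si": 29.5e-6, "gaas": 42.2e-6}[cell]  # per Dobson unit
    return isc_measured * (1.0 + ozone_du * fo)

# e.g. a silicon cell (Isc in mA) measured under a 300 d.u. ozone column
print(ozone_corrected_isc(isc_measured=150.0, ozone_du=300.0))  # ~151.3 mA
```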
Duality between QCD perturbative series and power corrections
NASA Astrophysics Data System (ADS)
Narison, S.; Zakharov, V. I.
2009-08-01
We elaborate on the relation between perturbative and power-like corrections to short-distance-sensitive QCD observables. We confront theoretical expectations with explicit perturbative calculations existing in the literature. As expected, the quadratic correction is dual to a long perturbative series, and one should use one of them but not both. However, this might be true only for very long perturbative series, with the number of terms needed in most cases exceeding the number of terms available. What had not been foreseen is that the quartic corrections might also be dual to the perturbative series. If confirmed, this would imply a crucial modification of the dogma. We confront the quadratic correction with existing phenomenology (QCD (spectral) sum rule scales, determinations of light quark masses and of αs from τ-decay). We find no contradiction and (to some extent) better agreement with the data and with recent lattice calculations.
Integrated model-based retargeting and optical proximity correction
NASA Astrophysics Data System (ADS)
Agarwal, Kanak B.; Banerjee, Shayak
2011-04-01
Conventional resolution enhancement techniques (RET) are becoming increasingly inadequate at addressing the challenges of subwavelength lithography. In particular, features show high sensitivity to process variation in low-k1 lithography. Process-variation-aware RETs such as process-window OPC (PWOPC) are becoming increasingly important to guarantee high lithographic yield, but such techniques suffer from high runtime impact. An alternative to PWOPC is to perform retargeting, which is a rule-assisted modification of target layout shapes to improve their process window. However, rule-based retargeting is not a scalable technique, since rules cannot cover the entire search space of two-dimensional shape configurations, especially with technology scaling. In this paper, we propose to integrate the processes of retargeting and optical proximity correction (OPC). We utilize the normalized image log slope (NILS) metric, which is available at no extra computational cost during OPC. We use NILS to guide dynamic target modification between iterations of OPC. We utilize the NILS tagging capabilities of Calibre TCL scripting to identify fragments with low NILS. We then perform NILS binning to assign different magnitudes of retargeting to different NILS bins. NILS is determined both for width, to identify regions of pinching, and for space, to locate regions of potential bridging. We develop an integrated flow for 1x metal lines (M1) which exhibits fewer lithographic hotspots compared to a flow with just OPC and no retargeting. We also observe cases where hotspots that existed in the rule-based retargeting flow are fixed using our methodology. Finally, we demonstrate that such a retargeting methodology does not significantly alter design properties by electrically simulating a latch layout before and after retargeting. We observe less than 1% impact on latch Clk-Q and D-Q delays post-retargeting, which makes this methodology an attractive one for improving shape process windows without perturbing designed values.
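The NILS-binning step described above amounts to a small lookup from a fragment's NILS value to a retargeting bias whose sign depends on whether the fragment measures a width (pinching risk) or a space (bridging risk). A sketch with invented bin edges and magnitudes, not the paper's values:

```python
def retarget_bias_nm(nils, kind):
    """Map a fragment's NILS value to a retargeting bias in nm.

    Low NILS on a width fragment -> widen (positive bias, fights pinching);
    low NILS on a space fragment -> pull back (negative bias, fights bridging).
    Bin edges and magnitudes below are illustrative assumptions."""
    bins = [(1.0, 3.0), (1.5, 2.0), (2.0, 1.0)]  # (NILS threshold, |bias| nm)
    for threshold, magnitude in bins:
        if nils < threshold:
            return magnitude if kind == "width" else -magnitude
    return 0.0  # NILS high enough: leave the target unchanged

for frag in [(0.8, "width"), (1.7, "space"), (2.4, "width")]:
    print(frag, "->", retarget_bias_nm(*frag), "nm")
```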
Is the Thatcher Illusion Modulated by Face Familiarity? Evidence from an Eye Tracking Study
2016-01-01
Thompson (1980) first detected and described the Thatcher Illusion, where participants instantly perceive an upright face with inverted eyes and mouth as grotesque, but fail to do so when the same face is inverted. One prominent but controversial explanation is that the processing of configural information is disrupted in inverted faces. Studies investigating the Thatcher Illusion have used either famous faces or non-famous faces. Highly familiar faces are often thought to be processed in a pronounced configural mode, so they seem ideal candidates to be tested against unfamiliar faces in a single Thatcher study, but this had never been addressed before. In our study, participants evaluated 16 famous and 16 non-famous faces for their grotesqueness. We tested whether familiarity (famous/non-famous faces) modulates reaction times, correctness of grotesqueness assessments (accuracy), and eye movement patterns for the factors orientation (upright/inverted) and Thatcherisation (Thatcherised/non-Thatcherised). On a behavioural level, familiarity effects were only observable via face inversion (higher accuracy and sensitivity for famous compared to non-famous faces) but not via Thatcherisation. Regarding eye movements, however, Thatcherisation influenced the scanning of famous and non-famous faces, for instance in scanning the mouth region of the presented faces (higher number, duration and dwell time of fixations for famous compared to non-famous faces if Thatcherised). Altogether, famous faces seem to be processed in a more elaborate, more expertise-based way than non-famous faces, whereas non-famous, inverted faces seem to cause difficulties in accurate and sensitive processing. Results are further discussed in light of existing studies of familiar vs. unfamiliar face processing. PMID:27776145
Kovacevic, Sanja; Azma, Sheeva; Irimia, Andrei; Sherfey, Jason; Halgren, Eric; Marinkovic, Ksenija
2012-01-01
Prior neuroimaging evidence indicates that decision conflict activates medial and lateral prefrontal and parietal cortices. Theoretical accounts of cognitive control highlight anterior cingulate cortex (ACC) as a central node in this network. However, a better understanding of the relative primacy and functional contributions of these areas to decision conflict requires insight into the neural dynamics of successive processing stages including conflict detection, response selection and execution. Moderate alcohol intoxication impairs cognitive control as it interferes with the ability to inhibit dominant, prepotent responses when they are no longer correct. To examine the effects of moderate intoxication on successive processing stages during cognitive control, spatio-temporal changes in total event-related theta power were measured during Stroop-induced conflict. Healthy social drinkers served as their own controls by participating in both alcohol (0.6 g/kg ethanol for men, 0.55 g/kg women) and placebo conditions in a counterbalanced design. Anatomically-constrained magnetoencephalography (aMEG) approach was applied to complex power spectra for theta (4-7 Hz) frequencies. The principal generator of event-related theta power to conflict was estimated to ACC, with contributions from fronto-parietal areas. The ACC was uniquely sensitive to conflict during both early conflict detection, and later response selection and execution stages. Alcohol attenuated theta power to conflict across successive processing stages, suggesting that alcohol-induced deficits in cognitive control may result from theta suppression in the executive network. Slower RTs were associated with attenuated theta power estimated to ACC, indicating that alcohol impairs motor preparation and execution subserved by the ACC. In addition to their relevance for the currently prevailing accounts of cognitive control, our results suggest that alcohol-induced impairment of top-down strategic processing underlies poor self-control and inability to refrain from drinking.
Müller-Staub, Maria; de Graaf-Waar, Helen; Paans, Wolter
2016-11-01
Nurses are accountable for applying the nursing process, which is key for patient care: it is a problem-solving process providing the structure for care plans and documentation. The state-of-the-art nursing process is based on classifications that contain standardized concepts, and it is therefore named the Advanced Nursing Process. It contains valid assessments, nursing diagnoses, interventions, and nursing-sensitive patient outcomes. Electronic decision support systems can assist nurses in applying the Advanced Nursing Process. However, nursing decision support systems are missing, and no "gold standard" is available. The study aim was to develop a valid Nursing Process-Clinical Decision Support System Standard to guide future development of clinical decision support systems. In a multistep approach, a Nursing Process-Clinical Decision Support System Standard with 28 criteria was developed. After pilot testing (N = 29 nurses), the criteria were reduced to 25. The Nursing Process-Clinical Decision Support System Standard was then presented to eight internationally known experts, who performed qualitative interviews according to Mayring. Fourteen categories demonstrate expert consensus on the Nursing Process-Clinical Decision Support System Standard and its content validity. All experts agreed the Advanced Nursing Process should be the centerpiece of the Nursing Process-Clinical Decision Support System and should suggest research-based, predefined nursing diagnoses and correct linkages between diagnoses, evidence-based interventions, and patient outcomes.
Hood, A S; Morrison, J D
2002-01-01
We have measured monocular and binocular contrast sensitivities in response to medium to high spatial frequencies of vertical sinusoidal grating patterns in normal subjects, anisometropic amblyopes, strabismic amblyopes and non-amblyopic esotropes. On binocular viewing, contrast sensitivities were slightly but significantly increased in normal subjects, markedly increased in anisometropes and esotropes with anomalous binocular single vision (BSV) and significantly reduced in esotropes and exotropes without BSV. Application of a prismatic correction to the strabismic eye in order to achieve bifoveal stimulation resulted in a significant reduction in contrast sensitivity in esotropes with and without anomalous BSV, in exotropes and in non-amblyopic esotropes. Control experiments in normal subjects with monocular viewing showed that degradative effects of the prism occurred only with high prism powers and at high spatial frequencies, thus establishing that the reduced contrast sensitivities were the consequence of bifoveal stimulation rather than optical degradation. Displacement of the image of the grating pattern by 2 deg in normal subjects and anisometropes by a dichoptic method to simulate a small angle esotropia had no effect on the contrast sensitivities recorded through the companion eye. By contrast, esotropes showed similar reductions in contrast sensitivity to those obtained with the prism experiments, confirming a fundamental difference between subjects with normal and abnormal ocular alignments. The results have thus established a suppressive action of the fovea of the amblyopic eye acting on the companion, non-amblyopic eye and indicate that correction of ocular misalignments in adult esotropes may be disadvantageous to binocular visual performance. PMID:11956347
Steventon, Jessica J.; Trueman, Rebecca C.; Rosser, Anne E.; Jones, Derek K.
2016-01-01
Background: Huge advances have been made in understanding and addressing confounds in diffusion MRI data to quantify white matter microstructure. However, there has been a lag in applying these advances in clinical research. Some confounds are more pronounced in Huntington's disease (HD), which impedes data quality and the interpretability of patient-control differences. This study presents an optimised analysis pipeline and addresses specific confounds in a HD patient cohort. Method: 15 HD gene-positive and 13 matched control participants were scanned on a 3T MRI system with two diffusion MRI sequences. An optimised post-processing pipeline included motion, eddy current and EPI correction, rotation of the B matrix, free water elimination (FWE) and tractography analysis using an algorithm capable of reconstructing crossing fibres. The corpus callosum was examined using both a region-of-interest and a deterministic tractography approach, using both conventional diffusion tensor imaging (DTI)-based and spherical deconvolution analyses. Results: Correcting for CSF contamination significantly altered microstructural metrics and the detection of group differences. Reconstructing the corpus callosum using spherical deconvolution produced a more complete reconstruction with greater sensitivity to group differences, compared to DTI-based tractography. Tissue volume fraction (TVF) was reduced in HD participants and was more sensitive to disease burden compared to DTI metrics. Conclusion: Addressing confounds in diffusion MR data results in more valid, anatomically faithful white matter tract reconstructions with reduced within-group variance. TVF is recommended as a complementary metric, providing insight into the relationship with clinical symptoms in HD not fully captured by conventional DTI metrics. PMID:26335798
Wang, Xue; Zhao, Kun; Kirberger, Michael; Wong, Hing; Chen, Guantao; Yang, Jenny J
2010-01-01
Calcium binding in proteins exhibits a wide range of polygonal geometries that relate directly to an equally diverse set of biological functions. The binding process stabilizes protein structures and typically results in local conformational change and/or global restructuring of the backbone. Previously, we established the MUG program, which utilized multiple geometries in the Ca2+-binding pockets of holoproteins to identify such pockets, ignoring possible Ca2+-induced conformational change. In this article, we first report our progress in the analysis of Ca2+-induced conformational changes followed by improved prediction of Ca2+-binding sites in the large group of Ca2+-binding proteins that exhibit only localized conformational changes. The MUGSR algorithm was devised to incorporate side chain torsional rotation as a predictor. The output from MUGSR presents groups of residues where each group, typically containing two to five residues, is a potential binding pocket. MUGSR was applied to both X-ray apo structures and NMR holo structures, which did not use calcium distance constraints in structure calculations. Predicted pockets were validated by comparison with homologous holo structures. Defining a “correct hit” as a group of residues containing at least two true ligand residues, the sensitivity was at least 90%; whereas for a “correct hit” defined as a group of residues containing at least three true ligand residues, the sensitivity was at least 78%. These data suggest that Ca2+-binding pockets are at least partially prepositioned to chelate the ion in the apo form of the protein. PMID:20512971
7 CFR 275.16 - Corrective action planning.
Code of Federal Regulations, 2010 CFR
2010-01-01
Corrective action planning. (a) Corrective action planning is the process by which State agencies shall … /management unit(s) in the planning, development, and implementation of corrective action are those which: (1) …
de Ávila, Renato Ivan; Teixeira, Gabriel Campos; Veloso, Danillo Fabrini Maciel Costa; Moreira, Larissa Cleres; Lima, Eliana Martins; Valadares, Marize Campos
2017-12-01
This study evaluated the applicability of a modified Direct Peptide Reactivity Assay (DPRA) (OECD N° 442C, 2015) through the 10-fold reduction of reaction volume (micro-DPRA, mDPRA) for skin sensitization evaluation of six commercial glyphosate-containing formulations. In addition, another modification of DPRA was proposed by adding a UVA (5J/cm 2 ) irradiation step, namely photo-mDPRA, to better characterize (photo)sensitizer materials. The phototoxicity profile of pesticides was also evaluated using the 3T3 Neutral Red Uptake Phototoxicity Test (3T3-NRU-PT) (OECD N° 432, 2004). The mDPRA could represent an environmentally acceptable test approach, since it reduces costs and organic waste. Peptide depletion was greater in photo-mDPRA and changed the reactivity class of each test material, in comparison to mDPRA. Thus, the association of mDPRA with photo-mDPRA was better for correctly characterizing human (photo)sensitizer substances and pesticides. In general, cysteine depletion was greater than that of lysine for all materials tested in both mDPRA and photo-mDPRA. Furthermore, while 3T3-NRU-PT is unable to predict (photo)sensitizers, it was capable of correctly identifying the phototoxic potential of the tested agrochemical formulations. In conclusion, mDPRA plus photo-mDPRA and 3T3-NRU-PT seem to be preliminary non-animal test batteries for skin (photo)sensitization/phototoxicity assessment of chemicals, agrochemical formulations and their ingredients.
The discrepancy between risky and riskless utilities: a matter of framing?
Stalmeier, P F; Bezembinder, T G
1999-01-01
Utilities differ according to whether they are derived from risky (gamble) or riskless (visual analog scale, time-tradeoff) assessment methods. The discrepancies are usually explained by assuming that the utilities elicited by risky methods incorporate attitudes towards risk, whereas riskless utilities do not. In (cumulative) prospect theory, risk attitude is conceived as consisting of two components: a decision-weight function (attentiveness to changes in, or sensitivity towards, chance) and a utility function (sensitivity towards outcomes). The authors' data suggest that a framing effect is a hitherto unrecognized and important factor in causing discrepancies between risky and riskless utilities. They collected risky evaluations with the gamble method, and riskless evaluations with difference measurement. Risky utilities were derived using expected-utility theory and prospect theory. With the latter approach, sensitivity towards outcomes and sensitivity towards chance are modeled separately. When the hypothesis that risky utilities from prospect theory coincide with riskless utilities was tested, it was rejected (n = 8, F(1,7) = 132, p < 0.001), suggesting that a correction for sensitivity towards chance is not sufficient to resolve the difference between risky and riskless utilities. Next, it was assumed that different gain/loss frames are induced by risky and riskless elicitation methods. Indeed, identical utility functions were obtained when the gain/loss frames were made identical across methods (n = 7), suggesting that framing was operative. The results suggest that risky and riskless utilities are identical after corrections for sensitivity towards chance and framing.
Multi-segmental movement patterns reflect juggling complexity and skill level.
Zago, Matteo; Pacifici, Ilaria; Lovecchio, Nicola; Galli, Manuela; Federolf, Peter Andreas; Sforza, Chiarella
2017-08-01
The juggling action of six expert and six intermediate jugglers was recorded with a motion capture system and decomposed into its fundamental components through Principal Component Analysis. The aim was to quantify trends in movement dimensionality, multi-segmental patterns and rhythmicity as a function of proficiency level and task complexity. Dimensionality was quantified in terms of Residual Variance, while the Relative Amplitude was introduced to account for individual differences in movement components. We observed that experience-related modifications in multi-segmental actions exist, such as the progressive reduction of error-correction movements, especially in the complex task condition. The systematic identification of motor patterns sensitive to the acquisition of specific experience could accelerate the learning process.
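Residual-variance measures of this kind are typically computed as the movement variance left unexplained by the first few principal movements. A minimal sketch on synthetic posture data (the marker layout, component count and rhythm frequencies are invented, not the study's setup):

```python
import numpy as np
from sklearn.decomposition import PCA

def residual_variance(marker_traj, n_components):
    """Fraction of movement variance NOT captured by the first n_components
    principal movements, a simple dimensionality measure.

    marker_traj: (n_frames, 3 * n_markers) posture vectors, one row per frame."""
    pca = PCA().fit(marker_traj - marker_traj.mean(axis=0))
    return 1.0 - pca.explained_variance_ratio_[:n_components].sum()

# synthetic stand-in for a trial: 2 dominant rhythmic components plus noise
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)[:, None]
basis = np.hstack([np.sin(2 * np.pi * 1.8 * t), np.cos(2 * np.pi * 0.9 * t)])
data = basis @ rng.normal(size=(2, 36)) + 0.05 * rng.normal(size=(500, 36))
print(f"residual variance after 2 PMs: {residual_variance(data, 2):.3f}")
```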
Data processing in neutron protein crystallography using position-sensitive detectors
NASA Astrophysics Data System (ADS)
Schoenborn, B. P.
Neutrons provide a unique probe for localizing hydrogen atoms and for distinguishing hydrogen from deuterium. Hydrogen atoms largely determine the three-dimensional structure of proteins and are responsible for many catalytic reactions. The study of hydrogen bonding and hydrogen exchange will therefore give insight into reaction mechanisms and conformational fluctuations. In addition, neutrons provide the ability to distinguish N from C and O and to allow correct orientation of groups such as histidine and glutamine. To take advantage of these unique features of neutron crystallography, one needs accurate Fourier maps depicting atomic structure to high precision. Special attention is given to subtraction of the high background associated with hydrogen-containing molecules, which produces a disproportionately large statistical error.
Methane rising from the Deep: Hydrates, Bubbles, Oil Spills, and Global Warming
NASA Astrophysics Data System (ADS)
Leifer, I.; Rehder, G. J.; Solomon, E. A.; Kastner, M.; Asper, V. L.; Joye, S. B.
2011-12-01
Elevated methane concentrations in near-surface waters and the atmosphere have been reported for seepage from depths of nearly 1 km at the Gulf of Mexico hydrate observatory (MC118), suggesting that for some methane sources, deepsea methane is not trapped and can contribute to atmospheric greenhouse gas budgets. Ebullition is key, with important sensitivity to the formation of hydrate skins and oil coatings, high-pressure solubility, bubble size and bubble plume processes. ROV bubble-tracking studies showed bubble survival to near-thermocline depths. Studies with a numerical bubble propagation model demonstrated that consideration of structure I hydrate skins transported most methane only to mid-water-column depths. Instead, consideration of structure II hydrates, which are stable to far shallower depths and appropriate for natural gas mixtures, allows bubbles to persist to much shallower depths. Moreover, model predictions of vertical methane and alkane profiles and bubble size evolution were in better agreement with observations after consideration of structure II hydrate properties as well as an improved implementation of plume properties, such as currents. These results demonstrate the importance of correctly incorporating bubble hydrate processes in efforts to predict the impact of deepsea seepage, as well as to understand the fate of bubble-transported oil and methane from deepsea pipeline leaks and well blowouts. Application to the Deepwater Horizon (DWH) spill demonstrated the importance of deepsea processes to the fate of spilled subsurface oil. Because several of these parameters vary temporally (bubble flux, currents, temperature), sensitivity studies indicate the importance of real-time monitoring data.
The SAMI Galaxy Survey: can we trust aperture corrections to predict star formation?
NASA Astrophysics Data System (ADS)
Richards, S. N.; Bryant, J. J.; Croom, S. M.; Hopkins, A. M.; Schaefer, A. L.; Bland-Hawthorn, J.; Allen, J. T.; Brough, S.; Cecil, G.; Cortese, L.; Fogarty, L. M. R.; Gunawardhana, M. L. P.; Goodwin, M.; Green, A. W.; Ho, I.-T.; Kewley, L. J.; Konstantopoulos, I. S.; Lawrence, J. S.; Lorente, N. P. F.; Medling, A. M.; Owers, M. S.; Sharp, R.; Sweet, S. M.; Taylor, E. N.
2016-01-01
In the low-redshift Universe (z < 0.3), our view of galaxy evolution is primarily based on fibre optic spectroscopy surveys. Elaborate methods have been developed to address aperture effects when fixed aperture sizes only probe the inner regions for galaxies of ever decreasing redshift or increasing physical size. These aperture corrections rely on assumptions about the physical properties of galaxies. The adequacy of these aperture corrections can be tested with integral-field spectroscopic data. We use integral-field spectra drawn from 1212 galaxies observed as part of the SAMI Galaxy Survey to investigate the validity of two aperture correction methods that attempt to estimate a galaxy's total instantaneous star formation rate. We show that biases arise when assuming that instantaneous star formation is traced by broad-band imaging, and when the aperture correction is built only from spectra of the nuclear region of galaxies. These biases may be significant depending on the selection criteria of a survey sample. Understanding the sensitivities of these aperture corrections is essential for correct handling of systematic errors in galaxy evolution studies.
MRI-assisted PET motion correction for neurologic studies in an integrated MR-PET scanner.
Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B; Michel, Christian J; El Fakhri, Georges; Schmand, Matthias; Sorensen, A Gregory
2011-01-01
Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MRI data can be used for motion tracking. In this work, a novel algorithm for data processing and rigid-body motion correction (MC) for the MRI-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. To account for motion, the PET prompt and random coincidences and sensitivity data for postnormalization were processed in the line-of-response (LOR) space according to the MRI-derived motion estimates. The processing time on the standard BrainPET workstation is approximately 16 s for each motion estimate. After rebinning in the sinogram space, the motion corrected data were summed, and the PET volume was reconstructed using the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using 2 high-temporal-resolution MRI-based motion-tracking techniques. After accounting for the misalignment between the 2 scanners, perfectly coregistered MRI and PET volumes were reproducibly obtained. The MRI output gates inserted into the PET list-mode allow the temporal correlation of the 2 datasets within 0.2 ms. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the procedure. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Motion-deblurred PET images, with excellent delineation of specific brain structures, were obtained using these 2 MRI-based estimates. An MRI-based MC algorithm was implemented for an integrated MR-PET scanner. High-temporal-resolution MRI-derived motion estimates (obtained while simultaneously acquiring anatomic or functional MRI data) can be used for PET MC. An MRI-based MC method has the potential to improve PET image quality, increasing its reliability, reproducibility, and quantitative accuracy, and to benefit many neurologic applications.
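Geometrically, LOR-space motion correction applies the MRI-derived rigid-body transform to both endpoints of each line of response before rebinning. A minimal numpy sketch of that step follows; the transform values are invented, and the scanner-specific rebinning, normalization and sensitivity handling are omitted.

```python
import numpy as np

def rigid_transform(rotation_deg_z, translation_mm):
    """Build a 4x4 homogeneous rigid-body transform (rotation about z here)."""
    a = np.deg2rad(rotation_deg_z)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = translation_mm
    return T

def move_lors(endpoints, T):
    """Apply T to both endpoints of each LOR.
    endpoints: (n_lors, 2, 3) array of xyz coordinates in mm."""
    homog = np.concatenate([endpoints, np.ones(endpoints.shape[:2] + (1,))], axis=-1)
    return np.einsum("ij,nkj->nki", T, homog)[..., :3]

lors = np.array([[[-300.0, 10.0, 0.0], [300.0, -20.0, 5.0]]])  # one example LOR
T = rigid_transform(rotation_deg_z=3.0, translation_mm=[1.5, -2.0, 0.7])
print(move_lors(lors, T))
```

In practice each motion estimate defines such a transform for the events acquired in its time window, and the transformed coincidence and sensitivity data are then rebinned and summed in the reference position, as described above.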
Mulla, Mubashir; Schulte, Klaus-Martin
2012-01-01
Cervical lymph nodes (CLNs) are the most common site of metastases in papillary thyroid cancer (PTC). Ultrasound scan (US) is the most commonly used imaging modality in the evaluation of CLNs in PTC. Computerised tomography (CT) and 18fluorodeoxyglucose positron emission tomography (18FDG PET–CT) are used less commonly. It is widely believed that the above imaging techniques should guide the surgical approach to the patient with PTC. Methods: We performed a systematic review of imaging studies from the literature assessing their usefulness for the detection of metastatic CLNs in PTC. We evaluated the authors' interpretation of their numeric findings, specifically with regard to 'sensitivity' and 'negative predictive value' (NPV), by comparing their use against standard definitions of these terms in probabilistic statistics. Results: A total of 16 studies used probabilistic terms to describe the value of US for the detection of LN metastases. Only 6 (37.5%) calculated sensitivity and NPV correctly. For CT, out of the eight studies, only 1 (12.5%) used correct terms to describe analytical results. One study looked at magnetic resonance imaging, while three assessed 18FDG PET–CT, none of which provided correct calculations for sensitivity and NPV. Conclusion: Imaging provides high specificity for the detection of cervical metastases of PTC. However, sensitivity and NPV are low. The majority of studies reporting a high sensitivity have not used key terms according to standard definitions of probabilistic statistics. Against common opinion, there is no current evidence that failure to find LN metastases on ultrasound or cross-sectional imaging can be used to guide surgical decision making. PMID:23781308
Rawal, Shristi; Hoffman, Howard J.; Chapo, Audrey K.
2015-01-01
Introduction: The 2011-14 US National Health and Nutrition Examination Survey chemosensory protocol asks adults to self-rate their orthonasal (via nostrils) and retronasal (via mouth) smell abilities before subsequent odor identification testing. From data collected with a similar protocol, we aimed to identify a self-reported olfactory index that showed the best sensitivity (correctly identifying dysfunction) and specificity (correctly identifying normosmia) with respect to measured olfaction. Methods: In home-based testing, 121 independent-living older women (age 73±7 years) reported their olfactory function by interviewer-administered survey. Olfactory function was measured orthonasally via a composite (odor threshold, identification task) or the identification task alone. Results: Only 16% of women self-rated "below average" smell function. More women perceived loss of smell (38%) or flavor (30%) with aging. The rate of measured dysfunction was 30% by the composite (threshold and identification) and 21.5% by the identification task, the latter misclassifying some mild dysfunction as normosmia. An index of self-rated smell function and perceived loss yielded the most favorable sensitivity (65%) and specificity (77%) to measured function. Self-rated olfaction showed better agreement with severe measured dysfunction; mild dysfunction was less noticed. Conclusions: Self-reported indices that query current smell function and perceived changes in smell and flavor with aging showed better sensitivity estimates than those previously reported. Specificity was somewhat lower; some older adults may correctly perceive loss unidentified in a single assessment, or have a retronasal impairment that was undetected by an orthonasal measure. Implications: Our findings should inform self-rated measures that screen for severe olfactory dysfunction in clinical/community settings where testing is not routine. PMID:25866597
Schanzlin, D J
1999-01-01
PURPOSE: Intrastromal corneal ring segments (ICRS) were investigated for safety and reliability in the correction of low to moderate myopic refractive errors. METHODS: Initially, 74 patients with spherical equivalent refractive errors between -1.00 and -4.25 diopters (D) received the ICRS in 1 eye. After 6 months, 51 of these patients received the ICRS in the contralateral eye. The total number of eyes investigated was 125. The outcome measures were uncorrected and best-corrected visual acuity, predictability and stability of the refraction, refractive astigmatism, contrast sensitivity, and endothelial cell morphology. RESULTS: The 89 eyes with 12-month follow-up showed significant improvement, with uncorrected visual acuities of 20/16 or better in 37%, 20/20 or better in 62%, and 20/40 or better in 97%. Cycloplegic refraction spherical equivalents showed that 68% of the eyes were within +/- 0.50 D and 90% within +/- 1.00 D of the intended correction. Refractive stability was present by 3 months after the surgery. Only 1 patient had a loss greater than 2 lines or 10 letters of best spectacle-corrected visual acuity, but that patient's acuity was 20/20. Refractive cylinder, contrast sensitivity, and endothelial cell morphology were not adversely affected. The ICRS was removed from the eyes of 6 patients. Three removals were prompted by glare and double images occurring at night; 3 were for nonmedical reasons. All patients returned to within +/- 1.00 D of their preoperative refractive spherical equivalent, and no patient lost more than 1 line of best corrected visual acuity by 3 months after ICRS removal. CONCLUSION: The ICRS safely and reliably corrects myopic refractive errors between -1.00 and -4.50 D. PMID:10703146
Comparison of ring artifact removal methods using flat panel detector based CT images
2011-01-01
Background Ring artifacts are concentric rings superimposed on tomographic images, most often caused by defective or insufficiently calibrated detector elements, or by damaged scintillator crystals, in the flat panel detector. They may also be generated by objects that attenuate X-rays very differently in different projection directions. Ring artifact reduction techniques reported in the literature so far can be broadly classified into two groups: one category operates on the sinogram (pre-processing techniques), while the other operates on the 2-D reconstructed images (post-processing techniques). The strengths and weaknesses of these two categories have not yet been explored on a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques, designed primarily for multi-slice CT instruments, is presented on a common platform. For comparison, two representative algorithms from each category were selected from the published literature. A recently reported state-of-the-art sinogram-domain ring artifact correction method, which classifies the ring artifacts according to their strength and then corrects them using class-adaptive correction schemes, is also included in this comparative study. The first sinogram-domain correction method uses a wavelet-based technique to detect the corrupted pixels and then estimates the responses of the bad pixels by simple linear interpolation. The second sinogram-based correction method performs all filtering operations in the transform domain, i.e., in the wavelet and Fourier domains. The two post-processing correction techniques, by contrast, operate on the polar transform of the reconstructed CT images. The first extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performance of the compared algorithms was tested using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices were chosen. In addition, different types of artifact patterns, e.g., single/band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured and in hard objects, and rings from different flat-panel detectors, were analyzed to perceptually investigate the strengths and weaknesses of the five methods. An investigation was also carried out to compare the efficacy of these algorithms in correcting the volume images from a cone-beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique to accurately retain image information (e.g., a small object at the iso-center) in the corrected CT image was also tested. Conclusions The results show that the performance of all the algorithms is limited and that none is fully suitable for correcting the different types of ring artifacts without introducing processing distortion to the image structure. To achieve diagnostic quality in the corrected slices, a combination of the two approaches (sinogram- and post-processing) can be used. The compared methods are also unsuitable for correcting volume images from a cone-beam flat-panel-detector-based CT. PMID:21846411
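As a concrete illustration of the post-processing category, the sketch below (our own simplification, not code from any of the compared papers; parameters are illustrative) exploits the fact that rings are concentric about the iso-center: averaging over angle at each radius isolates a 1-D radial artifact template, which is then subtracted.

```python
import numpy as np
from scipy import ndimage

def correct_rings(img, median_width=21):
    """Minimal polar-domain-style ring correction sketch.
    Averaging the image over angle at each integer radius gives a radial
    profile; its deviation from a median-smoothed copy is taken as the
    ring artifact template and subtracted everywhere at that radius."""
    ny, nx = img.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    rad = np.hypot(*(np.mgrid[0:ny, 0:nx] - np.array([[[cy]], [[cx]]])))
    r_bin = rad.astype(int)                      # integer radius per pixel
    n_r = r_bin.max() + 1
    # Mean image value at each integer radius (the radial profile)
    profile = ndimage.mean(img, labels=r_bin, index=np.arange(n_r))
    # Median smoothing keeps genuine radial structure out of the template
    template = profile - ndimage.median_filter(profile, size=median_width)
    return img - template[r_bin]
```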
LAPR: An experimental aircraft pushbroom scanner
NASA Technical Reports Server (NTRS)
Wharton, S. W.; Irons, J. I.; Heugel, F.
1980-01-01
A three-band Linear Array Pushbroom Radiometer (LAPR) was built and flown on an experimental basis by NASA at the Goddard Space Flight Center. The functional characteristics of the instrument and the methods used to preprocess the data, including radiometric correction, are described. The radiometric sensitivity of the instrument was tested and compared to that of the Thematic Mapper and the Multispectral Scanner. The radiometric correction procedure was evaluated quantitatively, using laboratory testing, and qualitatively, via visual examination of the LAPR test flight imagery. Although effective radiometric correction could not yet be demonstrated via laboratory testing, radiometric distortion did not preclude the visual interpretation or parallelepiped classification of the test imagery.
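Radiometric correction for a pushbroom instrument is essentially a per-detector normalization, since each image column is produced by a single array element. A minimal sketch, assuming two-point (dark/flat) laboratory calibration data; the variable names are ours, not LAPR's:

```python
import numpy as np

def radiometric_correction(dn, dark, flat):
    """Two-point per-detector correction sketch for a linear array.
    dn:   raw image, shape (lines, detectors)
    dark: per-detector dark response, shape (detectors,)
    flat: per-detector response to a uniform source, shape (detectors,)"""
    # Gain that equalizes each detector to the array-average flat response
    gain = flat.mean() / np.clip(flat - dark, 1e-6, None)
    return (dn - dark[None, :]) * gain[None, :]
```

Residual column striping after such a correction is exactly the kind of radiometric distortion the abstract refers to.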
Error Correction for the JLEIC Ion Collider Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Guohui; Morozov, Vasiliy; Lin, Fanglei
2016-05-01
The sensitivity to misalignment, magnet strength error, and BPM noise is investigated in order to specify design tolerances for the ion collider ring of the Jefferson Lab Electron Ion Collider (JLEIC) project. These errors, including horizontal, vertical, and longitudinal displacement, roll in the transverse plane, strength errors of the main magnets (dipoles, quadrupoles, and sextupoles), BPM noise, and strength jitter of the correctors, cause closed orbit distortion, tune change, beta-beat, coupling, chromaticity problems, etc. These problems generally reduce the dynamic aperture at the Interaction Point (IP). Guided by commissioning experience at other machines, closed orbit correction, tune matching, beta-beat correction, decoupling, and chromaticity correction have been performed in the study. With these corrections, we find that the dynamic aperture at the IP is restored. This paper describes that work.
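Closed orbit correction of the kind described is commonly computed as a regularized least-squares inversion of the orbit response matrix. The following sketch shows the standard SVD approach; it is a generic illustration under our own naming, not the study's actual correction tools:

```python
import numpy as np

def orbit_correction(response, orbit, rcond=1e-3):
    """SVD-based closed-orbit correction sketch.
    response: BPM-by-corrector orbit response matrix (m/rad)
    orbit:    measured closed-orbit distortion at the BPMs (m)
    Returns corrector kicks (rad) minimizing the residual orbit."""
    u, s, vt = np.linalg.svd(response, full_matrices=False)
    # Discard small singular values so BPM noise is not amplified
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)
    kicks = -vt.T @ (s_inv * (u.T @ orbit))
    residual = orbit + response @ kicks
    return kicks, residual
```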
Utility of the serum C-reactive protein for detection of occult bacterial infection in children.
Isaacman, Daniel J; Burke, Bonnie L
2002-09-01
To assess the utility of serum C-reactive protein (CRP) as a screen for occult bacterial infection in children. Febrile children ages 3 to 36 months who visited an urban children's hospital emergency department and received a complete blood cell count and blood culture as part of their evaluation were prospectively enrolled from February 2, 2000, through May 30, 2001. Informed consent was obtained for the withdrawal of an additional 1-mL aliquot of blood for use in CRP evaluation. Logistic regression and receiver operating characteristic (ROC) curves were modeled for each predictor to identify optimal test values, and were compared using likelihood ratio tests. Two hundred fifty-six patients were included in the analysis, with a median age of 15.3 months (range, 3.1-35.2 months) and a median triage temperature of 40.0°C (range, 39.0°C-41.3°C). Twenty-nine (11.3%) cases of occult bacterial infection (OBI) were identified, including 17 cases of pneumonia, 9 cases of urinary tract infection, and 3 cases of bacteremia. The median white blood cell count in this data set was 12.9 × 10³/µL [corrected] (range, 3.6-39.1 × 10³/µL) [corrected], the median absolute neutrophil count (ANC) was 7.12 × 10³/L [corrected] (range, 0.56-28.16 × 10³/L) [corrected], and the median CRP level was 1.7 mg/dL (range, 0.2-43.3 mg/dL). The optimal cut-off point for CRP in this data set (4.4 mg/dL) achieved a sensitivity of 63% and a specificity of 81% for detection of OBI in this population. Comparing models using cut-off values from individual laboratory predictors (ANC, white blood cell count, and CRP) that maximized sensitivity and specificity revealed that a model using an ANC of 10.6 × 10³/L [corrected] (sensitivity, 69%; specificity, 79%) was the best predictive model. Adding CRP to the model increased sensitivity nonsignificantly to 79%, while significantly decreasing specificity to 50%. Active monitoring of emergency department blood cultures drawn during the study period from children between 3 and 36 months of age showed an overall bacteremia rate of 1.1% during this period. An ANC cut-off point of 10.6 × 10³/L [corrected] offers the best predictive model for detection of occult bacterial infection using a single test. The addition of CRP to ANC adds little diagnostic utility. Furthermore, the lowered incidence of occult bacteremia in our population supports a decrease in the use of diagnostic screening in this population.
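The notion of an "optimal cut-off point" balancing sensitivity and specificity can be illustrated with Youden's J statistic. Note the study itself derived cut-offs from logistic regression and ROC modeling with likelihood ratio tests, which this simplified sketch does not reproduce:

```python
import numpy as np

def youden_cutoff(marker, disease):
    """Pick the marker cut-off maximizing J = sensitivity + specificity - 1.
    marker:  test values (e.g., CRP in mg/dL), one per patient
    disease: boolean outcome (e.g., occult bacterial infection present)"""
    marker = np.asarray(marker, dtype=float)
    disease = np.asarray(disease, dtype=bool)
    best_cut, best_j = None, -np.inf
    for cut in np.unique(marker):
        pos = marker >= cut                               # test positive
        sens = (pos & disease).sum() / disease.sum()
        spec = (~pos & ~disease).sum() / (~disease).sum()
        if sens + spec - 1 > best_j:
            best_cut, best_j = cut, sens + spec - 1
    return best_cut, best_j

# e.g. cut, j = youden_cutoff(crp_values, obi_status)
```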
Sensitivity and cost considerations for the detection and eradication of marine pests in ports.
Hayes, Keith R; Cannon, Rob; Neil, Kerry; Inglis, Graeme
2005-08-01
Port surveys are being conducted in Australia, New Zealand and around the world to confirm the presence or absence of particular marine pests. The most critical aspect of these surveys is their sensitivity: the probability that they will correctly identify a species as present if indeed it is present. This is not, however, adequately addressed in the relevant national and international standards. Simple calculations show that the sensitivity of port survey methods is closely related to their encounter rate, the average number of target individuals expected to be detected by the method. The encounter rate (which reflects any difference in relative pest density), divided by the cost of the method, provides one way to compare the cost-effectiveness of different survey methods. The most cost-effective survey method is site- and species-specific but, in general, will involve sampling from the habitat with the highest expected population of target individuals. A case study of Perna viridis in Trinity Inlet, Cairns, demonstrates that plankton trawls processed with gene probes provide the same level of sensitivity for a fraction of the cost associated with the next best available method: snorkel transects in bad visibility (secchi depth=0.72 m). Visibility and the adult/larvae ratio, however, are critical to these arguments. If visibility were good (secchi depth=10 m), the two approaches would be comparable. Diver-deployed quadrats were at least three orders of magnitude less cost-effective in this case study. It is very important that environmental managers and scientists perform sensitivity calculations before embarking on port surveys to ensure the highest level of sensitivity is achieved for any given budget.
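The link between encounter rate and sensitivity can be made concrete under a Poisson detection assumption (our assumption for illustration; the paper's derivation is more general): if detections are Poisson with mean E, the probability of at least one detection given presence is 1 - exp(-E).

```python
import math

def survey_sensitivity(encounter_rate):
    """P(at least one detection | pest present), assuming Poisson
    detections with mean equal to the encounter rate E."""
    return 1.0 - math.exp(-encounter_rate)

def cost_effectiveness(encounter_rate, cost):
    """Encounter rate per unit cost, the comparison metric proposed."""
    return encounter_rate / cost

# Two hypothetical methods with the same encounter rate E=3 per survey:
print(survey_sensitivity(3.0))                       # ~0.95 for both
print(cost_effectiveness(3.0, 1.0))                  # cheap method
print(cost_effectiveness(3.0, 20.0))                 # 20x costlier method
```

The example numbers are invented; they only show why equal sensitivity does not imply equal cost-effectiveness.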
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Hui; Rasch, Philip J.; Zhang, Kai
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
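A minimal sketch of how such a short-ensemble test might be evaluated, assuming paired control and perturbed realizations started from matched initial states; the variable names and the paired t-interval are our illustration, not the paper's analysis:

```python
import numpy as np
from scipy import stats

def ensemble_sensitivity(ctrl_runs, perturbed_runs, alpha=0.05):
    """Each array holds one scalar diagnostic (e.g., global-mean TOA
    radiation balance) per independent short realization. A t-interval
    on the member-wise differences tells whether the perturbation
    signal is detected above ensemble noise."""
    diff = np.asarray(perturbed_runs) - np.asarray(ctrl_runs)
    mean = diff.mean()
    half_width = stats.t.ppf(1 - alpha / 2, diff.size - 1) * stats.sem(diff)
    detected = abs(mean) > half_width
    return mean, half_width, detected
```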
NASA Astrophysics Data System (ADS)
Bowers, David L.; Boger, James K.; Wellems, L. David; Black, Wiley T.; Ortega, Steve E.; Ratliff, Bradley M.; Fetrow, Matthew P.; Hubbs, John E.; Tyo, J. Scott
2006-05-01
Recent developments for Long Wave InfraRed (LWIR) imaging polarimeters include incorporating a microgrid polarizer array onto the focal plane array (FPA). Inherent advantages over typical polarimeters include packaging and instantaneous acquisition of thermal and polarimetric information, allowing real-time video of thermal and polarimetric products. The microgrid approach has inherent polarization measurement error due to the spatial sampling of a non-uniform scene, residual pixel-to-pixel variations in the gain-corrected responsivity and in the noise equivalent input (NEI), and variations in pixel-to-pixel micro-polarizer performance. The Degree of Linear Polarization (DoLP) is highly sensitive to these parameters and is consequently used as a metric to explore instrument sensitivities. Image processing and fusion techniques are used to take advantage of the inherent thermal and polarimetric sensing capability of this FPA, providing additional scene information in real time. Optimal operating conditions are employed to improve FPA uniformity and sensitivity. Data from two DRS Infrared Technologies, L.P. (DRS) microgrid polarizer HgCdTe FPAs are presented. One FPA resides in a liquid nitrogen (LN2) pour-filled dewar with a nominal operating temperature of 80 K. The other FPA resides in a cryogenic dewar with a nominal operating temperature of 60 K.
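For reference, DoLP is computed from the linear Stokes parameters, which a microgrid FPA samples within each 2x2 super-pixel. A sketch, assuming the common [[0°, 45°], [135°, 90°]] layout (the actual DRS layout is not stated in the abstract):

```python
import numpy as np

def dolp_from_microgrid(frame):
    """DoLP from a single microgrid FPA frame, super-pixel layout
    assumed to be [[0, 45], [135, 90]] degrees. Gain non-uniformity
    and NEI variations propagate directly into DoLP, which is why it
    serves as the instrument sensitivity metric."""
    i0   = frame[0::2, 0::2].astype(float)
    i45  = frame[0::2, 1::2].astype(float)
    i135 = frame[1::2, 0::2].astype(float)
    i90  = frame[1::2, 1::2].astype(float)
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0/90 linear component
    s2 = i45 - i135                      # 45/135 linear component
    return np.sqrt(s1**2 + s2**2) / np.clip(s0, 1e-12, None)
```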
Crowdsourcing for error detection in cortical surface delineations.
Ganz, Melanie; Kondermann, Daniel; Andrulis, Jonas; Knudsen, Gitte Moos; Maier-Hein, Lena
2017-01-01
With the recent trend toward big data analysis, neuroimaging datasets have grown substantially in the past years. While larger datasets potentially offer important insights for medical research, one major bottleneck is the requirement for resources of medical experts needed to validate automatic processing results. To address this issue, the goal of this paper was to assess whether anonymous nonexperts from an online community can perform quality control of MR-based cortical surface delineations derived by an automatic algorithm. So-called knowledge workers from an online crowdsourcing platform were asked to annotate errors in automatic cortical surface delineations on 100 central, coronal slices of MR images. On average, annotations for 100 images were obtained in less than an hour. When using expert annotations as reference, the crowd on average achieves a sensitivity of 82% and a precision of 42%. Merging multiple annotations per image significantly improves the sensitivity of the crowd (up to 95%), but leads to a decrease in precision (as low as 22%). Our experiments show that the detection of errors in automatic cortical surface delineations generated by anonymous untrained workers is feasible. Future work will focus on increasing the sensitivity of our method further, such that the error detection tasks can be handled exclusively by the crowd and expert resources can be focused on error correction.
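The reported sensitivity/precision trade-off follows directly from how the per-image annotations are merged. A sketch with hypothetical boolean error masks, one per worker:

```python
import numpy as np

def merge_annotations(masks, min_votes=1):
    """Merge crowd error-annotation masks for one image.
    masks: iterable of boolean arrays (True = worker flagged an error).
    min_votes=1 is a union merge, raising sensitivity at the cost of
    precision, matching the trade-off reported; requiring a majority
    does the reverse."""
    votes = np.sum(np.stack(list(masks)), axis=0)
    return votes >= min_votes

# union_mask    = merge_annotations(worker_masks, min_votes=1)
# majority_mask = merge_annotations(worker_masks,
#                                   min_votes=len(worker_masks) // 2 + 1)
```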
Guo, Yingkun; Zheng, Hairong; Sun, Phillip Zhe
2015-01-01
Chemical exchange saturation transfer (CEST) MRI is a versatile imaging method that probes the chemical exchange between bulk water and exchangeable protons. CEST imaging indirectly detects dilute labile protons via bulk water signal changes following selective saturation of exchangeable protons, which offers substantial sensitivity enhancement and has sparked numerous biomedical applications. Over the past decade, CEST imaging techniques have rapidly evolved due to contributions from multiple domains, including the development of CEST mathematical models, innovative contrast agent designs, sensitive data acquisition schemes, efficient field inhomogeneity correction algorithms, and quantitative CEST (qCEST) analysis. The CEST system that underlies the apparent CEST-weighted effect, however, is complex. The experimentally measurable CEST effect depends not only on parameters such as CEST agent concentration, pH and temperature, but also on relaxation rate, magnetic field strength and more importantly, experimental parameters including repetition time, RF irradiation amplitude and scheme, and image readout. Thorough understanding of the underlying CEST system using qCEST analysis may augment the diagnostic capability of conventional imaging. In this review, we provide a concise explanation of CEST acquisition methods and processing algorithms, including their advantages and limitations, for optimization and quantification of CEST MRI experiments. PMID:25641791
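Of the processing algorithms surveyed, the simplest quantification step is magnetization transfer ratio asymmetry analysis. A minimal sketch, assuming a B0-corrected Z-spectrum with saturation offsets sorted in ascending order:

```python
import numpy as np

def mtr_asym(z_spectrum, offsets_ppm, delta_ppm, s0):
    """MTRasym(dw) = [S(-dw) - S(+dw)] / S0 for one voxel.
    z_spectrum:  saturated signal at each offset (B0-corrected)
    offsets_ppm: saturation offsets in ppm, ascending order
    delta_ppm:   offset of the labile proton pool of interest
    s0:          unsaturated reference signal"""
    offsets_ppm = np.asarray(offsets_ppm, dtype=float)
    s_pos = np.interp(+delta_ppm, offsets_ppm, z_spectrum)
    s_neg = np.interp(-delta_ppm, offsets_ppm, z_spectrum)
    return (s_neg - s_pos) / s0

# e.g. amide proton transfer at +3.5 ppm: mtr_asym(z, offs, 3.5, s0)
```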
Argirò, Renato; Diacinti, Daniele; Sacconi, Beatrice; Iannarelli, Angelo; Diacinti, Davide; Cipriani, Cristiana; Pisani, Daniela; Romagnoli, Elisabetta; Biffoni, Marco; Di Gioia, Cira; Pepe, Jessica; Bezzi, Mario; Letizia, Claudio; Minisola, Salvatore; Catalano, Carlo
2018-05-07
To evaluate the diagnostic performance of 3T MRI in comparison with ultrasound (US) and 99mTc-sestamibi scan for presurgical localisation of parathyroid adenomas (PTAs) in patients with primary hyperparathyroidism (PHPT). Fifty-seven patients affected by PHPT were prospectively enrolled and underwent US, 99mTc-sestamibi and 3T MRI. T2-weighted and post-contrast T1-weighted Iterative Decomposition of water and fat with Echo Asymmetry and Least squares estimation (IDEAL) sequences were acquired. The diagnostic performance of US, 99mTc-sestamibi and MRI in localising PTAs to the correct quadrant was compared against surgical and pathological findings. According to surgical findings, US correctly localised 41/46 PTAs (sensitivity 89.1%; specificity 97.5%; PPV 93.1%; NPV 95.6%); 99mTc-sestamibi correctly localised 38/46 PTAs (sensitivity 83.6%; specificity 98.3%; PPV 95%; NPV 93.7%). US and 99mTc-sestamibi combined had a sensitivity of 93.4% (43/46 PTAs), specificity of 98.3%, PPV of 95% and NPV of 98.3%. MRI correctly localised 45/46 PTAs (sensitivity 97.8%; specificity 97.5%; PPV 93.7%; NPV 99.2%). MRI detected six adenomas missed by 99mTc-sestamibi and two adenomas missed by US. MRI and US detected all enlarged parathyroid glands in patients with multiglandular disease. MRI identified six of seven ectopic adenomas. Our study demonstrated the high diagnostic performance of 3T MRI in preoperative PTA quadrant localisation, as well as in patients with multiglandular disease and ectopic PTAs. MRI may be preferred to adequately select patient candidates for minimally invasive parathyroidectomy (MIP). • PTA(s) quadrant localisation by 3T MRI was more accurate than US+99mTc-sestamibi. • MRI identified all enlarged glands in multiglandular disease, similarly to US. • MRI identified 6/7 ectopic PTAs, similarly to 99mTc-sestamibi. • Presurgical PTA(s) localisation by 3T MRI selects the optimal candidates for MIP.
Ultrasonography in diagnosing clinically occult groin hernia: systematic review and meta-analysis.
Kwee, Robert M; Kwee, Thomas C
2018-05-14
To provide an updated systematic review on the performance of ultrasonography (US) in diagnosing clinically occult groin hernia. A systematic search was performed in MEDLINE and Embase. Methodological quality of included studies was assessed. Accuracy data of US in detecting clinically occult groin hernia were extracted. Positive predictive value (PPV) was pooled with a random effects model. For studies investigating the performance of US in hernia type classification (inguinal vs femoral), the correctly classified proportion was assessed. Sixteen studies were included. In the two studies without verification bias, sensitivities were 29.4% [95% confidence interval (CI), 15.1-47.5%] and 90.9% (95% CI, 70.8-98.9%); specificities were 90.0% (95% CI, 80.5-95.9%) and 90.6% (95% CI, 83.0-95.6%). Verification bias or a variation of it (i.e. study limited to only subjects with definitive proof of disease status) was present in all other studies. Sensitivity, specificity, and negative predictive value (NPV) were not pooled. PPV ranged from 58.8 to 100%. Pooled PPV, based on data from ten studies with low risk of bias and no applicability concerns with respect to patient selection, was 85.6% (95% CI, 76.5-92.7%). The proportion of correctly classified hernias, based on data from four studies, ranged between 94.4% and 99.1%. Sensitivity, specificity and NPV of US in detecting clinically occult groin hernia cannot reliably be determined based on current evidence. Further studies are necessary. Accuracy may strongly depend on the examiner's skills. PPV is high. Inguinal and femoral hernias can reliably be differentiated by US. • Sensitivity, specificity and NPV of ultrasound in detecting clinically occult groin hernia cannot reliably be determined based on current evidence. • Accuracy may strongly depend on the examiner's skills. • PPV of US in detection of clinically occult groin hernia is high [pooled PPV of 85.6% (95% confidence interval, 76.5-92.7%)]. • US has very high performance in correctly differentiating between clinically occult inguinal and femoral hernia (94.4-99.1% correctly classified).
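The pooling step described ("pooled with a random effects model") can be sketched as DerSimonian-Laird pooling of logit-transformed proportions; this is a generic illustration under our own naming, not the meta-analysis's exact model:

```python
import numpy as np

def pooled_ppv(tp, fp):
    """Random-effects (DerSimonian-Laird) pooling of per-study PPVs on
    the logit scale. tp/fp: arrays of true/false positives per study."""
    tp, fp = np.asarray(tp, float), np.asarray(fp, float)
    p = (tp + 0.5) / (tp + fp + 1.0)          # continuity-corrected PPV
    y = np.log(p / (1 - p))                   # logit transform
    v = 1 / (tp + 0.5) + 1 / (fp + 0.5)       # approx. within-study variance
    w = 1 / v                                 # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    w_star = 1 / (v + tau2)                   # random-effects weights
    mu = np.sum(w_star * y) / w_star.sum()
    return 1 / (1 + np.exp(-mu))              # back-transform to proportion
```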
Alio, Jorge L; Plaza-Puche, Ana B; Javaloy, Jaime; Ayala, María José; Moreno, Luis J; Piñero, David P
2012-03-01
To compare the visual acuity outcomes and ocular optical performance of eyes implanted with a multifocal refractive intraocular lens (IOL) with an inferior segmental near add or a diffractive multifocal IOL. Prospective, comparative, nonrandomized, consecutive case series. Eighty-three consecutive eyes of 45 patients (age range, 36-82 years) with cataract were divided into 2 groups: group A, 45 eyes implanted with the Lentis Mplus LS-312 (Oculentis GmbH, Berlin, Germany); group B, 38 eyes implanted with the diffractive IOL Acri.Lisa 366D (Zeiss, Oberkochen, Germany). All patients underwent phacoemulsification followed by IOL implantation in the capsular bag. Corrected distance, intermediate, and near (with distance correction) visual acuity outcomes, contrast sensitivity, intraocular aberrations, and the defocus curve were evaluated postoperatively during a 3-month follow-up. Outcome measures included uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), uncorrected near visual acuity (UNVA), corrected distance near and intermediate visual acuity (CDNVA), contrast sensitivity, intraocular aberrations, and the defocus curve. A significant improvement in UDVA, CDVA, and UNVA was observed in both groups after surgery (P ≤ 0.04). Significantly better values of UNVA (P<0.01) and CDNVA (P<0.04) were found in group B. In the defocus curve, significantly better visual acuities were present in group A eyes at intermediate levels of defocus (P ≤ 0.04). Significantly higher amounts of postoperative intraocular primary coma and spherical aberration were found in group A (P<0.01). In addition, significantly better photopic contrast sensitivity at high spatial frequencies was observed in group A (P ≤ 0.04). The Lentis Mplus LS-312 and Acri.Lisa 366D IOLs are able to successfully restore visual function after cataract surgery. The Lentis Mplus LS-312 provided better intermediate vision and contrast sensitivity outcomes than the Acri.Lisa 366D. However, the Acri.Lisa design provided better distance and near visual outcomes and intraocular optical performance parameters. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Real-time fMRI processing with physiological noise correction - Comparison with off-line analysis.
Misaki, Masaya; Barzigar, Nafise; Zotev, Vadim; Phillips, Raquel; Cheng, Samuel; Bodurka, Jerzy
2015-12-30
While applications of real-time functional magnetic resonance imaging (rtfMRI) are growing rapidly, there are still limitations in real-time data processing compared to off-line analysis. We developed a proof-of-concept real-time fMRI processing (rtfMRIp) system utilizing a personal computer (PC) with a dedicated graphics processing unit (GPU) to demonstrate that it is now possible to perform intensive whole-brain fMRI data processing in real time. The rtfMRIp performs slice-timing correction, motion correction, spatial smoothing, signal scaling, and general linear model (GLM) analysis with multiple noise regressors, including physiological noise modeled with cardiac (RETROICOR) and respiration volume per time (RVT) regressors. The whole-brain data analysis, with more than 100,000 voxels and more than 250 volumes, is completed in less than 300 ms, much faster than the time required to acquire an fMRI volume. Real-time processing cannot be implemented identically to off-line analysis when time-course information is used, such as in slice-timing correction, signal scaling, and GLM analysis. We verified that the reduced slice-timing correction used for real-time analysis produced output comparable to off-line analysis. The real-time GLM analysis, however, showed over-fitting when the number of sampled volumes was small. Our system implemented real-time RETROICOR and RVT physiological noise corrections for the first time, and it is capable of processing these steps on all available data at a given time without the need for recursive algorithms. Comprehensive data processing in rtfMRI is possible with a PC, although the number of samples should be considered in real-time GLM analysis. Copyright © 2015 Elsevier B.V. All rights reserved.
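As a sketch of what the physiological regressors look like, RETROICOR models cardiac and respiratory noise as a low-order Fourier expansion of the measured physiological phase at each volume. The helper names below are hypothetical; the abstract does not give implementation details:

```python
import numpy as np

def retroicor_regressors(phase, order=2):
    """Low-order Fourier expansion of a physiological phase signal,
    sampled once per fMRI volume, as GLM design columns."""
    cols = []
    for m in range(1, order + 1):
        cols += [np.cos(m * phase), np.sin(m * phase)]
    return np.column_stack(cols)   # shape: volumes x (2 * order)

def cardiac_phase(vol_times, peak_times):
    """Phase in [0, 2*pi): fraction of the current cardiac cycle elapsed
    at each volume acquisition time, from detected pulse peaks."""
    t = np.asarray(peak_times, dtype=float)
    idx = np.searchsorted(t, vol_times, side="right") - 1
    idx = np.clip(idx, 0, len(t) - 2)          # stay within peak pairs
    return 2 * np.pi * (vol_times - t[idx]) / (t[idx + 1] - t[idx])
```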
Sensitivity and daily quality control of a mobile PET/CT scanner operating in 3-dimensional mode.
Belakhlef, Abdelfatihe; Church, Clifford; Fraser, Ron; Lakhanpal, Suresh
2007-12-01
This study investigated the stability of the sensitivity of a mobile PET/CT scanner and evaluated a phantom experiment intended to improve on the manufacturer's daily quality control recommendations. Unlike in-house scanners, mobile PET/CT devices are subjected to a harsher, continuously changing environment that can alter their performance. Sensitivity was investigated because it bears directly on the standardized uptake value, a key factor in cancer evaluation. A 68Ge phantom of known activity concentration was scanned 6 times a month for 11 consecutive months using a mobile PET/CT scanner that operates in 3-dimensional mode only. The scans were acquired as 2 contiguous bed positions, with raw data obtained and reconstructed using parameters identical to those used for oncology patients, including CT-derived attenuation coefficients and decay, scatter, geometry, and randoms corrections. After visual inspection of all reconstructed images, identical regions of interest were drawn on each image to obtain the activity concentration of individual slices. The original activity concentration was then decay-corrected to the scanning day, and the percentage sensitivity of the slice was calculated and graphed. The daily average sensitivity of the scanner over 11 consecutive months was also obtained and used to evaluate the stability of sensitivity. Our particular scanner showed a daily average sensitivity ranging from -8.6% to 6.5%, except for one instance when the sensitivity dropped by an unacceptable 34.8%. Our 11-month follow-up of a mobile PET/CT scanner demonstrated that its sensitivity remained within acceptable clinical limits except for one instance, when the scanner had to be serviced before patients could be imaged. To enhance our confidence in the uniformity of sensitivity across slices, we added a phantom scan to the manufacturer's daily quality control recommendations.
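The decay correction and percentage-sensitivity computation described can be sketched as follows; the variable names are illustrative, and the half-life is the commonly published value for 68Ge:

```python
import math

GE68_HALF_LIFE_DAYS = 270.95  # published 68Ge physical half-life (approx.)

def percent_sensitivity(measured_conc, ref_conc, days_elapsed):
    """Decay-correct the phantom's reference activity concentration to
    the scan day, then express the measured slice concentration as a
    percentage deviation from it."""
    decayed = ref_conc * math.exp(
        -math.log(2) * days_elapsed / GE68_HALF_LIFE_DAYS)
    return 100.0 * (measured_conc - decayed) / decayed

# A reading within roughly -8.6% to +6.5% matched the scanner's normal
# range reported above; the single -34.8% day prompted service.
```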