NASA Astrophysics Data System (ADS)
Mustak, S.
2013-09-01
The correction of atmospheric effects is essential because the visible bands of shorter wavelength are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of this paper are to find the haze values present in all spectral bands and to correct them for urban analysis. In this paper, the Improved Dark Object Subtraction method of P. Chavez (1988) is applied to correct atmospheric haze in a Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that there are at least a few pixels within an image which should be black (near 0% reflectance); such dark objects are typically clear water bodies and shadows, whose DN values are zero or close to zero in the image. Simple Dark Object Subtraction is a first-order atmospheric correction, whereas Improved Dark Object Subtraction corrects haze in terms of atmospheric scattering and path radiance based on a power law describing the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the Green band (Band 2), Red band (Band 3) and NIR band (Band 4) are 40, 34 and 18, whereas the haze values extracted using the Improved Dark Object Subtraction method are 40, 18.02 and 11.80 for the same bands. It is concluded that the Improved Dark Object Subtraction method provides more realistic haze values than the Simple Dark Object Subtraction method.
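As an illustration of the improved approach, the haze DN found in one reference band can be scaled to the other bands with a relative-scattering power law λ^(-n), where n is chosen for the atmospheric condition (n near 4 for very clear air, approaching 0.5 for very hazy air). A minimal Python sketch under these assumptions; the band-centre wavelengths and the direct scaling of DN values (rather than radiances converted through sensor gains and offsets) are simplifications, not the paper's exact procedure:

```python
import numpy as np

# Hypothetical band-centre wavelengths in micrometres (illustrative
# stand-ins, not the exact Resourcesat-1 LISS-4 band centres).
WAVELENGTHS = {"green": 0.555, "red": 0.650, "nir": 0.815}

def improved_dos_haze(ref_band, ref_haze_dn, n):
    """Scale one reference dark-object haze value to the other bands with
    the relative-scattering power law wavelength**(-n); n ~ 4 suits a very
    clear atmosphere, falling toward ~0.5 for a very hazy one."""
    lam_ref = WAVELENGTHS[ref_band]
    return {band: ref_haze_dn * (lam / lam_ref) ** (-n)
            for band, lam in WAVELENGTHS.items()}

def dos_correct(band_dn, haze_dn):
    """Subtract the band's haze value from its DN image, clipping at zero."""
    return np.clip(band_dn.astype(float) - haze_dn, 0.0, None)

haze = improved_dos_haze("green", ref_haze_dn=40.0, n=4.0)
red = np.array([[80, 75], [90, 60]])
print(haze)                          # predicted haze DN for each band
print(dos_correct(red, haze["red"]))
```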
NASA Technical Reports Server (NTRS)
Jefferies, S. M.; Duvall, T. L., Jr.
1991-01-01
A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.
A new technique for correction of simple congenital earlobe clefts: diametric hinge flaps method.
Qing, Yong; Cen, Ying; Xu, Xuewen; Chen, Junjie
2013-06-01
The earlobe plays an important part in the aesthetic appearance of the auricle. Congenital cleft earlobe may vary considerably in severity, from simple notching to extensive tissue deficiency. Most patients with cleft earlobe require surgical correction because of the abnormal appearance. In this article, a new surgical technique for correcting congenital simple cleft earlobe using diametric hinge flaps is introduced. We retrospectively reviewed 4 patients diagnosed with congenital cleft earlobe between 2008 and 2010. All of them received this new surgical method. The patients were followed up from 3 to 6 months. All patients attained relatively full-bodied earlobes with smooth contours and inconspicuous scars, and found their reconstructed earlobes to be aesthetically satisfactory. One patient experienced hypoesthesia in the area operated on, but recovered 3 months later. No other complications were noted. This simple method not only makes full use of the surrounding tissues to reconstruct full-bodied earlobes but also avoids the small notch formation caused by the linear scar contraction sometimes seen with more traditional methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriarty, Tom
The NREL cell measurement lab measures the IV parameters of cells of multiple sizes and configurations. A large contributor to error and uncertainty in Jsc, Imax, Pmax and efficiency can be irradiance spatial nonuniformity. Correcting for this nonuniformity through its precise and frequent measurement can be very time-consuming. This paper explains a simple, fast and effective method based on bicubic interpolation for determining and correcting for spatial nonuniformity, and verifies the method's efficacy.
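A sketch of the central step: interpolate a coarse nonuniformity map bicubically to the full measurement area and divide it out. Grid sizes and the division-based correction are illustrative assumptions, not the lab's actual procedure:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Coarse relative-irradiance map: a hypothetical 5 x 5 sampling of the
# test plane (a real lab would use its own grid and measurements).
xs = np.linspace(0.0, 1.0, 5)
ys = np.linspace(0.0, 1.0, 5)
coarse = 1.0 + 0.02 * np.random.default_rng(0).standard_normal((5, 5))

# Bicubic spline (kx = ky = 3) carries the coarse map to the full area.
spline = RectBivariateSpline(xs, ys, coarse, kx=3, ky=3)
fine = spline(np.linspace(0.0, 1.0, 50), np.linspace(0.0, 1.0, 50))

# Correction: divide the measured map by the local relative irradiance.
jsc_map = np.ones((50, 50))                  # placeholder measurement
jsc_corrected = jsc_map / fine
print(fine.mean(), jsc_corrected.mean())
```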
NASA Astrophysics Data System (ADS)
Prasanna, V.
2018-01-01
This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulations with respect to observations is discussed in detail. Non-linear statistical bias correction is a suitable bias correction method for climate change data because it is simple and does not add artificial uncertainty to the impact assessment of climate change scenarios for applications such as agricultural production changes. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the magnitude of change in future scenarios, which varies from one model to the other. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved bias correction method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile-mapping method gives results similar to the CDF (Weibull)-based quantile-mapping method, and the two methods are comparable. The bias correction is applied to temperature and precipitation for present climate and future projected data, which are then used in a simple statistical model to understand future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project changes in agricultural yield over India under the RCP4.5 and RCP8.5 scenarios. The results revealed a better convergence of model projections in the bias-corrected data compared to the uncorrected data. The study can be extended to localized regional domains aimed at understanding future changes in agricultural productivity with an agro-economic or a simple statistical model. The statistical model indicated that total food grain yield will increase over the Indian region in the future, by approximately 50 kg/ha under the RCP4.5 scenario and by approximately 90 kg/ha under the RCP8.5 scenario, from 2001 until the end of 2100. Many studies use bias correction techniques, but this study applies bias correction to future climate scenario data from CMIP5 models and then applies the result to crop statistics to estimate future crop yield changes over the Indian region.
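A minimal sketch of the percentile-based variant, assuming synthetic gamma-distributed data and an additive change-preservation step (the paper's exact algorithm and the Weibull-CDF variant differ in detail):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_q=99):
    """Percentile-based quantile mapping: find each future value's
    percentile in the model's historical climate, take the observed value
    at that percentile, and add the model-projected change at that
    percentile so the future signal is preserved."""
    qs = np.linspace(1.0, 99.0, n_q)
    mh = np.percentile(model_hist, qs)
    oh = np.percentile(obs_hist, qs)
    p = np.interp(model_future, mh, qs)          # percentile of each value
    return np.interp(p, qs, oh) + (model_future - np.interp(p, qs, mh))

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, 5000)                  # pseudo-observations
hist = rng.gamma(2.0, 4.0, 5000)                 # biased model baseline
future = rng.gamma(2.0, 4.5, 5000)               # projected model climate
print(obs.mean(), hist.mean(), quantile_map(hist, obs, future).mean())
```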
NASA Astrophysics Data System (ADS)
In, Hai-Jung; Kwon, Oh-Kyong
2010-03-01
A simple pixel structure using a video data correction method is proposed to compensate for electrical characteristic variations of driving thin-film transistors (TFTs) and the degradation of organic light-emitting diodes (OLEDs) in active-matrix OLED (AMOLED) displays. The proposed method senses the electrical characteristic variations of TFTs and OLEDs and stores them in external memory. The nonuniform emission current of TFTs and the aging of OLEDs are corrected by modulating video data using the stored data. Experimental results show that the emission current error due to electrical characteristic variation of driving TFTs is in the range from -63.1 to 61.4% without compensation, but is decreased to the range from -1.9 to 1.9% with the proposed correction method. The luminance error due to the degradation of an OLED is less than 1.8% when the proposed correction method is used for a 50% degraded OLED.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li
2009-09-28
A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula describing the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived from atmospheric turbulence wavefronts calculated using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and an exponential function of the atmospheric coherence length. These results are useful for applying DLCWFCs to atmospheric turbulence correction for large-aperture telescopes.
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values that take into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. The correction method was derived from a basic postulate of linear deceleration-time evolution during the initial phase of a sprint (until maximal power) and includes simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against friction loads of 0.6-1 N kg⁻¹. Non-significant differences between measured and calculated maximal power (1151 ± 169 vs. 1148 ± 170 W) and a mean error index of 1.31 ± 1.20% (ranging from 0.09% to 4.20%) demonstrated the validity of the method. Furthermore, the differences between measured maximal power and power neglecting inertia (20.4 ± 7.6%, ranging from 9.5% to 33.2%) emphasize the usefulness of correcting power in studies of anaerobic power that neglect inertia, and the value of this simple post-hoc method.
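One numerical way to realize the stated postulate: with acceleration falling linearly to zero at the time of maximal velocity, velocity follows in closed form, and peak power is the maximum of (friction force + inertial force) times velocity. An illustrative sketch, not the authors' closed-form expression; treating flywheel inertia as an equivalent mass at the measurement point is an assumption:

```python
import numpy as np

def corrected_peak_power(friction_force, inertia_eq, v_max, t_vmax, n=1000):
    """Peak power on a friction-loaded ergometer including flywheel inertia.

    Implements the postulated linearly decreasing acceleration
    a(t) = a0 * (1 - t / t_vmax), which reaches v_max at t_vmax.
    friction_force in N, inertia_eq is the flywheel inertia expressed as
    an equivalent mass (kg), velocities in m/s."""
    a0 = 2.0 * v_max / t_vmax                 # from integrating a(t) to v_max
    t = np.linspace(0.0, t_vmax, n)
    a = a0 * (1.0 - t / t_vmax)
    v = a0 * (t - t**2 / (2.0 * t_vmax))
    power = (friction_force + inertia_eq * a) * v
    return power.max()

print(corrected_peak_power(friction_force=50.0, inertia_eq=10.0,
                           v_max=10.0, t_vmax=3.0))
```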
SU-F-R-33: Can CT and CBCT Be Used Simultaneously for Radiomics Analysis?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, R; Wang, J; Zhong, H
2016-06-15
Purpose: To investigate whether CBCT and CT can be used in radiomics analysis simultaneously, and to establish a batch correction method for radiomics in two similar image modalities. Methods: Four sites including rectum, bladder, femoral head and lung were considered as regions of interest (ROIs) in this study. For each site, 10 treatment planning CT images were collected, and 10 CBCT images from the same site of the same patient were acquired at the first radiotherapy fraction. 253 radiomics features, selected by our test-retest study on rectum cancer CT (ICC > 0.8), were calculated for both CBCT and CT images in MATLAB. Simple scaling (z-score) and nonlinear correction methods were applied to the CBCT radiomics features. The Pearson correlation coefficient was calculated to analyze the correlation between radiomics features of CT and CBCT images before and after correction. Cluster analysis of mixed data (for each site, 5 CT and 5 CBCT data sets randomly selected) was implemented to validate the feasibility of merging radiomics data from CBCT and CT. The consistency of the clustering result with the site grouping was verified by a chi-square test for each dataset. Results: For simple scaling, 234 of the 253 features have correlation coefficient ρ > 0.8, among which 154 features have ρ > 0.9. For radiomics data after nonlinear correction, 240 of the 253 features have ρ > 0.8, among which 220 features have ρ > 0.9. Cluster analysis of mixed data shows that data from the four sites were almost precisely separated for simple scaling (p = 1.29 × 10⁻⁷, χ² test) and nonlinear correction (p = 5.98 × 10⁻⁷, χ² test), similar to the cluster result for CT data alone (p = 4.52 × 10⁻⁸, χ² test). Conclusion: Radiomics data from CBCT can be merged with those from CT by simple scaling or nonlinear correction for radiomics analysis.
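The simple-scaling step can be sketched in a few lines (the nonlinear correction is not shown); array shapes and values are illustrative:

```python
import numpy as np

def zscore_batch_scale(cbct, ct):
    """Simple scaling: standardize each CBCT feature with its own batch
    statistics, then map onto the CT batch's mean and spread so the two
    modalities can be pooled for radiomics analysis."""
    z = (cbct - cbct.mean(axis=0)) / cbct.std(axis=0)
    return z * ct.std(axis=0) + ct.mean(axis=0)

rng = np.random.default_rng(2)
ct = rng.normal(10.0, 2.0, size=(10, 5))        # 10 scans x 5 features
cbct = rng.normal(14.0, 3.0, size=(10, 5))      # same features, shifted batch
pooled = np.vstack([ct, zscore_batch_scale(cbct, ct)])
print(pooled.mean(axis=0).round(2))
```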
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
A Simple Spreadsheet Program for the Calculation of Lattice-Site Distributions
ERIC Educational Resources Information Center
McCaffrey, John G.
2009-01-01
A simple spreadsheet program is presented that can be used by undergraduate students to calculate the lattice-site distributions in solids. A major strength of the method is the natural way in which the correct number of ions or atoms is present, or absent, at specific lattice distances. The expanding-cube method utilized is straightforward to…
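The expanding-cube idea is reproducible in a few lines: enumerate every integer point inside a cube and tally sites by squared distance from the origin, which naturally yields shells with the correct occupancy, including the empty ones. A simple-cubic lattice is assumed for illustration:

```python
from collections import Counter

def lattice_site_distribution(n_shells=10, half_width=6):
    """Tally simple-cubic lattice sites by squared distance from the
    origin by enumerating every integer point in an expanding cube."""
    r = range(-half_width, half_width + 1)
    counts = Counter(i * i + j * j + k * k for i in r for j in r for k in r)
    counts.pop(0)                       # drop the origin itself
    return {d2: counts[d2] for d2 in sorted(counts)[:n_shells]}

# Squared distance -> site count; shells such as d^2 = 7 are naturally
# absent because 7 is not a sum of three squares.
print(lattice_site_distribution())
```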
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose: Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher-order modeling of the diffusion-weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods: Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results: As long as the signal-to-noise ratio (SNR) for the most heavily diffusion-weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion: The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
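One illustrative realization of a voxel-wise, subtraction-based correction from two independent acquisitions, under Gaussian-noise assumptions rather than the authors' exact estimator: the difference image carries only noise, so its square estimates the noise variance of the averaged signal, which is then subtracted from the squared mean:

```python
import numpy as np

def noise_corrected_signal(s1, s2):
    """Subtraction-based correction from two independent acquisitions:
    (s1 - s2)**2 / 4 estimates the noise variance of the two-acquisition
    mean, which is subtracted from the squared mean before the root."""
    mean_sq = (0.5 * (s1 + s2)) ** 2
    noise_var = 0.25 * (s1 - s2) ** 2       # expectation: sigma^2 / 2
    return np.sqrt(np.clip(mean_sq - noise_var, 0.0, None))

rng = np.random.default_rng(3)
truth = 5.0
s1 = truth + rng.normal(0.0, 2.0, (64, 64))
s2 = truth + rng.normal(0.0, 2.0, (64, 64))
raw_sq = ((0.5 * (s1 + s2)) ** 2).mean()                # biased: ~27
corr_sq = (noise_corrected_signal(s1, s2) ** 2).mean()  # ~25 = truth**2
print(raw_sq, corr_sq)
```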
Our method of correcting cryptotia.
Yanai, A; Tange, I; Bandoh, Y; Tsuzuki, K; Sugino, H; Nagata, S
1988-12-01
Our technique for the correction of cryptotia using both Z-plasty and an advancement flap is described. The main advantages are the simple design of the skin incision and its applicability to cryptotia in all but cases of severe cartilage deformity or extreme lack of skin.
Further applications of Archimedes' principle in the correction of asymmetrical breasts.
Schultz, R C; Dolezal, R F; Nolan, J
1986-02-01
Archimedes' law of buoyancy has been extended to the preoperative bedside assessment of volume differences between breasts, whatever their cause. The simple method described has proved to be a helpful aid in surgical procedures for the correction of breast asymmetry.
Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants
NASA Astrophysics Data System (ADS)
Bzdak, Adam; Holzmann, Romain; Koch, Volker
2016-12-01
In this article we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We discuss the limitations of the methods presented in these papers, specifically considering multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions, and we discuss the simplest and most straightforward methods to implement those corrections.
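For context, the baseline binomial correction that this work extends fits in a few lines: with detection efficiency ε, measured factorial moments scale as ε^k, so true moments, and from them the cumulants, can be recovered. A minimal sketch for the second-order cumulant:

```python
import numpy as np

def corrected_c2(measured_counts, eff):
    """Recover the true second-order cumulant assuming binomial detection
    with constant efficiency eff: measured factorial moments satisfy
    f_k = eff**k * F_k, and C2 = F2 + F1 - F1**2."""
    n = np.asarray(measured_counts, dtype=float)
    F1 = n.mean() / eff
    F2 = (n * (n - 1.0)).mean() / eff**2
    return F2 + F1 - F1**2

rng = np.random.default_rng(4)
true_n = rng.poisson(20.0, 200000)            # Poisson source: C2 = 20
measured = rng.binomial(true_n, 0.65)         # binomial detection losses
print(corrected_c2(measured, 0.65))           # ~20 after correction
```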
Wall proximity corrections for hot-wire readings in turbulent flows
NASA Technical Reports Server (NTRS)
Hebbar, K. S.
1980-01-01
This note describes recent successful attempts at wall proximity corrections for hot-wire measurements performed in a three-dimensional incompressible turbulent boundary layer. A simple and quite satisfactory method of estimating wall proximity effects on hot-wire readings is suggested.
Examination of multi-model ensemble seasonal prediction methods using a simple climate system
NASA Astrophysics Data System (ADS)
Kang, In-Sik; Yoo, Jin Ho
2006-02-01
A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions were performed with various prediction models, which are used to examine various issues of multi-model ensemble seasonal prediction, such as the best ways of blending multi-models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multi models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
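The distinction between the composites can be sketched briefly: the corrected composite first calibrates each model against observations over a training period (a linear fit is assumed here), then averages; the superensemble instead fits multi-model regression weights, which is where the overfitting noted above enters:

```python
import numpy as np

def corrected_composite(train_preds, train_obs, test_preds):
    """Composite after statistically correcting each model: calibrate
    every model against observations over the training period with a
    linear fit, then average the corrected predictions."""
    corrected = []
    for m in range(train_preds.shape[0]):
        slope, intercept = np.polyfit(train_preds[m], train_obs, 1)
        corrected.append(intercept + slope * test_preds[m])
    return np.mean(corrected, axis=0)

rng = np.random.default_rng(5)
obs = rng.normal(0.0, 1.0, 240)                       # proxy "truth"
models = np.array([0.5 * obs + b + rng.normal(0.0, 0.5, 240)
                   for b in (1.0, -0.5, 2.0)])        # biased "models"
print(corrected_composite(models[:, :200], obs[:200], models[:, 200:])[:5])
```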
Simple Correction of Alar Retraction by Conchal Cartilage Extension Grafts.
Jang, Yong Jun; Kim, Sung Min; Lew, Dae Hyun; Song, Seung Yong
2016-11-01
Alar retraction is a challenging condition in rhinoplasty marked by exaggerated nostril exposure and an awkward appearance. Although various methods for correcting alar retraction have been introduced, none is without drawbacks. Herein, we report a simple procedure that is both effective and safe for correcting alar retraction using only conchal cartilage grafting. Between August 2007 and August 2009, 18 patients underwent conchal cartilage extension grafting to correct alar retraction. Conchal cartilage extension grafts were fixed to the caudal margins of the lateral crura and covered with vestibular skin advancement flaps. Preoperative and postoperative photographs were reviewed and analyzed. Patient satisfaction was surveyed and categorized into 4 groups (very satisfied, satisfied, moderate, or unsatisfied). According to the survey, 8 patients were very satisfied, 9 were satisfied, and 1 considered the outcome moderate, indicating satisfaction for most patients. The average distance from the alar rim to the long axis of the nostril was reduced by 1.4 mm (3.6 to 2.2 mm). There were no complications, except in 2 cases of palpable cartilage step-off that resolved without any aesthetic problems. The conchal cartilage alar extension graft is a simple, effective method of correcting alar retraction that can be conveniently combined with aesthetic rhinoplasty; it utilizes conchal cartilage, the cartilage most similar to alar cartilage, and requires a smaller volume of cartilage harvest than previously devised methods. However, the current procedure lacks efficacy for severe alar retraction, and a longer follow-up period may be required to substantiate its enduring efficacy.
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
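The core slope correction is compact enough to state directly: with reliability ratio λ (the between-person share of the risk factor's total variance, estimable from the repeated measurements), the corrected slope is the observed slope divided by λ. A one-function sketch of that textbook relation, not the authors' supplied software:

```python
def corrected_slope(observed_slope, reliability_ratio):
    """Correct a simple linear regression slope for regression dilution:
    divide by the reliability ratio (lambda), the between-person variance
    over the total variance of the error-prone risk factor, estimated
    from repeated measurements in a reliability study."""
    return observed_slope / reliability_ratio

# Example: observed slope 0.40 and lambda = 0.8 give a corrected 0.50.
print(corrected_slope(0.40, 0.8))
```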
Prophylactic Z-plasty - correcting helical rim deformity from wedge excision.
Kim, Peter
2010-09-01
Wedge excision is a popular and well documented surgical method for treating a wide range of skin lesions and cancers of the ear in the general practice setting. In the majority of cases, this is a simple and cosmetically pleasing treatment. However, it may create helical rim deformity. This article describes a simple method of preventing such deformity using prophylactic Z-plasty.
Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo
2005-10-01
An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the image I_μb^AC reconstructed with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_μb^AC with a scatter function, followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
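The pipeline reduces to a convolve-scale-subtract pattern. In the sketch below a Gaussian kernel stands in for the scatter function and a constant for the image-based scatter fraction function; both are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc_correct(img_ac, scatter_sigma, scatter_fraction):
    """Image-based scatter correction sketch: estimate the scatter
    component by convolving the attenuation-corrected image with a
    scatter kernel, scale by a scatter fraction, then subtract."""
    scatter = scatter_fraction * gaussian_filter(img_ac, scatter_sigma)
    return np.clip(img_ac - scatter, 0.0, None)

img = np.zeros((64, 64))
img[24:40, 24:40] = 100.0                     # toy "brain" activity slab
corrected = ibsc_correct(img, scatter_sigma=8.0, scatter_fraction=0.3)
print(img.sum(), corrected.sum())
```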
Potential of bias correction for downscaling passive microwave and soil moisture data
USDA-ARS?s Scientific Manuscript database
Passive microwave satellites such as SMOS (Soil Moisture and Ocean Salinity) or SMAP (Soil Moisture Active Passive) observe brightness temperature (TB) and retrieve soil moisture at a spatial resolution coarser than that of most hydrological processes. Bias correction is proposed as a simple method to disag...
Nelson, Jonathan M.; Kinzel, Paul J.; Schmeeckle, Mark Walter; McDonald, Richard R.; Minear, Justin T.
2016-01-01
Noncontact methods for measuring water-surface elevation and velocity in laboratory flumes and rivers are presented with examples. Water-surface elevations are measured using an array of acoustic transducers in the laboratory and using laser scanning in field situations. Water-surface velocities are based on using particle image velocimetry or other machine vision techniques on infrared video of the water surface. Using spatial and temporal averaging, results from these methods provide information that can be used to develop estimates of discharge for flows over known bathymetry. Making such estimates requires relating water-surface velocities to vertically averaged velocities; the methods here use standard relations. To examine where these relations break down, laboratory data for flows over simple bumps of three amplitudes are evaluated. As anticipated, discharges determined from surface information can have large errors where nonhydrostatic effects are large. In addition to investigating and characterizing this potential error in estimating discharge, a simple method for correction of the issue is presented. With a simple correction based on bed gradient along the flow direction, remotely sensed estimates of discharge appear to be viable.
NASA Astrophysics Data System (ADS)
Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.
2012-08-01
A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations are depending on the site, sun elevation and azimuth and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
Hong, Young-Joo; Makita, Shuichi; Sugiyama, Satoshi; Yasuno, Yoshiaki
2014-01-01
Polarization mode dispersion (PMD) degrades the performance of Jones-matrix-based polarization-sensitive multifunctional optical coherence tomography (JM-OCT). The problem is especially acute for optically buffered JM-OCT, because the long fiber in the optical buffering module induces a large amount of PMD. This paper presents a method to correct the effect of PMD in JM-OCT. We first mathematically model the PMD in JM-OCT and then derive a method to correct it. The method is a combination of a simple hardware modification and subsequent software correction. The hardware modification is the introduction of two polarizers, which transform the PMD into a global complex modulation of the Jones matrix. The software correction then demodulates this global modulation. The method is validated with an experimentally obtained point spread function from a mirror sample, as well as by in vivo measurement of a human retina. PMID:25657888
Correcting for batch effects in case-control microbiome studies
Gibbons, Sean M.; Duvallet, Claire
2018-01-01
High-throughput data generation platforms, like mass-spectrometry, microarrays, and second-generation sequencing are susceptible to batch effects due to run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure where features (i.e. bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study prior to pooling data across studies. We look at how this percentile-normalization method compares to traditional meta-analysis methods for combining independent p-values and to limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile-normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses. PMID:29684016
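A minimal sketch of the percentile-normalization step for a single study, with illustrative shapes and distributions: each taxon's abundance in a case sample is replaced by its percentile within that study's control samples:

```python
import numpy as np
from scipy.stats import percentileofscore

def percentile_normalize(cases, controls):
    """Convert each taxon's abundance in case samples to its percentile
    within the control samples of the same study; the percentile-scaled
    features can then be pooled across studies."""
    out = np.empty(cases.shape, dtype=float)
    for j in range(cases.shape[1]):                 # one taxon at a time
        out[:, j] = [percentileofscore(controls[:, j], v)
                     for v in cases[:, j]]
    return out / 100.0

rng = np.random.default_rng(6)
controls = rng.lognormal(0.0, 1.0, size=(30, 4))    # 30 controls, 4 taxa
cases = rng.lognormal(0.5, 1.0, size=(20, 4))       # shifted case batch
print(percentile_normalize(cases, controls)[:2].round(2))
```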
Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia
2016-06-01
To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
Proof of concept of a simple computer-assisted technique for correcting bone deformities.
Ma, Burton; Simpson, Amber L; Ellis, Randy E
2007-01-01
We propose a computer-assisted technique for correcting bone deformities using the Ilizarov method. Our technique is an improvement over prior art in that it does not require a tracking system, navigation hardware and software, or intraoperative registration. Instead, we rely on a postoperative CT scan to obtain all of the information necessary to plan the correction and compute a correction schedule for the patient. Our laboratory experiments using plastic phantoms produced deformity corrections accurate to within 3.0 degrees of rotation and 1 mm of lengthening.
A stable second order method for training back propagation networks
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1993-01-01
A simple method for improving the learning rate of the back-propagation algorithm is described. The basis of the method is that approximate second order corrections can be incorporated in the output units. The extended method leads to significant improvements in the convergence rate.
Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi
2011-04-04
We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). The thinner oxide layer improved the LET response in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.
Influence of parameter changes to stability behavior of rotors
NASA Technical Reports Server (NTRS)
Fritzen, C. P.; Nordmann, R.
1982-01-01
The occurrence of unstable vibrations in rotating machinery requires corrective measures to improve the stability behavior. A simple approximate method is presented to determine the influence of parameter changes on the stability behavior. The method is based on an expansion of the eigenvalues in terms of system parameters. Influence coefficients show the effect of structural modifications. The method was first applied to simple nonconservative rotor models and was then verified for an unsymmetric rotor on a test rig.
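The eigenvalue expansion behind such influence coefficients has a standard first-order form: for a perturbation ΔA of the system matrix, each eigenvalue shifts by Δλ = yᴴΔA x / (yᴴx), with x and y the right and left eigenvectors. A small sketch of that general relation (the paper's rotor-specific parameterization is not reproduced):

```python
import numpy as np
from scipy.linalg import eig

def eigenvalue_sensitivity(A, dA):
    """First-order eigenvalue shifts for a system-matrix perturbation dA:
    d(lambda_i) = y_i^H dA x_i / (y_i^H x_i), with x_i and y_i the right
    and left eigenvectors of A."""
    lam, Y, X = eig(A, left=True, right=True)
    dlam = np.array([(Y[:, i].conj() @ dA @ X[:, i]) /
                     (Y[:, i].conj() @ X[:, i]) for i in range(len(lam))])
    return lam, dlam

A = np.array([[0.0, 1.0], [-4.0, -0.1]])    # lightly damped oscillator
dA = np.array([[0.0, 0.0], [0.0, -0.2]])    # structural change: more damping
lam, dlam = eigenvalue_sensitivity(A, dA)
print(lam)                                   # -0.05 +/- 2j, near instability
print(dlam.real)                             # negative: the change stabilizes
```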
A simple method for estimating frequency response corrections for eddy covariance systems
W. J. Massman
2000-01-01
A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...
ERIC Educational Resources Information Center
Greenslade, Thomas B., Jr.; Miller, Franklin, Jr.
1981-01-01
Describes method for locating images in simple and complex systems of thin lenses and spherical mirrors. The method helps students to understand differences between real and virtual images. It is helpful in discussing the human eye and the correction of imperfect vision by the use of glasses. (Author/SK)
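In the same spirit, sequential image location through a chain of thin lenses can be automated by applying the thin-lens equation element by element, carrying each image over as the next object. A generic sketch, not the paper's specific classroom method:

```python
def trace_thin_lenses(object_distance, elements):
    """Locate the final image for a chain of thin lenses on a common axis
    using 1/u + 1/v = 1/f (real-is-positive convention).  Each image
    becomes the object for the next element; a negative object distance
    marks a virtual object.  elements: list of (f, gap_to_next)."""
    u = object_distance
    v = None
    for i, (f, gap) in enumerate(elements):
        v = 1.0 / (1.0 / f - 1.0 / u)      # thin-lens equation
        if i + 1 < len(elements):
            u = gap - v                     # next object distance
    return v

# Object 30 units before an f = 10 lens, then an f = 20 lens 25 units on:
# first image at +15, second object at +10, final image virtual at -20.
print(trace_thin_lenses(30.0, [(10.0, 25.0), (20.0, 0.0)]))
```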
Jinno, Shunta; Tachibana, Hidenobu; Moriya, Shunsuke; Mizuno, Norifumi; Takahashi, Ryo; Kamima, Tatsuya; Ishibashi, Satoru; Sato, Masanori
2018-05-21
In inhomogeneous media, there is often a large systematic difference in the dose between the conventional Clarkson algorithm (C-Clarkson) for independent calculation verification and the superposition-based algorithms of treatment planning systems (TPSs). These treatment site-dependent differences increase the complexity of the radiotherapy planning secondary check. We developed a simple and effective method of heterogeneity correction integrated with the Clarkson algorithm (L-Clarkson) to account for the effects of heterogeneity in the lateral dimension, and performed a multi-institutional study to evaluate the effectiveness of the method. In the method, a 2D image reconstructed from computed tomography (CT) images is divided according to lines extending from the reference point to the edge of the multileaf collimator (MLC) or jaw collimator for each pie sector, and the radiological path length (RPL) of each line is calculated on the 2D image to obtain a tissue maximum ratio and phantom scatter factor, allowing the dose to be calculated. A total of 261 plans (1237 beams) for conventional breast and lung treatments and lung stereotactic body radiotherapy were collected from four institutions. Disagreements in dose between the on-site TPSs and a verification program using the C-Clarkson and L-Clarkson algorithms were compared. Systematic differences with the L-Clarkson method were within 1% for all sites, while the C-Clarkson method resulted in systematic differences of 1-5%. The L-Clarkson method showed smaller variations. This heterogeneity correction integrated with the Clarkson algorithm would provide a simple evaluation within the range of -5% to +5% for a radiotherapy plan secondary check.
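The lateral step rests on a radiological path length per sector line: density-scaled distance accumulated from the reference point toward the collimator edge. A generic sketch on a 2D relative-density image; sampling and geometry are illustrative assumptions:

```python
import numpy as np

def radiological_path_length(density, start, end, step=0.5):
    """Accumulate relative-density x step length along one sector line
    (nearest-neighbour sampling on a 2D image; units are illustrative)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = float(np.linalg.norm(end - start))
    n = max(int(length / step), 1)
    ts = (np.arange(n) + 0.5) / n
    rpl = 0.0
    for t in ts:
        i, j = np.round(start + t * (end - start)).astype(int)
        rpl += density[i, j] * (length / n)
    return rpl

density = np.ones((100, 100))
density[40:60, :] = 0.25                     # lung-like low-density band
# Straight 80-pixel path crossing the band: 60*1.0 + 20*0.25 = 65.
print(radiological_path_length(density, (10, 50), (90, 50)))
```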
A vibration correction method for free-fall absolute gravimeters
NASA Astrophysics Data System (ADS)
Qian, J.; Wang, G.; Wu, K.; Wang, L. J.
2018-02-01
An accurate determination of gravitational acceleration, usually approximated as 9.8 m s⁻², has long played an important role in metrology, geophysics, and geodesy. Absolute gravimetry has experienced rapid development in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that, for an identical group of drop data, the modified method proposed in this paper achieves better correction with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
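Golden-section search extends to two parameters by alternating one-dimensional searches over each. The sketch below shows that coordinate-wise variant, with a toy objective standing in for the misfit between corrected drop data and the transfer-function prediction; the authors' exact 2D scheme may differ:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """1-D golden-section search for a unimodal minimum on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

def golden_2d(f, xbounds, ybounds, sweeps=20):
    """Coordinate-wise 2-D golden-section search: alternate 1-D searches
    over each parameter while holding the other fixed."""
    x = sum(xbounds) / 2.0
    y = sum(ybounds) / 2.0
    for _ in range(sweeps):
        x = golden_section_min(lambda u: f(u, y), *xbounds)
        y = golden_section_min(lambda v: f(x, v), *ybounds)
    return x, y

# Toy objective standing in for the residual of the vibration correction.
print(golden_2d(lambda u, v: (u - 1.2)**2 + (v - 0.7)**2 + 0.1 * u * v,
                (0.0, 3.0), (0.0, 2.0)))
```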
Baudrexel, Simon; Nöth, Ulrike; Schüre, Jan-Rüdiger; Deichmann, Ralf
2018-06-01
The variable flip angle method derives T1 maps from radiofrequency-spoiled gradient-echo data sets acquired with different flip angles α. Because the method assumes validity of the Ernst equation, insufficient spoiling of transverse magnetization yields errors in T1 estimation, depending on the chosen radiofrequency-spoiling phase increment (Δϕ). This paper presents a versatile correction method that uses modified flip angles α′ to restore the validity of the Ernst equation. Spoiled gradient-echo signals were simulated for three commonly used phase increments Δϕ (50°/117°/150°), different values of α, repetition time (TR), and T1, and a T2 of 85 ms. For each parameter combination, α′ (for which the Ernst equation yielded the same signal) and a correction factor C_Δϕ(α, TR, T1) = α′/α were determined. C_Δϕ was found to be independent of T1 and was fitted as a polynomial C_Δϕ(α, TR), allowing α′ to be calculated for any protocol using this Δϕ. The accuracy of the correction method for T2 values deviating from 85 ms was also determined. The method was tested in vitro and in vivo for variable flip angle scans with different acquisition parameters. The technique considerably improved the accuracy of variable flip angle-based T1 maps in vitro and in vivo. The proposed method allows for a simple correction of insufficient spoiling in gradient-echo data. The required polynomial parameters are supplied for three common Δϕ.
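To see where the modified angles enter, recall the linearized Ernst-equation fit used in variable flip angle T1 mapping: S/sin α = E1 (S/tan α) + M0 (1 - E1), with E1 = exp(-TR/T1); substituting α′ = C_Δϕ · α for each nominal angle restores the model under imperfect spoiling. A sketch with an optional correction hook (the actual polynomial coefficients come from the paper and are not reproduced here):

```python
import numpy as np

def vfa_t1(signals, flip_deg, tr, correction=None):
    """Variable flip angle T1 via the linearized Ernst equation:
    S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), fitted over flip angles.
    `correction` maps each nominal flip angle to a modified angle a'
    (a placeholder hook for the paper's C_dphi factors)."""
    a = np.radians([correction(x) if correction else x for x in flip_deg])
    y = signals / np.sin(a)
    x = signals / np.tan(a)
    e1 = np.polyfit(x, y, 1)[0]           # slope = exp(-TR/T1)
    return -tr / np.log(e1)

# Synthetic test with ideal spoiling: T1 = 1000 ms, TR = 15 ms.
tr, t1, m0 = 15.0, 1000.0, 1.0
e1 = np.exp(-tr / t1)
flips = np.array([4.0, 18.0])
s = m0 * np.sin(np.radians(flips)) * (1 - e1) / (1 - e1 * np.cos(np.radians(flips)))
print(vfa_t1(s, flips, tr))               # ~1000 ms
```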
Wiegmann, Vincent; Martinez, Cristina Bernal; Baganz, Frank
2018-04-24
Establish a method to indirectly measure evaporation in microwell-based cell culture systems and show that the proposed method allows compensating for liquid losses in fed-batch processes. A correlation between evaporation and the concentration of Na⁺ was found (R² = 0.95) when using the 24-well-based miniature bioreactor system (micro-Matrix) for a batch culture with GS-CHO. Based on these results, a method was developed to counteract evaporation with periodic water additions based on measurements of the Na⁺ concentration. Implementation of this method resulted in a reduction of the relative liquid loss after 15 days of a fed-batch cultivation from 36.7 ± 6.7% without volume corrections to 6.9 ± 6.5% with volume corrections. A procedure was established to indirectly measure evaporation through a correlation with the level of Na⁺ ions in solution, and a simple formula was derived to account for liquid losses.
Yaseen, Syed Mohammed; Acharya, Ravindranath
2012-01-01
The crossbite is among the commonly encountered dental irregularities that constitute a developing malocclusion. It is seen very often during the primary and mixed dentition phases, and if left untreated a simple problem may be transformed into a more complex one. Different techniques have been used to correct anterior and posterior crossbites in the mixed dentition. This case report describes the use of the hexa helix, a modified version of the quad helix, for the management of anterior crossbite and bilateral posterior crossbite in the early mixed dentition. Correction was achieved within 15 weeks with no damage to the teeth or the marginal periodontal tissue. The procedure is a simple and effective method for treating anterior and bilateral posterior crossbites simultaneously. PMID:23119188
Removal of ring artifacts in microtomography by characterization of scintillator variations.
Vågberg, William; Larsson, Jakob C; Hertz, Hans M
2017-09-18
Ring artifacts reduce image quality in tomography, and arise from faulty detector calibration. In microtomography, we have identified that ring artifacts can arise due to high-spatial frequency variations in the scintillator thickness. Such variations are normally removed by a flat-field correction. However, as the spectrum changes, e.g. due to beam hardening, the detector response varies non-uniformly introducing ring artifacts that persist after flat-field correction. In this paper, we present a method to correct for ring artifacts from variations in scintillator thickness by using a simple method to characterize the local scintillator response. The method addresses the actual physical cause of the ring artifacts, in contrary to many other ring artifact removal methods which rely only on image post-processing. By applying the technique to an experimental phantom tomography, we show that ring artifacts are strongly reduced compared to only making a flat-field correction.
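For reference, the conventional flat-field correction is the dark-subtracted division below; when sample and flat images share the same spectrum, thickness variations cancel exactly, as the toy example shows. The artifacts discussed above appear when beam hardening changes the spectrum so the per-pixel response no longer matches the flat:

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Conventional flat-field correction: remove the dark offset and
    divide by the normalized flat.  This cannot remove rings caused by
    thickness-dependent spectral response once the beam hardens."""
    flat_net = flat - dark
    return (raw - dark) / (flat_net / flat_net.mean())

rng = np.random.default_rng(7)
thickness = 1.0 + 0.05 * rng.standard_normal(256)   # scintillator profile
dark = np.full(256, 10.0)
flat = dark + 1000.0 * thickness
raw = dark + 600.0 * thickness                      # same spectrum as flat
# Same spectral response in raw and flat: the rings cancel, std ~ 0.
print(np.std(flat_field_correct(raw, flat, dark)))
```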
BASIC: A Simple and Accurate Modular DNA Assembly Method.
Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S
2017-01-01
Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].
Geometric correction of satellite data using curvilinear features and virtual control points
NASA Technical Reports Server (NTRS)
Algazi, V. R.; Ford, G. E.; Meyer, D. I.
1979-01-01
A simple, yet effective procedure for the geometric correction of partial Landsat scenes is described. The procedure is based on the acquisition of actual and virtual control points from the line printer output of enhanced curvilinear features. The accuracy of this method compares favorably with that of the conventional approach in which an interactive image display system is employed.
NASA Astrophysics Data System (ADS)
Tian, D.; Medina, H.
2017-12-01
Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS; Gneiting et al., 2005), and Bayesian Model Averaging (BMA; Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
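The simplest of the four techniques fits in two lines: estimate the mean forecast error over a training period and subtract it from new forecasts. A sketch of that additive variant (multiplicative scaling is a common alternative; the paper's exact formulation is not reproduced):

```python
import numpy as np

def bias_correct_eto(forecast_train, obs_train, forecast_new):
    """Simple additive bias correction of ETo forecasts: subtract the
    mean training-period error from new forecasts."""
    bias = forecast_train.mean() - obs_train.mean()
    return forecast_new - bias

rng = np.random.default_rng(8)
obs = rng.normal(4.0, 1.0, 365)               # mm/day, pseudo-observations
fcst = obs + 0.8 + rng.normal(0.0, 0.5, 365)  # systematically high model
print(bias_correct_eto(fcst[:300], obs[:300], fcst[300:]).mean())
```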
Standing on the shoulders of giants: improving medical image segmentation via bias correction.
Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul
2010-01-01
We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction
NASA Astrophysics Data System (ADS)
Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang
2010-08-01
A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. The method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that the compensated dose factor is affected only by the nearest neighbors, for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate the method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved.
NASA Technical Reports Server (NTRS)
Feinberg, L.; Wilson, M.
1993-01-01
To correct for the spherical aberration in the Hubble Space Telescope primary mirror, five anamorphic aspheric mirrors representing correction for three scientific instruments have been fabricated as part of the development of the corrective-optics space telescope axial-replacement instrument (COSTAR). During the acceptance tests of these mirrors at the vendor, a quick and simple method for verifying the asphere surface figure was developed. The technique has been used on three of the aspheres relating to the three instrument prescriptions. Results indicate that the three aspheres are correct to the limited accuracy expected of this test.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi
We improve the ellipticity of re-smeared artificial image (ERA) method of point-spread function (PSF) correction in weak lensing shear analysis in order to treat realistic shapes of galaxies and the PSF. This is done by re-smearing the PSF and the observed galaxy image using a re-smearing function (RSF), which allows us to use a new PSF with a simple shape and to correct the PSF effect without any approximations or assumptions. We perform a numerical test to show that the method, applied to galaxies and PSFs with complicated shapes, can correct the PSF effect with a systematic error of less than 0.1%. We also apply the ERA method to real data of the Abell 1689 cluster to confirm that it is able to detect the systematic weak lensing shear pattern. The ERA method requires less than 0.1 or 1 s to correct the PSF for each object in the numerical test and the real data analysis, respectively.
A symmetric multivariate leakage correction for MEG connectomes
Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.
2015-01-01
Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
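The symmetric (Löwdin) orthogonalisation at the heart of the method finds the set of mutually orthogonal time-courses closest to the originals in a least-squares sense, treating all ROIs symmetrically rather than privileging an ordering. A minimal sketch for an (n_rois, n_samples) array; restoring the original row norms afterwards is a presentational choice here:

```python
import numpy as np

def symmetric_orthogonalize(ts):
    """Replace the ROI-by-time matrix's singular values with ones to get
    the nearest set of mutually orthogonal time-courses (Loewdin
    orthogonalization), then restore each row's original norm."""
    U, s, Vt = np.linalg.svd(ts, full_matrices=False)
    ortho = U @ Vt                               # nearest orthonormal rows
    norms = np.linalg.norm(ts, axis=1, keepdims=True)
    return ortho * norms

rng = np.random.default_rng(9)
source = rng.standard_normal((3, 1000))
mixing = np.array([[1.0, 0.3, 0.0],              # leakage between ROIs
                   [0.2, 1.0, 0.1],
                   [0.0, 0.3, 1.0]])
leaky = mixing @ source
corrected = symmetric_orthogonalize(leaky)
print(np.corrcoef(corrected)[0, 1])              # near zero after correction
```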
NASA Astrophysics Data System (ADS)
Tchitchekova, Deyana S.; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel
2014-07-01
A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression and shear stress are determined by means of atomistic simulations with the Climbing Image-Nudge Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that, linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ~3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
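The method's core step, summing precomputed single-stress effects, can be sketched directly; the tabulated per-component functions below are hypothetical linearized stand-ins for the stored NEB results:

```python
import numpy as np

def barrier_change_lcss(stress, basis_effects):
    """Linear Combination of Stress States: split a general stress tensor
    into simple uniaxial and shear components and sum the tabulated
    per-component barrier changes.  Linear superposition of the stored
    single-stress results is the method's core assumption."""
    comps = {"xx": stress[0, 0], "yy": stress[1, 1], "zz": stress[2, 2],
             "xy": stress[0, 1], "yz": stress[1, 2], "xz": stress[0, 2]}
    return sum(basis_effects[c](s) for c, s in comps.items())

# Hypothetical linearized effects (eV per GPa) for illustration; the real
# method stores full dE(sigma) curves from atomistic NEB calculations.
slopes = {"xx": 0.010, "yy": -0.004, "zz": -0.004,
          "xy": 0.020, "yz": 0.005, "xz": 0.005}
effects = {c: (lambda k: (lambda s: k * s))(k) for c, k in slopes.items()}
sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 0.0, 0.0],
                  [0.0, 0.0, -0.5]])         # GPa
print(barrier_change_lcss(sigma, effects))   # 0.022 eV for this example
```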
Lee, Chong Suh; Chung, Sung Soo; Park, Se Jun; Kim, Dong Min; Shin, Seong Kee
2014-01-01
This study aimed at deriving a lordosis predictive equation using the pelvic incidence and to establish a simple prediction method of lumbar lordosis for planning lumbar corrective surgery in Asians. Eighty-six asymptomatic volunteers were enrolled in the study. The maximal lumbar lordosis (MLL), lower lumbar lordosis (LLL), pelvic incidence (PI), and sacral slope (SS) were measured. The correlations between the parameters were analyzed using Pearson correlation analysis. Predictive equations of lumbar lordosis through simple regression analysis of the parameters and simple predictive values of lumbar lordosis using PI were derived. The PI strongly correlated with the SS (r = 0.78), and a strong correlation was found between the SS and LLL (r = 0.89), and between the SS and MLL (r = 0.83). Based on these correlations, the predictive equations of lumbar lordosis were: SS = 0.80 + 0.74 PI (r = 0.78, R² = 0.61), LLL = 5.20 + 0.87 SS (r = 0.89, R² = 0.80), and MLL = 17.41 + 0.96 SS (r = 0.83, R² = 0.68). When PI was between 30° and 35°, 40° and 50°, and 55° and 60°, the equations predicted that MLL would be PI + 10°, PI + 5° and PI, and LLL would be PI - 5°, PI - 10° and PI - 15°, respectively. This simple calculation method can provide a more appropriate and simpler prediction of lumbar lordosis for Asian populations. The prediction of lumbar lordosis should be used as a reference for surgeons planning to restore the lumbar lordosis in lumbar corrective surgery.
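The reported regression equations are simple enough to code directly; the following sketch just evaluates them (function name is illustrative).

```python
def predicted_lordosis(pi_deg):
    """Predicted sacral slope and lumbar lordosis (degrees) from pelvic
    incidence, using the regression equations reported in the abstract."""
    ss = 0.80 + 0.74 * pi_deg
    lll = 5.20 + 0.87 * ss
    mll = 17.41 + 0.96 * ss
    return ss, lll, mll

# e.g. PI = 50 deg gives MLL ~ 53.7 and LLL ~ 38.1, consistent with the
# rules of thumb MLL ~ PI + 5 and LLL ~ PI - 10 quoted for PI of 40-50 deg
print(predicted_lordosis(50.0))
```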
Correcting deformities of the aged earlobe.
Connell, Bruce F
2005-01-01
An earlobe that appears aged or malpositioned can sabotage the results of a well performed face lift. The most frequently noted sign of a naturally aged earlobe is increased length. Improper planning of face lift incisions may also result in disfigurement of the ear. The author suggests simple excisional techniques to correct the aged earlobe, as well as methods to avoid subsequent earlobe distortion when performing a face lift.
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
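The "simple distance weighting procedure" for the initial guess field can be sketched as inverse-distance weighting; the power parameter and function layout are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def idw_initial_guess(xs, ys, values, grid_x, grid_y, power=2.0):
    """Interpolate known station values to grid nodes with inverse-distance
    weights, producing an initial guess field for subsequent correction."""
    out = np.zeros((len(grid_y), len(grid_x)))
    for j, gy in enumerate(grid_y):
        for i, gx in enumerate(grid_x):
            d2 = (xs - gx) ** 2 + (ys - gy) ** 2
            if np.any(d2 == 0):           # grid node coincides with a station
                out[j, i] = values[np.argmin(d2)]
            else:
                w = 1.0 / d2 ** (power / 2)
                out[j, i] = np.sum(w * values) / np.sum(w)
    return out
```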
Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B
2018-06-01
To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokhrel, D; Badkul, R; Jiang, H
2014-06-01
Purpose: Lung SBRT uses hypo-fractionated doses in small non-IMRT fields with tissue-heterogeneity-corrected plans. An independent MU verification is mandatory for safe and effective delivery of the treatment plan. This report compares planned MU obtained from the iPlan XVMC algorithm against a spreadsheet-based hand calculation using the most commonly used simple TMR-based method. Methods: Treatment plans of 15 patients who underwent MC-based lung SBRT to 50 Gy in 5 fractions for PTV V100% = 95% were studied. The ITV was delineated on MIP images based on 4D-CT scans. PTVs (ITV + 5 mm margins) ranged from 10.1 to 106.5 cc (average = 48.6 cc). MC-SBRT plans were generated using a combination of non-coplanar conformal arcs/beams with the iPlan XVMC algorithm (BrainLAB iPlan ver. 4.1.2) for a Novalis-TX consisting of micro-MLCs and a 6 MV-SRS (1000 MU/min) beam. These plans were re-computed using the heterogeneity-corrected Pencil-Beam (PB-hete) algorithm without changing any beam parameters, such as MLCs/MUs. The dose ratio PB-hete/MC gave beam-by-beam inhomogeneity correction factors (ICFs): individual correction. For an independent second check, MC MUs were verified using the TMR-based hand calculation, which systematically underestimated MC MUs by ∼5%; applying the resulting average ICF gave an average correction. Also, the first 10 MC plans were verified with an ion-chamber measurement using a homogeneous phantom. Results: For both beams/arcs, mean PB-hete dose was systematically overestimated by 5.5±2.6% and mean hand-calculated MU systematically underestimated by 5.5±2.5% compared to XVMC. With individual correction, mean hand-calculated MUs matched XVMC within -0.3±1.4%/0.4±1.4% for beams/arcs, respectively. After the average 5% correction, hand-calculated MUs matched XVMC within 0.5±2.5%/0.6±2.0% for beams/arcs, respectively. A small dependence on tumor volume (TV)/field size (FS) was also observed. Ion-chamber measurements were within ±3.0%. Conclusion: PB-hete overestimates dose to lung tumor relative to XVMC. The XVMC algorithm is much more complex and accurate with tissue heterogeneities. Measurement at the machine is time-consuming and needs extra resources; direct measurement of dose for heterogeneous treatment plans is also not yet clinically practiced. This simple correction-based method was very helpful for independent second checks of MC lung-SBRT plans and is routinely used in our clinic. A look-up table can be generated to include TV/FS dependence in ICFs.
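The two correction options can be sketched as below. The direction of the correction (multiplying the hand-calculated MU by the ICF) is inferred from the reported ~5% systematic underestimate and is an assumption; names are illustrative.

```python
def corrected_mu_individual(mu_hand, pb_dose, mc_dose):
    """Individual correction: scale the TMR-based hand-calculated MU by the
    beam-by-beam inhomogeneity correction factor ICF = PB-hete dose / MC dose."""
    icf = pb_dose / mc_dose
    return mu_hand * icf

def corrected_mu_average(mu_hand, average_icf=1.05):
    """Average correction: apply the ~5% mean ICF reported in the abstract."""
    return mu_hand * average_icf
```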
A general method to correct PET data for tissue metabolites using a dual-scan approach.
Gunn, R N; Yap, J T; Wells, P; Osman, S; Price, P; Jones, T; Cunningham, V J
2000-04-01
This article presents and analyses a general method of correcting for the presence of radiolabeled metabolites from a parent radiotracer in tissue during PET scanning. The method is based on a dual-scan approach, i.e., a parent scan together with an independent supplementary scan in which the radiolabeled metabolite of interest itself is administered. The method corrects for the presence of systemically derived radiolabeled metabolite delivered to the tissues of interest through the blood. Data from the supplementary scan are analyzed to obtain the tissue impulse response function for the metabolite. The time course of the radiolabeled metabolite in plasma in the parent scan is convolved with its tissue impulse response function to derive a correction term. This is not a simple subtraction technique but one that takes account of the different time-activity curves of the radiolabeled metabolite in the two scans. The method, its implications, and its limitations are discussed with respect to [11C]thymidine and its principal metabolite 11CO2. The general method, based on a dual-scan approach, can be used to correct for radiolabeled metabolites in tissues of interest during PET scanning. The correction accounts for radiolabeled metabolites that are derived systemically and delivered to the tissues of interest through the blood.
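The convolution step described above can be sketched as follows; uniform sampling and the variable names are illustrative assumptions.

```python
import numpy as np

def metabolite_correction(tissue_parent, plasma_metab, impulse_response, dt):
    """Subtract the systemic metabolite signal from a tissue time-activity
    curve, per the dual-scan approach sketched in the abstract.

    tissue_parent    : tissue TAC measured in the parent scan
    plasma_metab     : plasma metabolite curve during the parent scan
    impulse_response : metabolite tissue impulse response, estimated from
                       the supplementary scan
    dt               : common sampling interval of the curves
    """
    # discrete approximation of the convolution integral
    metab_in_tissue = np.convolve(plasma_metab, impulse_response)[:len(tissue_parent)] * dt
    return tissue_parent - metab_in_tissue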
Complex Langevin method: When can it be trusted?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aarts, Gert; Seiler, Erhard; Stamatescu, Ion-Olimpiu
2010-03-01
We analyze to what extent the complex Langevin method, which is in principle capable of solving the so-called sign problem, can be considered as reliable. We give a formal derivation of the correctness and then point out various mathematical loopholes. The detailed study of some simple examples leads to practical suggestions about the application of the method.
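A standard toy example of the method (not necessarily one of the paper's) is the Gaussian "action" S(z) = σz²/2 with complex σ, for which the exact result ⟨z²⟩ = 1/σ is known; the step size, run length, and thermalisation cut below are illustrative.

```python
import numpy as np

def complex_langevin_z2(sigma, n_steps=200_000, eps=1e-3, seed=0):
    """Complex Langevin estimate of <z^2> for S(z) = sigma * z**2 / 2.

    Drift is -dS/dz = -sigma*z; the noise is real Gaussian. For Re(sigma) > 0
    the process should converge to the exact value 1/sigma."""
    rng = np.random.default_rng(seed)
    z = 1.0 + 0.0j
    acc, count = 0.0 + 0.0j, 0
    for i in range(n_steps):
        z = z - eps * sigma * z + np.sqrt(2 * eps) * rng.standard_normal()
        if i > n_steps // 10:        # discard thermalisation
            acc += z * z
            count += 1
    return acc / count

# usage: complex_langevin_z2(1.0 + 0.5j) should be close to 1/(1.0 + 0.5j)
```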
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors into the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. In this method only simple mathematical operations are used, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by two orders of magnitude.
Zweig-rule-satisfying inelastic rescattering in B decays to pseudoscalar mesons
NASA Astrophysics Data System (ADS)
Łach, P.; Żenczykowski, P.
2002-09-01
We discuss all contributions from Zweig-rule-satisfying SU(3)-symmetric inelastic final-state-interaction (FSI)-induced corrections in B decays to ππ, πK, K K̄, πη(η′), and Kη(η′). We show how all of these FSI corrections lead to a simple redefinition of the amplitudes, permitting the use of a simple diagram-based description in which, however, weak phases may enter in a modified way. The inclusion of FSI corrections admitted by the present data allows an arbitrary relative phase between the penguin and tree short-distance amplitudes. The FSI-induced error of the method, in which the value of the weak phase γ is to be determined by combining future results from B+, B0d, and B0s decays to Kπ, is estimated to be of the order of 5° for γ ~ 50°-60°.
Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.
Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping
2016-01-01
Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more significant at smaller thicknesses. We propose a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin layer samples, and offer a simple and practical correction strategy which is convenient for clinicians to use.
W. J. Massman
2001-01-01
First, my thanks to Dr. Ullar Rannik for his interest and insights in my recent study of spectral corrections and associated eddy covariance flux loss (Massman, 2000, henceforth denoted by M2000). His comments are important and germane to the attenuation of low frequencies of the turbulent cospectra due to recursive filtering and block averaging. Dr. Rannik addresses...
Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru
NASA Astrophysics Data System (ADS)
Manzanas, R.; Gutiérrez, J. M.
2018-05-01
This work assesses the suitability of a first simple attempt at process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on the northwestern part of Peru and bias-correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation which may help to discriminate between precipitation affected by different processes, we introduce here an empirical quantile-quantile mapping method which runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet further research is needed on the suitability of similar approaches for other regions, seasons and/or variables.
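A minimal sketch of conditioning an empirical quantile-quantile mapping on a circulation index follows: one transfer function is fitted per SOI phase. The binary phase split, grid of quantiles, and names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def qq_map(pred, obs_train, pred_train):
    """Empirical quantile-quantile mapping: map predicted values onto the
    observed climatology via matched empirical quantiles."""
    q = np.linspace(0.0, 1.0, 101)
    return np.interp(pred, np.quantile(pred_train, q), np.quantile(obs_train, q))

def soi_conditioned_qq(pred, soi_state, obs_train, pred_train, soi_train):
    """Fit one empirical transfer function per SOI phase and apply the
    matching one to each forecast; phase labels are assumed shared between
    the training and forecast periods."""
    out = np.full(pred.shape, np.nan)
    for phase in np.unique(soi_train):
        m_train = soi_train == phase
        m_pred = soi_state == phase
        out[m_pred] = qq_map(pred[m_pred], obs_train[m_train], pred_train[m_train])
    return out
```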
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose.
Rahman, Mohammad Mizanur; Charoenlarpnopparut, Chalie; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse
2017-09-12
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms having open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors on irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has higher correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple and found to be fast and efficient in correctly classifying data of training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms.
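One plausible reading of the MMM idea can be sketched as below: each class is summarised by per-feature minimum, maximum, and mean, and a sample is assigned to the nearest class mean only if it lies inside that class's min-max bounds, otherwise it is rejected. The exact decision rule used by the authors may differ.

```python
import numpy as np

class MMMClassifier:
    """Illustrative minimum/maximum/mean (MMM) classifier with rejection."""

    def fit(self, X, y):
        # per class: (feature-wise min, max, mean) over the training samples
        self.stats_ = {c: (X[y == c].min(0), X[y == c].max(0), X[y == c].mean(0))
                       for c in np.unique(y)}
        return self

    def predict(self, x):
        best, best_d = None, np.inf
        for c, (lo, hi, mu) in self.stats_.items():
            if np.all(x >= lo) and np.all(x <= hi):   # inside class bounds
                d = np.linalg.norm(x - mu)
                if d < best_d:
                    best, best_d = c, d
        return best  # None -> rejected as extraneous odor (false-alarm reduction)
```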
NASA Astrophysics Data System (ADS)
Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele
2018-05-01
A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object’s complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm and is also numerically stable in the presence of noise.
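For context, the standard single-material Paganin step that the abstract's correction extends can be sketched as follows; the partial-coherence correction term itself is not given in the abstract and is not reproduced here. Unit magnification is assumed and names are illustrative.

```python
import numpy as np

def paganin_thickness(I, I0, delta, mu, z, pixel_size):
    """Standard single-material (Paganin-type) phase retrieval from one
    propagation-based image I with flat field I0; delta and mu are the
    refractive index decrement and linear attenuation coefficient, z the
    propagation distance. The coherence correction described above would
    enter as an extra term in the Fourier-space denominator."""
    ny, nx = I.shape
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    filt = 1.0 / (1.0 + z * delta * k2 / mu)
    retrieved = np.real(np.fft.ifft2(np.fft.fft2(I / I0) * filt))
    retrieved = np.maximum(retrieved, 1e-12)   # guard against noise-induced negatives
    return -np.log(retrieved) / mu             # projected thickness
```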
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M
Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid-filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency, and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, the MU/min in the daily 1000 SRS calibration was chosen to match the average MU/min of the VMAT plan. The usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%, 0.40%, 1.17%] for 6MV and [0.29%, 1.40%, 4.57%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%, 1.63%, 3.05%] for 6MV and [1.00%, 4.80%, 11.2%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. On average, pass rates of the simple daily calibration corrections were within 1% of the complex Matlab corrections. Conclusion: Ion recombination effects can be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching the daily 1000 SRS calibration MU/min to the average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
Correction factors for self-selection when evaluating screening programmes.
Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H
2016-03-01
In screening programmes there is a recognized bias introduced through participant self-selection (the healthy screenee bias). Methods used to evaluate screening programmes include intention-to-screen, per-protocol, and the "post hoc" approach in which, after introducing screening for everyone, the only evaluation option is participants versus non-participants. All methods are prone to bias through self-selection. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias. Simple calculations revealed that these corrections are actually all identical and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. Using as an example the German Neuroblastoma Screening Study, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach; however, in this situation not all required data are available. External data or further assumptions may be required for estimation.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabec, Jiri; van Dam, Hubertus JJ; Pittner, Jiri
2012-03-28
The recently proposed Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011)] to approximate Multi-Reference Coupled Cluster (MRCC) energies can be commonly applied to any type of MRCC theory based on the Jeziorski-Monkhorst [B. Jeziorski, H.J. Monkhorst, Phys. Rev. A 24, 1668 (1981)] exponential Ansatz. In this letter we report on the performance of a simple USS correction to the Brillouin-Wigner MRCC (BW-MRCC) formalism employing single and double excitations (BW-MRCCSD). It is shown that the resulting formalism (USS-BW-MRCCSD), which uses the manifold of single and double excitations to construct the correction, can be related to a posteriori corrections utilized in routine BW-MRCCSD calculations. In several benchmark calculations we compare the results of the USS-BW-MRCCSD method with results of the BW-MRCCSD approach employing a posteriori corrections and with results obtained with the Full Configuration Interaction (FCI) method.
A Simple Method to Control Positive Baseline Trend within Data Nonoverlap
ERIC Educational Resources Information Center
Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.
2014-01-01
Nonoverlap is widely used as a statistical summary of data; however, these analyses rarely correct unwanted positive baseline trend. This article presents and validates the graph rotation for overlap and trend (GROT) technique, a hand calculation method for controlling positive baseline trend within an analysis of data nonoverlap. GROT is…
Rangan, Aaditya V; McGrouther, Caroline C; Kelsoe, John; Schork, Nicholas; Stahl, Eli; Zhu, Qian; Krishnan, Arjun; Yao, Vicky; Troyanskaya, Olga; Bilaloglu, Seda; Raghavan, Preeti; Bergen, Sarah; Jureus, Anders; Landen, Mikael
2018-05-14
A common goal in data analysis is to sift through a large data matrix and detect any significant submatrices (i.e., biclusters) that have a low numerical rank. We present a simple algorithm for tackling this biclustering problem. Our algorithm accumulates information about 2-by-2 submatrices (i.e., 'loops') within the data matrix, and focuses on rows and columns of the data matrix that participate in an abundance of low-rank loops. We demonstrate, through analysis and numerical experiments, that this loop-counting method performs well in a variety of scenarios, outperforming simple spectral methods in many situations of interest. Another important feature of our method is that it can easily be modified to account for aspects of experimental design which commonly arise in practice. For example, our algorithm can be modified to correct for controls, categorical and continuous covariates, as well as sparsity within the data. We demonstrate these practical features with two examples; the first drawn from gene-expression analysis and the second drawn from a much larger genome-wide association study (GWAS).
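The core loop-counting idea can be sketched as below: a 2-by-2 submatrix is "low rank" when its normalised determinant is small, and rows and columns participating in many such loops are flagged. The sampling scheme, threshold, and normalisation are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def loop_scores(A, n_pairs=20_000, tol=1e-2, rng=None):
    """Accumulate low-rank-loop counts on the rows and columns of A."""
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    row_score, col_score = np.zeros(m), np.zeros(n)
    for _ in range(n_pairs):
        i, k = rng.choice(m, 2, replace=False)     # random row pair
        j, l = rng.choice(n, 2, replace=False)     # random column pair
        det = A[i, j] * A[k, l] - A[i, l] * A[k, j]
        norm = np.abs(A[[i, i, k, k], [j, l, j, l]]).max() ** 2 + 1e-12
        if abs(det) / norm < tol:                  # rank-1-consistent loop
            row_score[[i, k]] += 1
            col_score[[j, l]] += 1
    return row_score, col_score   # high scores suggest candidate bicluster rows/columns
```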
Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation
Burgess, C. P.; Holman, R.; Tasinato, G.
2016-01-26
Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift and so is also IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H⁴ at late times and so does not generate a dramatic gravitational back-reaction.
A simple and effective solution to the constrained QM/MM simulations
NASA Astrophysics Data System (ADS)
Takahashi, Hideaki; Kambe, Hiroyuki; Morita, Akihiro
2018-04-01
It is a promising extension of the quantum mechanical/molecular mechanical (QM/MM) approach to incorporate the solvent molecules surrounding the QM solute into the QM region to ensure an adequate description of the electronic polarization of the solute. However, the solvent molecules in the QM region inevitably diffuse into the MM bulk during the QM/MM simulation. In this article, we developed a simple and efficient method, referred to as the "boundary constraint with correction (BCC)," to prevent the diffusion of the solvent water molecules by means of a constraint potential. The point of the BCC method is to compensate the error in a statistical property due to the bias potential by adding a correction term obtained through a set of QM/MM simulations. The BCC method is designed so that the effect of the bias potential completely vanishes when the QM solvent is identical with the MM solvent. Furthermore, the desirable conditions, that is, the continuities of energy and force and the conservations of energy and momentum, are fulfilled in principle. We applied the QM/MM-BCC method to a hydronium ion (H3O+) in aqueous solution to construct the radial distribution function (RDF) of the solvent around the solute. It was demonstrated that the correction term fairly compensated the error and brought the RDF into good agreement with the result given by an ab initio molecular dynamics simulation.
Singh, Harpreet; Maurya, Raj Kumar; Thakkar, Surbhi
2016-12-01
Complete transposition of teeth is a rather rare phenomenon. After correction of a transposed and malaligned lateral incisor and canine, attainment of appropriate individual antagonistic tooth torque is indispensable, which many orthodontists consider to be a herculean task. Here, a novel method is proposed which demonstrates the use of a Spec reverse torquing auxiliary as an effective adjunctive aid in conjunction with pre-adjusted edgewise brackets.
Automatic classification of bottles in crates
NASA Astrophysics Data System (ADS)
Aas, Kjersti; Eikvil, Line; Bremnes, Dag; Norbryhn, Andreas
1995-03-01
This paper presents a statistical method for classification of bottles in crates for use in automatic return bottle machines. For the machines to reimburse the correct deposit, reliable recognition is important. The images are acquired by a laser range scanner coregistering the distance to the object and the strength of the reflected signal. The objective is to identify the crate and the bottles from a library with a number of legal types. Bottles with significantly different sizes are separated using quite simple methods, while a more sophisticated recognizer is required to distinguish the more similar bottle types. Good results have been obtained when testing the method developed on bottle types which are difficult to distinguish using simple methods.
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
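One plausible reading of the double-sweep correction is that, to first order, the tracking error of the lock-in output changes sign with the sweep direction, so averaging an up-sweep and a down-sweep measurement at matched frequencies cancels it. The sketch below rests on that assumption; the interpolation grid and names are illustrative.

```python
import numpy as np

def double_sweep_correct(freqs_up, resp_up, freqs_down, resp_down):
    """Average an up-sweep and a down-sweep frequency response on a common
    grid to cancel the first-order dynamic (filter-lag) error.

    freqs_down/resp_down are assumed recorded in descending frequency order
    and are reversed to make them ascending for interpolation."""
    grid = np.linspace(max(freqs_up.min(), freqs_down.min()),
                       min(freqs_up.max(), freqs_down.max()), 512)
    up = np.interp(grid, freqs_up, resp_up)
    down = np.interp(grid, freqs_down[::-1], resp_down[::-1])
    return grid, 0.5 * (up + down)
```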
[Constricted ear therapy with free auricular composite grafts].
Liu, Tun; Zhang, Lian-sheng; Zhuang, Hong-xing; Zhang, Ke-yuan
2004-03-01
To present a simple and effective therapy for unilateral constricted ear. Free composite auricular grafts from the normal side were transplanted to the constricted ear (15 patients, 15 ears), lengthening the helix, exposing the scapha, and correcting the deformity. All 15 composite grafts survived. The helix was lengthened, the scapha exposed, the normal ear reduced, the constricted ear augmented, and the two ears became symmetric. This method is simple and the results are satisfactory.
Correction tool for Active Shape Model based lumbar muscle segmentation.
Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio
2015-08-01
In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide faster corrections with a low number of interactions, and a user-independent solution. In this work we present a new interactive correction method for correcting the image segmentation. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle segmentation from Magnetic Resonance Images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.
A case report on the remodelling technique for the earlobe using a soft splint.
Vaiude, Partha N; Anthony, Edwin T; Syed, Mobin; Ilyas, Syed
2008-01-01
Correcting earlobe deformities often presents an aesthetic challenge to the surgeon. The described technique presents a simple, accurate and cost effective method of remodelling soft tissue defects of the earlobe using a soft splint.
Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling seems to be the most efficient method for correcting sampling bias and should be advised in most cases.
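A minimal sketch of systematic sampling of occurrence records follows: at most one record is kept per grid cell, so spatially clustered (over-sampled) areas cannot dominate model training. The cell size and tie-breaking rule are illustrative assumptions.

```python
import numpy as np

def systematic_sample(lon, lat, cell_size):
    """Return indices of occurrence records thinned to one per grid cell."""
    cells = {}
    for i, (x, y) in enumerate(zip(lon, lat)):
        key = (int(np.floor(x / cell_size)), int(np.floor(y / cell_size)))
        cells.setdefault(key, i)      # first record per cell wins; could randomise
    return np.asarray(sorted(cells.values()))

# usage: keep = systematic_sample(records_lon, records_lat, cell_size=0.5)
```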
The Effect of Underwater Imagery Radiometry on 3D Reconstruction and Orthoimagery
NASA Astrophysics Data System (ADS)
Agrafiotis, P.; Drakonakis, G. I.; Georgopoulos, A.; Skarlatos, D.
2017-02-01
The work presented in this paper investigates the effect of the radiometry of underwater imagery on automating the 3D reconstruction and the produced orthoimagery. The main aim is to investigate whether pre-processing of the underwater imagery improves the 3D reconstruction using automated SfM-MVS software or not. Since the processing of images either separately or in batch is a time-consuming procedure, it is critical to determine the necessity of implementing colour correction and enhancement before the SfM-MVS procedure or directly to the final orthoimage when the orthoimagery is the deliverable. Two different test sites were used to capture imagery ensuring different environmental conditions, depth and complexity. Three different image correction methods are applied: a very simple automated method using Adobe Photoshop, a developed colour correction algorithm using the CLAHE method (Zuiderveld, 1994), and an implementation of the algorithm described in Bianco et al. (2015). The point clouds produced using the initial and the corrected imagery are then compared and evaluated.
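A CLAHE-based enhancement of the kind referenced above can be sketched with OpenCV by equalising only the lightness channel in LAB space, leaving the colour channels untouched; the clip limit and tile size are illustrative parameters, not those of the cited algorithm.

```python
import cv2

def clahe_colour_correction(bgr, clip_limit=2.0, tile=(8, 8)):
    """Apply CLAHE to the L channel of a BGR image in LAB space."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

# usage: enhanced = clahe_colour_correction(cv2.imread("frame.png"))
```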
Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir
2018-06-01
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
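For background, the *simple* additive row/column bias model that the paper generalises can be removed by median polish, as sketched below; the novel interaction-aware models of the paper are not reproduced here, and the names are illustrative.

```python
import numpy as np

def median_polish_correct(plate, n_iter=10, multiplicative=False):
    """Remove a simple additive (or, on the log scale, multiplicative)
    row/column bias from a multiwell plate matrix by median polish.

    For multiplicative=True the plate values must be positive."""
    x = np.log(plate) if multiplicative else plate.astype(float)
    resid = x.copy()
    for _ in range(n_iter):
        resid -= np.median(resid, axis=1, keepdims=True)   # row effects
        resid -= np.median(resid, axis=0, keepdims=True)   # column effects
    corrected = resid + np.median(x)                       # restore plate level
    return np.exp(corrected) if multiplicative else corrected
```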
Simple vertex correction improves GW band energies of bulk and two-dimensional crystals
NASA Astrophysics Data System (ADS)
Schmidt, Per S.; Patrick, Christopher E.; Thygesen, Kristian S.
2017-11-01
The GW self-energy method has long been recognized as the gold standard for quasiparticle (QP) calculations of solids in spite of the fact that the neglect of vertex corrections and the use of a density-functional theory starting point lack rigorous justification. In this work we remedy this situation by including a simple vertex correction that is consistent with a local-density approximation starting point. We analyze the effect of the self-energy by splitting it into short-range and long-range terms which are shown to govern, respectively, the center and size of the band gap. The vertex mainly improves the short-range correlations and therefore has a small effect on the band gap, while it shifts the band gap center up in energy by around 0.5 eV, in good agreement with experiments. Our analysis also explains how the relative importance of short- and long-range interactions in structures of different dimensionality is reflected in their QP energies. Inclusion of the vertex comes at practically no extra computational cost and even improves the basis set convergence compared to GW. Taken together, the method provides an efficient and rigorous improvement over the GW approximation.
A computational framework for automation of point defect calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei
We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
Radiometric calibration of Landsat Thematic Mapper multispectral images
Chavez, P.S.
1989-01-01
A main problem encountered in radiometric calibration of satellite image data is correcting for atmospheric effects. Without this correction, an image digital number (DN) cannot be converted to a surface reflectance value. In this paper the accuracy of a calibration procedure, which includes a correction for atmospheric scattering, is tested. Two simple methods, a stand-alone and an in situ sky radiance measurement technique, were used to derive the HAZE DN values for each of the six reflective Thematic Mapper (TM) bands. The DNs of two Landsat TM images of Phoenix, Arizona were converted to surface reflectances.
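The DN-to-reflectance conversion with a haze (dark-object) offset can be sketched as follows. The formula is the standard top-of-atmosphere reflectance conversion; subtracting the haze DN before applying the sensor gain is one common convention and is an assumption here, as are the names.

```python
import numpy as np

def dos_reflectance(dn, haze_dn, gain, offset, esun, d, sun_zenith_deg):
    """Convert digital numbers to haze-corrected reflectance.

    dn            : raw band DN values
    haze_dn       : dark-object (haze) DN for this band
    gain, offset  : sensor radiometric calibration coefficients
    esun          : exoatmospheric solar irradiance for the band
    d             : Earth-Sun distance in astronomical units
    sun_zenith_deg: solar zenith angle at acquisition
    """
    radiance = gain * (dn - haze_dn) + offset          # haze-corrected radiance
    return (np.pi * radiance * d**2 /
            (esun * np.cos(np.radians(sun_zenith_deg))))
```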
Dobie, Robert A; Wojcik, Nancy C
2015-01-01
Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20–75 years. Results The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
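Deriving an age-correction table by polynomial regression can be sketched as follows; the polynomial degree, reference age, and names are illustrative assumptions, not the published choices.

```python
import numpy as np

def fit_age_correction(ages, median_thresholds, ref_age=20, degree=3):
    """Fit a simple polynomial to median better-ear thresholds vs. age and
    tabulate the threshold shift relative to a reference age.

    Returns (table_ages, correction_dB) for ages 20-75."""
    p = np.poly1d(np.polyfit(ages, median_thresholds, degree))
    table_ages = np.arange(20, 76)
    return table_ages, p(table_ages) - p(ref_age)   # dB of age-related shift
```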
NASA Astrophysics Data System (ADS)
Hubert, Maxime; Pacureanu, Alexandra; Guilloud, Cyril; Yang, Yang; da Silva, Julio C.; Laurencin, Jerome; Lefebvre-Joud, Florence; Cloetens, Peter
2018-05-01
In X-ray tomography, ring-shaped artifacts present in the reconstructed slices are an inherent problem degrading the global image quality and hindering the extraction of quantitative information. To overcome this issue, we propose a strategy for suppression of ring artifacts originating from the coherent mixing of the incident wave and the object. We discuss the limits of validity of the empty beam correction in the framework of a simple formalism. We then deduce a correction method based on two-dimensional random sample displacement, with minimal cost in terms of spatial resolution, acquisition, and processing time. The method is demonstrated on bone tissue and on a hydrogen electrode of a ceramic-metallic solid oxide cell. Compared to the standard empty beam correction, we obtain high quality nanotomography images revealing detailed object features. The resulting absence of artifacts allows straightforward segmentation and posterior quantification of the data.
Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...
2017-08-12
For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), the computational fluid dynamics-discrete element method (CFD-DEM) and the two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of the coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than that of CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hopeful that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may improve the current predictions of cluster distribution.
Calibrating the ECCO ocean general circulation model using Green's functions
NASA Technical Reports Server (NTRS)
Menemenlis, D.; Fu, L. L.; Lee, T.; Fukumori, I.
2002-01-01
Green's functions provide a simple yet effective method to test and calibrate General Circulation Model (GCM) parameterizations, to study and quantify model and data errors, to correct model biases and trends, and to blend estimates from different solutions and data products.
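The core of a Green's-function calibration can be sketched as a linear least-squares problem: each column of the Green's function matrix is the model response to a unit perturbation of one parameter, estimated by finite differences from perturbed model runs. The names and the finite-difference construction are illustrative assumptions.

```python
import numpy as np

def greens_function_calibration(baseline, perturbed_runs, deltas, data):
    """Estimate optimal parameter corrections for a GCM.

    baseline       : model-equivalent of the observations for the baseline run
    perturbed_runs : list of model-equivalents, one per perturbed parameter
    deltas         : parameter perturbation sizes used for those runs
    data           : observations
    """
    # columns of G approximate d(model)/d(parameter)
    G = np.column_stack([(run - baseline) / d
                         for run, d in zip(perturbed_runs, deltas)])
    eta, *_ = np.linalg.lstsq(G, data - baseline, rcond=None)
    return eta   # parameter corrections to apply to the model
```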
Slant correction for handwritten English documents
NASA Astrophysics Data System (ADS)
Shridhar, Malayappan; Kimura, Fumitaka; Ding, Yimei; Miller, John W. V.
2004-12-01
Optical character recognition of machine-printed documents is an effective means for extracting textual material. While the level of effectiveness for handwritten documents is much poorer, progress is being made in more constrained applications such as personal checks and postal addresses. In these applications a series of steps is performed for recognition, beginning with removal of skew and slant. Slant, in which characters are tilted some amount from vertical, is a characteristic unique to the writer and varies from writer to writer. The second attribute is skew, which arises from the inability of the writer to write on a horizontal line. Several methods for average slant estimation and correction have been proposed and discussed in earlier papers. However, analysis of many handwritten documents reveals that slant is a local property and varies even within a word. The use of an average slant for the entire word often results in overestimation or underestimation of the local slant. This paper describes three methods for local slant estimation, namely the simple iterative method, the high-speed iterative method, and the 8-directional chain code method. The experimental results show that the proposed methods can estimate and correct local slant more effectively than average slant correction.
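Once a (local or average) slant angle has been estimated, removing it amounts to a horizontal shear; the sketch below applies that shear to a word image. The sign convention and row-wise shift are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def deslant(img, slant_deg):
    """Remove slant from a 2D grayscale word image by a horizontal shear
    x' = x - (H - 1 - y) * tan(slant), mapping strokes back toward vertical."""
    h, w = img.shape
    shear = np.tan(np.radians(slant_deg))
    out = np.zeros_like(img)
    for y in range(h):
        shift = int(round((h - 1 - y) * shear))
        shift = max(-w, min(w, shift))     # clamp to image width
        if shift >= 0:
            out[y, :w - shift] = img[y, shift:]
        else:
            out[y, -shift:] = img[y, :w + shift]
    return out
```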
Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C
2011-09-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
Simple measurement of lenticular lens quality for autostereoscopic displays
NASA Astrophysics Data System (ADS)
Gray, Stuart; Boudreau, Robert A.
2013-03-01
Lenticular lens based autostereoscopic 3D displays are finding many applications in digital signage and consumer electronics devices. A high quality 3D viewing experience requires the lenticular lens be properly aligned with the pixels on the display device so that each eye views the correct image. This work presents a simple and novel method for rapidly assessing the quality of a lenticular lens to be used in autostereoscopic displays. Errors in lenticular alignment across the entire display are easily observed with a simple test pattern where adjacent views are programmed to display different colors.
Jung, Eun-hong; Jang, Seok-heun; Lee, Jae-won
2011-01-01
Purpose The aim of this study was to categorize concealed penis and buried penis by preoperative physical examination including the manual prepubic compression test and to describe a simple surgical technique to correct buried penis that was based on surgical experience and comprehension of the anatomical components. Materials and Methods From March 2007 to November 2010, 17 patients were diagnosed with buried penis after differentiation of this condition from concealed penis. The described surgical technique consisted of a minimal incision and simple fixation of the penile shaft skin and superficial fascia to the prepubic deep fascia, without degloving the penile skin. Results The mean age of the patients was 10.2 years, ranging from 8 years to 15 years. The median follow-up was 19 months (range, 5 to 49 months). The mean penile lengths were 1.8 cm (range, 1.1 to 2.5 cm) preoperatively and 4.5 cm (range, 3.3 to 5.8 cm) postoperatively. The median difference between preoperative and postoperative penile lengths was 2.7 cm (range, 2.1 to 3.9 cm). There were no serious intra- or postoperative complications. Conclusions With the simple anchoring of the penopubic skin to the prepubic deep fascia, we obtained successful subjective and objective outcomes without complications. We suggest that this is a promising surgical method for selected patients with buried penis.
Application of the backward extrapolation method to pulsed neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
2017-09-23
We report that particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g., californium) external neutron source but also to a pulsed external neutron source (e.g., a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows one to obtain from the measured detector counts both the dead-time value and the real detector counts.
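For context, the constant-rate case mentioned above does admit simple analytic corrections. The sketch below shows the two standard textbook dead-time models; both are assumptions of this illustration, not formulas taken from the paper.

```python
import math

# m: measured rate (counts/s), tau: dead time (s); values are invented.

def true_rate_nonparalyzable(m, tau):
    # Non-paralyzable model: m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)
    return m / (1.0 - m * tau)

def true_rate_paralyzable(m, tau, iters=100):
    # Paralyzable model: m = n * exp(-n*tau); recover the smaller root
    # by fixed-point iteration n <- m * exp(n*tau)
    n = m
    for _ in range(iters):
        n = m * math.exp(n * tau)
    return n

print(true_rate_nonparalyzable(9.0e4, 1.0e-6))  # ~9.89e4 counts/s
print(true_rate_paralyzable(9.0e4, 1.0e-6))     # slightly higher estimate
```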
An efficient higher order family of root finders
NASA Astrophysics Data System (ADS)
Petkovic, Ljiljana D.; Rancic, Lidija; Petkovic, Miodrag S.
2008-06-01
A one parameter family of iterative methods for the simultaneous approximation of simple complex zeros of a polynomial, based on a cubically convergent Hansen-Patrick's family, is studied. We show that the convergence of the basic family of the fourth order can be increased to five and six using Newton's and Halley's corrections, respectively. Since these corrections use the already calculated values, the computational efficiency of the accelerated methods is significantly increased. Further acceleration is achieved by applying the Gauss-Seidel approach (single-step mode). One of the most important problems in solving nonlinear equations, the construction of initial conditions which provide both the guaranteed and fast convergence, is considered for the proposed accelerated family. These conditions are computationally verifiable; they depend only on the polynomial coefficients, its degree and initial approximations, which is of practical importance. Some modifications of the considered family, providing the computation of multiple zeros of polynomials and simple zeros of a wide class of analytic functions, are also studied. Numerical examples demonstrate the convergence properties of the presented family of root-finding methods.
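The Hansen-Patrick-based family itself is not reproduced here, but the simultaneous-approximation idea it builds on can be illustrated with the classical Weierstrass (Durand-Kerner) iteration; updating each zero in place corresponds to the Gauss-Seidel (single-step) mode mentioned above.

```python
import numpy as np

def weierstrass(coeffs, z0, iters=100, tol=1e-12):
    """Simultaneous approximation of all simple zeros of a monic polynomial.
    coeffs: coefficients, highest degree first (leading 1); z0: distinct
    complex starting points, one per zero. Updating z[i] in place is the
    Gauss-Seidel (single-step) mode, which typically speeds convergence."""
    z = np.array(z0, dtype=complex)
    p = np.poly1d(coeffs)
    for _ in range(iters):
        z_prev = z.copy()
        for i in range(z.size):
            denom = np.prod([z[i] - z[j] for j in range(z.size) if j != i])
            z[i] -= p(z[i]) / denom      # Weierstrass correction W_i
        if np.max(np.abs(z - z_prev)) < tol:
            break
    return z

# zeros of z^3 - 1 from rough, distinct starting points
print(weierstrass([1, 0, 0, -1], [1.1 + 0.1j, -0.4 + 0.9j, -0.4 - 0.9j]))
```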
Bashi, Ramin Haj Zargar; Baghdadi, Taghi; Shirazi, Mehdi Ramezan; Abdi, Reza; Aslani, Hossein
2016-03-01
Congenital talipes equinovarus may be the most common congenital orthopedic condition requiring treatment. Nonoperative treatment, including several different methods, is generally accepted as the first step in deformity correction. Ignacio Ponseti introduced his nonsurgical approach to the treatment of clubfoot in the early 1940s. The method is reportedly successful in treating clubfoot in patients up to 9 years of age. However, whether age at the beginning of treatment affects the rate of effective correction and relapse is unknown. We applied the Ponseti method successfully, with some modifications, in 11 patients with a mean age of 11.2 years (range, 6 to 19 years) with neglected and untreated clubfeet. The mean follow-up was 15 months (12 to 36 months). Correction was achieved with a mean of nine casts (six to 13). Clinically, 17 of 18 feet (94.4%) were considered to have achieved a good result with no need for further surgery. This method of treatment is very simple and also cheap in developing countries with limited financial and social resources for health services. To the best of the authors' knowledge, such a modified Ponseti method has not previously been applied to neglected clubfeet in older children and adolescents.
Distortion Correction of OCT Images of the Crystalline Lens: GRIN Approach
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-01-01
Purpose: To propose a method to correct Optical Coherence Tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution, and to explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Methods: 2-D images of 9 human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared to the performance of two other existing methods (simple division; refraction correction assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak-to-valley, and lens thickness shifts from the nominal data. Results: Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface, in terms of RMS and peak values, with errors less than 6 μm and 13 μm, respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. Conclusions: The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in 2-D, it is expected that 3-D imaging will fully exploit the potential of the technique. The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations. PMID:22466105
Removing flicker based on sparse color correspondences in old film restoration
NASA Astrophysics Data System (ADS)
Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran
2018-04-01
Archived film is an indispensable part of our cultural record, and digital restoration of damaged film is now a mainstream practice. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our model combines multiple frames to establish a simple correction model and includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. Because it combines multiple frames, the method takes the continuity of the input sequence into account, and the experimental results show that it removes fading flicker efficiently.
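A deliberately simplified stand-in for the paper's low-rank factorization is a per-frame gain/offset fit to the sparse correspondences; the data layout and NaN encoding of missing entries below are assumptions.

```python
import numpy as np

def flicker_gains(C, ref=0):
    """C: (n_frames, n_tracks) matrix of sparse color samples along tracks,
    with np.nan marking missing entries (the 'matrix with missing entries'
    in the abstract). For each frame, fit a gain/offset mapping its samples
    onto the reference frame; apply as frame*gain + offset to equalize
    brightness. This is a simplified stand-in, not the paper's method."""
    params = []
    for f in range(C.shape[0]):
        ok = ~np.isnan(C[f]) & ~np.isnan(C[ref])
        A = np.c_[C[f, ok], np.ones(ok.sum())]
        gain, offset = np.linalg.lstsq(A, C[ref, ok], rcond=None)[0]
        params.append((gain, offset))
    return params

# invented sample values: three frames, three tracked points
C = np.array([[1.00, 0.90, 0.70],
              [1.10, 1.00, 0.85],
              [0.95, np.nan, 0.80]])
print(flicker_gains(C, ref=0))
```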
Windschuh, Johannes; Siero, Jeroen C.W.; Zaiss, Moritz; Luijten, Peter R.; Klomp, Dennis W.J.; Hoogduin, Hans
2017-01-01
High-field MRI is beneficial for chemical exchange saturation transfer (CEST) in terms of high SNR, CNR, and chemical shift dispersion. These advantages may, however, be counterbalanced by the increased transmit field inhomogeneity normally associated with high-field MRI. The relatively high sensitivity of the CEST contrast to B1 inhomogeneity necessitates the development of correction methods, which is essential for the clinical translation of CEST. In this work, two B1 correction algorithms for the most studied CEST effects, amide-CEST and nuclear Overhauser enhancement (NOE), were analyzed. Both methods rely on fitting the multi-pool Bloch-McConnell equations to the densely sampled CEST spectra. In the first method, the correction is achieved by using a linear B1 correction of the calculated amide and NOE CEST effects. The second method uses the Bloch-McConnell fit parameters and the desired B1 amplitude to recalculate the CEST spectra, followed by the calculation of B1-corrected amide and NOE CEST effects. Both algorithms were systematically studied in Bloch-McConnell simulations and in human data, and compared with the earlier proposed ideal interpolation-based B1 correction method. In the low-B1 regime of 0.15-0.50 μT (average power), a simple linear model was sufficient to mitigate B1 inhomogeneity effects on a par with the interpolation B1 correction, as demonstrated by a reduced correlation of the CEST contrast with B1 in both the simulations and the experiments. PMID:28111824
Extracting muon momentum scale corrections for hadron collider experiments
NASA Astrophysics Data System (ADS)
Bodek, A.; van Dyne, A.; Han, J. Y.; Sakumoto, W.; Strelnikov, A.
2012-10-01
We present a simple method for the extraction of corrections for bias in the measurement of the momentum of muons in hadron collider experiments. Such bias can originate from a variety of sources such as detector misalignment, software reconstruction bias, and uncertainties in the magnetic field. The two-step method uses the mean ⟨1/p_T^μ⟩ for muons from Z → μμ decays to determine the momentum scale corrections in bins of charge, η, and φ. In the second step, the corrections are tuned by using the average invariant mass ⟨M_Z^μμ⟩ of Z → μμ events in the same bins of charge, η, and φ. The forward-backward asymmetry of Z/γ* → μμ pairs as a function of μ+μ− mass, and the φ distribution of Z bosons in the Collins-Soper frame, are used to ascertain that the corrections remove the bias in the momentum measurements for positively versus negatively charged muons. By taking the sum and difference of the momentum scale corrections for positive and negative muons, we isolate additive corrections to 1/p_T^μ that may originate from misalignments and multiplicative corrections that may originate from mis-modeling of the magnetic field (∫B·dL). This method has recently been used in the CDF experiment at Fermilab and in the CMS experiment at the Large Hadron Collider at CERN.
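The sum/difference step can be made concrete in a few lines; the bin layout and the example values below are invented for illustration.

```python
import numpy as np

# c_plus and c_minus hold fitted corrections to 1/pT for mu+ and mu- in
# matching (eta, phi) bins. Per the abstract, the difference isolates the
# additive (misalignment-like) part, which flips sign with charge, and the
# sum isolates the multiplicative (B-field-like) part, which does not.
c_plus = np.array([8.0e-4, -2.0e-4, 5.0e-4])    # invented values, 1/GeV
c_minus = np.array([-6.0e-4, -3.0e-4, 1.0e-4])

additive = 0.5 * (c_plus - c_minus)        # alignment-type bias per bin
multiplicative = 0.5 * (c_plus + c_minus)  # field/material-type bias per bin
print(additive, multiplicative)
```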
NASA Astrophysics Data System (ADS)
Wang, Kunpeng; Tan, Handong; Zhang, Zhiyong; Li, Zhiqiang; Cao, Meng
2017-05-01
Resistivity anisotropy and full-tensor controlled-source audio-frequency magnetotellurics (CSAMT) have gradually become hot research topics. However, much of the current anisotropy research for tensor CSAMT focuses only on the one-dimensional (1D) solution. As the subsurface is rarely 1D, it is necessary to study the three-dimensional (3D) model response. The staggered-grid finite difference method is an effective simulation method for 3D electromagnetic forward modelling. Previous studies have suggested using the divergence correction to constrain the iterative process when using a staggered-grid finite difference model, so as to accelerate the 3D forward computation and enhance its accuracy. However, the traditional divergence correction method was developed assuming an isotropic medium. This paper improves the traditional isotropic divergence correction method and its derivation to meet the tensor CSAMT requirements for anisotropy, using the volume integral of the divergence equation. This approach is more intuitive, enabling a simple derivation of a discrete equation and then calculation of the coefficients of the anisotropic divergence correction equation. We validate our 3D computational results by comparing them to results computed using an anisotropic, controlled-source 2.5D program. The 3D resistivity anisotropy model allows us to evaluate the consequences of using the divergence correction at different frequencies and for two orthogonal finite-length sources. Our results show that the divergence correction plays an important role in 3D tensor CSAMT resistivity anisotropy research and offers a solid foundation for inversion of CSAMT data collected over an anisotropic body.
A simple chromatographic method for purification of egg lecithin.
Nielsen, J R
1980-06-01
Egg lecithin was purified from the CdCl2-lecithin complex by column chromatography on Alumina. The yield from 5 eggs was 2.8 g. The purified lecithin had correct chemical values for pure lecithin and a fatty acid composition similar to lecithin prepared by other methods. The method probably can be adapted for purification of other lipids containing the phosphocholine moiety and for purification of synthetic lecithin.
Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay
2013-01-01
Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
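The modeling strategy, one predictor versus predictor plus one anthropometric covariate, can be sketched as an ordinary least-squares comparison; all data below are simulated placeholders, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 36
incl = rng.uniform(0, 60, n)                   # inclinometer angle (deg)
arm = rng.normal(27, 2, n)                     # lower-arm length (cm), assumed
flexion = 0.8 * incl + 1.5 * arm + rng.normal(0, 4, n)  # "gold standard"

def r2(X, y):
    """Variance explained by an ordinary least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(r2(np.c_[incl, np.ones(n)], flexion))       # inclinometer only
print(r2(np.c_[incl, arm, np.ones(n)], flexion))  # + anthropometric covariate
```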
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eken, T; Mayeda, K; Hofstetter, A
A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., Lg and surface waves), by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, we found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction for 10 narrow frequency bands ranging between 0.02 and 2.0 Hz. For higher frequencies, however, 2-D path corrections will be necessary and will be the subject of a future study. After calibrating the stations ISP, ISKB, and MALT for local and regional distances, single-station moment-magnitude estimates (M_W) derived from the coda spectra were in excellent agreement with those determined from multi-station waveform modeling inversions of long-period data, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend M_W estimates to significantly smaller events which could not otherwise be waveform modeled due to poor signal-to-noise ratio at long periods and sparse station coverage. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.
NASA Astrophysics Data System (ADS)
Alessandri, S.; Monti, G.
2008-05-01
A simple procedure is proposed for the assessment of reinforced rectangular concrete columns under combined biaxial bending and axial loads and for the design of a correct amount of FRP-strengthening for underdesigned concrete sections. Approximate closed-form equations are developed based on the load contour method originally proposed by Bresler for reinforced concrete sections. The 3D failure surface is approximated along its contours, at a constant axial load, by means of equations given as the sum of the acting/resisting moment ratio in the directions of principal axes of the sections, raised to a power depending on the axial load, the steel reinforcement ratio, and the section shape. The method is extended to FRP-strengthened sections. Moreover, to make it possible to apply the load contour method in a more practical way, simple closed-form equations are developed for rectangular reinforced concrete sections with a two-way steel reinforcement and FRP strengthenings on each side. A comparison between the approach proposed and the fiber method (which is considered exact) shows that the simplified equations correctly represent the section interaction diagram.
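A minimal check against a Bresler-type load contour looks like the following; the exponent value is assumed here, whereas in the paper it depends on the axial load, the steel reinforcement ratio, and the section shape.

```python
def load_contour_ok(Mx, My, Mux, Muy, alpha):
    """Bresler-type load-contour check at a given axial load:
    (Mx/Mux)**alpha + (My/Muy)**alpha <= 1, where Mux and Muy are the
    uniaxial moment capacities about the principal axes and alpha is the
    contour exponent (value below is an assumption for illustration)."""
    return (abs(Mx) / Mux) ** alpha + (abs(My) / Muy) ** alpha <= 1.0

# invented example: biaxial demand against uniaxial capacities, N*mm
print(load_contour_ok(Mx=120e6, My=80e6, Mux=200e6, Muy=150e6, alpha=1.5))
```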
Estimation of Skidding Offered by Ackermann Mechanism
NASA Astrophysics Data System (ADS)
Rao, Are Padma; Venkatachalam, Rapur
2016-04-01
Steering for a four-wheeler is commonly provided by the Ackermann mechanism. Though it cannot always provide correct steering conditions, it is very popular because of its simplicity. Correct steering avoids skidding of the tires and thereby extends their lives, as tire wear is reduced. In this paper, the Ackermann mechanism is analyzed for its performance. A method of estimating the skidding due to improper steering is proposed. Two parameters are identified with which the length of skidding can be estimated.
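The classical correct-steering condition offers a simple proxy for this analysis; the residual computed below is not the paper's two-parameter skid-length estimate, just the standard kinematic condition.

```python
import math

def ackermann_error(delta_inner_deg, delta_outer_deg, track_w, wheelbase_l):
    """Deviation from the classical correct-steering condition
    cot(delta_outer) - cot(delta_inner) = w / L. A residual of zero means
    no kinematic skidding; a nonzero residual indicates the slip the
    linkage imposes at these steering angles."""
    di = math.radians(delta_inner_deg)
    do = math.radians(delta_outer_deg)
    return (1.0 / math.tan(do) - 1.0 / math.tan(di)) - track_w / wheelbase_l

# invented geometry: 1.3 m track, 2.5 m wheelbase
print(ackermann_error(30.0, 24.5, track_w=1.3, wheelbase_l=2.5))
```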
NASA Astrophysics Data System (ADS)
Zárate, Francisco; Cornejo, Alejandro; Oñate, Eugenio
2018-07-01
This paper extends to three dimensions (3D), the computational technique developed by the authors in 2D for predicting the onset and evolution of fracture in a finite element mesh in a simple manner based on combining the finite element method and the discrete element method (DEM) approach (Zárate and Oñate in Comput Part Mech 2(3):301-314, 2015). Once a crack is detected at an element edge, discrete elements are generated at the adjacent element vertexes and a simple DEM mechanism is considered in order to follow the evolution of the crack. The combination of the DEM with simple four-noded linear tetrahedron elements correctly captures the onset of fracture and its evolution, as shown in several 3D examples of application.
A simple randomisation procedure for validating discriminant analysis: a methodological note.
Wastell, D G
1987-04-01
Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
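The key point, refitting the entire selection-plus-classification procedure on label-shuffled data, can be sketched as follows; SelectKBest stands in for stepwise selection, and all data are simulated.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 20))       # 40 cases, 20 candidate variables, no signal
y = np.repeat([0, 1], 20)

def apparent_accuracy(X, y):
    # Feature selection runs *inside* every fit, mirroring the requirement
    # that the whole stepwise procedure be repeated on randomized labels.
    model = make_pipeline(SelectKBest(f_classif, k=3),
                          LinearDiscriminantAnalysis())
    return model.fit(X, y).score(X, y)

observed = apparent_accuracy(X, y)
null = [apparent_accuracy(X, rng.permutation(y)) for _ in range(999)]
p = (1 + sum(a >= observed for a in null)) / (1 + len(null))
print(observed, p)   # apparent accuracy is inflated; p stays non-significant
```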
Investigation of the ionospheric Faraday rotation for use in orbit corrections
NASA Technical Reports Server (NTRS)
Llewellyn, S. K.; Bent, R. B.; Nesterczuk, G.
1974-01-01
The possibility of mapping the Faraday factors on a worldwide basis was examined as a simple method of representing the conversion factors for any possible user. However, this does not seem feasible. The complex relationship between the true magnetic coordinates and the geographic latitude, longitude, and azimuth angles eliminates the possibility of setting up some simple tables that would yield worldwide results of sufficient accuracy. Tabular results for specific stations can easily be produced or could be represented in graphic form.
Average luminosity distance in inhomogeneous universes
NASA Astrophysics Data System (ADS)
Kostov, Valentin Angelov
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovae inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well-known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero, due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that (1) have approximately constant densities in their interiors and walls, and (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision, and suggests a method to extract the background Hubble constant from low-redshift data without the need to correct for peculiar velocities.
Two flaps and Z-plasty technique for correction of longitudinal ear lobe cleft.
Lee, Paik-Kwon; Ju, Hong-Sil; Rhie, Jong-Won; Ahn, Sang-Tae
2005-06-01
Various surgical techniques have been reported for the correction of congenital ear lobe deformities. Our method, the two-flaps-and-Z-plasty technique, for correcting the longitudinal ear lobe cleft is presented. This technique is simple and easy to perform. It enables us to keep the bulkiness of the ear lobe with minimal tissue sacrifice, and to make a shorter operation scar. The small Z-plasty at the free ear lobe margin avoids notching deformity and makes the shape of the ear lobe smoother. The result is satisfactory in terms of matching the contralateral normal ear lobe in shape and symmetry.
Pandit, Jaideep J; Tavare, Aniket
2011-07-01
It is important that a surgical list is planned to utilise as much of the scheduled time as possible while not over-running, because this can lead to cancellation of operations. We wished to assess whether, theoretically, the known duration of individual operations could be used quantitatively to predict the likely duration of the operating list. In a university hospital setting, we first assessed the extent to which the current ad-hoc method of operating list planning was able to match the scheduled operating list times for 153 consecutive historical lists. Using receiver operating curve analysis, we assessed the ability of an alternative method to predict operating list duration for the same operating lists. This method uses a simple formula: the sum of individual operation times and a pooled standard deviation of these times. We used the operating list duration estimated from this formula to generate a probability that the operating list would finish within its scheduled time. Finally, we applied the simple formula prospectively to 150 operating lists, 'shadowing' the current ad-hoc method, to confirm the predictive ability of the formula. The ad-hoc method was very poor at planning: 50% of historical operating lists were under-booked and 37% over-booked. In contrast, the simple formula predicted the correct outcome (under-run or over-run) for 76% of these operating lists. The calculated probability that a planned series of operations will over-run or under-run was found useful in developing an algorithm to adjust the planned cases optimally. In the prospective series, 65% of operating lists were over-booked and 10% were under-booked. The formula predicted the correct outcome for 84% of operating lists. A simple quantitative method of estimating operating list duration for a series of operations leads to an algorithm (readily created on an Excel spreadsheet, http://links.lww.com/EJA/A19) that can potentially improve operating list planning.
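A plausible reading of the formula is a normal approximation for the list total; all numbers below are invented, and the independence assumption behind the pooled SD is ours.

```python
from math import sqrt
from scipy.stats import norm

# Sketch: total list duration ~ Normal(mu, sigma), with mu the sum of the
# mean operation times and sigma built from a pooled per-operation SD
# under an independence assumption. Values are illustrative only.
mean_times = [55, 90, 40, 75]     # minutes, per booked operation
pooled_sd = 15                    # pooled SD of a single operation time
scheduled = 300                   # scheduled list length, minutes

mu = sum(mean_times)
sigma = pooled_sd * sqrt(len(mean_times))
p_finish = norm.cdf(scheduled, loc=mu, scale=sigma)
print(f"P(list finishes within schedule) = {p_finish:.2f}")  # ~0.91 here
```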
Salmingo, Remel A; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu
2012-05-01
Scoliosis is a spinal pathology characterized by a three-dimensional deformity of the spine combined with vertebral rotation. Treatment for severe scoliosis is achieved when the scoliotic spine is surgically corrected and fixed using implanted rods and screws. Several studies have performed biomechanical modeling and corrective-force measurements of scoliosis correction. These studies were able to predict the clinical outcome and measure the corrective forces acting on screws; however, they were not able to measure the intraoperative three-dimensional geometry of the spinal rod. In effect, the results of biomechanical modeling might not be very realistic, and the corrective forces during the surgical correction procedure were difficult to measure intraoperatively. Projective geometry has been shown to be successful in the reconstruction of a three-dimensional structure using a series of images obtained from different views. In this study, we propose a new method to measure the three-dimensional geometry of an implant rod using two cameras. The reconstruction method requires only a few parameters: the included angle θ between the two cameras, the actual length of the rod in mm, and the location of points for curve fitting. An implant rod of the type utilized in spine surgery was used to evaluate the accuracy of the method. The three-dimensional geometry of the rod was measured from an image obtained by a scanner and compared to the proposed two-camera method. The mean error in the reconstruction measurements ranged from 0.32 to 0.45 mm. The method presented here demonstrates the possibility of intraoperatively measuring the three-dimensional geometry of a spinal rod. The proposed method could be used in surgical procedures to better understand the biomechanics of scoliosis correction through real-time measurement of three-dimensional implant rod geometry in vivo.
Luce, T. C.; Petty, C. C.; Meyer, W. H.; ...
2016-11-02
An approximate method to correct motional Stark effect (MSE) spectroscopy for the effects of intrinsic plasma electric fields has been developed. The motivation for using an approximate method is to incorporate electric field effects in between-pulse or real-time analysis of the current density or safety factor profile. The toroidal velocity term in the momentum balance equation is normally the dominant contribution to the electric field orthogonal to the flux surface over most of the plasma. When this approximation is valid, the correction to the MSE data can be included in a form like that used when electric field effects are neglected. This allows measurements of the toroidal velocity to be integrated into the interpretation of the MSE polarization angles without changing how the data are treated in existing codes. In some cases, such as the DIII-D system, the correction is especially simple, due to the details of the neutral beam and MSE viewing geometry. The correction method is compared, using DIII-D data in a variety of plasma conditions, to analysis that assumes no radial electric field is present and to analysis that uses the standard correction method, which involves significant human intervention for profile fitting. The comparison shows that the new correction method is close to the standard one, and in all cases appears to offer a better result than use of the uncorrected data. Lastly, the method has been integrated into the standard DIII-D equilibrium reconstruction code in use for analysis between plasma pulses and is sufficiently fast that it will be implemented in real-time equilibrium analysis for control applications.
Yu, Xiaobo; Yang, Qinghua; Jiang, Haiyue; Pan, Bo; Zhao, Yanyong; Lin, Lin
2017-11-01
Cryptotia is a common congenital ear deformity in Asian populations. In cryptotia, a portion of the upper ear is hidden and fixed in a pocket of the skin of the mastoid. Here we describe a new method for cryptotia correction using an ultra-delicate split-thickness skin graft in continuity with a full-thickness skin rotation flap. Following ear release, the full-thickness skin rotation flap is rotated into the defect, and the donor site is covered with an ultra-delicate split-thickness skin graft raised in continuity with the flap. All patients exhibited satisfactory release of cryptotia. No cases involved partial or total flap necrosis, and post-operative outcomes using this new technique have been more than satisfactory. Our method is simple and reliable.
Simple automatic strategy for background drift correction in chromatographic data analysis.
Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin
2016-06-03
Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use.
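A minimal sketch of the described strategy, local minima as baseline anchors plus iterative outlier rejection and interpolation, might look like this; the window sizes and the k-sigma rule are assumed settings, not the paper's.

```python
import numpy as np
from scipy.signal import argrelmin

def baseline(y, n_iter=20, k=3.0):
    """Estimate a drifting background: take local minima as provisional
    baseline anchors, iteratively flag anchors sitting far above the local
    trend (peak contributions) and pull them down, then linearly
    interpolate the anchors back onto the full chromatogram."""
    idx = argrelmin(np.asarray(y), order=3)[0]
    vals = y[idx].astype(float)
    for _ in range(n_iter):
        trend = np.convolve(vals, np.ones(5) / 5, mode="same")
        resid = vals - trend
        out = resid > k * resid.std()
        if not out.any():
            break                       # converged: no outliers remain
        vals[out] = trend[out]          # replace outliers with local trend
    return np.interp(np.arange(len(y)), idx, vals)

# usage: corrected = y - baseline(y)
```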
Shirazi, Mehdi; Ariafar, Ali; Babaei, Amir Hossein; Ashrafzadeh, Abdosamad; Adib, Ali
2016-11-01
Urethrocutaneous fistula (UCF) is the most prevalent complication after hypospadias repair surgery. Many methods have been developed for UCF correction, and the best technique for UCF repair is determined based on the size, location, and number of fistulas, as well as the status of the surrounding skin. In this study, we introduced and evaluated a simple method for UCF correction after tubularized incised plate (TIP) repair. This clinical study was conducted on children with UCFs ≤ 4 mm that developed after TIP surgery for hypospadias repair. The skin was incised around the fistula and the tract was released from the surrounding tissues and the dartos fascia, then ligated with 5-0 polydioxanone (PDS) sutures. The dartos fascia, as the second layer, was covered on the fistula tract with PDS thread (gauge 5-0) by the continuous suture method. The skin was closed with 6-0 Vicryl sutures. After six months of follow-up, surgical outcomes were evaluated based on fistula relapse and other complications. After six months, relapse occurred in only one patient, a six-year-old boy with a single 4-mm distal opening, who had undergone no previous fistula repairs. Therefore, in 97.5% of the cases, relapse was non-existent. Other complications, such as urethral stenosis, intraurethral obstruction, and epidermal inclusion cysts, were not seen in the other patients during the six-month follow-up period. This repair method, which is simple, rapid, and easily learned, is highly applicable, with a high success rate for the closure of UCFs measuring up to 4 mm in any location.
Further evidence for the increased power of LOD scores compared with nonparametric methods.
Durner, M; Vieland, V J; Greenberg, D A
1999-01-01
In genetic analysis of diseases in which the underlying model is unknown, "model-free" methods, such as affected sib pair (ASP) tests, are often preferred over LOD-score methods, although LOD-score methods under the correct or even approximately correct model are more powerful than ASP tests. However, there might be circumstances in which nonparametric methods will outperform LOD-score methods. Recently, Dizier et al. reported that, in some complex two-locus (2L) models, LOD-score methods with segregation-analysis-derived parameters had less power to detect linkage than ASP tests. We investigated whether these particular models in fact represent a situation in which ASP tests are more powerful than LOD scores. We simulated data according to the parameters specified by Dizier et al. and analyzed the data using (a) single-locus (SL) LOD-score analysis performed twice, under a simple dominant and a recessive mode of inheritance (MOI), (b) ASP methods, and (c) nonparametric linkage (NPL) analysis. We show that SL analysis performed twice and corrected for the type I error increase due to multiple testing yields almost as much linkage information as does an analysis under the correct 2L model, and is more powerful than either the ASP method or the NPL method. We demonstrate that, even for complex genetic models, the most important condition for linkage analysis is that the assumed MOI at the disease locus being tested is approximately correct, not that the inheritance of the disease per se is correctly specified. In the analysis by Dizier et al., segregation analysis led to estimates of dominance parameters that were grossly misspecified for the locus tested in those models in which ASP tests appeared to be more powerful than LOD-score analyses.
NASA Astrophysics Data System (ADS)
Kobinata, Hideo; Yamashita, Hiroshi; Nomura, Eiichi; Nakajima, Ken; Kuroki, Yukinori
1998-12-01
A new method for proximity effect correction, suitable for large-field electron-beam (EB) projection lithography with high accelerating voltage, such as SCALPEL and PREVAIL in the case where a stencil mask is used, is discussed. In this lithography, a large field is exposed at a uniform dose, so the dose modification method used in the variable-shaped beam and cell projection methods cannot be applied. In this study, we report on the development of a new proximity effect correction method which uses a pattern-modified stencil mask, suitable for high accelerating voltage and large-field EB projection lithography. In order to obtain the mask bias value, we investigated the linewidth reduction due to the proximity effect in the peripheral memory cell area, and found that it could be expressed by a simple function, with all the correction parameters easily determined from the mask pattern data alone. The proximity effect for the peripheral array pattern could also be corrected by considering the pattern density. The calculated linewidth deviation was 3% or less for a 0.07-µm line-and-space memory cell pattern and 5% or less for a 0.14-µm-line and 0.42-µm-space peripheral array pattern, simultaneously.
Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M
2016-03-01
Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
Multiple testing corrections in quantitative proteomics: A useful but blunt tool.
Pascovici, Dana; Handler, David C L; Wu, Jemma X; Haynes, Paul A
2016-09-01
Multiple testing corrections are a useful tool for restricting the FDR, but can be blunt in the context of low power, as we demonstrate by a series of simple simulations. Unfortunately, in proteomics experiments low power can be common, driven by proteomics-specific issues like small effects due to ratio compression, and few replicates due to high reagent cost, instrument time availability and other issues; in such situations, most multiple testing correction methods, if used with conventional thresholds, will fail to detect any true positives even when many exist. In this low power, medium scale situation, other methods such as effect size considerations or peptide-level calculations may be a more effective option, even if they do not offer the same theoretical guarantee of a low FDR. Thus, we aim to highlight in this article that proteomics presents some specific challenges to the standard multiple testing correction methods, which should be employed as a useful tool but not be regarded as a required rubber stamp.
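The low-power failure mode is easy to reproduce in simulation; the settings below (3 replicates, small effects, Benjamini-Hochberg at 5% FDR) are illustrative, not taken from the article.

```python
import numpy as np
from scipy import stats

# 5000 "proteins", 500 with a small true shift, 3 replicates per group.
rng = np.random.default_rng(7)
n_prot, n_true, n_rep, effect = 5000, 500, 3, 0.5

a = rng.normal(0, 1, (n_prot, n_rep))
b = rng.normal(0, 1, (n_prot, n_rep))
b[:n_true] += effect                         # small true effects
p = stats.ttest_ind(a, b, axis=1).pvalue

# Benjamini-Hochberg step-up at q = 0.05: largest k with p_(k) <= k*q/m.
order = np.sort(p)
bh_line = 0.05 * np.arange(1, n_prot + 1) / n_prot
passed = np.nonzero(order <= bh_line)[0]
n_discoveries = 0 if passed.size == 0 else passed.max() + 1
print("BH discoveries:", n_discoveries, "out of", n_true, "true positives")
```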
NASA Astrophysics Data System (ADS)
Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas
1992-07-01
Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme has been devised to eliminate this problem by deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 µm, exposed on a MEBES tool at 10 keV in 0.2 µm of PMMA resist on a silicon substrate.
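A sketch of pattern-domain deconvolution by steepest descent follows, with a single Gaussian standing in for the real point-spread function; sigma, the step size, the iteration count, and the absence of a resist threshold model are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dose_correct(target, sigma, n_iter=50, step=1.0):
    """Steepest descent on 0.5*||target - K d||^2, where K is the proximity
    point-spread function (a Gaussian here). Because the Gaussian kernel is
    symmetric, the transpose K^T is another convolution with the same
    kernel, so each gradient step is a filtered residual."""
    d = target.astype(float).copy()
    for _ in range(n_iter):
        residual = target - gaussian_filter(d, sigma)
        d += step * gaussian_filter(residual, sigma)   # gradient step
        np.clip(d, 0.0, None, out=d)                   # doses stay non-negative
    return d

pattern = np.zeros((128, 128))
pattern[60:68, 20:108] = 1.0      # an isolated line feature (scale assumed)
dose = dose_correct(pattern, sigma=2.0)
```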
Energy shadowing correction of ultrasonic pulse-echo records by digital signal processing
NASA Technical Reports Server (NTRS)
Kishonio, D.; Heyman, J. S.
1985-01-01
A numerical algorithm is described that enables the correction of energy shadowing during the ultrasonic testing of bulk materials. In the conventional method, an ultrasonic transducer transmits sound waves into a material that is immersed in water so that discontinuities such as defects can be revealed when the waves are reflected and then detected and displayed graphically. Since a defect that lies behind another defect is shadowed in that it receives less energy, the conventional method has a major drawback. The algorithm normalizes the energy of the incoming wave by measuring the energy of the waves reflected off the water/air interface. The algorithm is fast and simple enough to be adopted for real time applications in industry. Images of material defects with the shadowing corrections permit more quantitative interpretation of the material state.
NASA Technical Reports Server (NTRS)
Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas
1986-01-01
The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.
Allodji, Rodrigue S; Schwartz, Boris; Diallo, Ibrahima; Agbovon, Césaire; Laurier, Dominique; de Vathaire, Florent
2015-08-01
Analyses of the Life Span Study (LSS) of Japanese atomic bomb survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry in solid cancer and leukaemia mortality risk estimates. For instance, it is shown that with the SIMEX method the ERR/Gy is increased by about 29% for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR (per 10^4 person-years at 1 Gy; the linear term) is decreased by about 8%, while the corrected quadratic term (EAR per 10^4 person-years/Gy^2) is increased by about 65% for leukaemia deaths based on a linear-quadratic model. The results with the SIMEX method are slightly higher than published values. The observed differences were probably due to the fact that with the RCAL method the dosimetric data were only partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks, and it may be useful to use several correction techniques in order to obtain a range of corrected estimates, rather than to rely on a single technique. This work will help improve the risk estimates derived from LSS data and make the development of radiation protection standards more reliable.
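For readers unfamiliar with SIMEX, a minimal version for a slope attenuated by additive measurement error runs as follows; the lambda grid, the quadratic extrapolant, and the simulated usage are assumptions of this sketch.

```python
import numpy as np

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """Minimal SIMEX: add extra noise of variance lam*sigma_u**2 to the
    error-prone exposure w, refit the naive slope, average over B replicates,
    then extrapolate a quadratic in lam back to lam = -1 (error-free)."""
    rng = np.random.default_rng(seed)
    lams, slopes = [0.0], [np.polyfit(w, y, 1)[0]]
    for lam in lambdas:
        fits = [np.polyfit(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, w.size),
                           y, 1)[0] for _ in range(B)]
        lams.append(lam)
        slopes.append(np.mean(fits))
    return np.polyval(np.polyfit(lams, slopes, 2), -1.0)

# usage sketch: true slope 1.0, naive estimate attenuated to ~0.8
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
w = x + rng.normal(0.0, 0.5, x.size)       # exposure measured with error
y = x + rng.normal(0.0, 1.0, x.size)
print(np.polyfit(w, y, 1)[0], simex_slope(w, y, sigma_u=0.5))
```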
A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.; Irvine, Tom
2013-01-01
A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on power spectral density (psd) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recent proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
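The Rayleigh (narrow-band) baseline that the reviewed corrections start from can be computed directly from spectral moments; the S-N curve parameterization N * S^b = C is an assumption of this sketch.

```python
import numpy as np
from scipy.special import gamma

def narrowband_damage_rate(f, psd, b, C):
    """Rayleigh (narrow-band) damage rate per unit time from a one-sided
    stress PSD. Spectral moments m_n = integral f**n * S(f) df; the
    zero up-crossing rate is sqrt(m2/m0); S-N curve N * S_a**b = C."""
    m0 = np.trapz(psd, f)
    m2 = np.trapz(f**2 * psd, f)
    nu0 = np.sqrt(m2 / m0)                   # zero up-crossing rate, Hz
    return (nu0 / C) * (np.sqrt(2.0 * m0))**b * gamma(1.0 + b / 2.0)

# flat (white) PSD between 5 and 50 Hz with illustrative S-N parameters
f = np.linspace(5.0, 50.0, 500)
psd = np.full_like(f, 100.0)                 # (MPa^2)/Hz, assumed
print(narrowband_damage_rate(f, psd, b=3.0, C=1.0e12))
```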
Ground-state energies of simple metals
NASA Technical Reports Server (NTRS)
Hammerberg, J.; Ashcroft, N. W.
1974-01-01
A structural expansion for the static ground-state energy of a simple metal is derived. Two methods are presented, one an approach based on single-particle band structure which treats the electron gas as a nonlinear dielectric, the other a more general many-particle analysis using finite-temperature perturbation theory. The two methods are compared, and it is shown in detail how band-structure effects, Fermi-surface distortions, and chemical-potential shifts affect the total energy. These are of special interest in corrections to the total energy beyond third order in the electron-ion interaction and hence to systems where differences in energies for various crystal structures are exceptionally small. Preliminary calculations using these methods for the zero-temperature thermodynamic functions of atomic hydrogen are reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, D; Gach, H; Li, H
Purpose: The daily treatment MRIs acquired on MR-IGRT systems, like diagnostic MRIs, suffer from intensity inhomogeneity associated with B1 and B0 inhomogeneities. An improved homomorphic unsharp mask (HUM) filtering method, automatic and robust body segmentation, and imaging field-of-view (FOV) detection methods were developed to compute the multiplicative slowly varying correction field and correct the intensity inhomogeneity. The goal is to improve and normalize the voxel intensity so that the images can be processed more accurately by quantitative methods (e.g., segmentation and registration) that require consistent image voxel intensity values. Methods: HUM methods have been widely used for years. A body mask is required; otherwise the body surface in the corrected image would be incorrectly bright due to the sudden intensity transition at the body surface. In this study, we developed an improved HUM-based correction method that includes three main components: 1) robust body segmentation on the normalized image gradient map, 2) robust FOV detection (needed for body segmentation) using region growing and morphologic filters, and 3) an effective implementation of HUM using repeated Gaussian convolution. Results: The proposed method was successfully tested on patient images of common anatomical sites (H/N, lung, abdomen and pelvis). Initial qualitative comparisons showed that this improved HUM method outperformed three recently published algorithms (FCM, LEMS, MICO) in both computation speed (by 50+ times) and robustness (in intermediate to severe inhomogeneity situations). Currently implemented in MATLAB, it takes 20 to 25 seconds to process a 3D MRI volume. Conclusion: Compared to more sophisticated MRI inhomogeneity correction algorithms, the improved HUM method is simple and effective. The inhomogeneity correction, body mask, and FOV detection methods developed in this study would be useful as preprocessing tools for many MRI-related research and clinical applications in radiotherapy. Authors have received research grants from ViewRay and Varian.
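A masked homomorphic unsharp mask can be sketched with normalized Gaussian smoothing, which is one way (assumed here) to realize the Gaussian-convolution component while avoiding the bright-surface artifact the abstract describes; sigma is an assumed setting.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hum_correct(img, body_mask, sigma=32.0):
    """Homomorphic unsharp masking restricted to a body mask: estimate the
    slowly varying bias field by Gaussian smoothing normalized by the
    smoothed mask (so outside-body zeros do not darken the estimate),
    then rescale voxels so the body keeps its mean intensity."""
    m = body_mask.astype(float)
    smooth_img = gaussian_filter(img * m, sigma)
    smooth_msk = gaussian_filter(m, sigma)
    bias = np.where(body_mask, smooth_img / np.maximum(smooth_msk, 1e-6), 1.0)
    mean_in_body = img[body_mask].mean()
    out = img.astype(float).copy()
    out[body_mask] = img[body_mask] * mean_in_body / np.maximum(
        bias[body_mask], 1e-6)
    return out
```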
Tanaka, T; Inui, O; Dohi, N; Okada, N; Okada, H; Kikuchi, Y
2001-07-01
Today, many nucleic acid enzymes are used in gene therapy and gene regulation. However, no simple assay methods for evaluating enzymatic activities, with which to judge enzyme designs, have been reported. Here, we propose a new simple competition assay for nucleic acid enzymes of different types to evaluate their efficiency in cleaving a target RNA molecule whose recognition sites are different but overlapping. Two nucleic acid enzymes were added to one tube so that the two enzymes compete for one substrate. The assay was applied to two ribozymes, a hammerhead ribozyme and a hairpin ribozyme, and a DNA enzyme. We found that this assay method is applicable to such enzymes as a powerful tool for the selection and design of RNA-cleaving enzymes.
NASA Astrophysics Data System (ADS)
Hwang, Ui-Jung; Shin, Dongho; Lee, Se Byeong; Lim, Young Kyung; Jeong, Jong Hwi; Kim, Hak Soo; Kim, Ki Hwan
2018-05-01
To apply a scintillating fiber dosimetry system to measuring the range of a proton therapy beam, a new method was proposed to correct for the quenching effect when measuring a spread-out Bragg peak (SOBP) proton beam whose range is modulated by a range modulator wheel. The scintillating fiber dosimetry system was composed of a plastic scintillating fiber (BCF-12), an optical fiber (SH 2001), a photomultiplier tube (H7546), and a data acquisition system (PXI6221 and SCC68). The proton beam was generated by a cyclotron (Proteus-235) at the National Cancer Center in Korea. It operated in the double-scattering mode, and the spread-out of the Bragg peak was achieved by a spinning range modulation wheel. Bragg peak beams and SOBP beams of various ranges were measured, corrected, and compared to the ion chamber data. For the Bragg peak beam, a quenching equation was used to correct the quenching effect. In the proposed correction process for SOBP beams, the data measured with the scintillating fiber were decomposed into the individual Bragg peaks contained in the SOBP, each peak was corrected for quenching, and the peaks were then recombined to reconstruct the SOBP. The measured depth-dose curve for the single Bragg peak beam was well corrected by using a simple quenching equation. Correction for the SOBP beam was conducted with the newly proposed method, and the corrected SOBP signal was in accordance with the results measured with an ion chamber. We propose a new method to correct the SOBP beam for the quenching effect in a scintillating fiber dosimetry system. This method can be applied to other scintillator dosimetry of radiation beams in which the quenching effect appears in the scintillator.
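The paper's "simple quenching equation" is not given above, so the sketch below substitutes the standard Birks form as a clearly labeled stand-in; the dE/dx value and kB are assumed inputs.

```python
def birks_corrected(signal, dEdx, kB):
    """Divide the measured scintillation signal by the Birks quenching
    factor Q = 1 / (1 + kB * dE/dx) to recover a dose-like quantity.
    This is the standard Birks form, not necessarily the paper's equation;
    dEdx (MeV cm^2/g) and kB are assumed inputs at each depth."""
    return signal * (1.0 + kB * dEdx)

print(birks_corrected(signal=0.82, dEdx=8.0, kB=0.02))  # invented values
```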
NASA Astrophysics Data System (ADS)
Yan, Jiawei; Ke, Youqi
In realistic nanoelectronics, disordered impurities and defects are inevitable and play important roles in electron transport. However, due to the lack of an effective quantum transport method, the important effects of disorder remain poorly understood. Here, we report a generalized non-equilibrium vertex correction (NVC) method with the coherent potential approximation to treat disorder effects in quantum transport simulation. With this generalized NVC method, any averaged product of two single-particle Green's functions can be obtained by solving a set of simple linear equations. As a result, the averaged non-equilibrium density matrix and various important transport properties, including the averaged current, the disorder-induced current fluctuation, and the averaged shot noise, can all be efficiently computed in a unified scheme. Moreover, a generalized form of the conditionally averaged non-equilibrium Green's function is derived to incorporate density functional theory and enable first-principles simulation. We prove that the non-equilibrium coherent potential equals the non-equilibrium vertex correction. Our approach provides a unified, efficient and self-consistent method for simulating non-equilibrium quantum transport through disordered nanoelectronics.
Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity
NASA Astrophysics Data System (ADS)
Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.
2004-12-01
Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses the particle size at formation as the reference state rather than the dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity that are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes depend only weakly on ambient temperature, these factors can be regarded as temperature independent. The correction factor with respect to the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor with respect to the size at formation has the advantage of being independent of dry size and of the relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional, and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.
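A polynomial correction factor of this kind is simple to apply. The sketch below shows the general shape of such an implementation; the polynomial coefficients are placeholders for illustration only and are not the paper's fitted values.

```python
import numpy as np

# Hedged sketch: a size correction (growth) factor expressed as a
# polynomial in local RH, as the abstract describes.  The coefficients
# are hypothetical; the paper's fitted values differ.
coeffs = [1.4, -1.25, 1.08]   # descending powers, illustrative only

def growth_factor(rh):
    """Diameter ratio relative to the reference state at 80% RH."""
    rh = np.clip(rh, 0.45, 0.99)   # validity range quoted in the abstract
    return np.polyval(coeffs, rh)

d80 = 2.0                          # particle diameter at 80% RH (um)
d_ambient = d80 * growth_factor(0.90)
```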
Intensity correction for multichannel hyperpolarized 13C imaging of the heart.
Dominguez-Viqueira, William; Geraghty, Benjamin J; Lau, Justin Y C; Robb, Fraser J; Chen, Albert P; Cunningham, Charles H
2016-02-01
To develop and test an analytic method to correct the signal intensity variation caused by the inhomogeneous reception profile of an eight-channel phased array for hyperpolarized 13C imaging. Fiducial markers visible in anatomical images were attached to the individual coils to provide three-dimensional localization of the receive hardware with respect to the image frame of reference. The coil locations and dimensions were used to numerically model the reception profile using the Biot-Savart law. The accuracy of the coil sensitivity estimation was validated with images derived from a homogeneous 13C phantom. Numerical coil sensitivity estimates were used to perform intensity correction of in vivo hyperpolarized 13C cardiac images in pigs. In comparison to the conventional sum-of-squares reconstruction, improved signal uniformity was observed in the corrected images. The analytical intensity correction scheme was shown to improve the uniformity of multichannel image reconstruction in hyperpolarized [1-13C]pyruvate and 13C-bicarbonate cardiac MRI. The method is independent of the pulse sequence used for 13C data acquisition, simple to implement, and does not require additional scan time, making it an attractive technique for multichannel hyperpolarized 13C MRI. © 2015 Wiley Periodicals, Inc.
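The Biot-Savart modelling step can be sketched numerically for a single circular coil. The coil radius, position, and discretization below are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

# Sketch of numerically modelling one coil's reception profile with the
# Biot-Savart law, as the abstract describes (unit current, arbitrary units).
def biot_savart_loop(r_obs, radius=0.04, center=(0, 0, 0), n_seg=360):
    """Approximate B field of a circular loop by summing segments."""
    phi = np.linspace(0, 2 * np.pi, n_seg, endpoint=False)
    pts = np.stack([radius * np.cos(phi), radius * np.sin(phi),
                    np.zeros_like(phi)], axis=1) + np.asarray(center)
    dl = np.roll(pts, -1, axis=0) - pts              # segment vectors
    r = np.asarray(r_obs) - pts                      # segment -> observation
    norm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = np.cross(dl, r) / norm ** 3                 # dB ~ dl x r / |r|^3
    return dB.sum(axis=0)

# Relative sensitivity along the coil axis, usable as a correction map:
profile = [np.linalg.norm(biot_savart_loop((0, 0, z)))
           for z in np.linspace(0.01, 0.15, 15)]
```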
1946-01-01
geometrical boundary conditions of the problem. (2) The energy of the load-plate system is computed for this deflection surface and is then minimized ... and interpolating to find the k that makes the series vanish. The correct value of m is that which gives the lowest value of k. For two half waves (m=2) ... the square plate, the present relatively simple upper- and lower-limit calculations show that his estimated limit of error is correct for this case
Off-Angle Iris Correction Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J; Thompson, Joseph T; Karakaya, Mahmut
In many real-world iris recognition systems, obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result, many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic-algorithm-optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction. The ray-traced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming distance scores between off-angle and frontal images. We hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy, and that the two data-driven approaches should therefore yield better performance. Results are presented using the commercial VeriEye matcher and show that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.
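The affine correction is the easiest of the four to illustrate. The sketch below warps an off-angle image so that the elliptical iris boundary becomes circular; the ellipse parameters would come from a segmentation step and are hypothetical here, and, as the abstract notes, corneal refraction is not modelled by this approach.

```python
import cv2
import numpy as np

# Minimal sketch of the affine-correction idea: rotate the ellipse axes
# onto the image axes, then stretch the minor axis to match the major.
# Ellipse parameters (center, axes, angle) are hypothetical.
img = cv2.imread("off_angle_iris.png", cv2.IMREAD_GRAYSCALE)
(cx, cy), (major, minor), angle = (320.0, 240.0), (200.0, 140.0), 25.0

M_rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
rotated = cv2.warpAffine(img, M_rot, (img.shape[1], img.shape[0]))

s = major / minor                                   # anisotropic stretch
M_scale = np.float32([[1, 0, 0], [0, s, cy * (1 - s)]])  # scale y about cy
frontal = cv2.warpAffine(rotated, M_scale, (img.shape[1], img.shape[0]))
```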
GURKA, MATTHEW J; KUPERMINC, MICHELLE N; BUSBY, MARJORIE G; BENNIS, JACEY A; GROSSBERG, RICHARD I; HOULIHAN, CHRISTINE M; STEVENSON, RICHARD D; HENDERSON, RICHARD C
2010-01-01
AIM To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). METHOD Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I–V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. RESULTS Slaughter’s equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat −9.6/100 [SD 6.2]; 95% confidence interval [CI] −11.0 to −8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI −1.0 to 1.3) than existing equations. INTERPRETATION A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP. PMID:19811518
Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin
2015-01-01
The powerful nondestructive characteristics of computed tomography (CT) are attracting growing research interest in its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further use of CT for dimensional metrology; many factors contribute, among which the beam hardening (BH) effect plays a vital role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained with a simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show the general feasibility of the proposed method.
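The entropy-minimization idea can be sketched compactly. The snippet below searches for the parameter of a monotone (here exponential) projection correction that minimizes the gray-level entropy of the reconstructed volume; the correction form is illustrative rather than the paper's exact model, `reconstruct` stands in for any FBP or iterative reconstructor, and the penalty term is omitted.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gray_entropy(volume, bins=256):
    """Shannon entropy of the volume's gray-level histogram."""
    hist, _ = np.histogram(volume, bins=bins, density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log(p))

def corrected_recon(projections, alpha, reconstruct):
    # Monotone map that expands large path lengths (alpha -> 0 gives identity).
    p_corr = (np.exp(alpha * projections) - 1.0) / alpha
    return reconstruct(p_corr)

def fit_alpha(projections, reconstruct):
    """Pick the correction parameter that minimizes reconstructed entropy."""
    res = minimize_scalar(
        lambda a: gray_entropy(corrected_recon(projections, a, reconstruct)),
        bounds=(1e-3, 2.0), method="bounded")
    return res.x
```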
3D resolved mapping of optical aberrations in thick tissues
Zeng, Jun; Mahou, Pierre; Schanne-Klein, Marie-Claire; Beaurepaire, Emmanuel; Débarre, Delphine
2012-01-01
We demonstrate a simple method for mapping optical aberrations with 3D resolution within thick samples. The method relies on the local measurement of the variation in image quality with externally applied aberrations. We discuss the accuracy of the method as a function of the signal strength and of the aberration amplitude, and we derive the achievable resolution for the resulting measurements. We then report on measured 3D aberration maps in human skin biopsies and mouse brain slices. From these data, we analyse the consequences of tissue structure and refractive index distribution on aberrations and imaging depth in normal and cleared tissue samples. The aberration maps allow the estimation of the typical aplanatism region size over which aberrations can be uniformly corrected. This method and these data pave the way towards efficient correction strategies for tissue imaging applications. PMID:22876353
Fitting program for linear regressions according to Mahon (1996)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trappitsch, Reto G.
2018-01-09
This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
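For context, fits in the York/Mahon family weight each point by the combined variance of both coordinates and iterate on the slope. The sketch below shows such an errors-in-both-variables straight-line fit with uncorrelated errors assumed; it illustrates the family of methods, not this program's actual implementation.

```python
import numpy as np

# Sketch of an errors-in-both-variables straight-line fit
# (York/Mahon family, correlation r_i = 0 assumed).
def york_fit(x, y, sx, sy, tol=1e-10, max_iter=100):
    wx, wy = 1.0 / sx**2, 1.0 / sy**2
    b = np.polyfit(x, y, 1)[0]                  # initial slope from OLS
    for _ in range(max_iter):
        W = wx * wy / (wx + b**2 * wy)          # point weights for slope b
        xbar = np.average(x, weights=W)
        ybar = np.average(y, weights=W)
        U, V = x - xbar, y - ybar
        beta = W * (U / wy + b * V / wx)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = ybar - b * xbar                         # intercept
    return a, b
```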
Core-mass nonadiabatic corrections to molecules: H2, H2+, and isotopologues.
Diniz, Leonardo G; Alijah, Alexander; Mohallem, José Rachid
2012-10-28
For high-precision calculations of rovibrational states of light molecules, it is essential to include non-adiabatic corrections. In the absence of crossings of potential energy surfaces, they can be incorporated in a single-surface picture through coordinate-dependent vibrational and rotational reduced masses. We present a compact method for their evaluation and relate, in particular, the vibrational mass to a well-defined nuclear core mass derived from a Mulliken analysis of the electronic density. For the rotational mass we propose a simple but very effective parametrization. The use of these masses in the nuclear Schrödinger equation yields numerical data for the corrections of a much higher quality than can be obtained with optimized constant masses, typically better than 0.1 cm⁻¹. We demonstrate the method for H2, H2+, and singly deuterated isotopologues. Isotopic asymmetry does not present any particular difficulty. Generalization to polyatomic molecules is straightforward.
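The single-surface picture the abstract refers to can be written schematically as a radial nuclear equation with coordinate-dependent reduced masses; the notation below is the standard textbook form, assumed here rather than copied from the paper.

```latex
% Schematic single-surface rovibrational equation with coordinate-
% dependent vibrational and rotational reduced masses:
\left[-\frac{\hbar^{2}}{2}\,\frac{d}{dR}\,\frac{1}{\mu_{\mathrm{vib}}(R)}\,\frac{d}{dR}
      +\frac{J(J+1)\,\hbar^{2}}{2\,\mu_{\mathrm{rot}}(R)\,R^{2}}
      +V(R)\right]\chi(R)=E\,\chi(R)
```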
Possible Explanation of the Different Temporal Behaviors of Various Classes of Sunspot Groups
NASA Astrophysics Data System (ADS)
Gao, Peng-Xin; Li, Ke-Jun; Li, Fu-Yu
2017-09-01
In order to investigate the periodicity and long-term trends of various classes of sunspot groups (SGs), we separated SGs into two categories: simple SGs (A/U ≤ 4.5, where A represents the total corrected whole spot area of the group in millionths of the solar hemisphere (msh), and U represents the total corrected umbral area of the group in msh); and complex SGs (A/U > 6.2). Based on the revised version of the Greenwich Photoheliographic Results sunspot catalogue, we investigated the periodic behaviors and long-term trends of simple and complex SGs from 1875 to 1976 using the Hilbert-Huang Transform method, and we confirm that the temporal behaviors of simple and complex SGs are quite different. Our main findings are as follows. i) For simple and complex SGs, the values of the Schwabe cycle wax and wane, following the solar activity cycle. ii) There are significant phase differences (almost antiphase) between the periodicity of 53.50 ± 3.79 years extracted from yearly simple SG numbers and the periodicity of 56.21 ± 2.92 years extracted from yearly complex SG numbers. iii) The adaptive trends of yearly simple and complex SG numbers are also quite different: for simple SGs, the values of the adaptive trend gradually increase during the time period of 1875 - 1949, then they decrease gradually from 1949 to 1976, similar to the rise and the maximum phase of a sine curve; for complex SGs, the values of the adaptive trend first slowly increase and then quickly increase, similar to the minimum and rise phase of a sine curve.
Dai, Guang-ming; Campbell, Charles E; Chen, Li; Zhao, Huawei; Chernyak, Dimitri
2009-01-20
In wavefront-driven vision correction, ocular aberrations are often measured on the pupil plane and the correction is applied on a different plane. The problem with this practice is that any changes undergone by the wavefront as it propagates between planes are not currently included in devising customized vision correction. With some valid approximations, we have developed an analytical foundation based on geometric optics in which Zernike polynomials are used to characterize the propagation of the wavefront from one plane to another. Both the boundary and the magnitude of the wavefront change after the propagation. Taylor monomials were used to realize the propagation because of their simple form for this purpose. The method we developed to identify changes in low-order aberrations was verified with the classical vertex correction formula. The method we developed to identify changes in high-order aberrations was verified with ZEMAX ray-tracing software. Although the method may not be valid for highly irregular wavefronts and it was only proven for wavefronts with low-order or high-order aberrations, our analysis showed that changes in the propagating wavefront are significant and should, therefore, be included in calculating vision correction. This new approach could be of major significance in calculating wavefront-driven vision correction whether by refractive surgery, contact lenses, intraocular lenses, or spectacles.
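The low-order check mentioned above is the classical vertex correction: a corrective power F prescribed at one plane corresponds to a different effective power at a plane a distance d away. The standard formula, which the abstract cites as the verification case, is:

```latex
% Classical vertex correction: effective power F' of a correction of
% power F moved a distance d (in meters) along the line of sight:
F' = \frac{F}{1 - d\,F}
```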
Use of the Wigner representation in scattering problems
NASA Technical Reports Server (NTRS)
Remler, E. A.
1975-01-01
The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed, and results used in applications published elsewhere are derived. The form of the integral equation for scattering, as well as its multiple scattering expansion in this representation, is derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering from a bound two-body system is treated. Simple expressions for single and double scattering contributions to total and differential cross sections, as well as for all necessary shadow corrections, are obtained.
A simple modern correctness condition for a space-based high-performance multiprocessor
NASA Technical Reports Server (NTRS)
Probst, David K.; Li, Hon F.
1992-01-01
A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging, and liquid-cooling technologies, we will see the design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.
Decodoku: Quantum error correction as a simple puzzle game
NASA Astrophysics Data System (ADS)
Wootton, James
To build quantum computers, we need to detect and manage any noise that occurs. This will be done using quantum error correction (QEC). At the hardware level, QEC uses a multipartite system that stores information non-locally. Certain measurements are made which do not disturb the stored information but which do allow signatures of errors to be detected. Then there is a software problem: how to take these measurement outcomes and determine (a) the errors that caused them, and (b) how to remove their effects. For qubit error correction, the algorithms required to do this are well known. For qudits, however, current methods are far from optimal. We consider the error correction problem of qudit surface codes. At the most basic level, this is a problem that can be expressed in terms of a grid of numbers. Using this fact, we take the inherent problem at the heart of quantum error correction, remove it from its quantum context, and present it in terms of simple grid-based puzzle games. We have developed three versions of these puzzle games, focusing on different aspects of the required algorithms. These have been released as iOS and Android apps, allowing the public to try their hand at developing good algorithms to solve the puzzles. For more information, see www.decodoku.com. Funding from the NCCR QSIT.
SLTCAP: A Simple Method for Calculating the Number of Ions Needed for MD Simulation.
Schmit, Jeremy D; Kariyawasam, Nilusha L; Needham, Vince; Smith, Paul E
2018-04-10
An accurate depiction of electrostatic interactions in molecular dynamics requires the correct number of ions in the simulation box to capture screening effects. However, the number of ions that should be added to the box is seldom given by the bulk salt concentration because a charged biomolecule solute will perturb the local solvent environment. We present a simple method for calculating the number of ions that requires only the total solute charge, solvent volume, and bulk salt concentration as inputs. We show that the most commonly used method for adding salt to a simulation results in an effective salt concentration that is too high. These findings are confirmed using simulations of lysozyme. We have established a web server where these calculations can be readily performed to aid simulation setup.
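The calculation itself is compact. The sketch below reflects my reading of the screening-layer result for a monovalent salt, in which the cation and anion counts follow from the bulk ion-pair number N0 and solute charge Q via exp of minus/plus asinh(Q/2N0); consult the paper or web server for the authoritative formula.

```python
import numpy as np

# Hedged sketch of the ion-count calculation: monovalent salt, solute
# charge Q in units of e, N0 = bulk concentration x solvent volume.
# Uses exp(-/+ asinh(x)) = sqrt(1 + x^2) -/+ x.
AVOGADRO = 6.022e23

def sltcap_ion_counts(solute_charge, conc_molar, volume_liters):
    n0 = conc_molar * volume_liters * AVOGADRO   # bulk ion pairs
    x = solute_charge / (2.0 * n0)
    n_plus = n0 * (np.sqrt(1 + x**2) - x)        # cations
    n_minus = n0 * (np.sqrt(1 + x**2) + x)       # anions (also neutralize Q)
    return round(n_plus), round(n_minus)

# Example: a +8e solute in 150 mM salt and a 2.0e-22 L solvent volume.
print(sltcap_ion_counts(8, 0.150, 2.0e-22))
```

Note that n_minus - n_plus equals the solute charge, so the box is automatically neutralized without simply adding counterions on top of the bulk concentration.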
Surface dose measurements with commonly used detectors: a consistent thickness correction method.
Reynolds, Tatsiana A; Higgins, Patrick
2015-09-08
The purpose of this study was to review the application of a consistent correction method for solid-state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films, and to compare surface dose measured with an extrapolation ionization chamber (PTW 30-360) against other parallel-plate chambers: RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus), and Memorial. Measurements of surface dose for 6 MV photons with parallel-plate chambers were used to establish a baseline. cTLD, OSL, EDR2, and EBT2 measurements were corrected using a method which involved irradiation of three-dosimeter stacks, followed by linear extrapolation of the individual dosimeter measurements to zero thickness. We determined the magnitude of the correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (-0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel-plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid-state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three-detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2, which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective-thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth-dose curves and is not recommended for these types of measurements.
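The three-detector-stack extrapolation is numerically trivial, which is part of its appeal. The sketch below assigns each layer its cumulative depth and extrapolates linearly to zero thickness; the thickness and reading values are illustrative, not measured data from the study.

```python
import numpy as np

# Sketch of the stack extrapolation: readings from three stacked
# dosimeters versus their cumulative depth, extrapolated to 0 mm.
thickness = np.array([0.39, 0.78, 1.17])   # cumulative layer depth (mm), illustrative
reading = np.array([31.2, 35.6, 39.9])     # relative dose per layer, illustrative

slope, intercept = np.polyfit(thickness, reading, 1)
surface_dose = intercept                    # reading extrapolated to zero thickness
correction = surface_dose / reading[0]      # factor applied to a single detector
```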
Simple and Efficient Technique for Correction of Unilateral Scissor Bite Using Straight Wire.
Dolas, Siddhesh Gajanan; Chitko, Shrikant Shrinivas; Kerudi, Veerendra Virupaxappa; Patil, Harshal Ashok; Bonde, Prasad Vasudeo
2016-03-01
Unilateral scissor bite is a relatively rare malocclusion. However, its correction is often difficult and a challenge for the clinician. This article presents a simple and efficient technique for the correction of severe unilateral scissor bite in a 14-year-old boy, using 0.020 S.S. A. J. Wilcock (premium plus) wire straight off the spool, with minimal adjustments, placed in the mandibular arch. After about six weeks, a good amount of correction was seen in the lower arch and the lower molar had been relieved of the scissor bite.
Effective classification of the prevalence of Schistosoma mansoni.
Mitchell, Shira A; Pagano, Marcello
2012-12-01
To present an effective classification method based on the prevalence of Schistosoma mansoni in the community, we created decision rules (defined by cut-offs for the number of positive slides) which account for imperfect sensitivity, both with a simple adjustment for fixed sensitivity and with a more complex adjustment for sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more than the upper cut-off, correctly classifying regions as moderate rather than low prevalence, so that they receive life-saving treatment. The pooled method classifies directly on the basis of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
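A Monte Carlo sketch makes the pooled decision rule concrete. The model below is deliberately simplified (a pool is positive if any of its slides is positive, with a fixed per-slide sensitivity); the sensitivity, pool size, and cut-off are illustrative, not the paper's fitted values.

```python
import numpy as np

# Hedged simulation of pooled classification: n slides read in pools,
# region classified by comparing positive-pool count to a cut-off.
rng = np.random.default_rng(0)

def prob_classified_high(prev, n=25, pool_size=5, cutoff=2,
                         sens=0.7, n_sim=10000):
    """Fraction of simulations declaring prevalence above the threshold."""
    infected = rng.random((n_sim, n)) < prev
    detected = infected & (rng.random((n_sim, n)) < sens)   # imperfect slides
    pools = detected.reshape(n_sim, -1, pool_size).any(axis=2)
    return (pools.sum(axis=1) >= cutoff).mean()

print(prob_classified_high(prev=0.05), prob_classified_high(prev=0.25))
```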
Robust diffraction correction method for high-frequency ultrasonic tissue characterization
NASA Astrophysics Data System (ADS)
Raju, Balasundar
2004-05-01
The computation of quantitative ultrasonic parameters such as the attenuation or backscatter coefficient requires compensation for diffraction effects. In this work a simple and accurate diffraction correction method for skin characterization requiring only a single focal zone is developed. The advantage of this method is that the transducer need not be mechanically repositioned to collect data from several focal zones, thereby reducing the time of imaging and preventing motion artifacts. Data were first collected under controlled conditions from skin of volunteers using a high-frequency system (center frequency=33 MHz, BW=28 MHz) at 19 focal zones through axial translation. Using these data, mean backscatter power spectra were computed as a function of the distance between the transducer and the tissue, which then served as empirical diffraction correction curves for subsequent data. The method was demonstrated on patients patch-tested for contact dermatitis. The computed attenuation coefficient slope was significantly (p<0.05) lower at the affected site (0.13+/-0.02 dB/mm/MHz) compared to nearby normal skin (0.2+/-0.05 dB/mm/MHz). The mean backscatter level was also significantly lower at the affected site (6.7+/-2.1 in arbitrary units) compared to normal skin (11.3+/-3.2). These results show diffraction corrected ultrasonic parameters can differentiate normal from affected skin tissues.
Guo, Ying; Little, Roderick J; McConnell, Daniel S
2012-01-01
Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
Detection of defects on apple using B-spline lighting correction method
NASA Astrophysics Data System (ADS)
Li, Jiangbo; Huang, Wenqian; Guo, Zhiming
To effectively extract defective areas in fruit, the uneven intensity distribution produced by the lighting system, or by parts of the vision system, must be corrected in the image. A methodology was used to convert the non-uniform intensity distribution on spherical objects into a uniform intensity distribution. Using the proposed algorithms, an essentially flat image was obtained in which the defective areas have a lower gray level than the surrounding plane, so the defective areas can easily be extracted with a global threshold value. Experimental results, with a 94.0% classification rate on 100 apple images, showed that the proposed algorithm is simple and effective. The proposed method can be applied to other spherical fruits.
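The flatten-then-threshold pipeline can be sketched in a few lines. Here a heavy Gaussian blur stands in for the paper's B-spline lighting surface, and the sigma and threshold values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch of lighting correction: estimate the smooth intensity surface
# of the fruit, divide it out, and threshold the flattened image.
def detect_defects(gray, sigma=25, thresh=0.85):
    background = gaussian_filter(gray.astype(float), sigma)  # smooth surface
    flat = gray / np.maximum(background, 1e-6)  # ~1.0 on sound tissue
    return flat < thresh                        # defect mask (darker areas)

# mask = detect_defects(apple_image)   # apple_image: 2-D grayscale array
```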
Anisophoria and aniseikonia. Part I. The relation between optical anisophoria and aniseikonia.
Remole, A
1989-10-01
Part I of this publication demonstrates and explains the close relation between aniseikonia and anisophoria induced by spectacles. It examines the clinical implications of this relation by discussing certain aspects of aniseikonia theory, prismatic effects during oblique gaze through spectacles such as for reading, and a simple formula that presents a comprehensive description of all prismatic effects and prismatic differences produced by a pair of spectacles. It also describes an easy method of specifying iseikonic lenses, as well as some conventional methods of measuring aniseikonia and anisophoria. Part II will deal with the correction and management of anisophoria when induced together with aniseikonia. Parts I and II, together, will convey a new approach toward the management of anisophoric spectacle corrections.
Use of Machine Learning to Identify Children with Autism and Their Motor Abnormalities
ERIC Educational Resources Information Center
Crippa, Alessandro; Salvatore, Christian; Perego, Paolo; Forti, Sara; Nobile, Maria; Molteni, Massimo; Castiglioni, Isabella
2015-01-01
In the present work, we have undertaken a proof-of-concept study to determine whether a simple upper-limb movement could be useful to accurately classify low-functioning children with autism spectrum disorder (ASD) aged 2-4. To answer this question, we developed a supervised machine-learning method to correctly discriminate 15 preschool children…
An Experiment-Oriented Approach to Teaching the Kinetic Molecular Theory.
ERIC Educational Resources Information Center
Wiseman, Frank L., Jr.
1979-01-01
This paper reports an experiment in the teaching of the kinetic molecular theory to nonscience majors by the inquiry method. It allows the student to develop an essentially correct view of gases, liquids, and solids on the atomic or molecular level, and illustrates how one can draw conclusions about the molecular level by simple visual…
Design of general apochromatic drift-quadrupole beam lines
NASA Astrophysics Data System (ADS)
Lindstrøm, C. A.; Adli, E.
2016-07-01
Chromatic errors are normally corrected using sextupoles in regions of large dispersion. In low-emittance linear accelerators, the use of sextupoles can be challenging. Apochromatic focusing is a lesser-known alternative approach, whereby chromatic errors of the Twiss parameters are corrected without the use of sextupoles; it has consequently been the subject of renewed interest in advanced linear accelerator research. Proof-of-principle designs were first established by Montague and Ruggiero and developed more recently by Balandin et al. We describe a general method for designing drift-quadrupole beam lines with apochromatic correction of arbitrary order, including analytic expressions for emittance growth and other merit functions. Worked examples are shown for plasma wakefield accelerator staging optics and for a simple final-focus system.
NASA Astrophysics Data System (ADS)
Schiavo, Eduardo; Muñoz-García, Ana B.; Barone, Vincenzo; Vittadini, Andrea; Casarin, Maurizio; Forrer, Daniel; Pavone, Michele
2018-02-01
Common local and semi-local density functionals poorly describe the molecular physisorption on metal surfaces due to the lack of dispersion interactions. In the last decade, several correction schemes have been proposed to amend this fundamental flaw of Density Functional Theory. Using the prototypical case of aromatic molecules adsorbed on Ag(1 1 1), we discuss the accuracy of different dispersion-correction methods and present a reparameterization strategy for the simple and effective DFT-D2. For the adsorption of different aromatic systems on the same metallic substrate, good results at feasible computational costs are achieved by means of a fitting procedure against MP2 data.
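For reference, the DFT-D2 energy being refit is a simple damped pairwise sum. The sketch below uses the standard Grimme functional form with the usual geometric-mean combination rule; the s6 and d values shown are common defaults, and the per-element C6/R0 parameters are exactly the quantities a reparameterization against MP2 data would adjust.

```python
import numpy as np

# Sketch of the Grimme DFT-D2 pairwise dispersion energy:
# E = -s6 * sum_{i<j} C6ij / R^6 * f_damp(R),
# f_damp(R) = 1 / (1 + exp(-d * (R / Rr - 1))).
def d2_energy(coords, C6, R0, s6=0.75, d=20.0):
    """coords: (N,3) positions; C6, R0: per-atom dispersion parameters."""
    E = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            R = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(C6[i] * C6[j])   # geometric-mean combination rule
            rr = R0[i] + R0[j]              # sum of van der Waals radii
            fdamp = 1.0 / (1.0 + np.exp(-d * (R / rr - 1.0)))
            E -= s6 * c6ij / R**6 * fdamp
    return E
```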
Gordon, H R; Du, T; Zhang, T
1997-09-20
We provide an analysis of the influence of instrument polarization sensitivity on the radiance measured by spaceborne ocean color sensors. Simulated examples demonstrate the influence of polarization sensitivity on the retrieval of the water-leaving reflectance rho(w). A simple method for partially correcting for polarization sensitivity--replacing the linear polarization properties of the top-of-atmosphere reflectance with those from a Rayleigh-scattering atmosphere--is provided and its efficacy is evaluated. It is shown that this scheme improves rho(w) retrievals as long as the polarization sensitivity of the instrument does not vary strongly from band to band. Of course, a complete polarization-sensitivity characterization of the ocean color sensor is required to implement the correction.
From metadynamics to dynamics.
Tiwary, Pratyush; Parrinello, Michele
2013-12-06
Metadynamics is a commonly used and successful enhanced sampling method. By the introduction of a history dependent bias which depends on a restricted number of collective variables it can explore complex free energy surfaces characterized by several metastable states separated by large free energy barriers. Here we extend its scope by introducing a simple yet powerful method for calculating the rates of transition between different metastable states. The method does not rely on a previous knowledge of the transition states or reaction coordinates, as long as collective variables are known that can distinguish between the various stable minima in free energy space. We demonstrate that our method recovers the correct escape rates out of these stable states and also preserves the correct sequence of state-to-state transitions, with minimal extra computational effort needed over ordinary metadynamics. We apply the formalism to three different problems and in each case find excellent agreement with the results of long unbiased molecular dynamics runs.
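The rate-recovery step reduces to rescaling biased time by the running acceleration factor, the average of exp(V/kT) over the biased trajectory. The sketch below shows this bookkeeping; the bias series, time step, and kT are illustrative inputs, and in practice the bias should be quasi-static in the basin for the estimate to hold.

```python
import numpy as np

# Sketch of converting biased metadynamics time into unbiased time via
# the acceleration factor <exp(V/kT)> accumulated along the trajectory.
def unbiased_time(bias_along_traj, dt, kT):
    """Total 'real' time represented by a biased trajectory segment."""
    return np.sum(dt * np.exp(np.asarray(bias_along_traj) / kT))

# Escape rate from one observed transition at biased step n_esc:
# rate ~ 1.0 / unbiased_time(bias[:n_esc], dt, kT)
```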
Prakash, Prashanth; Durgesh, B. H.
2011-01-01
Single-tooth anterior dental crossbite is a commonly encountered malocclusion during the development of the occlusion in children. Various treatment options, such as removable and fixed appliances, have been suggested by different authors in the literature. This paper presents two cases of anterior crossbite corrected using the standard Catlan's appliance (lower inclined bite plane) in a short period of three weeks, without any damage to the tooth or the periodontium. This fixed appliance is a simple and traditional method that does not depend on patient cooperation to reverse the bite. PMID:21991464
Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data
Zhao, Shanshan
2014-01-01
Summary Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
Structural expansions for the ground state energy of a simple metal
NASA Technical Reports Server (NTRS)
Hammerberg, J.; Ashcroft, N. W.
1973-01-01
A structural expansion for the static ground state energy of a simple metal is derived. An approach based on single particle band structure which treats the electron gas as a non-linear dielectric is presented, along with a more general many particle analysis using finite temperature perturbation theory. The two methods are compared, and it is shown in detail how band-structure effects, Fermi surface distortions, and chemical potential shifts affect the total energy. These are of special interest in corrections to the total energy beyond third order in the electron ion interaction, and hence to systems where differences in energies for various crystal structures are exceptionally small. Preliminary calculations using these methods for the zero temperature thermodynamic functions of atomic hydrogen are reported.
H4: A challenging system for natural orbital functional approximations
NASA Astrophysics Data System (ADS)
Ramos-Cordoba, Eloy; Lopez, Xabier; Piris, Mario; Matito, Eduard
2015-10-01
The correct description of nondynamic correlation by electronic structure methods not belonging to the multireference family is a challenging issue. The transition from D2h to D4h symmetry in the H4 molecule is among the simplest archetypal examples used to illustrate the consequences of missing nondynamic correlation effects. The resurgence of interest in density matrix functional methods has brought several new methods, including the family of Piris Natural Orbital Functionals (PNOF). In this work, we compare PNOF5 and PNOF6, which include nondynamic electron correlation effects to some extent, with other standard ab initio methods on the H4 D4h/D2h potential energy surface (PES). Thus far, the erroneous behavior of single-reference methods at the D2h-D4h transition of H4 has been attributed to an incorrect account of nondynamic correlation effects, whereas in geminal-based approaches it has been assigned to a wrong coupling of spins and the localized nature of the orbitals. We show that interpair nondynamic correlation is actually the key to a cusp-free, qualitatively correct description of the H4 PES. By introducing interpair nondynamic correlation, PNOF6 is shown to avoid cusps and to provide the correct smooth PES features at distances close to equilibrium, correct total and local spin properties, and the correct electron delocalization, as reflected by natural orbitals and multicenter delocalization indices.
Ground temperature measurement by PRT-5 for maps experiment
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.
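The regression step is straightforward to illustrate. The sketch below fits a polynomial relating a parameter deviation to the computed brightness-temperature correction, so corrections can later be evaluated without radiative-transfer runs; the sample values are illustrative, not the study's model output.

```python
import numpy as np

# Sketch of the polynomial-regression step: relate a parameter's
# deviation (e.g., water-vapor burden) to the computed correction.
param_dev = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])      # deviation (illustrative units)
correction = np.array([-0.8, -0.4, 0.0, 0.45, 0.95])   # computed dT (K), illustrative

coeffs = np.polyfit(param_dev, correction, 2)           # quadratic fit
estimate = np.polyval(coeffs, 0.3)                      # correction at deviation 0.3
```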
Rosenbaum, David A.; Chapman, Kate M.; Coelho, Chase J.; Gong, Lanyun; Studenka, Breanna E.
2013-01-01
Actions that are chosen have properties that distinguish them from actions that are not. Of the nearly infinite possible actions that can achieve any given task, many of the unchosen actions are irrelevant, incorrect, or inappropriate. Others are relevant, correct, or appropriate but are disfavored for other reasons. Our research focuses on the question of what distinguishes actions that are chosen from actions that are possible but are not. We review studies that use simple preference methods to identify factors that contribute to action choices, especially for object-manipulation tasks. We can determine which factors are especially important through simple behavioral experiments. PMID:23761769
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamio, Y; Bouchard, H
2014-06-15
Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by (1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or (2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple, fast method to determine an upper limit on the contribution of composite-field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the virtual symmetrised collapsed (VSC) delivery of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of the correlation plots contains 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting the largest correction factors at each index value. Results: The fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters such as the scintillating detector (Exradin W1) and a generic alanine detector were found to have corrections under 1% over a broad range of field modulations (0-0.12 for MF and 0-0.5 for UI). Other detectors were shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.
Dobie, Robert A; Wojcik, Nancy C
2015-07-13
The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values from audiometric data in the 1999-2006 US NHANES. Fitting the NHANES median better-ear thresholds to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Published by the BMJ Publishing Group Limited.
NASA Astrophysics Data System (ADS)
Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik
2001-05-01
Emphysema is characterized by destruction of lung tissue with development of small or large holes within the lung. These areas have Hounsfield values (HU) approaching -1000, and it is possible to detect and quantify them using a simple density-mask technique. However, the edge-enhancing reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step applies inverse filtering to the image, removing much of the effect of the edge-enhancing reconstruction algorithm. The next step computes the antero-posterior density gradient caused by gravity and corrects for it. In a third step, motion artefacts are corrected using normalized averaging, thresholding, and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density-mask technique it was not possible to separate persons with disease from those without. Our algorithm improved the separation of the two groups considerably. The algorithm needs further refinement, but it may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
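The density-mask step that the correction algorithm feeds into is itself a one-liner. In the sketch below, the -950 HU cut-off is a common literature choice, assumed here rather than taken from this work.

```python
import numpy as np

# Sketch of density-mask quantification: after artefact correction,
# report the fraction of lung voxels below a HU cut-off.
def emphysema_fraction(hu_volume, lung_mask, threshold=-950):
    """hu_volume: 3-D HU array; lung_mask: boolean array of lung voxels."""
    lung_voxels = hu_volume[lung_mask]
    return np.mean(lung_voxels < threshold)
```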
NASA Astrophysics Data System (ADS)
Sunaryo
2018-03-01
Research on the response of gravity, magnetic, and geoelectrical resistivity methods in the Ngeni, Southern Blitar mineralization zone has been carried out. This study aims to examine the responses of these geophysical methods in an integrated manner. For the gravity survey, 224 data points covering the whole Blitar district were acquired using a LaCoste & Romberg Model G gravity meter, and for the magnetic survey, 195 data points covering the southern Blitar district only were acquired using a G-856 proton precession magnetometer. Geoelectrical resistivity data were acquired only in Ngeni village, the site of pyrophyllite mining with Fe, Si, Ca, S, Cu, and Mn content, using an ABEM Terrameter SAS 300C. Gravity data processing to obtain the Bouguer anomaly included unit conversion, tidal correction, drift correction, tie-point correction, base-station correction, free-air correction, and Bouguer correction. Magnetic data processing included daily, drift, and IGRF (International Geomagnetic Reference Field) corrections to obtain the total magnetic anomaly. Gravity processing yielded simple Bouguer anomaly values ranging from -10 mGal to 115 mGal, magnetic processing yielded total magnetic anomaly values ranging from -650 nT to 800 nT, and geoelectrical resistivity values ranged from 3.03 Ωm to 11249.91 Ωm. There is a correlation between the gravity, magnetic, and geoelectrical resistivity anomalies, which are associated with deep, intermediate, and shallow anomalies, respectively.
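Of the corrections listed, the elevation-dependent ones follow standard textbook formulas; the sketch below shows the free-air and Bouguer slab steps with the usual constants (0.3086 mGal/m and 2*pi*G*rho), while the tidal, drift, and tie-point corrections are instrument- and site-specific.

```python
# Sketch of the elevation-dependent steps in a gravity reduction chain.
def simple_bouguer_anomaly(g_obs_mgal, g_theoretical_mgal,
                           elevation_m, density_gcc=2.67):
    """Simple Bouguer anomaly in mGal (standard textbook constants)."""
    free_air = 0.3086 * elevation_m                      # mGal, free-air correction
    bouguer_slab = 0.04193 * density_gcc * elevation_m   # mGal, 2*pi*G*rho*h
    return g_obs_mgal - g_theoretical_mgal + free_air - bouguer_slab
```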
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. Its distribution is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by a BSA includes not only the radiation scattered within the object (object scatter) but also external scatter sources, including the X-ray tube, detector, collimator, X-ray filter, and the BSA itself. Excluding background scattered radiation can be applied to different scanner geometries by simple parameter adjustments, without prior knowledge of the scanned object. In this study, a BSA-based method to differentiate scatter arising in the phantom (object scatter) from the external background was used, and this method was applied within the BSA algorithm to correct the object scatter. To confirm the background scattered radiation, we obtained scatter profiles and scatter fraction (SF) profiles in the direction perpendicular to the chest wall edge (CWE), with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall, indicating that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter: the total radiation profiles after object-scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter can be used to remove background scatter and can be applied to different scanner geometries after background-scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
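The core BSA operation, sampling the signal under each disc and interpolating a smooth scatter map, can be sketched as follows. Disc coordinates, the sampling radius, and the interpolation choice are illustrative assumptions, and the background-scatter subtraction the paper adds is not shown.

```python
import numpy as np
from scipy.interpolate import griddata

# Sketch of the beam-stop-array step: sample scatter-only signal under
# each lead disc, interpolate a scatter map, subtract it from the image.
def scatter_correct(image, disc_xy, radius=4):
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    samples = [(x, y, image[y - radius:y + radius,
                            x - radius:x + radius].mean())
               for x, y in disc_xy]                  # mean signal per disc shadow
    pts = np.array([(s[0], s[1]) for s in samples])
    vals = np.array([s[2] for s in samples])
    scatter = griddata(pts, vals, (xs, ys), method="cubic",
                       fill_value=vals.mean())
    return image - scatter                           # primary-only estimate
```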
Comparison of techniques for correction of magnification of pelvic X-rays for hip surgery planning.
The, Bertram; Kootstra, Johan W J; Hosman, Anton H; Verdonschot, Nico; Gerritsma, Carina L E; Diercks, Ron L
2007-12-01
The aim of this study was to develop an accurate method for correcting the magnification of pelvic x-rays to enhance the accuracy of hip surgery planning. All investigated methods aim at estimating the anteroposterior location of the hip joint in the supine position, so that a reference object for correction of magnification can be positioned correctly. An existing method, currently used in clinical practice in our clinics, is based on estimating the position of the hip joint by palpation of the greater trochanter. It is only moderately accurate and difficult to execute reliably in clinical practice. To develop a new method, 99 patients who already had a hip implant in situ were included; this enabled the true location of the hip joint to be deduced from the magnification of the prosthesis. Physical examination was used to obtain predictor variables possibly associated with the height of the hip joint, including a simple dynamic hip joint examination to estimate the position of the center of rotation. Prediction equations were then constructed using regression analysis, and their performance was compared with that of the existing protocol. The mean absolute error in predicting the height of the hip joint center using the old method was 20 mm (range -79 mm to +46 mm); it was 11 mm for the new method (-32 mm to +39 mm). The prediction equation is: height (mm) = 34 + 1/2 abdominal circumference (cm). The newly developed prediction equation is a superior method for predicting the height of the hip joint center for correction of magnification of pelvic x-rays. We recommend its implementation in departments of radiology and orthopedic surgery.
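Combining the paper's prediction equation with standard projection geometry gives the magnification factor directly. In the sketch below, the source-film distance of 1150 mm is an illustrative assumption, not a value from the study.

```python
# Sketch: hip-center height from the paper's equation
# (height in mm = 34 + 0.5 x abdominal circumference in cm),
# then standard projection geometry M = SFD / (SFD - h).
def magnification_factor(abdominal_circumference_cm, sfd_mm=1150.0):
    h_mm = 34.0 + 0.5 * abdominal_circumference_cm   # hip center above film
    return sfd_mm / (sfd_mm - h_mm)

# A template scaled by 1/M then matches true anatomical size:
# true_size = measured_size / magnification_factor(95.0)
```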
Human-machine interaction to disambiguate entities in unstructured text and structured datasets
NASA Astrophysics Data System (ADS)
Ward, Kevin; Davenport, Jack
2017-05-01
Creating entity network graphs is a manual, time-consuming process for an intelligence analyst. Beyond the traditional big-data problems of information overload, individuals are often referred to by multiple names and shifting titles as they advance in their organizations over time, which quickly makes simple string or phonetic alignment methods for entities insufficient. Conversely, automated methods for relationship extraction and entity disambiguation typically produce questionable results, with no way for users to vet results, correct mistakes, or influence the algorithm's future results. We present an entity disambiguation tool, DRADIS, which aims to bridge the gap between human-centric and machine-centric methods. DRADIS automatically extracts entities from multi-source datasets and models them as a complex set of attributes and relationships. Entities are disambiguated across the corpus using a hierarchical model executed in Spark, allowing it to scale to operational-sized data. Resolution results are presented to the analyst complete with sourcing information for each mention and relationship, allowing analysts to quickly vet the correctness of results and correct mistakes. Corrected results are used by the system to refine the underlying model, allowing analysts to optimize the general model to better deal with their operational data. Providing analysts with the ability to validate and correct the model, yielding a system they can trust, lets them focus their time on producing higher-quality analysis products.
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of error in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates are analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising from detector and amplifier noise, alignment instability, and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference with earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on board. It is possible to obtain an eightfold improvement in sensing accuracy, which will be comparable with ground-based post facto attitude refinement.
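The on-board correction amounts to evaluating fitted low-order relations; a sketch for a single error source (ambient temperature), with invented calibration data standing in for the paper's ground measurements:

```python
import numpy as np

# Hypothetical ground-calibration data: sensor pitch error (deg) observed
# at several ambient temperatures (deg C). Values are for illustration only.
temps  = np.array([-10.0, 0.0, 10.0, 20.0, 30.0, 40.0])
errors = np.array([0.21, 0.14, 0.08, 0.03, -0.02, -0.08])

# Least-squares fit of a simple relation error(T) = c0 + c1*T, in the
# spirit of the low-order relations derived for on-board correction.
c1, c0 = np.polyfit(temps, errors, deg=1)

def corrected_angle(raw_angle_deg, temp_c):
    """Subtract the predicted systematic error at the current temperature."""
    return raw_angle_deg - (c0 + c1 * temp_c)

print(corrected_angle(0.50, 25.0))
```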
A Method for Automated Detection of Usability Problems from Client User Interface Events
Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.
2005-01-01
Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher-order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
Lobigs, Louisa M; Sottas, Pierre-Edouard; Bourdon, Pitre C; Nikolovski, Zoran; El-Gingo, Mohamed; Varamenti, Evdokia; Peeling, Peter; Dawson, Brian; Schumacher, Yorck O
2018-02-01
The haematological module of the Athlete's Biological Passport (ABP) has significantly impacted the prevalence of blood manipulations in elite sports. However, the ABP relies on a number of concentration-based markers of erythropoiesis, such as haemoglobin concentration ([Hb]), which are influenced by shifts in plasma volume (PV). Fluctuations in PV contribute the majority of the biological variance associated with volumetric ABP markers. Our laboratory recently identified a panel of common chemistry markers (from a simple blood test) capable of describing ca. 67% of PV variance, presenting an applicable method to account for volume shifts within anti-doping practices. Here, this novel PV marker was included in the ABP adaptive model. Over a six-month period (one test per month), 33 healthy, active males provided blood samples and performed the CO-rebreathing method to record PV (control). In the final month, participants performed a single maximal exercise effort to promote a PV shift (mean PV decrease -17%, 95% CI -9.75 to -18.13%). Applying the ABP adaptive model, individualized reference limits for [Hb] and the OFF-score were created with and without the PV correction. With the PV correction, an average of 66% of [Hb] within-subject variance is explained, narrowing the predicted reference limits and reducing the number of atypical ABP findings post-exercise. Despite the increase in sensitivity, there was no observed loss of specificity with the addition of the PV correction. The novel PV marker presented here has the potential to improve the ABP's rate of correct doping detection by removing the confounding effects of PV variance. Copyright © 2017 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Ivanov, Anisoara; Neacsu, Andrei
2011-01-01
This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…
Convolutional neural networks for vibrational spectroscopic data analysis.
Acquarelli, Jacopo; van Laarhoven, Twan; Gerretzen, Jan; Tran, Thanh N; Buydens, Lutgarde M C; Marchiori, Elena
2017-02-15
In this work we show that convolutional neural networks (CNNs) can be efficiently used to classify vibrational spectroscopic data and identify important spectral regions. CNNs are the current state-of-the-art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction, and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because, even in simple problems, using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN-based method and provide accompanying, publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared to the 62% achieved by PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared to the 89% achieved by PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of results. Copyright © 2016 Elsevier B.V. All rights reserved.
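A minimal sketch of a shallow 1-D CNN of this kind in PyTorch; the channel count, kernel width, pooling, and class count are our illustrative choices, not the authors' architecture or hyperparameters:

```python
import torch
import torch.nn as nn

class ShallowSpectralCNN(nn.Module):
    """A 1-D CNN with a single convolutional layer, in the spirit of the
    shallow architecture described in the paper (sizes are illustrative)."""
    def __init__(self, n_wavenumbers: int, n_classes: int):
        super().__init__()
        self.conv = nn.Conv1d(in_channels=1, out_channels=8,
                              kernel_size=9, padding=4)
        self.act = nn.ReLU()
        self.pool = nn.MaxPool1d(kernel_size=4)
        self.fc = nn.Linear(8 * (n_wavenumbers // 4), n_classes)

    def forward(self, x):            # x: (batch, 1, n_wavenumbers)
        z = self.pool(self.act(self.conv(x)))
        return self.fc(z.flatten(start_dim=1))

model = ShallowSpectralCNN(n_wavenumbers=1024, n_classes=3)
spectra = torch.randn(16, 1, 1024)   # a batch of raw (non-preprocessed) spectra
logits = model(spectra)              # (16, 3) class scores
```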
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.
Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C
2008-07-21
The purpose of this study was twofold: to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with dose as a function of OD (inverse regression) or OD as a function of dose (inverse prediction). Applying a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account; this can lead to erroneous results originating from the calibration process itself and thus to lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method for creating calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We also developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
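A minimal sketch of WLS inverse prediction: fit OD as a function of dose with weights from an assumed heteroscedastic variance model, then invert the fitted curve to predict dose. The calibration points, the quadratic model, and the variance model below are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical calibration data: delivered dose (cGy) and net optical density.
dose = np.array([0, 50, 100, 150, 200, 250, 300, 350, 400], float)
od   = np.array([0.00, 0.08, 0.15, 0.21, 0.27, 0.32, 0.37, 0.41, 0.45])
var  = (0.002 + 0.01 * od) ** 2      # assumed OD variance model (heteroscedastic)

# Weighted least squares fit of od = a + b*dose + c*dose^2 with weights 1/var.
w = 1.0 / var
X = np.vstack([np.ones_like(dose), dose, dose**2]).T
a, b, c = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * od))

def predict_dose(od_measured):
    """Invert the calibration curve; keep the physically meaningful root."""
    roots = np.roots([c, b, a - od_measured])
    real = roots[np.isreal(roots)].real
    return real[(real >= 0) & (real <= 450)][0]

print(predict_dose(0.30))
```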
The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.
Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin
2015-11-01
We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity term using local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where a fast alternating direction method is implemented in the first stage for the recovery of the true intensity and bias field, and simple thresholding is used in the second stage for segmentation. Unlike most existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with state-of-the-art approaches and well-known brain software tools, our model is fast, accurate, and robust to initialization.
A Computational Framework for Automation of Point Defect Calculations
NASA Astrophysics Data System (ADS)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei; Lany, Stephan; Stevanovic, Vladan; National Renewable Energy Laboratory, Golden, Colorado 80401 Collaboration
A complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory has been developed. The framework provides an effective and efficient method for defect structure generation and creation of simple yet customizable workflows to analyze defect calculations. The package provides the capability to compute widely accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band-filling correction for shallow defects. Using Si, ZnO, and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology. We believe that a robust automated tool like this will enable the materials-by-design community to assess the impact of point defects on materials performance.
Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V
2010-12-15
We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal mol^(-1) for a test set of 120 organic molecules).
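The correction itself is a two-parameter linear fit; a sketch with invented training values standing in for the 185-molecule data set:

```python
import numpy as np

# Hypothetical training data: experimental and uncorrected 3D-RISM hydration
# free energies (kcal/mol) and calculated partial molar volumes (A^3).
dg_exp  = np.array([-6.3, -4.1, -2.0, -8.7, -3.5])
dg_rism = np.array([ 4.2,  6.0,  7.9,  1.5,  6.6])
pmv     = np.array([120.0, 115.0, 112.0, 131.0, 114.0])

# Fit the linear empirical correction: dg_exp - dg_rism ~= a * PMV + b.
a, b = np.polyfit(pmv, dg_exp - dg_rism, deg=1)

def corrected_dg(dg_rism_value, pmv_value):
    """Apply the PMV-based linear correction to a raw 3D-RISM energy."""
    return dg_rism_value + a * pmv_value + b

print(corrected_dg(5.0, 118.0))
```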
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
The LPM effect in sequential bremsstrahlung: dimensional regularization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin
The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.
Well-tempered metadynamics converges asymptotically.
Dama, James F; Parrinello, Michele; Voth, Gregory A
2014-06-20
Metadynamics is a versatile and capable enhanced sampling method for the computational study of soft matter materials and biomolecular systems. However, over a decade of application and several attempts to give this adaptive umbrella sampling method a firm theoretical grounding prove that a rigorous convergence analysis is elusive. This Letter describes such an analysis, demonstrating that well-tempered metadynamics converges to the final state it was designed to reach and, therefore, that the simple formulas currently used to interpret the final converged state of tempered metadynamics are correct and exact. The results do not rely on any assumption that the collective variable dynamics are effectively Brownian or any idealizations of the hill deposition function; instead, they suggest new, more permissive criteria for the method to be well behaved. The results apply to tempered metadynamics with or without adaptive Gaussians or boundary corrections and whether the bias is stored approximately on a grid or exactly.
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
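Recovering the decay rate from such data is a log-linear fit; a sketch with invented PL values (not the paper's numerical results):

```python
import numpy as np

# Hypothetical splitting-method estimates of the logical error probability
# P_L at one physical error rate p, for several code distances d.
d  = np.array([4.0, 8.0, 12.0, 16.0, 20.0])
pl = np.array([3e-3, 1e-4, 4e-6, 1.5e-7, 6e-9])

# Expected scaling P_L ~ exp(-alpha(p) * d): a log-linear fit recovers
# the decay rate alpha(p) that enters the paper's fitting formula.
slope, intercept = np.polyfit(d, np.log(pl), deg=1)
alpha = -slope
print(f"alpha(p) = {alpha:.3f}")
```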
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi
2016-01-28
Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type offers an easy zooming function, a simple optical layout, and so on. However, the image is blurred by the diffraction of X-rays, degrading the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. Earlier studies confirmed this method to be effective. Nevertheless, it was not sufficient for some images showing too low contrast, especially at high magnification. In the present study, we tried a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving images that could not be recovered by the iteration procedure alone.
An interactive Doppler velocity dealiasing scheme
NASA Astrophysics Data System (ADS)
Pan, Jiawen; Chen, Qi; Wei, Ming; Gao, Li
2009-10-01
Doppler weather radars are capable of providing high-quality wind data at high spatial and temporal resolution. However, operational application of Doppler velocity data from weather radars is hampered by the infamous limitation of velocity ambiguity. This paper reviews the cause of velocity folding and presents the unfolding method recently implemented for the CINRAD systems. A simple interactive method for velocity data, which corrects aliasing errors, has been developed and tested. It is concluded that the algorithm is very efficient and produces high-quality velocity data.
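The core unfolding step that any dealiasing scheme builds on can be sketched in a few lines; this generic version (not the paper's specific interactive algorithm) shifts an aliased velocity by whole Nyquist co-intervals toward a trusted reference value:

```python
import numpy as np

def dealias(v_measured, v_reference, v_nyquist):
    """Unfold an aliased Doppler velocity by adding the multiple of the
    Nyquist co-interval (2*v_nyquist) that brings it closest to a trusted
    reference (e.g., a neighbouring, already-corrected gate)."""
    k = np.round((v_reference - v_measured) / (2.0 * v_nyquist))
    return v_measured + 2.0 * v_nyquist * k

# Example: with a 16 m/s Nyquist velocity, a true wind of +27 m/s is
# recorded as 27 - 32 = -5 m/s; a reference of +25 m/s recovers it.
print(dealias(-5.0, 25.0, 16.0))   # -> 27.0
```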
WE-G-18A-02: Calibration-Free Combined KV/MV Short Scan CBCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, M; Loo, B; Bazalova, M
Purpose: To combine orthogonal kilovoltage (kV) and megavoltage (MV) projection data for short-scan cone-beam CT to reduce imaging time on current radiation treatment systems, using a calibration-free gain correction method. Methods: Combining two orthogonal projection data sets from the kV and MV imaging hardware can reduce the scan angle to as small as 110° (90° + fan) such that the total scan time is ∼18 seconds, or within a breath hold. To obtain an accurate reconstruction, the MV projection data are first corrected using linear regression on the redundant data from the start and end of the sinogram, and the combined data are then reconstructed using the FDK method. To correct for the different kV/MV changes of attenuation coefficients between soft tissue and bone, forward projections of the segmented bone and soft tissue from the first reconstruction in the redundant region are added to the linear regression model. The MV data are corrected again using the additional information from the segmented image and combined with the kV data for a second FDK reconstruction. We simulated polychromatic 120 kVp (conventional a-Si EPID with CsI) and 2.5 MVp (prototype high-DQE MV detector) projection data with Poisson noise using the XCAT phantom. The gain correction and combined kV/MV short-scan reconstructions were tested with head and thorax cases, and simple contrast-to-noise ratio measurements were made in a low-contrast pattern in the head. Results: The FDK reconstruction using the proposed gain correction method can effectively reduce artifacts caused by the differences of attenuation coefficients in the kV/MV data. The CNRs of the short scans for kV, MV, and kV/MV are 5.0, 2.6, and 3.4, respectively. The proposed gain correction method also works with truncated projections. Conclusion: A novel gain correction and reconstruction method was developed to generate short-scan CBCT from orthogonal kV/MV projections. This work is supported by NIH Grant 5R01CA138426-05.
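The first-pass gain correction described in the Methods reduces to a linear fit on the redundant rays; a sketch with invented line integrals:

```python
import numpy as np

# Hypothetical line integrals for rays measured by both chains in the
# redundant region at the start and end of the short scan.
mv_redundant = np.array([0.80, 1.10, 1.45, 1.90, 2.30])
kv_redundant = np.array([1.05, 1.50, 2.02, 2.70, 3.31])

# First-pass gain correction: fit kv ~ a*mv + b on the redundant rays,
# then map the whole MV sinogram onto the kV scale before FDK.
a, b = np.polyfit(mv_redundant, kv_redundant, deg=1)
mv_sinogram = np.array([0.90, 1.20, 2.10])     # stand-in MV projection data
mv_corrected = a * mv_sinogram + b
print(a, b, mv_corrected)
```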
Surface dose measurements with commonly used detectors: a consistent thickness correction method
Higgins, Patrick
2015-01-01
The purpose of this study was to review application of a consistent correction method for solid-state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films, and to compare measured surface dose using an extrapolation ionization chamber (PTW 30‐360) with that from other parallel plate chambers: RMI‐449 (Attix), Capintec PS‐033, PTW 30‐329 (Markus), and Memorial. Measurements of surface dose for 6 MV photons with parallel plate chambers were used to establish a baseline. cTLD, OSL, EDR2, and EBT2 measurements were corrected using a method which involved irradiation of three-dosimeter stacks, followed by linear extrapolation of the individual dosimeter measurements to zero thickness. We determined the magnitude of correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (−0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid‐state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three‐detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth‐dose curves and is not recommended for these types of measurements. PACS number: 87.56.‐v PMID:26699319
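The three-detector extrapolation reduces to a linear fit; a sketch with invented depths and readings (not the study's measurements):

```python
import numpy as np

# Hypothetical readings from a stack of three dosimeters of one type:
# effective depth of each layer's measurement point (mm water-equivalent)
# and the dose each layer reports (cGy).
depth = np.array([0.3, 0.9, 1.5])
dose  = np.array([52.0, 60.0, 68.0])

# Linear extrapolation of the three readings to zero thickness yields the
# surface-dose estimate underlying the thickness correction.
slope, intercept = np.polyfit(depth, dose, deg=1)
surface_dose = intercept                      # dose at zero thickness
correction = surface_dose / dose[0]           # factor for single-layer readings
print(surface_dose, correction)
```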
Susong, D.; Marks, D.; Garen, D.
1999-01-01
Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.
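One of the named corrections, an elevational gradient, can be sketched directly; the lapse rate below is the standard-atmosphere value (an assumption, since the paper derives gradients from station data), and the DEM is invented:

```python
import numpy as np

def distribute_temperature(dem, station_temp_c, station_elev_m,
                           lapse_rate_c_per_km=-6.5):
    """Spread a station air temperature across a digital elevation model
    with a simple elevational gradient correction."""
    return station_temp_c + lapse_rate_c_per_km * (dem - station_elev_m) / 1000.0

dem = np.array([[1500.0, 1800.0], [2100.0, 2400.0]])   # elevations (m)
print(distribute_temperature(dem, station_temp_c=5.0, station_elev_m=1500.0))
```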
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
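The sampling scheme itself is a one-liner; a sketch with invented stage coordinates:

```python
import numpy as np

rng = np.random.default_rng()

def systematic_random_sites(extent, step):
    """1-D systematic random sampling: one random start inside the first
    interval, then equidistant sites across the structure of interest."""
    start = rng.uniform(0, step)
    return np.arange(start, extent, step)

print(systematic_random_sites(extent=1000.0, step=150.0))  # stage positions (um)
```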
Deterministic diffusion in flower-shaped billiards.
Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre
2002-08-01
We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model or on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
A simple and robust method for artifacts correction on X-ray microtomography images
NASA Astrophysics Data System (ADS)
Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Irina, Bayuk; Kirill, Gerke
2017-04-01
X-ray microtomography images of rock material often exhibit distortions arising from X-ray attenuation, beam hardening, and irregular distribution of liquid/solid phases. Further distortions can arise from subsequent image processing and from stitching images from different measurements. Beam hardening is a well-known and well-studied distortion that is relatively easy to describe, fit, and correct using a number of equations. This is not the case, however, for other grey-scale intensity distortions: shading caused by irregular distribution of liquid phases, incorrect scanner operation or parameter choices, and the numerous artefacts from mathematical reconstruction from projections, including stitching of separate scans, cannot be described by a single mathematical model. To correct grey-scale intensities on large 3D images, we developed a software package. The traditional method for removing beam hardening [1] has been modified in order to find the center of distortion. The main contribution of this work is the development of a method for arbitrary image correction. This method is based on fitting the distortion with Bezier curves using the image histogram. The distortion along the image is represented by a number of Bezier curves and one base line that characterizes the natural distribution of gray values along the image; all of these curves are set manually by the operator. We have tested our approaches on different X-ray microtomography images of porous media. The arbitrary correction removes all principal distortions. After correction, the images were binarized and pore networks were subsequently extracted. An even distribution of pore-network elements along the image was the criterion used to verify the proposed grey-scale correction technique. [1] Iassonov, P. and Tuller, M., 2010. Application of segmentation for correction of intensity bias in X-ray computed tomography images. Vadose Zone Journal, 9(1), pp.187-191.
Wu, Hsiao-Ming; Kreissl, Michael C; Schelbert, Heinrich R; Ladno, Waldemar; Prins, Mayumi; Shoghi-Jadid, Kooresh; Chatziioannou, Arion; Phelps, Michael E; Huang, Sung-Cheng
2005-10-01
In this study, we developed a simple and robust semi-automatic method to measure the right ventricle to left ventricle (RV-to-LV) transit time (TT) in mice using 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET). The accuracy of the method was first evaluated using a 4-D digital dynamic mouse phantom. The RV-to-LV TTs of twenty-nine mouse studies were measured using the new method and compared to those obtained from the conventional ROI-drawing method. The results showed that the new method correctly separated the different structures (e.g., RV, lung, and LV) in the PET images and generated a corresponding time-activity curve (TAC) for each structure. The RV-to-LV TTs obtained from the new method and the ROI method were not statistically different (P = 0.20; r = 0.76). We expect that this fast and robust method is applicable to studying the pathophysiology of cardiovascular diseases using small animal models such as rats and mice.
TG (Tri-Goniometry) technique: Obtaining perfect angles in Z-plasty planning with a simple ruler.
Görgülü, Tahsin; Olgun, Abdulkerim
2016-03-01
The Z-plasty is used frequently in hand surgery to release post-burn scar contractures. Correct angles and equal limb lengths are the most important parts of the Z-plasty technique. A simple ruler is enough for equalizing limb lengths, but a goniometer is needed for accurate and equal angles. Classically, angles of 30°, 45°, 60°, 75°, and 90° are used; these angles are important when elongating a contracture line or decreasing tension. Our method uses only trigonometry coefficients and a simple ruler, which is easily obtained and sterilized, enabling surgeons to perform all types of Z-plasty perfectly without measuring angles using a goniometer. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
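The trigonometric idea can be illustrated with the chord of an isosceles triangle: for equal limbs of length L meeting at angle theta, the distance between the limb tips is 2*L*sin(theta/2), so marking that distance with a ruler fixes the angle. This is our geometric illustration of the principle; the paper tabulates its own coefficients:

```python
import math

def chord_length_cm(limb_length_cm, angle_deg):
    """Distance between the free ends of two equal Z-plasty limbs meeting
    at the desired angle: c = 2 * L * sin(theta / 2). Marking this chord
    with a ruler sets the angle without a goniometer."""
    return 2.0 * limb_length_cm * math.sin(math.radians(angle_deg) / 2.0)

for angle in (30, 45, 60, 75, 90):
    print(angle, round(chord_length_cm(2.0, angle), 2))   # 2 cm limbs
```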
Estimation of descriptive statistics for multiply censored water quality data
Helsel, Dennis R.; Cohn, Timothy A.
1988-01-01
This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present; here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than the simple substitution procedures now commonly in use. Therefore, simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution, and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, “less than” values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
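A minimal sketch of the maximum likelihood approach for multiply left-censored data, assuming a lognormal parent distribution: detected values contribute density terms and nondetects contribute CDF terms at their detection limits (the concentrations below are invented):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical water-quality data (ug/L): detected values plus "less than"
# observations at two different detection limits.
detects = np.log(np.array([1.2, 0.8, 2.5, 4.1, 0.6, 1.9]))
limits  = np.log(np.array([0.5, 0.5, 1.0]))   # censored: <0.5, <0.5, <1.0

def neg_loglik(theta):
    """Lognormal likelihood: log-density for detects, log-CDF for nondetects."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -(norm.logpdf(detects, mu, sigma).sum()
             + norm.logcdf(limits, mu, sigma).sum())

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(np.exp(mu_hat + 0.5 * sigma_hat**2))   # estimated population mean
```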
Huh, Yeamin; Smith, David E.; Feng, Meihau Rose
2014-01-01
Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life-span potential (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
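A minimal sketch of the simple allometry step (a log-log fit of clearance against body weight), with a comment indicating how the MLP/BRW corrections modify it; the species values are placeholders, not the study's data:

```python
import numpy as np

# Hypothetical preclinical data: body weight (kg) and clearance (mL/min)
# for rat, monkey, and dog.
bw = np.array([0.25, 5.0, 10.0])
cl = np.array([2.0, 26.0, 45.0])

# Simple allometry CL = a * BW^b, fitted on log-log axes.
b, log_a = np.polyfit(np.log(bw), np.log(cl), deg=1)
a = np.exp(log_a)
cl_human = a * 70.0 ** b
print(f"simple allometry: CL_human ~ {cl_human:.0f} mL/min")

# The MLP/BRW variants fit CL*MLP (or CL*BRW) against BW instead, then
# divide the human prediction by the human MLP or BRW value.
```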
Zhang, Yuzhong; Zhang, Yan
2016-07-01
In an optical measurement and analysis system based on a CCD, optical vignetting and natural vignetting produce photometric distortion, in which the intensity falls off away from the image center; this severely affects subsequent processing and measurement precision. To deal with this problem, an easy and straightforward method for photometric distortion correction is presented in this paper. The method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to obtain the model parameters by minimizing an eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile of the photometric distortion from only a single common image captured by the CCD-based optical system, with no need for a uniform-luminance area source as a standard reference or for prior knowledge of the relevant optical and geometric parameters. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated in this paper. Moreover, the application example of temperature field correction for casting billets also demonstrates the effectiveness of the method. The experimental results show that the proposed method achieves a maximum absolute error for vignetting estimation of 0.0765 and a relative error for vignetting estimation from different background images of 3.86%.
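A simplified sketch of the single-image idea: parameterize the falloff, divide it out, and minimize the eight-neighborhood gray gradient. We use a radial polynomial and SciPy's differential evolution as stand-ins for the paper's own polynomial model and particle swarm optimizer:

```python
import numpy as np
from scipy.optimize import differential_evolution

def gain(shape, coeffs):
    """Radial vignetting model g(r) = 1 + a1*r^2 + a2*r^4, with r the
    normalized distance from the image center (an assumed simple form)."""
    a1, a2 = coeffs
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    r2 = ((x - w / 2.0)**2 + (y - h / 2.0)**2) / ((w / 2.0)**2 + (h / 2.0)**2)
    return 1.0 + a1 * r2 + a2 * r2**2

def eight_neighborhood_gradient(img):
    """Mean absolute gray difference over horizontal, vertical, and both
    diagonal neighbors: the quantity minimized to estimate the falloff."""
    return (np.abs(np.diff(img, axis=1)).mean()
            + np.abs(np.diff(img, axis=0)).mean()
            + np.abs(img[1:, 1:] - img[:-1, :-1]).mean()
            + np.abs(img[1:, :-1] - img[:-1, 1:]).mean())

def cost(coeffs, img):
    corrected = img / np.clip(gain(img.shape, coeffs), 0.1, None)
    return eight_neighborhood_gradient(corrected / corrected.mean())

# Synthetic flat scene darkened toward the corners (ground truth a1=-0.4).
rng = np.random.default_rng(0)
scene = 100.0 + rng.normal(0.0, 0.5, (64, 64))
observed = scene * gain(scene.shape, (-0.4, 0.05))

result = differential_evolution(cost, bounds=[(-0.9, 0.9)] * 2,
                                args=(observed,), seed=1)
print(result.x)   # should land near the true coefficients (-0.4, 0.05)
```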
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong
2018-04-01
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.
NASA Astrophysics Data System (ADS)
Wang, Wenhui; Cao, Changyong; Ignatov, Alex; Li, Zhenglong; Wang, Likun; Zhang, Bin; Blonski, Slawomir; Li, Jun
2017-09-01
The Suomi NPP VIIRS thermal emissive bands (TEB) have been performing very well since data became available on January 20, 2012. The longwave infrared bands at 11 and 12 µm (M15 and M16) are primarily used for sea surface temperature (SST) retrievals. A long-standing anomaly has been observed during the quarterly warm-up-cool-down (WUCD) events: during such events the daytime SST product becomes anomalous, with a warm bias appearing as a spike in the SST time series on the order of 0.2 K. A previous study (Cao et al. 2017) suggested that the VIIRS TEB calibration anomaly during WUCD is due to a flawed theoretical assumption in the calibration equation and proposed an Ltrace method to address the issue. This paper complements that study and presents the operational implementation and validation of the Ltrace method for M15 and M16. The Ltrace method applies a bias correction during WUCD only. It requires a simple code change and a one-time calibration parameter look-up-table update. The method was evaluated using collocated CrIS observations and the SST algorithm. Our results indicate that the method can effectively reduce the WUCD calibration anomaly in M15, with a residual bias of 0.02 K after the correction. It works less effectively for M16, with a residual bias of 0.04 K. The Ltrace method may over-correct WUCD calibration biases, especially for M16; however, the residual WUCD biases are small in both bands. Evaluation results using the SST algorithm show that the method can effectively remove the SST anomaly during WUCD events.
Celler, Anna; Piwowarska-Bilska, Hanna; Shcherbinin, Sergey; Uribe, Carlos; Mikolajczak, Renata; Birkenfeld, Bozena
2014-01-01
Dead-time (DT) effects rarely cause problems in diagnostic single-photon emission computed tomography (SPECT) studies; however, in post-radionuclide-therapy imaging, DT can be substantial. Therefore, corrections may be necessary if quantitative images are used in image-based dosimetry or for evaluation of therapy outcomes. This task is particularly challenging if low-energy collimators are used. Our goal was to design a simple method to determine the dead-time correction factor (DTCF) without the need for phantom experiments and complex calculations. Planar and SPECT/CT scans of a water phantom containing a 70 ml bottle filled with lutetium-177 (Lu) were acquired over 60 days. Two small Lu markers were used in all scans. The DTCF based on the ratio of observed to true count rates measured over the entire spectrum and using photopeak primary photons only was estimated for phantom (DT present) and marker (no DT) scans. In addition, variations in counts in SPECT projections (potentially caused by varying bremsstrahlung and scatter) were investigated. For count rates that were about two-fold higher than typically seen in post-therapy Lu scans, the maximum DTCF reached a level of about 17%. The DTCF values determined directly from the phantom experiments using the total energy spectrum and photopeak counts only were equal to 13 and 16%, respectively. They were closely matched by those from the proposed marker-based method, which uses only two energy windows and measures photopeak primary photons (15-17%). A simple, marker-based method allowing for determination of the DTCF in high-activity Lu imaging studies has been proposed and validated using phantom experiments.
Improving Precision, Maintaining Accuracy, and Reducing Acquisition Time for Trace Elements in EPMA
NASA Astrophysics Data System (ADS)
Donovan, J.; Singer, J.; Armstrong, J. T.
2016-12-01
Trace element precision in electron probe microanalysis (EPMA) is limited by intrinsic random variation in the x-ray continuum. Traditionally we characterize background intensity by measuring on either side of the emission line and interpolating the intensity underneath the peak to obtain the net intensity. Alternatively, we can measure the background intensity at the on-peak spectrometer position using a number of standard materials that do not contain the element of interest. This so-called mean atomic number (MAN) background calibration (Donovan et al., 2016) uses a set of standard measurements, covering an appropriate range of average atomic number, to iteratively estimate the continuum intensity for the unknown composition (and hence average atomic number). We will demonstrate that, at least for materials with a relatively simple matrix such as SiO2, TiO2, ZrSiO4, etc., where one may obtain a matrix-matched standard for use in the so-called "blank correction", we can obtain trace element accuracy comparable to traditional off-peak methods, and with improved precision, in about half the time. Reference: Donovan, Singer and Armstrong, "A New EPMA Method for Fast Trace Element Analysis in Simple Matrices", American Mineralogist, v101, p1839-1853, 2016. Figure 1 (caption): Uranium concentration line profiles from quantitative x-ray maps (20 keV, 100 nA, 5 µm beam size and 4000 msec per pixel), for both off-peak and MAN background methods without (a) and with (b) the blank correction applied. Precision is significantly improved compared with traditional off-peak measurements while, in this case, the blank correction provides a small but discernible improvement in accuracy.
Simple technique to measure toric intraocular lens alignment and stability using a smartphone.
Teichman, Joshua C; Baig, Kashif; Ahmed, Iqbal Ike K
2014-12-01
Toric intraocular lenses (IOLs) are commonly implanted to correct corneal astigmatism at the time of cataract surgery. Their use requires preoperative calculation of the axis of implantation and postoperative measurement to determine whether the IOL has been implanted with the proper orientation. Moreover, toric IOL alignment stability over time is important for the patient and for the longitudinal evaluation of toric IOLs. We present a simple, inexpensive, and precise method to measure the toric IOL axis using a camera-enabled cellular phone (iPhone 5S) and computer software (ImageJ). Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Response Functions for Neutron Skyshine Analyses
NASA Astrophysics Data System (ADS)
Gui, Ah Auu
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the integral line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three-parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for source-to-detector ranges up to 2450 m and, for the first time, give dose-equivalent responses, which are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and CBRFs are then compared with MCNP results and the results of previous studies.
Study of the vortex conditions of wings with large sweepback by extrapolation of the Jones method
NASA Technical Reports Server (NTRS)
Hirsch, P.
1980-01-01
The pockets of separation originating at the leading edges are surrounded by vortex sheets. Their configuration and intensity were determined by four conditions using the Jones approximation, itself corrected by a simple rule. Pressure fields and loads were computed for different cases and compared with test results (pure deltas, swallowtails, truncations, strakes, canards, fuselage).
Morel, Yann G.; Favoretto, Fabio
2017-01-01
All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model, and they yield poor results over very bright or very dark bottoms. In contrast, we set out (i) to use only the relative radiance data in the image along with published data and several new assumptions, (ii) to specify and operate the simplified radiative transfer equation (RTE), and (iii) to retrieve both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction, (ii) conversion of relative radiance into calibrated reflectance, or (iii) existing depth sounding data to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a “near-nadir” view, exhibit a homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint. PMID:28754028
Lee, Young Han; Park, Eun Hae; Suh, Jin-Suck
2015-01-01
The objectives are: 1) to introduce a simple and efficient method for extracting region of interest (ROI) values from a Picture Archiving and Communication System (PACS) viewer using optical character recognition (OCR) software and a macro program, and 2) to evaluate the accuracy of this method with a PACS workstation. This module was designed to extract the ROI values on the images of the PACS and was created as a development tool using open-source OCR software and an open-source macro program. The principal processes are as follows: (1) capture a region of the ROI values as a graphic file for OCR, (2) recognize the text from the captured image by OCR software, (3) perform error correction, (4) extract the values, including area, average, standard deviation, maximum, and minimum values, from the text, (5) reformat the values into temporary strings with tabs, and (6) paste the temporary strings into the spreadsheet. This principal process was repeated for the number of ROIs. The accuracy of this module was evaluated on 1040 recognitions from 280 randomly selected ROIs of the magnetic resonance images. The input times of ROIs were compared between the conventional manual method and this extraction module-assisted input method. The module for extracting ROI values operated successfully using the OCR and macro programs. The values of the area, average, standard deviation, maximum, and minimum could be recognized and error-corrected with the AutoHotkey-coded module. The average input times using the conventional method and the proposed module-assisted method were 34.97 seconds and 7.87 seconds, respectively. A simple and efficient method for ROI value extraction was developed with open-source OCR and a macro program. Accurate input of various numbers from ROIs can be achieved with this module. The proposed module could be applied to the next generation of PACS or to existing PACS that have not yet been upgraded. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
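For illustration, the capture-OCR-parse loop might look like the following Python sketch, with pytesseract standing in for the module's OCR component; the screen coordinates, overlay labels, and regular expression are assumptions, not the module's actual configuration:

```python
import csv
import re
from PIL import ImageGrab      # screen capture
import pytesseract             # open-source OCR (Tesseract wrapper)

def extract_roi_values(bbox):
    """Capture the screen region where the PACS viewer draws the ROI
    statistics, OCR it, and parse the numbers."""
    image = ImageGrab.grab(bbox=bbox)          # (left, top, right, bottom)
    text = pytesseract.image_to_string(image)
    # Common OCR confusions in numeric overlays: O->0, l/I->1, S->5.
    text = text.translate(str.maketrans({"O": "0", "o": "0",
                                         "l": "1", "I": "1", "S": "5"}))
    values = {}
    for key in ("Area", "Mean", "SD", "Max", "Min"):   # assumed overlay labels
        m = re.search(rf"{key}\s*[:=]?\s*(-?\d+(?:\.\d+)?)", text)
        if m:
            values[key] = float(m.group(1))
    return values

# Append one row per ROI to a spreadsheet-compatible CSV file.
row = extract_roi_values((100, 200, 400, 320))
with open("roi_values.csv", "a", newline="") as f:
    csv.writer(f).writerow([row.get(k) for k in ("Area", "Mean", "SD", "Max", "Min")])
```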
A simple but fully nonlocal correction to the random phase approximation
NASA Astrophysics Data System (ADS)
Ruzsinszky, Adrienn; Perdew, John P.; Csonka, Gábor I.
2011-03-01
The random phase approximation (RPA) stands on the top rung of the ladder of ground-state density functional approximations. The simple or direct RPA has been found to predict accurately many isoelectronic energy differences. A nonempirical local or semilocal correction to this direct RPA leaves isoelectronic energy differences almost unchanged, while improving total energies, ionization energies, etc., but fails to correct the RPA underestimation of molecular atomization energies. Direct RPA and its semilocal correction may miss part of the middle-range multicenter nonlocality of the correlation energy in a molecule. Here we propose a fully nonlocal, hybrid-functional-like addition to the semilocal correction. The added full nonlocality is important in molecules, but not in atoms. Under uniform-density scaling, this fully nonlocal correction scales like the second-order-exchange contribution to the correlation energy, an important part of the correction to direct RPA, and like the semilocal correction itself. For the atomization energies of ten molecules, and with the help of one fit parameter, it performs much better than the elaborate second-order screened exchange correction.
2011-01-01
Background: Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods: Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results: Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions: Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science. PMID:21816107
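A toy sketch of the additive scoring idea: a tariff measures how far each symptom's endorsement rate for a cause sits from the median across causes, in robust units, and a verbal autopsy is scored by summing the tariffs of its endorsed symptoms. The data are invented, and we use the median absolute deviation where the published method uses the interquartile range:

```python
import numpy as np

# Toy training data: rows = validated verbal autopsies, columns = binary
# signs/symptoms; `causes` gives the gold-standard cause for each row.
X = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], float)
causes = np.array([0, 0, 1, 1])
n_causes = 2

# Tariff for cause j, symptom i: distance of the symptom's endorsement
# rate in cause j from the median rate across causes, in MAD units.
rates = np.array([X[causes == j].mean(axis=0) for j in range(n_causes)])
median = np.median(rates, axis=0)
mad = np.median(np.abs(rates - median), axis=0)
tariffs = (rates - median) / np.where(mad > 0, mad, 1.0)

def predict_cause(symptoms):
    """Sum the tariffs of endorsed symptoms per cause; assign the argmax."""
    scores = tariffs @ np.asarray(symptoms, float)
    return int(np.argmax(scores))

print(predict_cause([1, 1, 0, 0]))   # -> 0
```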
Ferreira, Adriano Martison; Bonesso, Mariana Fávero; Mondelli, Alessandro Lia; da Cunha, Maria de Lourdes Ribeiro de Souza
2012-12-01
The emergence of Staphylococcus spp. not only as human pathogens but also as reservoirs of antibiotic resistance determinants requires the development of methods for their rapid and reliable identification in medically important samples. The aim of this study was to compare three phenotypic methods for the identification of Staphylococcus spp. isolated from patients with urinary tract infection, using PCR of the 16S-23S interspace region generating molecular weight patterns (ITR-PCR) as the reference. All 57 S. saprophyticus isolates studied were correctly identified using only the novobiocin disk. A rate of agreement of 98.0% was obtained for the simplified battery of biochemical tests relative to ITR-PCR, whereas the Vitek I system and the novobiocin disk showed 81.2% and 89.1% agreement, respectively. No other novobiocin-resistant non-S. saprophyticus strain was identified. Thus, the novobiocin disk is a feasible alternative for the identification of S. saprophyticus in urine samples in laboratories with limited resources. ITR-PCR and the simplified battery of biochemical tests were more reliable than the commercial systems currently available. This study confirms that automated systems are still unable to correctly differentiate coagulase-negative staphylococci (CoNS) species and that simple, reliable and inexpensive methods can be used for routine identification. Copyright © 2012 Elsevier B.V. All rights reserved.
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most of those methods, such as maximum likelihood, subspace, and observer/Kalman identification, require extensive offline processing and are not suitable for real-time processing. One technique which is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
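The abstract does not give the sequential tuning equations, so the sketch below uses a generic innovation-based update as a stand-in: whenever the sample innovation power deviates from its theoretical covariance (a symptom of non-white residuals), the measurement-noise estimate is nudged toward consistency. All dynamics and noise values are hypothetical:

```python
import numpy as np

# Scalar Kalman filter with a generic residual-based tuning step
# (a stand-in for the paper's sequential tuning equations).
rng = np.random.default_rng(0)
F, H, Q = 1.0, 1.0, 1e-4
R_true, R_est = 0.25, 1.0          # filter starts with the wrong R
x_est, P, x_true = 0.0, 1.0, 0.0
alpha = 0.01                       # learning rate for the tuning update

for k in range(2000):
    x_true = F * x_true + rng.normal(scale=np.sqrt(Q))
    z = H * x_true + rng.normal(scale=np.sqrt(R_true))
    x_est, P = F * x_est, F * P * F + Q              # predict
    nu = z - H * x_est                               # innovation (residual)
    S = H * P * H + R_est                            # theoretical covariance
    R_est = max(1e-6, R_est + alpha * (nu**2 - S))   # residual tuning step
    K = P * H / S                                    # update
    x_est, P = x_est + K * nu, (1 - K * H) * P

print(f"estimated R = {R_est:.3f} (true R = {R_true})")
```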
Adaptive correction of ensemble forecasts
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane
2017-04-01
Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble: one correction equation is retrieved and applied to all members, while the parameters of this regression equation are estimated from the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that applies a running bias correction to the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction, and often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
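The simple baseline mentioned above, a running bias correction of the ensemble mean, is easy to make concrete; the forecasts and observations below are synthetic, and the exponential-forgetting form is one plausible choice, not necessarily the one used in the study:

```python
import numpy as np

def run_bias_correction(ens_forecasts, observations, decay=0.1):
    """ens_forecasts: (T, M) array of M-member ensembles over T days.
    Corrects each day's ensemble mean with the current bias estimate,
    then updates the estimate once the observation becomes available."""
    bias, corrected = 0.0, []
    for ens, obs in zip(ens_forecasts, observations):
        mean = ens.mean()
        corrected.append(mean - bias)
        bias = (1 - decay) * bias + decay * (mean - obs)
    return np.array(corrected)

rng = np.random.default_rng(1)
truth = rng.normal(15, 3, size=365)                       # e.g., 2 m temperature
ens = truth[:, None] + 1.5 + rng.normal(0, 1, (365, 20))  # +1.5 systematic bias
print(np.abs(ens.mean(1) - truth).mean(),
      np.abs(run_bias_correction(ens, truth) - truth).mean())
```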
Fekete, Attila; Komáromi, István
2016-12-07
A proteolytic reaction of papain with a simple peptide model substrate, N-methylacetamide, has been studied. Our aim was twofold: (i) to propose a plausible reaction mechanism with the aid of potential energy surface scans and second geometrical derivatives calculated at the stationary points, and (ii) to investigate the applicability of dispersion-corrected density functional methods in comparison with the popular hybrid generalized gradient approximation (GGA) method (B3LYP) without such a correction in QM/MM calculations for this particular problem. In the resting state of papain, the ion-pair and neutral forms of the Cys-His catalytic dyad have approximately the same energy and are separated by only a small barrier. Zero-point vibrational energy correction shifted this equilibrium slightly to the neutral form. On the other hand, the electrostatic solvation free energy corrections, calculated using the Poisson-Boltzmann method for structures sampled from molecular dynamics simulation trajectories, resulted in a more stable ion-pair form. All methods we applied predicted an acylation process of at least two elementary steps via a zwitterionic tetrahedral intermediate. Using dispersion-corrected DFT methods, the thioester S-C bond formation and the proton transfer from histidine occur in the same elementary step, although not synchronously. The proton transfer lags behind (or at least does not precede) the S-C bond formation. The predicted transition state corresponds mainly to the S-C bond formation while the proton is still on the histidine Nδ atom. In contrast, the B3LYP method with larger basis sets predicts a transition state in which the S-C bond is almost fully formed and which can be featured mainly by the Nδ(histidine) to N(amide) proton transfer. A considerably lower activation energy was predicted (especially by the B3LYP method) for the next elementary step of acyl-enzyme formation, amide bond breaking. Deacylation appeared to be a single elementary step process in all the methods we applied.
Removing ring artefacts from synchrotron radiation-based hard x-ray tomography data
NASA Astrophysics Data System (ADS)
Thalmann, Peter; Bikis, Christos; Schulz, Georg; Paleo, Pierre; Mirone, Alessandro; Rack, Alexander; Siegrist, Stefan; Cörek, Emre; Huwyler, Jörg; Müller, Bert
2017-09-01
In hard X-ray microtomography, ring artefacts regularly originate from incorrectly functioning pixel elements on the detector or from particles and scratches on the scintillator. We show that, owing to the high sensitivity of contemporary beamline setups, further causes that induce inhomogeneities in the impinging wavefront have to be considered. In this study we propose a method to correct the resulting failure of simple flat-field approaches. The main steps of the pipeline are (i) registration of the reference images with the radiographs (projections), (ii) integration of the flat-field corrected projections over the acquisition angle, (iii) high-pass filtering of the integrated projection, and (iv) subtraction of the filtered data from the flat-field corrected projections. The performance of the protocol is tested on data sets acquired at beamline ID19 at the ESRF using single-distance phase tomography.
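The four pipeline steps translate almost directly into code; the sketch below assumes projections of shape (n_angles, n_pixels), treats step (i) as already done (using the mean flat), and uses a Gaussian high-pass whose width is a free parameter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def correct_rings(projections, flats, sigma=20.0):
    flat = flats.mean(axis=0)                 # (i) registration assumed done
    corrected = -np.log(np.clip(projections / flat, 1e-6, None))
    integrated = corrected.mean(axis=0)       # (ii) integrate over angle
    # (iii) high-pass: remove the slowly varying true signal, keeping the
    # sharp per-pixel signature of the rings
    ring_profile = integrated - gaussian_filter1d(integrated, sigma)
    return corrected - ring_profile[None, :]  # (iv) subtract from projections
```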
Mocz, G.
1995-01-01
Fuzzy cluster analysis has been applied to the 20 amino acids by using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues, with an approximately 60% overall success rate for correctly recognized residues in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, apparent average thickness of surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of the prediction results. Hydrogen bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882
Effect of signal intensity and camera quantization on laser speckle contrast analysis
Song, Lipei; Elson, Daniel S.
2012-01-01
Laser speckle contrast analysis (LASCA) is limited to being a qualitative method for the measurement of blood flow and tissue perfusion as it is sensitive to the measurement configuration. The signal intensity is one of the parameters that can affect the contrast values due to the quantization of the signals by the camera and analog-to-digital converter (ADC). In this paper we deduce the theoretical relationship between signal intensity and contrast values based on the probability density function (PDF) of the speckle pattern and simplify it to a rational function. A simple method to correct this contrast error is suggested. The experimental results demonstrate that this relationship can effectively compensate the bias in contrast values induced by the quantized signal intensity and correct for bias induced by signal intensity variations across the field of view. PMID:23304650
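The effect being corrected can be reproduced with a toy simulation: an ideal, fully developed speckle pattern has a negative-exponential intensity PDF and contrast K = sigma/mean = 1, and quantizing it to ADC counts biases the measured contrast, most strongly at low signal intensity. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
speckle = rng.exponential(scale=1.0, size=1_000_000)  # ideal speckle: K = 1

def contrast(img):
    return img.std() / img.mean()

for mean_counts in (2, 8, 64):                   # signal intensity in counts
    quantized = np.floor(speckle * mean_counts)  # camera/ADC quantization
    print(mean_counts, contrast(speckle), contrast(quantized))
```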
Liu, Chenglong; Liu, Jinghong; Song, Yueming; Liang, Huaidan
2017-01-01
This paper provides a system and method for the correction of relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy. Because the angular displacements influence the final accuracy, a measuring system is attached to the platform so that the texture image of the platform base bulkhead can be collected in real time. Through image registration, the displacement vector of the platform relative to its bulkhead can be calculated to further determine the angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacements reduce the coordinate transformation errors and thus improve the localization accuracy. Even this relatively simple method can improve the localization accuracy by 14.3%. PMID:28273845
Spooner, Jennifer; Keen, Jenny; Nayyar, Kalpana; Birkett, Neil; Bond, Nicholas; Bannister, David; Tigue, Natalie; Higazi, Daniel; Kemp, Benjamin; Vaughan, Tristan; Kippen, Alistair; Buchanan, Andrew
2015-07-01
Fabs are an important class of antibody fragment, serving as both research reagents and therapeutic agents. There is a plethora of methods described for their recombinant expression and purification. However, these do not address the issue of excessive light chain production that forms light chain dimers, nor do they describe a universal purification strategy. Light chain dimer impurities and the absence of a universal Fab purification strategy present persistent challenges for biotechnology applications using Fabs, particularly around the need for bespoke purification strategies. This study describes methods to address light chain dimer formation during Fab expression and identifies a novel CH1 affinity resin as a simple and efficient one-step purification for correctly assembled Fab. © 2015 Wiley Periodicals, Inc.
A Synthetic Quadrature Phase Detector/Demodulator for Fourier Transform Spectrometers
NASA Technical Reports Server (NTRS)
Campbell, Joel
2008-01-01
A method is developed to demodulate (velocity correct) Fourier transform spectrometer (FTS) data taken with an analog-to-digital converter that digitizes equally spaced in time. This method makes it possible to use simple, low-cost, high-resolution audio digitizers to record high-quality data without the need for an event timer or quadrature laser hardware, and makes it possible to use a metrology laser of any wavelength. The reduced parts count and simplicity of implementation make it an attractive alternative in space-based applications when compared to previous methods such as the Brault algorithm.
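One common way to build such a synthetic quadrature signal, assumed here for illustration and possibly differing in detail from the paper's algorithm, is the Hilbert transform of the metrology laser fringe: its phase tracks optical path difference (OPD) versus time, so the time-sampled interferogram can be resampled onto an equally spaced OPD grid:

```python
import numpy as np
from scipy.signal import hilbert

def velocity_correct(ir_signal, laser_fringe, samples_out):
    """Resample a time-sampled interferogram onto an even OPD grid,
    assuming the scan moves monotonically in one direction."""
    analytic = hilbert(laser_fringe - laser_fringe.mean())
    phase = np.unwrap(np.angle(analytic))      # proportional to OPD
    opd_grid = np.linspace(phase[0], phase[-1], samples_out)
    return np.interp(opd_grid, phase, ir_signal)
```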
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Chengjun; Markussen, Troels; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk
We study the effect of functional groups (CH3 ×4, OCH3, CH3, Cl, CN, F ×4) on the electronic transport properties of 1,4-benzenediamine molecular junctions using the non-equilibrium Green function method. Exchange and correlation effects are included at various levels of theory, namely density functional theory (DFT), energy level-corrected DFT (DFT+Σ), Hartree-Fock, and the many-body GW approximation. All methods reproduce the expected trends for the energy of the frontier orbitals according to the electron donating or withdrawing character of the substituent group. However, only the GW method predicts the correct ordering of the conductance amongst the molecules. The absolute GW (DFT) conductance is within a factor of two (three) of the experimental values. Correcting the DFT orbital energies by a simple physically motivated scissors operator, Σ, can bring the DFT conductances close to experiments, but does not improve on the relative ordering. We ascribe this to a too strong pinning of the molecular energy levels to the metal Fermi level by DFT, which suppresses the variation in orbital energy with functional group.
Kangas, Michael J; Burks, Raychelle M; Atwater, Jordyn; Lukowicz, Rachel M; Garver, Billy; Holmes, Andrea E
2018-02-01
With the increasing availability of digital imaging devices, colorimetric sensor arrays are rapidly becoming a simple yet effective tool for the identification and quantification of various analytes. Colorimetric arrays utilize colorimetric data from many colorimetric sensors, with the multidimensional nature of the resulting data necessitating the use of chemometric analysis. Herein, an 8-sensor colorimetric array was used to analyze selected acidic and basic samples (0.5-10 M) to determine which chemometric methods are best suited for the classification and quantification of analytes. PCA, HCA, and LDA were used to visualize the data set. All three methods showed well-separated clusters for each of the acid or base analytes and moderate separation between analyte concentrations, indicating that the sensor array can be used to identify and quantify samples. Furthermore, PCA could be used to determine which sensors showed the most effective analyte identification. LDA, KNN, and HQI were used for identification of analyte and concentration. HQI and KNN correctly identified the analytes in all cases, while LDA identified 95 of 96 analytes correctly. Additional studies demonstrated that controlling for solvent and image effects was unnecessary for all chemometric methods utilized in this study.
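The chemometric workflow maps onto standard library calls; the sketch below uses placeholder random data in place of the real 8-sensor RGB responses:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(96, 24))    # placeholder: 96 samples x (8 sensors x RGB)
y = rng.integers(0, 8, size=96)  # placeholder analyte/concentration classes

scores = PCA(n_components=2).fit_transform(X)       # visualization, loadings
lda = LinearDiscriminantAnalysis().fit(X, y)        # supervised classification
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(lda.score(X, y), knn.score(X, y))
```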
Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.
Brette, Romain; Gerstner, Wulfram
2005-11-01
We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model correctly predicts the timing of 96% of the spikes (±2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
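The model itself (now commonly called AdEx) is compact enough to state in full; the sketch below integrates it with forward Euler using typical published parameter values, not the values fitted in the study:

```python
import numpy as np

C, gL, EL = 281.0, 30.0, -70.6     # pF, nS, mV
VT, DT = -50.4, 2.0                # threshold and slope factor, mV
a, tau_w, b = 4.0, 144.0, 80.5     # nS, ms, pA (adaptation parameters)
Vr = -70.6                         # reset potential, mV
dt, V, w, I = 0.1, EL, 0.0, 800.0  # ms, mV, pA, constant input current
spikes = []

for step in range(int(500 / dt)):  # 500 ms of simulated time
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw
    if V > 0.0:                    # spike detected: reset and adapt
        spikes.append(step * dt)
        V, w = Vr, w + b

print(len(spikes), "spikes in 500 ms")
```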
Corrigendum: New Form of Kane's Equations of Motion for Constrained Systems
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Bajodah, Abdulrahman H.; Hodges, Dewey H.; Chen, Ye-Hwa
2007-01-01
A correction to the previously published article "New Form of Kane's Equations of Motion for Constrained Systems" is presented. Misuse of the transformation matrix between time rates of change of the generalized coordinates and generalized speeds (sometimes called motion variables) resulted in a false conclusion concerning the symmetry of the generalized inertia matrix. The generalized inertia matrix (sometimes referred to as the mass matrix) is in fact symmetric and usually positive definite when one forms nonminimal Kane's equations for holonomic or simple nonholonomic systems, systems subject to nonlinear nonholonomic constraints, and holonomic or simple nonholonomic systems subject to impulsive constraints according to Refs. 1, 2, and 3, respectively. The mass matrix is of course symmetric when one forms minimal equations for holonomic or simple nonholonomic systems using Kane's method as set forth in Ref. 4.
Simple and accurate sum rules for highly relativistic systems
NASA Astrophysics Data System (ADS)
Cohen, Scott M.
2005-03-01
In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.
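For reference, the nonrelativistic forms of the two sum rules in question can be written as follows (standard textbook statements, not quoted from the paper):

```latex
% Thomas-Reiche-Kuhn sum rule for an N-electron system:
\sum_{n} f_{n0} = N
% Bethe sum rule at momentum transfer q:
\sum_{n} (E_n - E_0)\,
  \Bigl|\bigl\langle n \bigm| \textstyle\sum_{j=1}^{N} e^{i\mathbf{q}\cdot\mathbf{r}_j} \bigm| 0 \bigr\rangle\Bigr|^{2}
  = \frac{\hbar^{2} q^{2}}{2m}\, N
```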
NASA Astrophysics Data System (ADS)
Niu, Chaojun; Han, Xiang'e.
2015-10-01
Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free space optical communication (FSO). A new adaptive compensation method can be used without a wave-front sensor. The artificial bee colony (ABC) algorithm is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness, and few parameters to set. In this paper, we simulate the application of the improved ABC algorithm to correct the distorted wavefront and prove its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm, and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate, and the intensity fluctuation in different turbulence conditions before and after correction. The results show that the ABC algorithm corrects much faster than the DE algorithm and has better correction capability for strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less so in weak turbulence.
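A compressed sketch of the sensorless-correction loop follows; the fitness function is a hypothetical stand-in for a measured metric such as coupling efficiency, and the employed/onlooker phases of ABC are collapsed into one perturbation step for brevity:

```python
import numpy as np

rng = np.random.default_rng(4)
n_modes, n_food, limit = 10, 20, 15
aberration = rng.normal(0, 0.5, n_modes)       # unknown Zernike coefficients

def fitness(c):
    # Stand-in metric: larger when correction c cancels the aberration
    return 1.0 / (1.0 + np.sum((c + aberration) ** 2))

foods = rng.uniform(-1, 1, (n_food, n_modes))  # candidate corrections
trials = np.zeros(n_food)

for it in range(200):
    for i in range(n_food):                    # employed/onlooker (collapsed)
        j, k = rng.integers(n_modes), rng.integers(n_food)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        if fitness(cand) > fitness(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    stale = trials > limit                     # scout phase: restart stagnant
    foods[stale] = rng.uniform(-1, 1, (int(stale.sum()), n_modes))
    trials[stale] = 0

best = foods[np.argmax([fitness(f) for f in foods])]
print("residual RMS:", np.sqrt(np.mean((best + aberration) ** 2)))
```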
ERIC Educational Resources Information Center
le Clercq, Carlijn M. P.; van der Schroeff, Marc P.; Rispens, Judith E.; Ruytjens, Liesbet; Goedegebure, André; van Ingen, Gijs; Franken, Marie-Christine
2017-01-01
Purpose: The purpose of this research note was to validate a simplified version of the Dutch nonword repetition task (NWR; Rispens & Baker, 2012). The NWR was shortened and scoring was transformed to correct/incorrect nonwords, resulting in the shortened NWR (NWR-S). Method: NWR-S and NWR performance were compared in the previously published…
Scene-based nonuniformity correction technique for infrared focal-plane arrays.
Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong
2009-04-20
A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors, which can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem and both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
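The structure of the correction is a per-detector linear model; since the abstract does not give the exact line-fitting recursion, a stochastic-gradient step stands in for it below, and all variable names are illustrative:

```python
import numpy as np

def update_parameters(scene_est, observed, g, b, lr=0.05):
    """One fitting step: nudge gain g and bias b so that
    g * scene_est + b better matches the observed detector output."""
    err = observed - (g * scene_est + b)
    return g + lr * err * scene_est, b + lr * err

def compensate(observed, g, b):
    """The 'very simple formula': invert the linear detector model."""
    return (observed - b) / g

g, b = update_parameters(1.0, 1.2, 1.0, 0.0)
print(compensate(1.2, g, b))
```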
Super-resolution pupil filtering for visual performance enhancement using adaptive optics
NASA Astrophysics Data System (ADS)
Zhao, Lina; Dai, Yun; Zhao, Junlei; Zhou, Xiaojun
2018-05-01
Ocular aberration correction can significantly improve the visual function of the human eye. However, even under ideal aberration correction conditions, pupil diffraction restricts the resolution of retinal images. Pupil filtering is a simple super-resolution (SR) method that can overcome this diffraction barrier. In this study, a 145-element piezoelectric deformable mirror was used as a pupil phase filter because of its programmability and high fitting accuracy. Continuous phase-only filters were designed based on Zernike polynomial series and fitted through closed-loop adaptive optics. SR results were validated using double-pass point spread function images. Contrast sensitivity was further assessed to verify the SR effect on visual function, and an F-test for nested models was conducted to statistically compare the contrast sensitivity functions (CSFs). The results indicated that the CSFs for the proposed SR filters were significantly higher than for diffraction-limited correction (p < 0.05). As such, the proposed filter design could provide useful guidance for supernormal vision optical correction of the human eye.
"ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"
DOE Office of Scientific and Technical Information (OSTI.GOV)
SANTHI, NANDAKISHORE
We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓq^(m−1)/n). This is an improvement over the proof using the one-point Algebraic-Geometric decoding method given previously. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.
Partial volume correction using cortical surfaces
NASA Astrophysics Data System (ADS)
Blaasvær, Kamille R.; Haubro, Camilla D.; Eskildsen, Simon F.; Borghammer, Per; Otzen, Daniel; Ostergaard, Lasse R.
2010-03-01
Partial volume effect (PVE) in positron emission tomography (PET) leads to inaccurate estimation of regional metabolic activities among neighbouring tissues with different tracer concentrations. This may be one of the main limiting factors in the utilization of PET in clinical practice. Partial volume correction (PVC) methods have been widely studied to address this issue. MRI-based PVC methods are well established [1]. Their performance depends on the quality of the co-registration of the MR and PET datasets, on the correctness of the estimated point-spread function (PSF) of the PET scanner, and largely on the performance of the segmentation method that divides the brain into brain tissue compartments [1, 2]. In the present study a method for PVC is suggested that utilizes cortical surfaces to obtain detailed anatomical information. The objectives are to improve the performance of PVC, to facilitate a study of the relationship between metabolic activity in the cerebral cortex and cortical thickness, and to obtain an improved visualization of PET data. The gray matter metabolic activity after performing PVC was recovered to 99.7-99.8% of the true activity when testing on simple simulated data with different PSFs, and to 97.9-100% when testing on simulated brain PET data at different cortical thicknesses. When studying the relationship between metabolic activities and anatomical structures, it was shown on simulated brain PET data that it is important to correct for PVE in order to obtain the true relationship.
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failure across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243), in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates a complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Varnai, Tamas; Levy, Robert
2016-01-01
A transition zone exists between cloudy skies and clear sky: clouds scatter solar radiation into clear-sky regions, so that from a satellite perspective clouds appear to enhance the radiation nearby. We seek a simple method to estimate this enhancement, since it is so computationally expensive to account for all three-dimensional (3-D) scattering processes. In previous studies, we developed a simple two-layer model (2LM) that estimated the radiation scattered via cloud-molecular interactions. Here we have developed a new model to account for cloud-surface interaction (CSI). We test the models by comparing to calculations provided by full 3-D radiative transfer simulations of realistic cloud scenes. For these scenes, the Moderate Resolution Imaging Spectroradiometer (MODIS)-like radiance fields were computed from the Spherical Harmonic Discrete Ordinate Method (SHDOM), based on a large number of cumulus fields simulated by the University of California, Los Angeles (UCLA) large eddy simulation (LES) model. We find that the original 2LM model that estimates cloud-air molecule interactions accounts for 64% of the total reflectance enhancement, and the new model (2LM+CSI) that also includes cloud-surface interactions accounts for nearly 80%. We discuss the possibility of accounting for cloud-aerosol radiative interactions in 3-D cloud-induced reflectance enhancement, which may explain the remaining 20% of the enhancement. Because these are simple models, these corrections can be applied to global satellite observations (e.g., MODIS) and help to reduce biases in aerosol and other clear-sky retrievals.
A Booklet on Participants’ Rights to Improve Consent for Clinical Research: A Randomized Trial
Benatar, Jocelyne R.; Mortimer, John; Stretton, Matthew; Stewart, Ralph A. H.
2012-01-01
Objective Information on the rights of subjects in clinical trials has become increasingly complex and difficult to understand. This study evaluates whether a simple booklet, relevant to all research studies, improves the understanding of rights needed for subjects to provide informed consent. Methods 21 currently used informed consent forms (ICFs) from international clinical trials were separated into information related to the specific research study and general information on participants' rights. A booklet designed to provide information on participants' rights in simple language was developed to replace this information in current ICFs. Readability of each component of the ICFs and of the booklet was then assessed using the Flesch-Kincaid reading ease score (FK). To further evaluate the booklet, 282 hospital inpatients were randomised to one of three ways of presenting research information: a standard ICF, the booklet combined with a short ICF, or the booklet combined with a simplified ICF. Comprehension of information related to the research proposal and to participants' rights was assessed by questionnaire. Results Information related to participants' rights contributed an average of 44% of the words in standard ICFs and was harder to read than information describing the clinical trial (FK 25 vs. 41, respectively; p = 0.0003). The booklet reduced the number of words and improved the FK score from 25 to 42. The simplified ICF had a slightly higher FK score than the standard ICF (50 vs. 42). Comprehension was better for the booklet with the short ICF (62% correct; 95% confidence interval (CI) 56 to 67) or with the simplified ICF (62% correct; CI 58 to 68) than for the standard ICF (52% correct; CI 47 to 57), p = 0.009. This was due to better understanding of questions on rights (62% vs. 49% correct, p = 0.0008). Comprehension of study-related information was similar for the simplified and standard ICFs (60% vs. 64% correct, p = 0.68). Conclusions A booklet provides a simple, consistent approach to providing information on participants' rights that is relevant to all research studies, and improves comprehension among patients who typically participate in clinical trials. PMID:23094034
Atrial corrected Fourier amplitude ratios for the scintigraphic quantitation of valvar regurgitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dae, M.W.; Botvinick, E.H.; O'Connell, J.W.
1984-01-01
Current scintigraphic methods commonly overestimate the degree of valvar regurgitation (VR) and displace normal ratios from unity, owing largely to RA contamination of the RV region of interest in the "best septal" LAO projection. The authors developed a method to correct for this overlap using the Fourier amplitude (AMP) ratio. Amplitude is first "weighted" for phase angle using a vectorial sum, to improve assessment in patients (PTS) with contraction abnormalities. RV AMP is then corrected for underestimation by adding the product of mean LAO RA AMP times the difference between RA areas in the anterior and LAO projections to the calculated RV AMP. In 15 PTS with aortic or mitral VR, corrected AMP ratios (CAR) were compared to ratios assessed angiographically; in 12 PTS without VR they were compared to uncorrected AMP ratios (UAR), to stroke volume ratios (SVR) from SV images (SVI), and to ED and ES count data (CT). CAR interobserver agreement was high (R=.97). When VR PTS ranked by CAR as mild (1.3-1.8), moderate (1.9-2.5), or severe (>2.5) were compared to similar catheterization-based ranks, there were no significant differences using the Mann-Whitney test for ordinal data. CAR is a simple, objective and reproducible method of quantitating VR. It reduces the error in those without VR, allows sensitive identification of mild VR, and maintains accurate assessment of severe VR.
Cosmetic reconstruction of temporal defect following pterional [corrected] craniotomy.
Badie, B
1996-04-01
Depression of the temporal fossa that is often caused by atrophy of the temporalis muscle or superficial temporal fat pad may be an unavoidable defect following pterional craniotomy. Various techniques have been previously described to correct this disfiguring defect. Most techniques, however, require drilling holes into the cranium or the synthetic grafts for attachment of the temporalis muscle. A simple method is described by which a temporal fossa depression is repaired with methylmethacrylate bone cement and a new superior temporal line is created for attachment of the temporalis muscle without the need to drill suture holes into the acrylic or the cranium. The technique described has been used on several patients with excellent cosmetic outcome.
Barnett, Patrick D; Strange, K Alicia; Angel, S Michael
2017-06-01
This work describes a method of applying the Fourier transform to the two-dimensional Fizeau fringe patterns generated by the spatial heterodyne Raman spectrometer (SHRS), a dispersive interferometer, to correct the effects of certain types of optical alignment errors. In the SHRS, certain types of optical misalignments result in wavelength-dependent and wavelength-independent rotations of the fringe pattern on the detector. We describe here a simple correction technique that can be used in post-processing, by applying the Fourier transform in a row-by-row manner. This allows the user to be more forgiving of fringe alignment and allows for a reduction in the mechanical complexity of the SHRS.
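The row-by-row use of the Fourier transform can be sketched as follows; co-adding magnitude spectra discards the per-row fringe phase, which is exactly what a rotation of the fringe pattern perturbs. The simulated fringe image and the small per-row phase shift standing in for a rotation are hypothetical:

```python
import numpy as np

def rowwise_spectrum(fringe_image):
    rows = fringe_image - fringe_image.mean(axis=1, keepdims=True)
    rows_fft = np.fft.rfft(rows, axis=1)    # 1D transform of each row
    return np.abs(rows_fft).mean(axis=0)    # phase-insensitive co-addition

x = np.arange(512)
rows = [1 + np.cos(2 * np.pi * 0.08 * x + 0.01 * r) for r in range(256)]
spectrum = rowwise_spectrum(np.array(rows))
print(spectrum.argmax())   # fringe frequency bin survives the "rotation"
```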
Reflective array modeling for reflective and directional SAW transducers.
Morgan, D P
1998-01-01
This paper presents a new approximate method for analyzing reflective SAW transducers, with much of the convenience of the coupled-mode (COM) method but with better accuracy. Transduction accuracy is obtained by incorporating the accurate electrostatic solution, which gives, for example, correct harmonics and allows for electrode width variation in a simple manner. Results are shown for a single-electrode transducer, a Natural SPUDT, and a DART SPUDT, each using theoretically derived parameters. In contrast to the COM method, the reflective array model (RAM) can give accurate results for short or withdrawal-weighted transducers and for a wide analysis bandwidth.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-01-01
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406
Efficient Multi-Atlas Registration using an Intermediate Template Image
Dewey, Blake E.; Carass, Aaron; Blitz, Ari M.; Prince, Jerry L.
2017-01-01
Multi-atlas label fusion is an accurate but time-consuming method of labeling the human brain. Using an intermediate image as a registration target can allow researchers to reduce time constraints by storing the deformations required of the atlas images. In this paper, we investigate the effect of registration through an intermediate template image on multi-atlas label fusion and propose a novel registration technique to counteract the negative effects of through-template registration. We show that overall computation time can be decreased dramatically with minimal impact on final label accuracy and time can be exchanged for improved results in a predictable manner. We see almost complete recovery of Dice similarity over a simple through-template registration using the corrected method and still maintain a 3–4 times speed increase. Further, we evaluate the effectiveness of this method on brains of patients with normal-pressure hydrocephalus, where abnormal brain shape presents labeling difficulties, specifically the ventricular labels. Our correction method creates substantially better ventricular labeling than traditional methods and maintains the speed increase seen in healthy subjects. PMID:28943702
Barker, Charles E.; Dallegge, Todd A.; Clark, Arthur C.
2002-01-01
We have updated a simple polyvinyl chloride plastic canister design by adding internal headspace temperature measurement, and redesigned it so it is made with mostly off-the-shelf components for ease of construction. Using self-closing quick connects, this basic canister is mated to a zero-head manometer to make a simple coalbed methane desorption system that is easily transported in small aircraft to remote localities. This equipment is used to gather timed measurements of pressure, volume and temperature data that are corrected to standard pressure and temperature (STP) and graphically analyzed using an Excel™-based spreadsheet. Used together, these elements form an effective, practical canister desorption method.
Christensen, Chloe L; Choy, Francis Y M
2017-02-24
Ease of design, relatively low cost and a multitude of gene-altering capabilities have all led to the adoption of the sophisticated and yet simple gene editing system: clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9). The CRISPR/Cas9 system holds promise for the correction of deleterious mutations by taking advantage of the homology directed repair pathway and by supplying a correction template to the affected patient's cells. Currently, this technique is being applied in vitro in human-induced pluripotent stem cells (iPSCs) to correct a variety of severe genetic diseases, but has not as of yet been used in iPSCs derived from patients affected with a lysosomal storage disease (LSD). If adopted into clinical practice, corrected iPSCs derived from cells that originate from the patient themselves could be used for therapeutic amelioration of LSD symptoms without the risks associated with allogeneic stem cell transplantation. CRISPR/Cas9 editing in a patient's cells would overcome the costly, lifelong process associated with currently available treatment methods, including enzyme replacement and substrate reduction therapies. In this review, the overall utility of the CRISPR/Cas9 gene editing technique for treatment of genetic diseases, the potential for the treatment of LSDs and methods currently employed to increase the efficiency of this re-engineered biological system will be discussed.
Delgado Reyes, Lourdes M; Bohache, Kevin; Wijeakumar, Sobanawartiny; Spencer, John P
2018-04-01
Motion artifacts are often a significant component of the measured signal in functional near-infrared spectroscopy (fNIRS) experiments. A variety of methods have been proposed to address this issue, including principal components analysis (PCA), correlation-based signal improvement (CBSI), wavelet filtering, and spline interpolation. The efficacy of these techniques has been compared using simulated data; however, our understanding of how these techniques fare when dealing with task-based cognitive data is limited. Brigadoi et al. compared motion correction techniques in a sample of adult data measured during a simple cognitive task; wavelet filtering showed the most promise as an optimal technique for motion correction. Given that fNIRS is often used with infants and young children, it is critical to evaluate the effectiveness of motion correction techniques directly with data from these age groups. This study addresses that problem by evaluating motion correction algorithms implemented in HomER2. The efficacy of each technique was compared quantitatively using objective metrics related to the physiological properties of the hemodynamic response. Results showed that targeted PCA (tPCA), spline, and CBSI retained a higher number of trials. These techniques also performed well in direct head-to-head comparisons with the other approaches using quantitative metrics. The CBSI method corrected many of the artifacts present in our data; however, this approach sometimes produced unstable hemodynamic response functions (HRFs). The targeted PCA and spline methods proved to be the most robust, performing well across all comparison metrics. When compared head to head, tPCA consistently outperformed spline. We conclude, therefore, that tPCA is an effective technique for correcting motion artifacts in fNIRS data from young children.
Translation, Cultural Adaptation and Validation of the Simple Shoulder Test to Spanish
Arcuri, Francisco; Barclay, Fernando; Nacul, Ivan
2015-01-01
Background: The validation of widely used scales facilitates the comparison across international patient samples. Objective: The objective was to translate, culturally adapt and validate the Simple Shoulder Test into Argentinian Spanish. Methods: The Simple Shoulder Test was translated from English into Argentinian Spanish by two independent translators, translated back into English, and evaluated for accuracy by an expert committee to correct possible discrepancies. It was then administered to 50 patients with different shoulder conditions. Psychometric properties were analyzed, including internal consistency, measured with Cronbach's alpha, and test-retest reliability at 15 days, measured with the intraclass correlation coefficient. Results: The internal consistency was good, with a Cronbach's alpha of 0.808. The test-retest reliability, as measured by the intraclass correlation coefficient (ICC), was 0.835, evaluated as excellent. Conclusion: The Simple Shoulder Test translation and its cultural adaptation to Argentinian Spanish demonstrated adequate internal reliability and validity, ultimately allowing for its use in comparisons with international patient samples.
Harmonics analysis of the ITER poloidal field converter based on a piecewise method
NASA Astrophysics Data System (ADS)
Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU
2017-12-01
Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
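The piecewise idea replaces one integral over a complicated waveform with closed-form coefficients of simple segments. For a piecewise-constant segment of value A on [t1, t2] within period T, the n-th complex Fourier coefficient is A(e^{-inωt1} − e^{-inωt2})/(inωT) with ω = 2π/T; summing over segments gives the harmonic. The six-pulse-like segment list below is illustrative, not the ITER current shape:

```python
import numpy as np

def harmonic(segments, n, T=2 * np.pi):
    """segments: list of (value, t_start, t_end); returns complex c_n."""
    w = 2 * np.pi / T
    c = 0.0 + 0.0j
    for A, t1, t2 in segments:
        c += A * (np.exp(-1j*n*w*t1) - np.exp(-1j*n*w*t2)) / (1j * n * w * T)
    return c

segs = [(1.0, np.pi/6, 5*np.pi/6), (-1.0, 7*np.pi/6, 11*np.pi/6)]
for n in (1, 5, 7, 11, 13):            # characteristic 6k +/- 1 harmonics
    print(n, abs(2 * harmonic(segs, n)))
```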
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which differentiates it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
NASA Technical Reports Server (NTRS)
1973-01-01
An analysis of Very Low Frequency propagation in the atmosphere in the 10-14 kHz range leads to a discussion of some of the more significant causes of phase perturbation. The method of generating sky-wave corrections to predict the Omega phase is discussed. Composite Omega is considered as a means of lane identification and of reducing Omega navigation error. A simple technique for generating trapezoidal model (T-model) phase prediction is presented and compared with the Navy predictions and actual phase measurements. The T-model prediction analysis illustrates the ability to account for the major phase shift created by the diurnal effects on the lower ionosphere. An analysis of the Navy sky-wave correction table is used to provide information about spatial and temporal correlation of phase correction relative to the differential mode of operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
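The two ends of the complexity range can be sketched side by side; the bias-removal-plus-weighting step below is a WAM-like illustration on synthetic data, not the DMIP implementation:

```python
import numpy as np

def sma(preds):                      # Simple Multi-model Average
    return preds.mean(axis=0)

def weighted_bias_corrected(preds, preds_train, obs_train):
    bias = preds_train.mean(axis=1) - obs_train.mean()   # per-model bias
    unbiased = preds_train - bias[:, None]
    # least-squares weights of observations on the unbiased predictions
    w, *_ = np.linalg.lstsq(unbiased.T, obs_train, rcond=None)
    return w @ (preds - bias[:, None])

rng = np.random.default_rng(5)
obs = rng.normal(size=200)
models = np.stack([obs + 0.5 + 0.3 * rng.normal(size=200),
                   0.8 * obs + 0.3 * rng.normal(size=200)])
print(np.abs(sma(models) - obs).mean(),
      np.abs(weighted_bias_corrected(models, models, obs) - obs).mean())
```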
Fast sweeping method for the factored eikonal equation
NASA Astrophysics Data System (ADS)
Fomel, Sergey; Luo, Songting; Zhao, Hongkai
2009-09-01
We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution directly computed from the original eikonal equation, especially for point sources.
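The Gauss-Seidel sweeping at the core of the method is shown below for the plain 2D eikonal equation |∇T| = s(x) with the standard upwind local solver; in the factored variant the same sweeps are applied to the correction factor τ after writing T = T0·τ. Grid size and slowness are illustrative:

```python
import numpy as np

def fast_sweep(s, h, src, n_sweeps=8):
    n, m = s.shape
    T = np.full((n, m), np.inf)
    T[src] = 0.0
    orders = [(range(n), range(m)), (range(n-1, -1, -1), range(m)),
              (range(n), range(m-1, -1, -1)),
              (range(n-1, -1, -1), range(m-1, -1, -1))]
    for _ in range(n_sweeps):
        for I, J in orders:                  # four sweep orderings
            for i in I:
                for j in J:
                    if (i, j) == src:
                        continue
                    a = min(T[max(i-1, 0), j], T[min(i+1, n-1), j])
                    b = min(T[i, max(j-1, 0)], T[i, min(j+1, m-1)])
                    f = s[i, j] * h
                    if abs(a - b) >= f:      # causal upwind update
                        t = min(a, b) + f
                    else:
                        t = 0.5 * (a + b + np.sqrt(2*f*f - (a - b)**2))
                    T[i, j] = min(T[i, j], t)
    return T

T = fast_sweep(np.ones((51, 51)), 0.02, (25, 25))
print(T[25, 45])   # ~0.40: distance from the source for unit slowness
```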
NASA Astrophysics Data System (ADS)
Medjoubi, K.; Dawiec, A.
2017-12-01
A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of hybrid photon counting (HPC) pixel detectors. This approach is based on the photon transfer curve (PTC), corresponding to the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity, and the remnant errors of flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The quantitative evaluation of the FPN, described by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to be usable for evaluating which setting yields the best image quality from a commercial or R&D detector.
Statistical testing of association between menstruation and migraine.
Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G
2015-02-01
To repair and refine a previously proposed method for statistical analysis of the association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to act through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are needed, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling) and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To our best knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operating characteristic (ROC) curve analysis. Quick-reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing the association between menstruation and migraine by statistical means. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
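The core computation, a one-sided mid-p Fisher exact test on a 2 × 2 diary-derived table, is short; the counts below are hypothetical:

```python
from scipy.stats import hypergeom

def fisher_midp(a, b, c, d):
    """One-sided (greater) mid-p value for the table [[a, b], [c, d]]:
    P(X > a) + 0.5 * P(X = a), with X hypergeometric."""
    N, K, n = a + b + c + d, a + b, a + c
    return hypergeom.sf(a, N, K, n) + 0.5 * hypergeom.pmf(a, N, K, n)

# e.g., attacks inside vs. outside the perimenstrual window
print(fisher_midp(9, 21, 5, 55))
```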
On some variational acceleration techniques and related methods for local refinement
NASA Astrophysics Data System (ADS)
Teigland, Rune
1998-10-01
This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966), later generalized to multiple levels and known as the additive correction multigrid method (B.R. Hutchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986)), is similar to the FAC method of McCormick and Thomas (S.F. McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement, and suggestions for improving the performance of the method are given.
Scene-based nonuniformity correction algorithm based on interframe registration.
Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao
2011-06-01
In this paper, we present a simple and effective scene-based nonuniformity correction (NUC) method for infrared focal plane arrays based on interframe registration. The method estimates the global translation between two adjacent frames and minimizes the mean square error between the two properly registered images, so that any two detectors observing the same scene point produce the same output value. In this way, the accumulation of registration error is avoided and the NUC can be achieved. The advantages of the proposed algorithm lie in its low computational complexity, low storage requirements, and ability to capture temporal drifts in the nonuniformity parameters. The performance of the proposed technique is thoroughly studied with infrared image sequences with simulated nonuniformity and with infrared imagery exhibiting real nonuniformity. It shows significantly fast and reliable fixed-pattern noise reduction and obtains an effective frame-by-frame adaptive estimation of each detector's gain and offset.
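A minimal sketch of the two ingredients described above, assuming integer global shifts and ignoring edge wrap-around: phase correlation for the interframe translation, and an LMS-style update that pushes co-located detector outputs toward agreement. The learning rate and offset-only correction (no gain update) are simplifications of the full algorithm.

```python
import numpy as np

def shift_between(a, b):
    """Integer (dy, dx) such that b is approximately np.roll(a, (dy, dx))."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # phase correlation surface
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    ny, nx = a.shape
    return dy - ny * (dy > ny // 2), dx - nx * (dx > nx // 2)

def update_offsets(prev, cur, offsets, lr=0.05):
    """One LMS step: register the previous corrected frame onto the current
    one; any remaining pixelwise mismatch is treated as offset error."""
    s = shift_between(prev, cur)
    moved = np.roll(prev - offsets, s, axis=(0, 1))  # registered corrected frame
    offsets += lr * ((cur - offsets) - moved)        # equalize co-located outputs
    return offsets
```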
3D Space Radiation Transport in a Shielded ICRU Tissue Sphere
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.
Optimizing the rapid measurement of detection thresholds in infants
Jones, Pete R.; Kalwarowsky, Sarah; Braddick, Oliver J.; Atkinson, Janette; Nardini, Marko
2015-01-01
Accurate measures of perceptual threshold are difficult to obtain in infants. In a clinical context, the challenges are particularly acute because the methods must yield meaningful results quickly and within a single individual. The present work considers how best to maximize speed, accuracy, and reliability when testing infants behaviorally and suggests some simple principles for improving test efficiency. Monte Carlo simulations, together with empirical (visual acuity) data from 65 infants, are used to demonstrate how psychophysical methods developed with adults can produce misleading results when applied to infants. The statistical properties of an effective clinical infant test are characterized, and based on these, it is shown that (a) a reduced (false-positive) guessing rate can greatly increase test efficiency, (b) the ideal threshold to target is often below 50% correct, and (c) simply taking the max correct response can often provide the best measure of an infant's perceptual sensitivity. PMID:26237298
Generic distortion model for metrology under optical microscopes
NASA Astrophysics Data System (ADS)
Liu, Xingjian; Li, Zhongwei; Zhong, Kai; Chao, YuhJin; Miraldo, Pedro; Shi, Yusheng
2018-04-01
For metrology under optical microscopes, lens distortion is the dominant source of error. Previous distortion models and correction methods mostly rely on parametric distortion models, which require a priori knowledge of the microscope's lens system. However, because of the numerous optical elements in a microscope, the distortion can hardly be represented by a simple parametric model. In this paper, a generic distortion model considering both symmetric and asymmetric distortions is developed. The model is obtained by using radial basis functions (RBFs) to interpolate the radius and distortion values of symmetric distortions (image coordinates and distortion rays for asymmetric distortions). An accurate and easy-to-implement distortion correction method is presented. With the proposed approach, quantitative measurement with better accuracy can be achieved, such as in Digital Image Correlation for deformation measurement under an optical microscope. The proposed technique is verified by both synthetic and real data experiments.
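The nonparametric interpolation step can be pictured with SciPy's RBFInterpolator, shown below on fabricated calibration-grid data; the kernel choice, smoothing value, and synthetic distortion are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical calibration data: observed grid points and their ideal positions
observed = np.random.rand(200, 2) * 1000          # distorted image coords (px)
true_pos = observed + np.sin(observed / 150.0)    # stand-in for ideal coords

# one interpolator per displacement component, thin-plate-spline kernel
undistort = RBFInterpolator(observed, true_pos - observed,
                            kernel='thin_plate_spline', smoothing=1e-3)

query = np.array([[512.0, 384.0]])
corrected = query + undistort(query)              # distortion-corrected point
print(corrected)
```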
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong
2016-05-01
With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature-identification-related diagnosis. Therefore, a quantitative and simple distortion evaluation method is imperative for both the endoscopic industry and medical device regulatory agencies. However, no such method is available yet. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data based on complex mathematical models, which makes them difficult to understand. Some commonly used distortion evaluation methods, such as the picture height distortion (DPH) or radial distortion (DRAD), are either too simple to accurately describe the distortion or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion. Based on this method, we also developed ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has clear physical meaning across the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.
Loucas, Bradford D.; Shuryak, Igor; Cornforth, Michael N.
2016-01-01
Whole-chromosome painting (WCP) typically involves the fluorescent staining of a small number of chromosomes. Consequently, it is capable of detecting only a fraction of exchanges that occur among the full complement of chromosomes in a genome. Mathematical corrections are commonly applied to WCP data in order to extrapolate the frequency of exchanges occurring in the entire genome [whole-genome equivalency (WGE)]. However, the reliability of WCP to WGE extrapolations depends on underlying assumptions whose conditions are seldom met in actual experimental situations, in particular the presumed absence of complex exchanges. Using multi-fluor fluorescence in situ hybridization (mFISH), we analyzed the induction of simple exchanges produced by graded doses of 137Cs gamma rays (0–4 Gy), and also 1.1 GeV 56Fe ions (0–1.5 Gy). In order to represent cytogenetic damage as it would have appeared to the observer following standard three-color WCP, all mFISH information pertaining to exchanges that did not specifically involve chromosomes 1, 2, or 4 was ignored. This allowed us to reconstruct dose–responses for three-color apparently simple (AS) exchanges. Using extrapolation methods similar to those derived elsewhere, these were expressed in terms of WGE for comparison to mFISH data. Based on AS events, the extrapolated frequencies systematically overestimated those actually observed by mFISH. For gamma rays, these errors were practically independent of dose. When constrained to a relatively narrow range of doses, the WGE corrections applied to both 56Fe and gamma rays predicted genome-equivalent damage with a level of accuracy likely sufficient for most applications. However, the apparent accuracy associated with WCP to WGE corrections is both fortuitous and misleading. This is because (in normal practice) such corrections can only be applied to AS exchanges, which are known to include complex aberrations in the form of pseudosimple exchanges. When WCP to WGE corrections are applied to true simple exchanges, the results are less than satisfactory, leading to extrapolated values that underestimate the true WGE response by unacceptably large margins. Likely explanations for these results are discussed, as well as their implications for radiation protection. Thus, in seeming contradiction to the notion that complex aberrations should be avoided altogether in WGE corrections, and in violation of the assumptions upon which these corrections are based, their inadvertent inclusion in three-color WCP data is actually required in order for them to yield even marginally acceptable results. PMID:27014627
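For context, the classical painted-fraction extrapolation that underlies such WCP-to-WGE corrections (the Lucas et al. 1992 formula) can be written in a few lines; the painted fraction below is an assumed round number, and the paper's actual correction formulas are not reproduced here.

```python
def wcp_to_wge(f_painted, f_p):
    """Extrapolate an exchange frequency observed with painted fraction f_p of
    the genome to a whole-genome-equivalent frequency (Lucas et al. 1992)."""
    return f_painted / (2.05 * f_p * (1.0 - f_p))

# assumed: chromosomes 1, 2 and 4 jointly cover roughly 20% of the genome
print(wcp_to_wge(0.05, 0.20))   # ~0.15 genome-equivalent exchanges per cell
```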
Janneck, C
1980-04-01
A new operative procedure is described for bat ear abnormality resulting from isolated hypertrophy of the concha or from ventriposition of the whole ear. The procedure is described in detail and its advantages are stressed. It is a short operation which does not leave a visible scar and achieves reliable results. The post-operative findings in 11 patients are reported.
A hybrid multigroup neutron-pattern model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
In this paper, we use the general approach to construct a multigroup hybrid model for the neutron pattern. The equations are given together with a reasonably economical and simple iterative method of solving them. The algorithm can be used to calculate the pattern and its functionals, as well as to correct the constants from experimental data and to adapt the constant libraries used by engineering programs by reference to precision codes.
A simple method of obtaining concentration depth-profiles from X-ray diffraction
NASA Technical Reports Server (NTRS)
Wiedemann, K. E.; Unnam, J.
1984-01-01
The construction of composition profiles from X-ray intensity bands was investigated. The intensity band-to-composition profile transformation utilizes a solution which can be easily evaluated. The technique can be applied to thin films and thick specimens for which the variation of lattice parameter, linear absorption coefficient, and reflectivity with composition is known. A deconvolution scheme with corrections for the instrumental broadening and the Kα doublet is discussed.
Image-based spectral distortion correction for photon-counting x-ray detectors
Ding, Huanjun; Molloi, Sabee
2012-01-01
Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
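The calibration idea of fitting a nonlinear count correlation and then inverting it can be sketched as below; the pileup-like response function, its parameters, and the synthetic data are assumptions for illustration only, not the published model.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def response(true_counts, a, b):
    """Assumed pileup-like distortion: recorded counts sag at high rates."""
    return a * true_counts * np.exp(-b * true_counts)

true = np.linspace(1e3, 5e5, 40)                    # simulated incident counts
meas = response(true, 0.95, 1.2e-6) * (1 + 0.01 * np.random.randn(40))

popt, _ = curve_fit(response, true, meas, p0=[1.0, 1e-6])

# correction step: numerically invert the fitted response for one measurement
corrected = brentq(lambda tc: response(tc, *popt) - meas[20], 1.0, 8e5)
print(true[20], corrected)                          # should roughly agree
```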
Infrared fixed pattern noise reduction method based on the Shearlet Transform
NASA Astrophysics Data System (ADS)
Rong, Shenghui; Zhou, Huixin; Zhao, Dong; Cheng, Kuanhong; Qian, Kun; Qin, Hanlin
2018-06-01
Non-uniformity correction (NUC) is an effective way to reduce fixed pattern noise (FPN) and improve infrared image quality. The temporal high-pass NUC method is a practical NUC method because of its simple implementation. However, traditional temporal high-pass NUC methods rely heavily on scene motion and suffer from image ghosting and blurring. Thus, this paper proposes an improved NUC method based on the Shearlet Transform (ST). First, the raw infrared image is decomposed into multiscale and multi-orientation subbands by the ST; the FPN component exists mainly in certain high-frequency subbands. Then, the high-frequency subbands are processed by a temporal filter to extract the FPN, exploiting its low temporal frequency. In addition, each subband has a confidence parameter that determines the degree of FPN, estimated adaptively from the variance of the subband. Finally, NUC is achieved by subtracting the estimated FPN component from the original subbands, and the corrected infrared image is obtained by the inverse ST. The performance of the proposed method is evaluated thoroughly with real and synthetic infrared image sequences. Experimental results indicate that the proposed method can substantially reduce FPN, yielding lower roughness and RMSE.
NASA Astrophysics Data System (ADS)
Messica, A.
2016-10-01
The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, and easy to implement, approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands, and it naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and is also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
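To make the moment-matching starting point concrete, here is the plain Fenton-Wilkinson step for independent summands, checked against Monte Carlo; the paper's modified matching and asymptotic-series correction are not reproduced.

```python
import numpy as np

def fenton_wilkinson(w, mu, sigma):
    """Match mean/variance of sum(w_i * LN(mu_i, sigma_i^2)) to one lognormal."""
    w, mu, sigma = map(np.asarray, (w, mu, sigma))
    m = np.sum(w * np.exp(mu + sigma**2 / 2))                            # E[S]
    v = np.sum(w**2 * (np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2))  # Var[S]
    s2 = np.log1p(v / m**2)
    return np.log(m) - s2 / 2, np.sqrt(s2)    # (mu_S, sigma_S)

# sanity check against Monte Carlo for ten equal summands
rng = np.random.default_rng(0)
draws = np.exp(rng.normal(0.0, 0.5, (100000, 10))).sum(axis=1)
mu_s, sig_s = fenton_wilkinson(np.ones(10), np.zeros(10), 0.5 * np.ones(10))
print(draws.mean(), np.exp(mu_s + sig_s**2 / 2))   # means should agree
```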
3D visualization of two-phase flow in the micro-tube by a simple but effective method
NASA Astrophysics Data System (ADS)
Fu, X.; Zhang, P.; Hu, H.; Huang, C. J.; Huang, Y.; Wang, R. Z.
2009-08-01
The present study provides a simple but effective method for 3D visualization of two-phase flow in a micro-tube. An isosceles right-angle prism combined with a mirror located at a 45° bevel to the prism is employed to synchronously obtain the front and side views of the flow patterns with a single camera, where the locations of the prism and the micro-tube required for clear imaging satisfy a fixed relationship that is specified in the present study. The optical design was validated by demanding visualization work in the cryogenic temperature range. The image deformation due to refraction and the geometrical configuration of the test section is quantitatively investigated. It is calculated that the image is enlarged by about 20% in inner diameter compared to the real object, which is validated by the experimental results. Meanwhile, the image deformation with a rectangular optical correction box added outside the circular tube is comparatively investigated; in that configuration, the image is calculated to be reduced by about 20% in inner diameter compared to the real object. The 3D reconstruction process based on the two views is conducted in three steps, which shows that the 3D visualization method can easily be applied to two-phase flow research in micro-scale channels and improves the measurement accuracy of important two-phase flow parameters such as void fraction and the spatial distribution of bubbles.
A simplified method for active-site titration of lipases immobilised on hydrophobic supports.
Nalder, Tim D; Kurtovic, Ivan; Barrow, Colin J; Marshall, Susan N
2018-06-01
The aim of this work was to develop a simple and accurate protocol to measure the functional active site concentration of lipases immobilised on highly hydrophobic supports. We used the potent lipase inhibitor methyl 4-methylumbelliferyl hexylphosphonate to titrate the active sites of Candida rugosa lipase (CrL) bound to three highly hydrophobic supports: octadecyl methacrylate (C18), divinylbenzene crosslinked methacrylate (DVB) and styrene. The method uses correction curves to take into account the binding of the fluorophore (4-methylumbelliferone, 4-MU) by the support materials. We showed that the uptake of the detection agent by the three supports is not linear relative to the weight of the resin, and that the uptake occurs in an equilibrium that is independent of the total fluorophore concentration. Furthermore, the percentage of bound fluorophore varied among the supports, with 50 mg of C18 and styrene resins binding approximately 64 and 94%, respectively. When the uptake of 4-MU was calculated and corrected for, the total 4-MU released via inhibition (i.e. the concentration of functional lipase active sites) could be determined via a linear relationship between immobilised lipase weight and total inhibition. It was found that the functional active site concentration of immobilised CrL varied greatly among different hydrophobic supports, with 56% for C18, compared with 14% for DVB. The described method is a simple and robust approach to measuring functional active site concentration in immobilised lipase samples. Copyright © 2018 Elsevier Inc. All rights reserved.
Using computational modeling of river flow with remotely sensed data to infer channel bathymetry
Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.
2012-01-01
As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum, yielding equations that can be solved for depth given known values of vertically averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation, and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade depth estimates, and the lost accuracy cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
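The inversion idea can be illustrated in one dimension: solve the steady momentum balance for depth given velocity and water-surface elevation. The drag-coefficient closure and the synthetic fields below are assumptions; the paper works with the full two-dimensional equations.

```python
# 1D sketch: invert u*du/dx = -g*deta/dx - cf*u**2/h for the depth h.
import numpy as np

g, cf = 9.81, 3e-3                       # gravity; assumed drag coefficient
x = np.linspace(0.0, 1000.0, 201)
u = 1.0 + 0.1 * np.sin(2 * np.pi * x / 500.0)   # synthetic velocity field (m/s)
eta = -2e-4 * x                                 # synthetic water-surface slope

dudx = np.gradient(u, x)
detadx = np.gradient(eta, x)

# inverted depth; valid where friction balances the remaining accelerations
h = cf * u**2 / -(g * detadx + u * dudx)
print(h[:5])                                    # plausible depths, order 1-6 m
```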
Incorporating the gas analyzer response time in gas exchange computations.
Mitchell, R R
1979-11-01
A simple method for including the gas analyzer response time in the breath-by-breath computation of gas exchange rates is described. The method uses a difference equation form of a model for the gas analyzer in the computation of oxygen uptake and carbon dioxide production and avoids a numerical differentiation required to correct the gas fraction wave forms. The effect of not accounting for analyzer response time is shown to be a 20% underestimation in gas exchange rate. The present method accurately measures gas exchange rate, is relatively insensitive to measurement errors in the analyzer time constant, and does not significantly increase the computation time.
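One plausible reading of folding the analyzer model into the computation, rather than differentiating the gas signal, is to pass the flow waveform through the same first-order difference equation before multiplying; everything below (signals, time constant, and this specific interpretation) is an assumption for illustration, not the paper's exact algorithm.

```python
import numpy as np

dt, tau = 0.01, 0.25                    # sample period, analyzer time constant (s)
t = np.arange(0.0, 4.0, dt)
flow = np.maximum(np.sin(2 * np.pi * t / 4.0), 0.0)   # inspired flow, one cycle
fo2_true = 0.16 + 0.03 * np.cos(2 * np.pi * t / 4.0)  # true O2 fraction

def analyzer(x):
    """First-order analyzer lag: y[n] = y[n-1] + (dt/tau) * (x[n] - y[n-1])."""
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = y[n - 1] + (dt / tau) * (x[n] - y[n - 1])
    return y

fo2_meas = analyzer(fo2_true)                     # what the slow analyzer reports
naive = np.sum(flow * fo2_meas) * dt              # lag ignored
matched = np.sum(analyzer(flow) * fo2_meas) * dt  # same lag applied to flow
exact = np.sum(flow * fo2_true) * dt
print(exact, naive, matched)                      # 'matched' lands near 'exact'
```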
A method of solving tilt illumination for multiple distance phase retrieval
NASA Astrophysics Data System (ADS)
Guo, Cheng; Li, Qiang; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun
2018-07-01
Multiple distance phase retrieval is a technique that uses a series of intensity patterns to reconstruct a complex-valued image of an object. However, tilt illumination originating from the off-axis displacement of the incident light significantly impairs its imaging quality. To eliminate this effect, we use cross-correlation calibration to estimate the oblique angle of the incident light and a Fourier-based strategy to correct the tilted illumination. Compared to other methods, binary and biological objects are both stably reconstructed in simulation and experiment. This work provides a simple but beneficial method to solve the problem of tilt illumination for lens-free multi-distance systems.
Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.
2016-01-01
To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691
Floristic composition and across-track reflectance gradient in Landsat images over Amazonian forests
NASA Astrophysics Data System (ADS)
Muro, Javier; doninck, Jasper Van; Tuomisto, Hanna; Higgins, Mark A.; Moulatlet, Gabriel M.; Ruokolainen, Kalle
2016-09-01
Remotely sensed image interpretation or classification of tropical forests can be severely hampered by the effects of the bidirectional reflection distribution function (BRDF). Even for narrow swath sensors like Landsat TM/ETM+, the influence of reflectance anisotropy can be sufficiently strong to introduce a cross-track reflectance gradient. If the BRDF could be assumed to be linear for the limited swath of Landsat, it would be possible to remove this gradient during image preprocessing using a simple empirical method. However, the existence of natural gradients in reflectance caused by spatial variation in floristic composition of the forest can restrict the applicability of such simple corrections. Here we use floristic information over Peruvian and Brazilian Amazonia acquired through field surveys, complemented with information from geological maps, to investigate the interaction of real floristic gradients and the effect of reflectance anisotropy on the observed reflectances in Landsat data. In addition, we test the assumption of linearity of the BRDF for a limited swath width, and whether different primary non-inundated forest types are characterized by different magnitudes of the directional reflectance gradient. Our results show that a linear function is adequate to empirically correct for view angle effects, and that the magnitude of the across-track reflectance gradient is independent of floristic composition in the non-inundated forests we studied. This makes a routine correction of view angle effects possible. However, floristic variation complicates the issue, because different forest types have different mean reflectances. This must be taken into account when deriving the correction function in order to avoid eliminating natural gradients.
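The simple empirical correction referred to above amounts to a linear fit of reflectance against across-track position over forest pixels, removed while preserving the scene mean; the stand-in image and all-true mask below are placeholders for real Landsat data.

```python
import numpy as np

refl = np.random.rand(400, 600).astype(np.float32)   # stand-in band image
forest = np.ones(refl.shape, dtype=bool)             # stand-in forest mask

cols = np.arange(refl.shape[1], dtype=np.float32)
col_grid = np.broadcast_to(cols, refl.shape)

# least-squares line through (across-track position, reflectance) pairs
slope, intercept = np.polyfit(col_grid[forest], refl[forest], 1)

# remove the view-angle trend about its mean so scene brightness is preserved
corrected = refl - slope * (col_grid - col_grid[forest].mean())
```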
Using "Tracker" to Prove the Simple Harmonic Motion Equation
ERIC Educational Resources Information Center
Kinchin, John
2016-01-01
Simple harmonic motion (SHM) is a common topic for many students to study. Using the free yet versatile motion-tracking software "Tracker", we can extend students' experience and show that the general equation for SHM does lead to the correct period of a simple pendulum.
Probabilistic Component Mode Synthesis of Nondeterministic Substructures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1996-01-01
Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. We present a method in which the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented to illustrate the theory.
Tracking of electrochemical impedance of batteries
NASA Astrophysics Data System (ADS)
Piret, H.; Granjon, P.; Guillet, N.; Cattin, V.
2016-04-01
This paper presents an evolutionary battery impedance estimation method which can be easily embedded in vehicles or nomadic devices. The proposed method not only allows an accurate frequency-domain impedance estimation, but also tracks its temporal evolution, contrary to classical electrochemical impedance spectroscopy methods. Taking into account constraints of cost and complexity, we propose to use the existing current-control electronics to perform an evolutionary frequency-domain estimation of the electrochemical impedance. The developed method uses a simple wideband input signal and relies on a recursive local average of Fourier transforms. The averaging is controlled by a single parameter, managing a trade-off between tracking and estimation performance. This normalized parameter allows the behavior of the proposed estimator to be adapted correctly to variations of the impedance. The advantage of the proposed method is twofold: it is easy to embed into a simple electronic circuit, and the battery impedance estimator is evolutionary. The ability of the method to monitor the impedance over time is demonstrated on a simulator and on a real lithium-ion battery, on which a repeatability study is carried out. The experiments reveal good tracking results, and estimation performance as accurate as the usual laboratory approaches.
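The core of such an estimator can be sketched as a recursive, exponentially forgetting average of cross- and auto-spectra whose ratio is the impedance; block length, forgetting factor, and signals are assumptions, not the published design.

```python
import numpy as np

def track_impedance(i_blocks, v_blocks, lam=0.9):
    """Yield a Z(f) estimate after each (current, voltage) signal block."""
    Svi = Sii = 0.0
    for i_blk, v_blk in zip(i_blocks, v_blocks):
        I, V = np.fft.rfft(i_blk), np.fft.rfft(v_blk)
        Svi = lam * Svi + (1 - lam) * V * np.conj(I)         # cross-spectrum
        Sii = lam * Sii + (1 - lam) * (I * np.conj(I)).real  # current auto-spectrum
        yield Svi / (Sii + 1e-15)                            # impedance spectrum
```

A smaller forgetting factor forgets faster and tracks changes more quickly at the cost of noisier estimates, which is exactly the trade-off the abstract's single parameter manages.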
NASA Astrophysics Data System (ADS)
Shen, Xiang; Liu, Bin; Li, Qing-Quan
2017-03-01
The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
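Image-space bias correction with a thin-plate spline can be pictured as fitting the GCP residual field and subtracting it; the synthetic residuals below stand in for real GCP measurements, and this is an illustration of the idea rather than the paper's implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical GCPs: image coordinates predicted by vendor RPCs vs. measured
rpc_xy = np.random.rand(21, 2) * 30000.0              # predicted (col, row), px
bias = 0.5 + 0.1 * np.sin(rpc_xy / 7000.0)            # stand-in systematic error
measured_xy = rpc_xy + bias

# thin-plate spline fit of the residual field over the image plane
tps = RBFInterpolator(rpc_xy, measured_xy - rpc_xy, kernel='thin_plate_spline')

new_points = np.array([[15000.0, 12000.0]])
corrected = new_points + tps(new_points)              # bias-corrected coordinates
print(corrected)
```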
A correlated meta-analysis strategy for data mining "OMIC" scans.
Province, Michael A; Borecki, Ingrid B
2013-01-01
Meta-analysis is becoming an increasingly popular and powerful tool to integrate findings across studies and OMIC dimensions. But there is a danger that hidden dependencies between putatively "independent" studies can inflate the type I error, due to reinforcement of the evidence from false-positive findings. We present here a simple method for conducting meta-analyses that automatically estimates the degree of any such non-independence between OMIC scans and corrects the inference for it, retaining the proper type I error structure. The method does not require the original data from the source studies, but operates only on summary analysis results from OMIC scans. The method is applicable in a wide variety of situations, including combining GWAS and/or sequencing scan results across studies with dependencies due to overlapping subjects, as well as scans of correlated traits in a meta-analysis scan for pleiotropic genetic effects. The method correctly detects when scans are actually independent, in which case it yields the traditional meta-analysis, so it may safely be used whenever there is even a suspicion of correlation amongst scans.
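A minimal sketch of the central trick, under the assumption that the bulk of markers are null so that the empirical correlation of per-scan z-scores estimates the hidden dependence; this is an illustration of the idea, not the authors' software.

```python
import numpy as np

def correlated_meta_z(z, weights=None):
    """z: (n_markers, n_scans) matrix of per-scan z-scores."""
    z = np.asarray(z)
    n_scans = z.shape[1]
    w = np.ones(n_scans) if weights is None else np.asarray(weights)
    R = np.corrcoef(z, rowvar=False)    # inflation source: hidden overlap
    num = z @ w
    den = np.sqrt(w @ R @ w)            # Var(w'z) = sum_ij w_i w_j R_ij under H0
    return num / den                    # combined z per marker

# with truly independent scans, R ~ I and this reduces to standard meta-analysis
```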
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most of those methods, such as maximum likelihood, subspace, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time processing. One technique which is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyros.
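A scalar caricature of residual tuning, run alongside the filter: compare each innovation against its predicted variance and nudge the measurement-noise estimate toward consistency. The recursion constants are assumptions, not Jazwinski's exact equations.

```python
import numpy as np

def tune_measurement_noise(innovations, HPHt, r0, alpha=0.02):
    """Track R so the sample innovation variance matches HPH' + R."""
    r = r0
    for nu, hph in zip(innovations, HPHt):
        r_implied = nu**2 - hph            # instantaneous estimate of R
        r = (1 - alpha) * r + alpha * max(r_implied, 1e-12)
        yield r                            # feed back into the filter's update
```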
Comparison of ring artifact removal methods using flat panel detector based CT images
2011-01-01
Background Ring artifacts are concentric rings superimposed on tomographic images, often caused by defective or insufficiently calibrated detector elements as well as by damaged scintillator crystals of the flat panel detector. They may also be generated by objects attenuating X-rays very differently in different projection directions. Ring artifact reduction techniques reported in the literature so far can be broadly classified into two groups. One category of approaches is based on sinogram processing, also known as pre-processing techniques, while the other category performs processing on the 2-D reconstructed images, recognized as post-processing techniques in the literature. The strengths and weaknesses of these categories of approaches are yet to be explored from a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques, designed primarily for multi-slice CT instruments, is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram-domain ring artifact correction method that classifies the ring artifacts according to their strength and then corrects the artifacts using class-adaptive correction schemes is also included in this comparative study. The first sinogram-domain correction method uses a wavelet-based technique to detect the corrupted pixels and then estimates the responses of the bad pixels using a simple linear interpolation technique. The second sinogram-based correction method performs all the filtering operations in the transform domain, i.e., in the wavelet and Fourier domains. On the other hand, the two post-processing based correction techniques operate on the polar transform domain of the reconstructed CT images. The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performances of the compared algorithms have been tested using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. In addition, different types of artifact patterns, e.g., single/band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured objects and in hard objects, and rings from different flat-panel detectors, are analyzed to perceptually investigate the strengths and weaknesses of the five methods. An investigation has also been carried out to compare the efficacy of these algorithms in correcting volume images from a cone-beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique to retain the image information (e.g., a small object at the iso-center) accurately in the corrected CT image has also been tested. Conclusions The results show that the performances of the algorithms are limited and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion to the image structure. To achieve diagnostic quality in the corrected slices, a combination of the two approaches (sinogram- and post-processing) can be used.
The compared methods are also not suitable for correcting volume images from a cone-beam flat-panel-detector-based CT. PMID:21846411
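For orientation, the first sinogram-domain strategy (detect corrupted detector channels, then interpolate across them) can be reduced to a few lines; the median-filter detection rule and threshold here are assumptions rather than any of the compared algorithms.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_sinogram(sino, thresh=3.0):
    """sino: (n_views, n_detectors). Returns a ring-corrected copy."""
    col_mean = sino.mean(axis=0)
    residual = col_mean - median_filter(col_mean, size=9)
    bad = np.abs(residual) > thresh * residual.std()   # ring-producing channels
    out = sino.copy()
    good = np.flatnonzero(~bad)
    bad_idx = np.flatnonzero(bad)
    for row in out:                                    # fill bad channels per view
        row[bad_idx] = np.interp(bad_idx, good, row[good])
    return out
```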
Surgical versus accommodative treatment for Charcot arthropathy of the midfoot.
Pinzur, Michael
2004-08-01
The treatment of Charcot foot arthropathy is one of the most controversial issues facing orthopaedic foot and ankle surgeons. Although current orthopaedic textbooks are in almost universal agreement that treatment should be nonoperative, accommodating the deformity with orthotic methods, most peer-reviewed clinical studies recommend early surgical correction of the deformity. In a university health system orthopaedic foot and ankle clinic with a special interest in diabetic foot disorders, a moderate approach evolved for management of this difficult patient population. Patients with Charcot arthropathy and plantigrade feet were treated with accommodative orthotic methods. Those with nonplantigrade feet were treated with surgical correction of the deformity, followed by long-term management with commercial therapeutic footwear. The desired outcome for both groups was long-term management with standard, commercially available, therapeutic depth-inlay shoes and custom-fabricated accommodative foot orthoses. During a 6-year period, 198 patients (201 feet) were treated for diabetes-associated Charcot foot arthropathy. The location of the deformity was in the midfoot in 147 feet, in the ankle in 50, and in the forefoot in four. At a minimum 1-year follow-up, 87 of the 147 feet with midfoot disease (59.2%) achieved the desired endpoint without surgical intervention. Sixty (40.8%) required surgery. Corrective osteotomy with or without arthrodesis was attempted in 42, while debridement or simple exostectomy was attempted in 18 feet. Three patients had initial amputation (one partial foot amputation, one Syme ankle disarticulation, and one transtibial amputation), and five had amputation (two Syme ankle disarticulations and three transtibial amputations) after attempted salvage failed. Using a simple treatment protocol with the desired endpoint being long-term management with commercially available, therapeutic footwear and custom foot orthoses, more than half of patients with Charcot arthropathy at the midfoot level can be successfully managed without surgery.
Fourier-based classification of protein secondary structures.
Shu, Jian-Jun; Yong, Kian Yan
2017-04-15
The correct prediction of protein secondary structures is one of the key issues in predicting the correct protein folded shape, which is used for determining gene function. Existing methods make use of amino acid properties as indices to classify protein secondary structures, but are faced with a significant number of misclassifications. This paper presents a technique for the classification of protein secondary structures based on protein "signal-plotting" and the use of the Fourier technique for digital signal processing. New indices are proposed to classify protein secondary structures by analyzing hydrophobicity profiles. The approach is simple and straightforward. Results show that more types of protein secondary structures can be classified by means of these newly proposed indices. Copyright © 2017 Elsevier Inc. All rights reserved.
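The "signal-plotting" idea can be illustrated by scanning a hydrophobicity profile for the roughly 100°-per-residue periodicity of amphipathic helices; the Kyte-Doolittle scale is standard, but the frequency band and the test peptide are assumptions, and this is not the authors' classifier.

```python
import numpy as np

# Kyte-Doolittle hydrophobicity indices
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def helix_periodicity_score(seq):
    """Peak spectral power of the hydrophobicity profile near 100 deg/residue."""
    h = np.array([KD[a] for a in seq])
    h = h - h.mean()
    n = np.arange(len(h))
    omegas = np.deg2rad(np.arange(80, 121))   # scan around the helix band
    power = [abs(np.dot(h, np.exp(1j * w * n)))**2 / len(h) for w in omegas]
    return max(power)

print(helix_periodicity_score("LKKLLKLLKKLLKL"))   # amphipathic test peptide
```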
Correcting For Seed-Particle Lag In LV Measurements
NASA Technical Reports Server (NTRS)
Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.
1994-01-01
Two experiments were conducted to evaluate the effects of seed-particle size on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods were used to evaluate the errors. The first experiment focused on measurement of the decelerating stagnation streamline of low-speed flow around a circular cylinder with a two-dimensional afterbody. The second was performed in transonic flow and involved measurement of the decelerating stagnation streamline of a hemisphere with a cylindrical afterbody. It was concluded that mean-quantity LV measurements are subject to large errors directly attributable to particle size. Predictions of particle-response theory showed good agreement with the experimental results, indicating that the velocity-error-correction technique used in the study is viable for increasing the accuracy of laser velocimetry measurements. The technique is simple and useful in any research facility in which flow velocities are measured.
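The correction logic can be sketched from the Stokes-drag response model: along a streamline, the fluid velocity is the particle velocity plus its lag term. The response time and the synthetic deceleration profile below are assumptions.

```python
import numpy as np

# Stokes-drag response: du_p/dt = (u_f - u_p) / tau_p, so u_f = u_p + tau_p * du_p/dt
tau_p = 5e-5                               # assumed particle response time (s)
x = np.linspace(-0.1, -0.005, 100)         # distance ahead of the body (m)
u_p = 30.0 * (1 - np.exp(x / 0.03))        # measured (lagging) particle velocity

# convert the spatial gradient to du/dt along the streamline: du/dt = u * du/dx
dup_dx = np.gradient(u_p, x)
u_fluid = u_p + tau_p * u_p * dup_dx       # lag-corrected fluid velocity
```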
Ensemble stacking mitigates biases in inference of synaptic connectivity.
Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N
2018-01-01
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
The 'ABC' of examining foot radiographs.
Pearse, Eyiyemi O; Klass, Benjamin; Bendall, Stephen P
2005-11-01
We report a simple systematic method of assessing foot radiographs that improves diagnostic accuracy and can reduce the incidence of inappropriate management of serious forefoot and midfoot injuries, particularly the Lisfranc-type injury. Five recently appointed senior house officers (SHOs), with no casualty or orthopaedic experience prior to their appointment, were shown a set of 10 foot radiographs and told the history and examination findings recorded in the casualty notes of each patient within 6 weeks of taking up their posts. They were informed that the radiographs might or might not demonstrate an abnormality. They were asked to make a diagnosis and decide on a management plan. The test was repeated after they were taught the 'ABC' method of evaluating foot radiographs. Diagnostic accuracy improved after the SHOs were taught a systematic method of assessing foot radiographs. The proportion of correct diagnoses increased from 0.64 to 0.78 and the probability of recognising Lisfranc injuries increased from 0 to 0.6. The use of this simple method of assessing foot radiographs can reduce the incidence of inappropriate management of serious foot injuries by casualty SHOs, in particular the Lisfranc-type injury.
Analytical and multibody modeling for the power analysis of standing jumps.
Palmieri, G; Callegari, M; Fioretti, S
2015-01-01
Two methods for the power analysis of standing jumps are proposed and compared in this article. The first method is based on a simple analytical formulation which requires as input the coordinates of the center of gravity at three specified instants of the jump. The second method is based on a multibody model that simulates the jumps, processing the data obtained by a three-dimensional (3D) motion capture system and the dynamometric measurements obtained by the force platforms. The multibody model is developed with OpenSim, an open-source software package which provides tools for the kinematic and dynamic analyses of 3D human body models. The study is focused on two of the typical tests used to evaluate the muscular activity of the lower limbs: the countermovement jump and the standing long jump. The comparison between the results obtained by the two methods confirms that the proposed analytical formulation is correct and represents a simple tool suitable for a preliminary analysis of the total mechanical work and mean power exerted in standing jumps.
Griffiths, Nia W; Wyatt, Mark F; Kean, Suzanna D; Graham, Andrew E; Stein, Bridget K; Brenton, A Gareth
2010-06-15
A method for the accurate mass measurement of positive radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. Initial use of a conjugated oligomeric calibration material was rejected in favour of a series of meso-tetraalkyl/tetraalkylaryl-functionalised porphyrins, from which the two calibrants required for a particular accurate mass measurement were chosen. While all measurements of monoisotopic species were within +/-5 ppm, and the method was rigorously validated using chemometrics, mean values of five measurements were used for extra confidence in the generation of potential elemental formulae. Potential difficulties encountered when measuring compounds containing multi-isotopic elements are discussed, where the monoisotopic peak is no longer the lowest mass peak, and a simple mass-correction solution can be applied. The method requires no significant expertise to implement, but care and attention is required to obtain valid measurements. The method is operationally simple and will prove useful to the analytical chemistry community. Copyright (c) 2010 John Wiley & Sons, Ltd.
A Simple Method to Improve Autonomous GPS Positioning for Tractors
Gomez-Gil, Jaime; Alonso-Garcia, Sergio; Gómez-Gil, Francisco Javier; Stombaugh, Tim
2011-01-01
Error is always present in the GPS guidance of a tractor along a desired trajectory. One way to reduce GPS guidance error is by improving the tractor positioning. The most commonly used ways to do this are either by employing more precise GPS receivers and differential corrections or by employing GPS together with some other local positioning systems such as electronic compasses or Inertial Navigation Systems (INS). However, both are complex and expensive solutions. In contrast, this article presents a simple and low cost method to improve tractor positioning when only a GPS receiver is used as the positioning sensor. The method is based on placing the GPS receiver ahead of the tractor, and on applying kinematic laws of tractor movement, or a geometric approximation, to obtain the midpoint position and orientation of the tractor rear axle more precisely. This precision improvement is produced by the fusion of the GPS data with tractor kinematic control laws. Our results reveal that the proposed method effectively reduces the guidance GPS error along a straight trajectory. PMID:22163917
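The geometric approximation can be sketched in a few lines: estimate heading from successive fixes and project the antenna position back to the rear axle. This is the simple track-derived-heading version, not the paper's full kinematic-model fusion.

```python
import numpy as np

d = 2.0                                     # assumed antenna-to-rear-axle distance (m)

def rear_axle_positions(antenna_xy):
    """antenna_xy: (n, 2) GPS track of the antenna in local metric coordinates."""
    antenna_xy = np.asarray(antenna_xy)
    heading = np.diff(antenna_xy, axis=0)               # travel direction per step
    heading /= np.linalg.norm(heading, axis=1, keepdims=True)
    return antenna_xy[1:] - d * heading                 # project back along heading

track = np.column_stack([np.linspace(0, 50, 26),
                         0.2 * np.sin(np.linspace(0, 5, 26))])
print(rear_axle_positions(track)[:3])
```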
Data and methodological problems in establishing state gasoline-conservation targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, D.L.; Walton, G.H.
The Emergency Energy Conservation Act of 1979 gives the President the authority to set gasoline-conservation targets for states in the event of a supply shortage. This paper examines data and methodological problems associated with setting state gasoline-conservation targets. The target-setting method currently used is examined and found to have some flaws. Ways of correcting these deficiencies through the use of Box-Jenkins time-series analysis are investigated. A successful estimation of Box-Jenkins models for all states included the estimation of the magnitude of the supply shortages of 1979 in each state and a preliminary estimation of state short-run price elasticities, which were found to vary about a median value of -0.16. The time-series models identified were very simple in structure and lent support to the simple consumption growth model assumed by the current target method. The authors conclude that the flaws in the current method can be remedied either by replacing the current procedures with time-series models or by using the models in conjunction with minor modifications of the current method.
Scatter correction method for x-ray CT using primary modulation: Phantom studies
Gao, Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun, Mingshan; Star-Lack, Josh; Zhu, Lei
2010-01-01
Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation, possessing a low transmission factor and a high modulation frequency, is preferred for high scatter correction accuracy. PMID:20229902
Predicting helix orientation for coiled-coil dimers
Apgar, James R.; Gutwin, Karl N.; Keating, Amy E.
2008-01-01
The alpha-helical coiled coil is a structurally simple protein oligomerization or interaction motif consisting of two or more alpha helices twisted into a supercoiled bundle. Coiled coils can differ in their stoichiometry, helix orientation, and axial alignment. Because of the near degeneracy of many of these variants, coiled coils pose a challenge to fold recognition methods for structure prediction. Whereas distinctions between some protein folds can be discriminated on the basis of hydrophobic/polar patterning or secondary structure propensities, the sequence differences that encode important details of coiled-coil structure can be subtle. This is emblematic of a larger problem in the field of protein structure and interaction prediction: that of establishing specificity between closely similar structures. We tested the behavior of different computational models on the problem of recognizing the correct orientation (parallel vs. antiparallel) of pairs of alpha helices that can form a dimeric coiled coil. For each of 131 examples of known structure, we constructed a large number of both parallel and antiparallel structural models and used these to assess the ability of five energy functions to recognize the correct fold. We also developed and tested three sequence-based approaches that make use of varying degrees of implicit structural information. The best structural methods performed similarly to the best sequence methods, correctly categorizing ∼81% of dimers. Steric compatibility with the fold was important for some coiled coils we investigated. For many examples, the correct orientation was determined by smaller energy differences between parallel and antiparallel structures distributed over many residues and energy components. Prediction methods that used structure but incorporated varying approximations and assumptions showed quite different behaviors when used to investigate energetic contributions to orientation preference. Sequence-based methods were sensitive to the choice of residue-pair interactions scored. PMID:18506779
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ma, Tianyu; Shao, Yiping
2008-08-01
This work is part of a feasibility study to develop SPECT imaging capability on a lutetium oxyorthosilicate (LSO) based animal PET system. SPECT acquisition was enabled by inserting a collimator assembly inside the detector ring and acquiring data in singles mode. The same LSO detectors were used for both PET and SPECT imaging. The intrinsic radioactivity of 176Lu in the LSO crystals, however, contaminates the SPECT data and can generate image artifacts and introduce quantification error. The objectives of this study were to evaluate the effectiveness of an LSO background subtraction method and to estimate the minimal detectable target activity (MDTA) of the image object for SPECT imaging. For LSO background correction, the LSO contribution in an image study was estimated from a pre-measured long LSO background scan and subtracted prior to image reconstruction. The MDTA was estimated in two ways: the empirical MDTA (eMDTA) was estimated by screening the tomographic images at different activity levels, and the calculated MDTA (cMDTA) was estimated using a formula based on applying a modified Currie equation to an average projection dataset. Two simulated and two experimental phantoms with different object activity distributions and levels were used in this study. The results showed that the LSO background adds concentric ring artifacts to the reconstructed image, and that the simple subtraction method can effectively remove these artifacts; the effect of the correction was more visible when the object activity level was near or above the eMDTA. For the four phantoms studied, the cMDTA was consistently about five times the corresponding eMDTA. In summary, we implemented a simple LSO background subtraction method and demonstrated its effectiveness. The projection-based calculation formula yielded MDTA results that closely correlate with those obtained empirically and may have predictive value for imaging applications.
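The two quantitative ingredients can be sketched as follows; the textbook Currie limit is shown as a stand-in, since the paper's modified equation is not reproduced here.

```python
import numpy as np

def subtract_lso_background(proj, t_scan, bkg_long, t_bkg):
    """Scale a long pre-measured LSO background scan to the study duration
    and subtract it from the projection data before reconstruction."""
    return proj - bkg_long * (t_scan / t_bkg)

def currie_detection_limit(background_counts):
    """Currie (1968) detection limit L_D for paired background counts."""
    return 2.71 + 4.65 * np.sqrt(background_counts)

print(currie_detection_limit(400.0))   # ~95.7 counts above background
```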
NASA Technical Reports Server (NTRS)
Reimers, J. R.; Heller, E. J.
1985-01-01
Exact eigenfunctions for a two-dimensional rigid rotor are obtained using Gaussian wave packet dynamics. The wave functions are obtained by propagating, without approximation, an infinite set of Gaussian wave packets that collectively have the correct periodicity, being coherent states appropriate to this rotational problem. This result leads to a numerical method for the semiclassical calculation of rovibrational molecular eigenstates. Also, a simple, almost classical, approximation to full wave packet dynamics is shown to give exact results: this leads to an a posteriori justification of the De Leon-Heller spectral quantization method.
Finite Size Corrections to the Parisi Overlap Function in the GREM
NASA Astrophysics Data System (ADS)
Derrida, Bernard; Mottishaw, Peter
2018-01-01
We investigate the effects of finite size corrections on the overlap probabilities in the Generalized Random Energy Model in two situations where replica symmetry is broken in the thermodynamic limit. Our calculations do not use replicas, but shed some light on what the replica method should give for finite size corrections. In the gradual freezing situation, which is known to exhibit full replica symmetry breaking, we show that the finite size corrections lead to a modification of the simple relations between the sample averages of the overlaps Y_k between k configurations predicted by replica theory. This can be interpreted as fluctuations in the replica block size with a negative variance. The mechanism is similar to the one we found recently in the random energy model in Derrida and Mottishaw (J Stat Mech 2015(1): P01021, 2015). We also consider a simultaneous freezing situation, which is known to exhibit one step replica symmetry breaking. We show that finite size corrections lead to full replica symmetry breaking and give a more complete derivation of the results presented in Derrida and Mottishaw (Europhys Lett 115(4): 40005, 2016) for the directed polymer on a tree.
A Novel Method of High Accuracy, Wavefront Phase and Amplitude Correction for Coronagraphy
NASA Technical Reports Server (NTRS)
Bowers, Charles W.; Woodgate, Bruce E.; Lyon, Richard G.
2003-01-01
Detection of extra-solar, and especially terrestrial-like, planets using coronagraphy requires an extremely high level of wavefront correction. For example, the study of Woodruff et al. (2002) has shown that phase uniformity of order 10^-4 λ (rms) must be achieved over the critical range of spatial frequencies to produce the ~10^10 contrast needed for the Terrestrial Planet Finder (TPF) mission. Correction of wavefront phase errors to this level may be accomplished by using a very high precision deformable mirror (DM). However, not only phase but also amplitude uniformity of the same scale (~10^-4) and over the same spatial frequency range must be simultaneously obtained to remove all residual speckle in the image plane. We present a design for producing simultaneous wavefront phase and amplitude uniformity to high levels from an input wavefront of lower quality. The design uses a dual Michelson interferometer arrangement incorporating two DMs and a single fixed mirror (all at pupils) and two beamsplitters: one with unequal (asymmetric) beam splitting and one with symmetric beam splitting. This design allows high-precision correction of both phase and amplitude using DMs with relatively coarse steps and permits a simple correction algorithm.
Simple Additive Weighting to Diagnose Rabbit Disease
NASA Astrophysics Data System (ADS)
Ramadiani; Marissa, Dyna; Jundillah, Muhammad Labib; Azainil; Hatta, Heliza Rahmania
2018-02-01
Rabbits are among the many pets kept by the general public in Indonesia. Like other pets, rabbits are susceptible to various diseases. The general public does not always correctly recognize the type of rabbit disease or the appropriate treatment. To help care for sick rabbits, a decision support system that recommends a diagnosis of rabbit disease is needed. The purpose of this research is to build a rabbit disease diagnosis application that can help users take care of their rabbits. This application diagnoses the disease by tracing the symptoms and calculating disease recommendations using the Simple Additive Weighting method. This research produces a web-based decision support system that is used to help rabbit breeders and the general public.
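The Simple Additive Weighting step can be illustrated with a minimal sketch: normalize a decision matrix of symptom scores, then rank candidate diseases by their weighted sums. The disease matrix and weights below are invented placeholders, not data from the paper.

```python
import numpy as np

def simple_additive_weighting(scores, weights, benefit):
    """Rank alternatives with Simple Additive Weighting (SAW).

    scores : (n_alternatives, n_criteria) decision matrix
    weights: criteria weights, summing to 1
    benefit: True for benefit criteria (higher is better), False for cost
    """
    scores = np.asarray(scores, dtype=float)
    norm = np.where(benefit,
                    scores / scores.max(axis=0),      # benefit: x / max
                    scores.min(axis=0) / scores)      # cost: min / x
    return norm @ np.asarray(weights)                 # weighted sum per alternative

# Three candidate diseases scored against four symptom criteria (placeholders)
matrix = [[0.8, 0.6, 0.9, 0.4],
          [0.5, 0.9, 0.3, 0.7],
          [0.9, 0.2, 0.6, 0.8]]
w = [0.4, 0.3, 0.2, 0.1]
ranks = simple_additive_weighting(matrix, w, benefit=np.array([True] * 4))
print(ranks.argsort()[::-1])  # indices of diseases, best match first
```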
A simple test of association for contingency tables with multiple column responses.
Decady, Y J; Thomas, D R
2000-09-01
Loughin and Scherer (1998, Biometrics 54, 630-637) investigated tests of association in two-way tables when one of the categorical variables allows for multiple-category responses from individual respondents. Standard chi-squared tests are invalid in this case, and they developed a bootstrap test procedure that provides good control of test levels under the null hypothesis. This procedure and some others that have been proposed are computationally involved and are based on techniques that are relatively unfamiliar to many practitioners. In this paper, the methods introduced by Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) for analyzing complex survey data are used to develop a simple test based on a corrected chi-squared statistic.
Physical condition for elimination of ambiguity in conditionally convergent lattice sums
NASA Astrophysics Data System (ADS)
Young, K.
1987-02-01
The conditional convergence of the lattice sum defining the Madelung constant gives rise to an ambiguity in its value. It is shown that this ambiguity is related, through a simple and universal integral, to the average charge density on the crystal surface. The physically correct value is obtained by setting the charge density to zero. A simple and universally applicable formula for the Madelung constant is derived as a consequence. It consists of adding up dipole-dipole energies together with a nontrivial correction term.
A simple method for electron energy constancy measurement
King, R. Paul; Anderson, R. Scott
2001-01-01
A device is described for use in confirming the energy constancy of clinical electron beams. A wedge shaped absorber is placed over an ionization chamber leading to an energy dependent response. A measurement under the energy filter is divided by a measurement in air to correct for the inherent energy dependence of the chamber. A nearly linear response is demonstrated. PACS number(s): 87.52.–g, 87.53.–j, 87.66.–a PMID:11674838
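A minimal sketch of the ratio measurement described above, with invented readings: the reading under the wedge filter divided by the in-air reading gives an energy-sensitive signal whose drift from a commissioning baseline can be monitored.

```python
def energy_constancy_ratio(m_wedge, m_air):
    """Wedge-to-air ionisation ratio used as an energy-sensitive signal.

    Dividing the reading under the wedge absorber by the in-air reading
    removes the chamber's inherent energy dependence, as described above.
    The numbers below are illustrative, not from the paper.
    """
    return m_wedge / m_air

baseline = energy_constancy_ratio(0.412, 1.000)   # commissioning value
today = energy_constancy_ratio(0.405, 0.998)
drift_pct = 100.0 * (today - baseline) / baseline
print(f"ratio drift: {drift_pct:+.1f}%")          # flag if outside tolerance
```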
Analysis of precision and accuracy in a simple model of machine learning
NASA Astrophysics Data System (ADS)
Lee, Julian
2017-12-01
Machine learning is a procedure where a model for the world is constructed from a training set of examples. It is important that the model capture relevant features of the training set and, at the same time, make correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of model complexity.
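A minimal numpy sketch of the idea, assuming the usual train/test framing: low-degree fits underfit (poor accuracy on everything), while high-degree fits track the noise and degrade on held-out points.

```python
import numpy as np

# Fit polynomials of increasing degree to noisy samples of a smooth function
# and compare training error with error on held-out points.
rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```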
NASA Astrophysics Data System (ADS)
Ohmae, Etsuko; Nishio, Shinichiro; Oda, Motoki; Suzuki, Hiroaki; Suzuki, Toshihiko; Ohashi, Kyoichi; Koga, Shunsaku; Yamashita, Yutaka; Watanabe, Hiroshi
2014-06-01
Near-infrared spectroscopy (NIRS) has been used for noninvasive assessment of oxygenation in living tissue. For muscle measurements by NIRS, the measurement sensitivity to muscle (S) is strongly influenced by fat thickness (FT). In this study, we investigated the influence of FT and developed a correction curve for S with an optode distance (3 cm) sufficiently large to probe the muscle. First, we measured the hemoglobin concentration in the forearm (n=36) and thigh (n=6) during arterial occlusion using a time-resolved spectroscopy (TRS) system, and then FT was measured by ultrasound. The correction curve was derived from the ratio of partial mean optical path length of the muscle layer
A technique for correcting ERTS data for solar and atmospheric effects
NASA Technical Reports Server (NTRS)
Rogers, R. H.; Peacock, K.
1973-01-01
A technique is described by which an ERTS investigator can obtain absolute target reflectances by correcting spacecraft radiance measurements for variable target irradiance, atmospheric attenuation, and atmospheric backscatter. A simple measuring instrument and the necessary atmospheric measurements are discussed, and examples demonstrate the nature and magnitude of the atmospheric corrections.
Christensen, Chloe L.; Choy, Francis Y. M.
2017-01-01
Ease of design, relatively low cost and a multitude of gene-altering capabilities have all led to the adoption of the sophisticated and yet simple gene editing system: clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9). The CRISPR/Cas9 system holds promise for the correction of deleterious mutations by taking advantage of the homology directed repair pathway and by supplying a correction template to the affected patient’s cells. Currently, this technique is being applied in vitro in human-induced pluripotent stem cells (iPSCs) to correct a variety of severe genetic diseases, but has not yet been used in iPSCs derived from patients affected with a lysosomal storage disease (LSD). If adopted into clinical practice, corrected iPSCs derived from cells that originate from the patient themselves could be used for therapeutic amelioration of LSD symptoms without the risks associated with allogeneic stem cell transplantation. CRISPR/Cas9 editing in a patient’s cells would overcome the costly, lifelong process associated with currently available treatment methods, including enzyme replacement and substrate reduction therapies. In this review, the overall utility of the CRISPR/Cas9 gene editing technique for treatment of genetic diseases, the potential for the treatment of LSDs and methods currently employed to increase the efficiency of this re-engineered biological system will be discussed. PMID:28933359
Statistical Selection of Biological Models for Genome-Wide Association Analyses.
Bi, Wenjian; Kang, Guolian; Pounds, Stanley B
2018-05-24
Genome-wide association studies have discovered many biologically important associations of genes with phenotypes. Typically, genome-wide association analyses formally test the association of each genetic feature (SNP, CNV, etc.) with the phenotype of interest and summarize the results with multiplicity-adjusted p-values. However, very small p-values only provide evidence against the null hypothesis of no association without indicating which biological model best explains the observed data. Correctly identifying a specific biological model may improve the scientific interpretation and can be used to more effectively select and design a follow-up validation study. Thus, statistical methodology to identify the correct biological model for a particular genotype-phenotype association can be very useful to investigators. Here, we propose a general statistical method to summarize how accurately each of five biological models (null, additive, dominant, recessive, co-dominant) represents the data observed for each variant in a GWAS. We show that the new method stringently controls the false discovery rate and asymptotically selects the correct biological model. Simulations of two-stage discovery-validation studies show that the new method has these properties and that its validation power is similar to or exceeds that of simple methods that use the same statistical model for all SNPs. Example analyses of three data sets also highlight these advantages of the new method. An R package is freely available at www.stjuderesearch.org/site/depts/biostats/maew. Copyright © 2018. Published by Elsevier Inc.
Sloan, N L; Rosen, D; de la Paz, T; Arita, M; Temalilwa, C; Solomons, N W
1997-02-01
The prevalence of vitamin A deficiency has traditionally been assessed through xerophthalmia or biochemical surveys. The cost and complexity of implementing these methods limits the ability of nonresearch organizations to identify vitamin A deficiency. This study examined the validity of a simple, inexpensive food frequency method to identify areas with a high prevalence of vitamin A deficiency. The validity of the method was tested in 15 communities, 5 each from the Philippines, Guatemala, and Tanzania. Serum retinol concentrations of less than 20 micrograms/dL defined vitamin A deficiency. Weighted measures of vitamin A intake six or fewer times per week and unweighted measures of consumption of animal sources of vitamin A four or fewer times per week correctly classified seven of eight communities as having a high prevalence of vitamin A deficiency (i.e., 15% or more preschool-aged children in the community had the deficiency) (sensitivity = 87.5%) and four of seven communities as having a low prevalence (specificity = 57.1%). This method correctly classified the vitamin A deficiency status of 73.3% of the communities but demonstrated a high false-positive rate (42.9%).
Coello, Christopher; Willoch, Frode; Selnes, Per; Gjerstad, Leif; Fladby, Tormod; Skretting, Arne
2013-05-15
A voxel-based algorithm to correct for partial volume effect in PET brain volumes is presented. This method (named LoReAn) is based on MRI-based segmentation of anatomical regions and accurate measurements of the effective point spread function of the PET imaging process. The objective is to correct for the spill-out of activity from high-uptake anatomical structures (e.g. grey matter) into low-uptake anatomical structures (e.g. white matter) in order to quantify physiological uptake in the white matter. The new algorithm is presented and validated against the state-of-the-art region-based geometric transfer matrix (GTM) method with synthetic and clinical data. Using synthetic data, both bias and coefficient of variation were improved in the white matter region using LoReAn compared to GTM. An increased number of anatomical regions does not affect the bias (<5%), and misregistration affects the LoReAn and GTM algorithms equally. The LoReAn algorithm appears to be a simple and promising voxel-based algorithm for studying metabolism in white matter regions. Copyright © 2013 Elsevier Inc. All rights reserved.
Anisotropic field-of-view shapes for improved PROPELLER imaging☆
Larson, Peder E.Z.; Lustig, Michael S.; Nishimura, Dwight G.
2010-01-01
The Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) method for magnetic resonance imaging data acquisition and reconstruction has the highly desirable property of being able to correct for motion during the scan, making it especially useful for imaging pediatric or uncooperative patients and diffusion imaging. This method nominally supports a circular field of view (FOV), but tailoring the FOV for noncircular shapes results in more efficient, shorter scans. This article presents new algorithms for tailoring PROPELLER acquisitions to the desired FOV shape and size that are flexible and precise. The FOV design also allows for rotational motion which provides better motion correction and reduced aliasing artifacts. Some possible FOV shapes demonstrated are ellipses, ovals and rectangles, and any convex, pi-symmetric shape can be designed. Standard PROPELLER reconstruction is used with minor modifications, and results with simulated motion presented confirm the effectiveness of the motion correction with these modified FOV shapes. These new acquisition design algorithms are simple and fast enough to be computed for each individual scan. Also presented are algorithms for further scan time reductions in PROPELLER echo-planar imaging (EPI) acquisitions by varying the sample spacing in two directions within each blade. PMID:18818039
NASA Astrophysics Data System (ADS)
Nery, Jean Paul; Allen, Philip B.
2016-09-01
We develop a simple method to study the zero-point and thermally renormalized electron energy ε_kn(T), where kn denotes the conduction band minimum or valence band maximum in polar semiconductors. We use the adiabatic approximation, including an imaginary broadening parameter iδ to suppress noise in the density-functional integrations. The finite δ also eliminates the polar divergence, which is an artifact of the adiabatic approximation. Nonadiabatic Fröhlich polaron methods then provide analytic expressions for the missing part of the contribution of the problematic optical phonon mode. We use this to correct the renormalization obtained from the adiabatic approximation. Test calculations are done for zinc-blende GaN on an 18 × 18 × 18 integration grid. The Fröhlich correction is of order -0.02 eV for the zero-point energy shift of the conduction band minimum, and +0.03 eV for the valence band maximum; the correction to the renormalization of the 3.28 eV gap is -0.05 eV, a significant fraction of the total zero-point renormalization of -0.15 eV.
Construction of an unmanned aerial vehicle remote sensing system for crop monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seungtaek; Ko, Jonghan; Kim, Mijeong; Kim, Jongkwon
2016-04-01
We constructed a lightweight unmanned aerial vehicle (UAV) remote sensing system and determined the ideal method for equipment setup, image acquisition, and image processing. Fields of rice paddy (Oryza sativa cv. Unkwang) grown under three different nitrogen (N) treatments of 0, 50, or 115 kg/ha were monitored at Chonnam National University, Gwangju, Republic of Korea, in 2013. A multispectral camera was used to acquire UAV images from the study site. Atmospheric correction of these images was completed using the empirical line method, and three-point (black, gray, and white) calibration boards were used as pseudo references. Evaluation of our corrected UAV-based remote sensing data revealed that correction efficiency and root mean square errors ranged from 0.77 to 0.95 and 0.01 to 0.05, respectively. The time series maps of simulated normalized difference vegetation index (NDVI) produced using the UAV images reproduced field variations of NDVI reasonably well, both within and between the different N treatments. We concluded that the UAV-based remote sensing technology utilized in this study is potentially an easy and simple way to quantitatively obtain reliable two-dimensional remote sensing information on crop growth.
Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov
We present a semi-analytical correction to the seminal solution for the secular motion of a planet’s orbit under gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system’s parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.
Kaku, Yoshio; Ookawara, Susumu; Miyazawa, Haruhisa; Ito, Kiyonori; Ueda, Yuichirou; Hirai, Keiji; Hoshino, Taro; Mori, Honami; Yoshida, Izumi; Morishita, Yoshiyuki; Tabei, Kaoru
2016-02-01
The following conventional calcium correction formula (Payne) is broadly applied for serum calcium estimation: corrected total calcium (TCa) (mg/dL) = TCa (mg/dL) + (4 - albumin (g/dL)); however, it is inapplicable to chronic kidney disease (CKD) patients. A total of 2503 venous samples were collected from 942 all-stage CKD patients, and levels of TCa (mg/dL), ionized calcium ([iCa(2+)] mmol/L), phosphate (mg/dL), albumin (g/dL), and pH, and other clinical parameters were measured. We assumed corrected TCa (the gold standard) to be equal to eight times the iCa(2+) value (measured corrected TCa). Then, we performed stepwise multiple linear regression analysis using the clinical parameters and derived a simple formula for corrected TCa approximation. The following formula was devised from multiple linear regression analysis: approximated corrected TCa (mg/dL) = TCa + 0.25 × (4 - albumin) + 4 × (7.4 - pH) + 0.1 × (6 - phosphate) + 0.3. Receiver operating characteristic curve analysis illustrated that the areas under the curve of approximated corrected TCa for detection of measured corrected TCa ≥ 8.4 mg/dL and ≤ 10.4 mg/dL were 0.994 and 0.919, respectively. The intraclass correlation coefficient demonstrated superior agreement using this new formula compared to other formulas (new formula: 0.826, Payne: 0.537, Jain: 0.312, Portale: 0.582, Ferrari: 0.362). In CKD patients, TCa correction should include not only albumin but also pH and phosphate. The approximated corrected TCa from this formula demonstrates superior agreement with the measured corrected TCa in comparison to other formulas. © 2016 International Society for Apheresis, Japanese Society for Apheresis, and Japanese Society for Dialysis Therapy.
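The new formula is simple enough to compute directly; the sketch below implements it alongside the Payne formula for comparison. The patient values are illustrative, not from the study data.

```python
def approximated_corrected_tca(tca, albumin, ph, phosphate):
    """Corrected total calcium (mg/dL) from the formula in the abstract.

    TCa in mg/dL, albumin in g/dL, phosphate in mg/dL.
    """
    return tca + 0.25 * (4 - albumin) + 4 * (7.4 - ph) + 0.1 * (6 - phosphate) + 0.3

def payne_corrected_tca(tca, albumin):
    """Conventional Payne formula, shown for comparison."""
    return tca + (4 - albumin)

# Illustrative CKD patient values (assumed, not from the study)
print(approximated_corrected_tca(tca=8.0, albumin=3.0, ph=7.30, phosphate=5.5))  # 9.0
print(payne_corrected_tca(tca=8.0, albumin=3.0))                                 # 9.0
```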
Optical aberration correction for simple lenses via sparse representation
NASA Astrophysics Data System (ADS)
Cui, Jinlin; Huang, Wei
2018-04-01
Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and can be easily processed. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions calibrated at different depths are used to restore visual images in a short time; this can be applied generally to nonblind deconvolution methods to address the excessive processing time caused by the large number of point spread functions. The optical software CODE V is used to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods. Moreover, the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained with CODE V can be used to process real images from a single-lens camera, which provides an alternative approach to conveniently and accurately obtaining point spread functions of single-lens cameras.
[Photometric determination of captopril using label-free silver nanoparticles].
Li, Rui; Yan, Hong-Tao
2013-04-01
A simple, rapid and sensitive colorimetric method for the determination of captopril is presented in this paper. It is based on the fact that captopril can induce the aggregation of AgNPs, thereby resulting in their yellow-to-red color change and the absorbance decrease at λ = 395 nm. The mechanism of the aggregation effect was discussed in detail. Under the optimized conditions, the linear range of determination of captopril was 1-35 μg mL^-1 with a correlation coefficient of 0.9984. The detection limit of the method for captopril was 0.7 μg mL^-1. The method has been applied to the determination of captopril in tablets with satisfactory results.
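A minimal sketch of the underlying calibration arithmetic, with invented absorbance readings spanning the reported 1-35 μg mL^-1 linear range: fit the calibration line by least squares, then invert it for an unknown sample.

```python
import numpy as np

# Least-squares calibration line for the colorimetric assay: absorbance
# decrease at 395 nm vs. captopril concentration. The readings are invented
# placeholders, not the paper's data.
conc = np.array([1, 5, 10, 15, 20, 25, 30, 35], dtype=float)   # ug/mL
delta_a = np.array([0.02, 0.10, 0.21, 0.30, 0.41, 0.50, 0.61, 0.70])

slope, intercept = np.polyfit(conc, delta_a, 1)
r = np.corrcoef(conc, delta_a)[0, 1]
print(f"slope={slope:.4f}, intercept={intercept:.4f}, r={r:.4f}")

# Unknown sample: invert the calibration line
sample_da = 0.35
print("estimated concentration:", (sample_da - intercept) / slope, "ug/mL")
```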
Estimation of surface temperature in remote pollution measurement experiments
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
NASA Astrophysics Data System (ADS)
Al-Bagawi, A. H.; Ahmad, W.; Saigl, Z. M.; Alwael, H.; Al-Harbi, E. A.; El-Shahawi, M. S.
2017-12-01
The most common problems in the spectrophotometric determination of various complex species originate from background spectral interference. Thus, the present study aimed to overcome the spectral matrix interference for the precise analysis and speciation of mercury(II) in water by dual-wavelength β-correction spectrophotometry using 4-(2-thiazolylazo) resorcinol (TAR) as the chromogenic reagent. The principle was based on measuring the correct absorbance of the complex formed between mercury(II) ions and the TAR reagent at 547 nm (λmax). Under optimized conditions, a linear dynamic range of 0.1-2.0 μg mL^-1 with a correlation coefficient (R2) of 0.997 was obtained, with a limit of detection (LOD) of 0.024 μg mL^-1 and a limit of quantification (LOQ) of 0.081 μg mL^-1. The values of RSD and relative error (RE) obtained for the β-correction method and single-wavelength spectrophotometry were 1.3, 1.32% and 4.7, 5.9%, respectively. The method was validated in tap and sea water against data obtained from inductively coupled plasma-optical emission spectrometry (ICP-OES) using Student's t and F tests. The developed methodology satisfactorily overcomes the spectral interference in trace determination and speciation of mercury(II) ions in water.
A novel method for fabrication of continuous-relief optical elements
NASA Astrophysics Data System (ADS)
Guo, Xiaowei; Du, Jinglei; Chen, Mingyong; Ma, Yanqin; Zhu, Jianhua; Peng, Qinjun; Guo, Yongkang; Du, Chunlei
2005-08-01
A novel method for the fabrication of continuous micro-optical components is presented in this paper. It employs a computer-controlled spatial light modulator (SLM) as a switchable projection mask and silver-halide sensitized gelatin (SHSG) as the recording material. By etching the SHSG with an enzyme solution, micro-optical components with relief modulation can be generated through special processing procedures. The principles of digital SLM-based lithography and enzyme etching of SHSG are discussed in detail, and microlens arrays, micro axicon-lens arrays, and gratings with good profiles were achieved. The method is simple and inexpensive, and aberrations arising in the processing procedures can be corrected in situ at the mask-design step, making it a practical method for fabricating continuous-relief profiles in low-volume production.
Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies
NASA Astrophysics Data System (ADS)
Chen, Kewei; Reiman, E. M.; Lawson, M.; Yun, Lang-sheng; Bandy, D.; Palant, A.
1996-12-01
While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control (baseline) scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods correcting the vascular artifact, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted the authors to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging.
Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.
Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J
1996-07-01
We evaluated quantitatively 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct for arterial radioactivity in estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) by 62Cu-PTSM with simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot) with the data for each subject fitted by a straight regression line suggested that 62Cu-PTSM can be analyzed by the three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual extraction at steady state calculated from K1-K3. The influx rates (Ki) obtained from the three-compartment model and graphical analyses were compared to validate the model. A comparison of rCBF values obtained from 62Cu-PTSM and 15O-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantitation of cerebral perfusion with 62Cu-PTSM accompanied by dynamic PET and simple arterial sampling.
Free-free opacity in dense plasmas with an average atom model
Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick; ...
2017-02-28
A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model, together with the Kubo-Greenwood model for optical conductivity. This, in turn, is used to calculate the opacity with the Kramers-Kronig dispersion relations. Furthermore, comparison to other methods for dense deuterium shows excellent agreement with DFT-MD simulations, and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.
Intraoperative Assessment of Tricuspid Valve Function After Conservative Repair
Revuelta, J.M.; Gomez-Duran, C.; Garcia-Rinaldi, R.; Gallagher, M.W.
1982-01-01
It is desirable to repair coexistent tricuspid valve pathology at the time of mitral valve corrections. Conservative tricuspid repair may consist of commissurotomy, annuloplasty, or both. It is important that the repair be appropriate or tricuspid valve replacement may be necessary. A simple reproducible method of intraoperative testing for tricuspid valve insufficiency has been developed and used in 25 patients. Fifteen patients have been recatheterized, and the correlation between the intraoperative and postoperative findings has been consistent. PMID:15226931
A simple method for computing the relativistic Compton scattering kernel for radiative transfer
NASA Technical Reports Server (NTRS)
Prasad, M. K.; Kershaw, D. S.; Beason, J. D.
1986-01-01
Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for σ(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
Dupuytren disease: on our way to a cure?
Degreef, Ilse; De Smet, Luc
2013-06-01
Despite its high prevalence, the clinical presentation and severity of Dupuytren disease is extremely variable. The disease features a broad spectrum of symptoms, from simple nodules without the slightest clinical impact towards an extremely disabling form requiring multiple surgical procedures, sometimes even partial hand amputations. Recurrence after surgery is considered a failure for both patient and surgeon, but its definition is vague. The term 'recontracture' was coined by a patient and reflects the disappointment of recurrent disease. Whether or not a treatment option will ensure a definite result may depend more on the severity of the disease, which is patient specific, than on the treatment method itself. If a patient presents with Dupuytren disease, one should not merely evaluate his hands. Different clinical and personal history features may uncover a severe fibrosis diathesis, and both correct information to the patient and an individualized treatment plan are needed. In the near future, a simple genetic test may help to identify patients at risk. Similar to the evolving knowledge and treatment modalities seen in rheumatoid arthritis, treatment of Dupuytren disease is likely to advance in the direction of disease control with pharmacotherapy and single-shot minimally invasive enzymatic fasciotomy with collagenase to correct established contractures.
Relativistic Corrections to the Bohr Model of the Atom
ERIC Educational Resources Information Center
Kraft, David W.
1974-01-01
Presents a simple means for extending the Bohr model to include relativistic corrections using a derivation similar to that for the non-relativistic case, except that the relativistic expressions for mass and kinetic energy are employed. (Author/GS)
Differential Kinematics Of Contemporary Industrial Robots
NASA Astrophysics Data System (ADS)
Szkodny, T.
2014-08-01
The paper presents a simple method of avoiding singular configurations of contemporary industrial robot manipulators of such renowned companies as ABB, Fanuc, Mitsubishi, Adept, Kawasaki, COMAU and KUKA. To determine the singular configurations of these manipulators, a global form of description of the end-effector kinematics was prepared, relative to the other links. On the basis of this description, the formula for the Jacobian was defined in the end-effector coordinates. Next, a closed form of the determinant of the Jacobian was derived. From this formula, the singular configurations, where the determinant's value equals zero, were determined. Additionally, geometric interpretations of these configurations were given and illustrated. For an exemplary manipulator, small corrections of joint variables preventing the reduction of the Jacobian order were suggested. An analysis of positional errors caused by these corrections is also presented.
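The idea of detecting a vanishing Jacobian determinant and applying a small joint correction can be illustrated on a textbook planar 2-link arm, where det J = l1·l2·sin(q2). The link lengths, threshold, and nudge size below are arbitrary stand-ins for the industrial manipulators analysed in the paper.

```python
import numpy as np

def jacobian_2link(q1, q2, l1=0.5, l2=0.4):
    """Position Jacobian of a planar 2-link arm (a textbook stand-in for
    the industrial manipulators above; link lengths are arbitrary)."""
    j11 = -l1 * np.sin(q1) - l2 * np.sin(q1 + q2)
    j12 = -l2 * np.sin(q1 + q2)
    j21 = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    j22 = l2 * np.cos(q1 + q2)
    return np.array([[j11, j12], [j21, j22]])

def avoid_singularity(q1, q2, eps=1e-3, dq=0.05):
    """If det(J) ~ 0 (here at q2 = 0 or pi: the stretched or folded arm),
    nudge the elbow joint slightly, mimicking the small joint-variable
    corrections suggested in the paper."""
    if abs(np.linalg.det(jacobian_2link(q1, q2))) < eps:
        q2 += dq
    return q1, q2

print(np.linalg.det(jacobian_2link(0.3, 0.0)))   # ~0: arm fully stretched
print(avoid_singularity(0.3, 0.0))               # elbow nudged away from singularity
```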
Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory
NASA Astrophysics Data System (ADS)
Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.
2017-03-01
We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way the error originating from the approximations in the theory is minimised. This conference proceeding reports our recent progress on this key issue: we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
Dalaudier, F; Kan, V; Gurvich, A S
2001-02-20
We describe refractive and chromatic effects, both regular and random, that occur during star occultations by the Earth's atmosphere. The scintillation that results from random density fluctuations, as well as the consequences of regular chromatic refraction, is qualitatively described. The resultant chromatic scintillation will produce random features on the Global Ozone Monitoring by Occultation of Stars (GOMOS) spectrometer, with an amplitude comparable with that of some of the real absorbing features that result from atmospheric constituents. A correction method that is based on the use of fast photometer signals is described, and its efficiency is discussed. We give a qualitative (although accurate) description of the phenomena, including numerical values when needed. Geometrical optics and the phase-screen approximation are used to keep the description simple.
NASA Technical Reports Server (NTRS)
Carreno, Victor A.
2002-01-01
The KB3D algorithm is a pairwise conflict detection and resolution (CD&R) algorithm. It detects and generates trajectory vectoring for an aircraft that has been predicted to be in an airspace minima violation within a given look-ahead time. It has been proven, using mechanized theorem proving techniques, that for a pair of aircraft, KB3D produces at least one vectoring solution and that all solutions produced are correct. Although solutions produced by the algorithm are mathematically correct, they might not be physically executable by an aircraft or might not solve multiple-aircraft conflicts. This paper describes a simple solution selection method which assesses all solutions generated by KB3D and determines the solution to be executed. The solution selection method and KB3D are evaluated using a simulation in which N aircraft fly in a free-flight environment and each aircraft in the simulation uses KB3D to maintain separation. Specifically, the solution selection method filters KB3D solutions which are procedurally undesirable or physically not executable and uses predetermined criteria for selection.
Robust mislabel logistic regression without modeling mislabel probabilities.
Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun
2018-03-01
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Anggraini, N.
2017-02-01
This research aims to reduce destructive behavior, such as throwing learning materials, in an autistic student by using the correctional “NO!” approach at the CANDA educational institution in Surakarta. This research uses the Single Subject Research (SSR) method with an A-B design, that is, baseline and intervention. The subject of this research is one autistic student of the CANDA educational institution, named G.A.P. Data were collected through direct observation, recording events during the baseline and intervention phases. Data were analyzed by simple descriptive statistical analysis and are displayed in graphical form. Based on the results of the data analysis, it could be concluded that the destructive behavior of throwing learning materials was significantly reduced after the intervention. Based on the research results, the correctional “NO!” approach can be used by teachers or therapists to reduce destructive behavior in autistic students.
`Relativistic' corrections to the mass of a plucked guitar string
NASA Astrophysics Data System (ADS)
Kolodrubetz, Michael; Polkovnikov, Anatoli
Quantum systems respond non-adiabatically when parameters controlling them are ramped at a finite rate. If the parameters themselves are dynamical - for instance the position of a box that defines the boundary of a quantum field - the feedback of these excitations gives rise to effective Newtonian equations of motion for the parameter. For the age-old problem of photons in a box, this correction gives rise to a mass proportional to the energy of the photons. We show that a similar correction arises for a classical guitar string plucked with energy E; moving clamps at the ends of the string requires inertial mass m = 2E/cs^2, where cs is the speed of sound. This quasi-relativistic effect should be observable in freshman physics level experiments. We then comment on how these simple methods have been readily extended to treat problems such as ramps and quenches of strongly-interacting superconductors and dynamical trapping near a quantum critical point.
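Plugging in plausible guitar numbers (assumed values, not from the abstract) shows the size of the effect:

```python
# Effective inertial mass added to the clamps by a plucked string,
# m = 2 E / c_s**2, evaluated for plausible guitar numbers (assumed).
E = 1e-3            # vibrational energy of the pluck, J
mu = 5e-4           # linear mass density of the string, kg/m
T = 80.0            # string tension, N
c_s = (T / mu) ** 0.5          # transverse wave speed, m/s (here 400 m/s)
m = 2 * E / c_s**2             # 'relativistic' mass correction, kg
print(f"c_s = {c_s:.0f} m/s, m = {m:.2e} kg")   # ~1e-8 kg: tiny but nonzero
```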
Level repulsion and band sorting in phononic crystals
NASA Astrophysics Data System (ADS)
Lu, Yan; Srivastava, Ankit
2018-02-01
In this paper we consider the problem of avoided crossings (level repulsion) in phononic crystals and suggest a computationally efficient strategy to distinguish them from normal cross points. This process is essential for the correct sorting of the phononic bands and, subsequently, for the accurate determination of mode continuation, group velocities, and emergent properties which depend on them such as thermal conductivity. Through explicit phononic calculations using the generalized Rayleigh quotient, we identify exact locations of exceptional points in the complex wavenumber domain which result in level repulsion in the real domain. We show that in the vicinity of the exceptional point the relevant phononic eigenvalue surfaces resemble the surfaces of a 2 by 2 parameter-dependent matrix. Along a closed loop encircling the exceptional point we show that the phononic eigenvalues are exchanged, just as they are for the 2 by 2 matrix case. However, the behavior of the associated eigenvectors is shown to be more complex in the phononic case. Along a closed loop around an exceptional point, we show that the eigenvectors can flip signs multiple times, unlike a 2 by 2 matrix where the flip of sign occurs only once. Finally, we exploit these eigenvector sign flips around exceptional points to propose a simple and efficient method of distinguishing them from normal crosses and of correctly sorting the band structure. Our proposed method is roughly an order of magnitude faster than the zoom-in method and correctly identifies > 96% of the cases considered. Both its speed and accuracy can be further improved and we suggest some ways of achieving this. Our method is general and, as such, would be directly applicable to other eigenvalue problems where the eigenspectrum needs to be correctly sorted.
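A generic continuity-based sorter can be sketched by matching eigenvectors between neighbouring wavenumbers via maximal overlap; the paper's exceptional-point test is more elaborate, so this sketch only shows the baseline idea of following eigenvectors through crossings.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sort_bands(eigvals, eigvecs):
    """Reorder bands along a k-path by maximal eigenvector overlap.

    eigvals: (n_k, n_bands); eigvecs: (n_k, dim, n_bands).
    A generic continuity-based sorter, not the paper's algorithm.
    """
    for k in range(1, eigvals.shape[0]):
        overlap = np.abs(eigvecs[k - 1].conj().T @ eigvecs[k])
        _, col = linear_sum_assignment(-overlap)   # maximise total overlap
        eigvals[k] = eigvals[k, col]
        eigvecs[k] = eigvecs[k][:, col]
    return eigvals, eigvecs

# Demo: two bands of a parameter-dependent 2x2 Hamiltonian with an
# avoided crossing near k = 0.5; eigh's ascending order swaps the bands,
# while sort_bands follows the eigenvectors instead.
ks = np.linspace(0, 1, 50)
vals = np.empty((50, 2))
vecs = np.empty((50, 2, 2), dtype=complex)
for i, k in enumerate(ks):
    h = np.array([[k, 0.01], [0.01, 1 - k]])
    vals[i], vecs[i] = np.linalg.eigh(h)
vals, vecs = sort_bands(vals, vecs)
```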
NASA Astrophysics Data System (ADS)
Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis
2010-01-01
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial distribution of the scatter intensity distribution with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
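A minimal sketch of the convolution step, assuming a Gaussian scatter kernel and a fixed scatter fraction in place of the paper's Monte Carlo derived, object-size-dependent scaling:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scatter_correct(projection, kernel_sigma=40.0, scatter_fraction=0.15):
    """Convolution-based scatter estimate for one FDCT projection.

    The scatter intensity distribution is approximated by convolving the
    measured projection with a broad (here Gaussian) kernel and scaling it;
    the fixed fraction below stands in for the object-size-dependent
    scaling described in the paper.
    """
    scatter = scatter_fraction * gaussian_filter(projection, kernel_sigma)
    corrected = projection - scatter
    return np.clip(corrected, 0, None)   # keep intensities physical

# Example on a synthetic 512x512 projection
proj = np.ones((512, 512)) * 1000.0
print(scatter_correct(proj).mean())      # ~850 for a 15% scatter fraction
```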
Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J
2017-09-01
The Letournel classification of acetabular fracture shows poor reproducibility in inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, in which fracture types; and (3) feasibility using the open-source itksnap 3.0 software package without incurring extra cost for users. The hypothesis was that semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and 6 others used reconstructions obtained by semi-automatic segmentation using the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups using the chi-squared test. Assessment was repeated 2 weeks later, to determine intra-observer reproducibility. Correct classification rates were significantly higher in the segmentation group: 114/138 (83%) versus 71/138 (52%); P<0.0001. The difference was greater for simple (36/36 (100%) versus 17/36 (47%); P<0.0001) than complex fractures (79/102 (77%) versus 54/102 (53%); P=0.0004). Mean segmentation time per fracture was 27±3min [range, 21-35min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC=0.88), and for simple (ICC=0.92) and complex fractures (ICC=0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct acetabular fracture classification on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training. III: prospective case-control study of a diagnostic procedure. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Li, Shu-Shi; Huang, Cui-Ying; Hao, Jiao-Jiao; Wang, Chang-Sheng
2014-03-05
In this article, a polarizable dipole-dipole interaction model is established to estimate the equilibrium hydrogen bond distances and the interaction energies for hydrogen-bonded complexes containing peptide amides and nucleic acid bases. We regard the chemical bonds N-H, C=O, and C-H as bond dipoles. The magnitude of the bond dipole moment varies according to its environment. We apply this polarizable dipole-dipole interaction model to a series of hydrogen-bonded complexes containing the N-H···O=C and C-H···O=C hydrogen bonds, such as simple amide-amide dimers, base-base dimers, peptide-base dimers, and β-sheet models. We find that a simple two-term function, only containing the permanent dipole-dipole interactions and the van der Waals interactions, can produce the equilibrium hydrogen bond distances compared favorably with those produced by the MP2/6-31G(d) method, whereas the high-quality counterpoise-corrected (CP-corrected) MP2/aug-cc-pVTZ interaction energies for the hydrogen-bonded complexes can be well-reproduced by a four-term function which involves the permanent dipole-dipole interactions, the van der Waals interactions, the polarization contributions, and a corrected term. Based on the calculation results obtained from this polarizable dipole-dipole interaction model, the natures of the hydrogen bonding interactions in these hydrogen-bonded complexes are further discussed. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Michoud, V.; Hansen, R. F.; Locoge, N.; Stevens, P. S.; Dusanter, S.
2015-04-01
The hydroxyl radical (OH) is an important oxidant in the daytime troposphere that controls the lifetime of most trace gases, whose oxidation leads to the formation of harmful secondary pollutants such as ozone (O3) and secondary organic aerosols (SOA). In spite of the importance of OH, uncertainties remain concerning its atmospheric budget, and integrated measurements of the total sink of OH can help reduce these uncertainties. In this context, several methods have been developed to measure the first-order loss rate of ambient OH, called total OH reactivity. Among these techniques, the Comparative Reactivity Method (CRM) is promising and has already been widely used in the field and in atmospheric simulation chambers. This technique relies on monitoring competitive OH reactions between a reference molecule (pyrrole) and compounds present in ambient air inside a sampling reactor. However, artefacts and interferences exist for this method and a thorough characterization of the CRM technique is needed. In this study, we present a detailed characterization of a CRM instrument, assessing the corrections that need to be applied to ambient measurements. The main corrections are, in the order of their integration in the data processing: (1) a correction for a change in relative humidity between zero air and ambient air, (2) a correction for the formation of spurious OH when artificially produced HO2 reacts with NO in the sampling reactor, and (3) a correction for a deviation from pseudo-first-order kinetics. The dependences of these artefacts on various measurable parameters, such as the pyrrole-to-OH ratio or the bimolecular reaction rate constants of ambient trace gases with OH, are also studied. From these dependences, parameterizations are proposed to correct the OH reactivity measurements for the abovementioned artefacts. A comparison of experimental and simulation results is then discussed. The simulations were performed using a 0-D box model including either (1) a simple chemical mechanism, taking into account the inorganic chemistry from IUPAC 2001 and a simple organic chemistry scheme including only a generic RO2 compound for all oxidized organic trace gases, or (2) a more exhaustive chemical mechanism, based on the Master Chemical Mechanism (MCM), including the chemistry of the different trace gases used during laboratory experiments. Both mechanisms take into account self- and cross-reactions of radical species. The simulations using these mechanisms reproduce the magnitude of the corrections needed to account for NO interferences and a deviation from pseudo-first-order kinetics, as well as their dependence on the pyrrole-to-OH ratio and on the bimolecular reaction rate constants of trace gases. The reasonable agreement found between laboratory experiments and model simulations gives confidence in the parameterizations proposed to correct the total OH reactivity measured by CRM. However, it must be noted that the parameterizations presented in this paper are suitable for the CRM instrument used during the laboratory characterization and may not be appropriate for other CRM instruments, even if similar behaviours should be observed. It is therefore recommended that each group characterize its own instrument following the recommendations given in this study. Finally, the assessment of the limit of detection and total uncertainties is discussed and an example of field deployment of this CRM instrument is presented.
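For context, the CRM converts three measured pyrrole levels into a reactivity before the instrument-specific corrections characterized above are applied. The sketch below assumes the standard expression commonly attributed to Sinha et al. (2008), with nominal values for the pyrrole + OH rate constant and the air number density.

```python
def crm_reactivity(c1, c2, c3, k_pyr_oh=1.2e-10, number_density=2.46e19):
    """Total OH reactivity (s^-1) from the standard CRM expression,
    R = k_pyr+OH * C1 * (C3 - C2) / (C1 - C3),
    where C1 is the pyrrole mixing ratio without OH, C2 with OH in zero
    air, and C3 with OH in ambient air (all in ppbv). This is the
    commonly quoted CRM equation, before humidity, NO, and kinetics
    corrections; the rate constant (cm^3 molecule^-1 s^-1) and the air
    number density (molecules cm^-3) are nominal assumed values.
    """
    to_molec = number_density * 1e-9          # ppbv -> molecules cm^-3
    return k_pyr_oh * c1 * to_molec * (c3 - c2) / (c1 - c3)

# Illustrative pyrrole levels (ppbv), not measured data
print(f"{crm_reactivity(c1=60.0, c2=30.0, c3=40.0):.1f} s^-1")
```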
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulz, T.; Remmele, T.; Korytov, M.
2014-01-21
Based on the evaluation of lattice parameter maps in aberration-corrected high-resolution transmission electron microscopy images, we propose a simple method that allows quantifying the composition and disorder of a semiconductor alloy at the unit cell scale with high accuracy. This is realized by considering, next to the out-of-plane, also the in-plane lattice parameter component, allowing the chemical composition to be separated from the strain field. Considering only the out-of-plane lattice parameter component not only yields large deviations from the true local alloy content but also carries the risk of identifying false ordering phenomena like formations of chains or platelets. Our method is demonstrated on image simulations of relaxed supercells, as well as on experimental images of an In0.20Ga0.80N quantum well. In principle, our approach is applicable to all epitaxially strained compounds in the form of quantum wells, free-standing islands, quantum dots, or wires.
Correction of stain variations in nuclear refractive index of clinical histology specimens
Uttam, Shikhar; Bista, Rajan K.; Hartman, Douglas J.; Brand, Randall E.; Liu, Yang
2011-01-01
For any technique to be adopted into a clinical setting, it is imperative that it seamlessly integrates with well-established clinical diagnostic workflow. We recently developed an optical microscopy technique—spatial-domain low-coherence quantitative phase microscopy (SL-QPM) that can extract the refractive index of the cell nucleus from the standard histology specimens on glass slides prepared via standard clinical protocols. This technique has shown great potential in detecting cancer with a better sensitivity than conventional pathology. A major hurdle in the clinical translation of this technique is the intrinsic variation among staining agents used in histology specimens, which limits the accuracy of refractive index measurements of clinical samples. In this paper, we present a simple and easily generalizable method to remove the effect of variations in staining levels on nuclear refractive index obtained with SL-QPM. We illustrate the efficacy of our correction method by applying it to variously stained histology samples from animal model and clinical specimens. PMID:22112118
ARES v2: new features and improved performance
NASA Astrophysics Data System (ADS)
Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.
2015-05-01
Aims: We present a new upgraded version of ARES. The new version includes a series of interesting new features such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is fully compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant improvement in performance thanks to the implementation of parallel computation using the OpenMP library. Automatic Routine for line Equivalent widths in stellar Spectra (ARES) webpage: http://www.astro.up.pt/~sousasag/ares/. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
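The radial-velocity correction step can be pictured with a toy cross-correlation against a template. The sketch below is only a schematic of that idea; the mask construction, sampling, and peak fitting that ARES actually uses are not described in the abstract, so everything here is an assumption.

```python
# Hedged sketch of radial-velocity correction via a cross-correlation
# function, in the spirit of the ARES v2 description.
import numpy as np

C_KMS = 299792.458  # speed of light (km/s)

def rv_correct(wave, flux, mask_wave, mask_flux, v_grid):
    """Return rest-frame wavelengths and the best-fit velocity (km/s)."""
    ccf = []
    for v in v_grid:
        shifted = mask_wave * (1.0 + v / C_KMS)         # Doppler-shift template
        template = np.interp(wave, shifted, mask_flux)  # resample onto data grid
        ccf.append(np.sum(flux * template))
    v_best = v_grid[int(np.argmax(ccf))]
    return wave / (1.0 + v_best / C_KMS), v_best

# Example: synthetic Gaussian absorption line shifted by +30 km/s
wave = np.linspace(5000.0, 5010.0, 2000)
line = lambda w, c: 1.0 - 0.5 * np.exp(-0.5 * ((w - c) / 0.05) ** 2)
obs = line(wave, 5005.0 * (1 + 30.0 / C_KMS))
rest_wave, v = rv_correct(wave, obs - 1.0, wave, line(wave, 5005.0) - 1.0,
                          np.linspace(-100, 100, 401))
print(f"measured RV ~ {v:.1f} km/s")
```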
Hack, Erwin; Gundu, Phanindra Narayan; Rastogi, Pramod
2005-05-10
An innovative technique for reducing speckle noise and improving the intensity profile of the speckle correlation fringes is presented. The method is based on reducing the range of the modulation intensity values of the speckle interference pattern. After the fringe pattern is corrected adaptively at each pixel, a simple morphological filtering of the fringes is sufficient to obtain smoothed fringes. The concept is presented both analytically and by simulation by using computer-generated speckle patterns. The experimental verification is performed by using an amplitude-only spatial light modulator (SLM) in a conventional electronic speckle pattern interferometry setup. The optical arrangement for tuning a commercially available LCD array for amplitude-only behavior is described. The method of feedback to the LCD SLM to modulate the intensity of the reference beam in order to reduce the modulation intensity values is explained, and the resulting fringe pattern and increase in the signal-to-noise ratio are discussed.
Empirical entropic contributions in computational docking: evaluation in APS reductase complexes.
Chang, Max W; Belew, Richard K; Carroll, Kate S; Olson, Arthur J; Goodsell, David S
2008-08-01
The results from reiterated docking experiments may be used to evaluate an empirical vibrational entropy of binding in ligand-protein complexes. We have tested several methods for evaluating the vibrational contribution to binding of 22 nucleotide analogues to the enzyme APS reductase. These include two cluster size methods that measure the probability of finding a particular conformation, a method that estimates the extent of the local energetic well by looking at the scatter of conformations within clustered results, and an RMSD-based method that uses the overall scatter and clustering of all conformations. We have also directly characterized the local energy landscape by randomly sampling around docked conformations. The simple cluster size method shows the best performance, improving the identification of correct conformations in multiple docking experiments. © 2008 Wiley Periodicals, Inc.
Blazar, P E; Floyd, E W; Earp, B E
2016-07-01
Controversy exists regarding intra-operative treatment of residual proximal interphalangeal joint contractures after Dupuytren's fasciectomy. We test the hypothesis that a simple release of the digital flexor sheath can correct residual fixed flexion contracture after subtotal fasciectomy. We prospectively enrolled 19 patients (22 digits) with Dupuytren's contracture of the proximal interphalangeal joint. The average pre-operative extension deficit of the proximal interphalangeal joints was 58° (range 30-90). The flexion contracture of the joint was corrected to an average of 28° after fasciectomy. In most digits (20 of 21), subsequent incision of the flexor sheath further corrected the contracture by an average of 23°, resulting in correction to an average flexion contracture of 4.7° (range 0-40). Our results indicate that contracture of the tendon sheath contributes to Dupuytren's contracture of the joint and that sheath release is a simple, low-morbidity addition for correcting Dupuytren's contractures of the proximal interphalangeal joint. Additional release of the proximal interphalangeal joint after fasciectomy, beyond release of the flexor sheath, is not necessary in many patients. IV (Case Series, Therapeutic). © The Author(s) 2015.
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1995-01-01
Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. This paper presents a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.
Determination of copper in tap water using solid-phase spectrophotometry
NASA Technical Reports Server (NTRS)
Hill, Carol M.; Street, Kenneth W.; Philipp, Warren H.; Tanner, Stephen P.
1994-01-01
A new application of ion exchange films is presented. The films are used in a simple analytical method of directly determining low concentrations of Cu(2+) in aqueous solutions, in particular, drinking water. The basis for this new test method is the color and absorption intensity of the ion when adsorbed onto the film. The film takes on the characteristic color of the adsorbed cation, which is concentrated on the film by many orders of magnitude. The linear relationship between absorbance (corrected for variations in film thickness) and solution concentration makes the determinations possible. These determinations agree well with flame atomic absorption determinations.
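The linear absorbance-concentration relationship the method relies on amounts to a one-line calibration. The sketch below shows such a calibration with invented numbers; the paper's actual thickness-correction procedure and calibration data are not reproduced here.

```python
# Hedged sketch of the calibration implied by the abstract: absorbance,
# corrected for film thickness, is linear in Cu(2+) concentration.
import numpy as np

# Calibration standards: concentration (mg/L) vs. thickness-corrected absorbance
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
absorbance = np.array([0.002, 0.051, 0.103, 0.198, 0.402])

slope, intercept = np.polyfit(conc, absorbance, 1)  # Beer's-law-style line

def cu_concentration(a_corrected):
    """Invert the linear calibration for an unknown sample."""
    return (a_corrected - intercept) / slope

print(f"unknown with A = 0.150 -> {cu_concentration(0.150):.2f} mg/L")
```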
Management of prominent ears: personal approach.
Pérez-Macias, José Manuel
2008-03-01
Various methods for correcting prominent ears have been reported. Although anterior antihelical cartilage abrasion combined with posterior retention sutures is a conventional procedure, it does not include anterior conchal cartilage abrasion, which would allow easier reduction of the chondromastoid angle. A simple and effective technique is described that involves using a rasp to score the whole anterior surface of the auricular cartilage, including the concha, in combination with Mustarde-type conchal-antihelical and conchal-mastoid retention sutures. This method was applied to 342 patients (675 ears) over 23 years, with follow-up periods varying from 18 to 24 months. Good results were obtained for all patients with minimal complications.
Unthermal Charged Massive Hawking Radiation from a Reissner-Nordström Black Hole
NASA Astrophysics Data System (ADS)
Zhou, Shiwei; Liu, Wenbiao
2008-03-01
Using the Damour-Ruffini method, Hawking radiation of massive charged particles from a Reissner-Nordström black hole is investigated. When the back-reaction of the particles' energy and charge on the spacetime is considered, we obtain a non-thermal spectrum. With this corrected spectrum, it is possible for information to escape from the black hole; this can be used to address the information loss paradox, and an underlying unitary theory can be satisfied. The same conclusion as in earlier works can be drawn; however, our treatment differs from theirs, and the method is simpler and more explicit.
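For context, the back-reaction-corrected emission rate that produces such a non-thermal spectrum is usually written in the tunneling form below (natural units, G = ħ = c = k_B = 1). These are standard textbook expressions assumed for illustration, not equations quoted from this paper.

```latex
% Back-reaction-corrected emission of a quantum (energy \omega, charge q)
% from a Reissner-Nordstrom black hole (standard tunneling result):
\Gamma \;\propto\; e^{\Delta S_{\mathrm{BH}}}
       \;=\; \exp\!\Big[\pi\, r_+^2(M-\omega,\,Q-q) \;-\; \pi\, r_+^2(M,\,Q)\Big],
\qquad
r_+(M,Q) \;=\; M + \sqrt{M^2 - Q^2}.
% Expanding to first order in \omega and q recovers the thermal Boltzmann
% factor \exp[-(\omega - \Phi q)/T_H]; the higher-order terms make the
% spectrum non-thermal.
```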
Xu, Y.; Xia, J.; Miller, R.D.
2006-01-01
Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating the offset range for a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface wave surveys in near-surface applications. © 2005 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Swanson, Robert S; Crandall, Stewart M
1948-01-01
A limited number of lifting-surface-theory solutions for wings with chordwise loadings resulting from angle of attack, parabolic-arc camber, and flap deflection are now available. These solutions were studied with the purpose of determining methods of extrapolating the results in such a way that they could be used to determine lifting-surface-theory values of the aspect-ratio corrections to the lift and hinge-moment parameters for both angle-of-attack and flap-deflection-type loadings, so that the characteristics of horizontal tail surfaces could be predicted from section data with sufficient accuracy for engineering purposes. Such a method was devised for horizontal tail surfaces with full-span elevators. In spite of the fact that the theory involved is rather complex, the method is simple to apply and may be applied without any knowledge of lifting-surface theory. A comparison of experimental finite-span and section values with the estimated values of the lift and hinge-moment parameters for three horizontal tail surfaces was made to provide an experimental verification of the suggested method. (author)
Gynecomastia: the horizontal ellipse method for its correction.
Gheita, Alaa
2008-09-01
Gynecomastia is an extremely disturbing deformity affecting males, especially when it occurs in young subjects. Such subjects generally have no hormonal anomalies, and thus either liposuction or surgical intervention, depending on the type and consistency of the breast, is required for treatment. If there is slight hypertrophy alone with no ptosis, then subcutaneous mastectomy is usually sufficient. However, when hypertrophy and/or ptosis are present, then corrective surgery on the skin and breast is mandatory to obtain a good cosmetic result. Most of the procedures suggested for reduction of the male breast are derived from reduction mammaplasty methods used for females. They have some disadvantages, mainly the multiple scars, which remain apparent in males, unusual shape, and the lack of symmetry with regard to the size of both breasts and/or the nipple position. The author presents a new, simple method that has proven superior to any previous method described so far. It consists of a horizontal excision ellipse of the breast's redundant skin and deep excess tissue, and a superior pedicle flap carrying the areola-nipple complex to its new site on the chest wall. The method described yields excellent shape, symmetry, and minimal scars. This new method for treating gynecomastia is described in detail, its early and late operative results are shown, and its advantages are discussed.
Kang, Wonseok; Yu, Soohwan; Seo, Doochun; Jeong, Jaeheon; Paik, Joonki
2015-09-10
In very high-resolution (VHR) push-broom-type satellite sensor data, stripe and random noise have been chronic problems, and destriping and denoising methods have attracted major research efforts in the remote sensing field. Since the estimation of the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To solve these problems, we present a novel method to correct VHR data acquired by a push-broom-type sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared to various existing methods on a set of push-broom-type sensor data acquired by the Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise, and the proposed method shows significantly improved results over existing state-of-the-art methods in terms of both qualitative and quantitative assessments.
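The wavelet-Fourier stage can be sketched generically: stripes that are constant along one axis condense into one detail sub-band, where a notch filter on the near-DC frequencies removes them. The sketch below follows the well-known wavelet-FFT destriping recipe of Münch et al. (2009) and needs the PyWavelets package; the authors' exact filter design, the multiscale NLM stage, and all parameter values are assumptions of this sketch, not taken from the paper.

```python
# Hedged sketch of combined wavelet-Fourier destriping (first stage only).
import numpy as np
import pywt

def destripe(img, wavelet="db4", levels=3, sigma=0.05):
    """Suppress vertical stripes via the vertical-detail sub-bands."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    out = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        f = np.fft.fft(cV, axis=0)               # FFT along the stripe direction
        k = np.fft.fftfreq(cV.shape[0])
        damp = 1.0 - np.exp(-k**2 / (2.0 * sigma**2))
        f *= damp[:, None]                        # notch out near-DC components
        out.append((cH, np.real(np.fft.ifft(f, axis=0)), cD))
    return pywt.waverec2(out, wavelet)

# Example: synthetic image with per-column (vertical) stripes
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(256), np.hanning(256))
striped = clean + 0.05 * rng.standard_normal(256)[None, :]
print(np.abs(destripe(striped) - clean).mean())
```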
A simple way to plan implant positioning: the "S-technique".
Piano, Sergio
2011-01-01
This study presents a technique for improving implant placements. As is widely known, a correct positioning is essential in restoration-driven implants, as well as in tilted implants in order to obtain satisfactory final functional and esthetic results. To this end, some authors have emphasized the importance of using a diagnostic and/or surgical guide to plan the exact implant position. In practice, one of the clinical problems faced is how to check the accuracy of the template prior to initiating the surgical phase. A simple method called the "S-Technique" is proposed in order to evaluate and to change, if necessary, the projected position of the implants by way of metal rods as radiopaque markers. This device is easy to produce and is cost-saving to the clinician and, therefore, to the patient. Furthermore, in specific patients, this method could also decrease the need for computerized tomography scans and/or radiographs, thus reducing health risks for the patient.
Dai, Meiling; Yang, Fujun; He, Xiaoyuan
2012-04-20
A simple but effective fringe projection profilometry method is proposed to measure 3D shape using one snapshot color sinusoidal fringe pattern. One color fringe pattern encoded with a sinusoidal fringe (as the red component) and one uniform intensity pattern (as the blue component) is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color fringe pattern is separated into its RGB components, and a division operation is applied to the red and blue channels to reduce the variable reflection intensity. Shape information of the tested object is decoded by applying an arcsine algorithm to the normalized fringe pattern with subpixel resolution. In the case of fringe discontinuities caused by height steps or spatially isolated surfaces, the separated blue component is binarized and used for correcting the phase demodulation. A simple and robust method is also introduced to compensate for the nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
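The decoding chain described, dividing the red channel by the blue channel and applying an arcsine, can be sketched as follows. The normalization and gain-removal steps here are crude stand-ins for the paper's procedure and should be read as assumptions.

```python
# Hedged sketch of the red/blue division plus arcsine phase decoding.
import numpy as np

def decode_phase(red, blue, eps=1e-6):
    """Recover wrapped phase from one color fringe image."""
    ratio = red / np.maximum(blue, eps)          # cancels surface reflectivity
    # Rescale to [-1, 1] assuming ratio = gain * 0.5 * (1 + sin(phase))
    norm = ratio / np.maximum(ratio.max(), eps)  # crude gain removal
    s = np.clip(2.0 * norm - 1.0, -1.0, 1.0)
    return np.arcsin(s)                          # wrapped phase in [-pi/2, pi/2]

# Synthetic check: reflectivity-modulated fringe, one image row
x = np.linspace(0, 4 * np.pi, 512)
reflect = 0.5 + 0.4 * np.random.default_rng(1).random(512)
red = reflect * 0.5 * (1.0 + np.sin(x))
blue = reflect.copy()
phase = decode_phase(red[None, :], blue[None, :])
print(phase.min(), phase.max())
```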
A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars
NASA Astrophysics Data System (ADS)
Zou, Hong; Ye, Yu Guang; Wang, Jin Song; Nielsen, Erling; Cui, Jun; Wang, Xiao Dong
2016-04-01
A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars is introduced in this study. The neutral densities at 130 km can be derived from the ionospheric and atmospheric measurements of the Radio Science experiment on board Mars Global Surveyor (MGS). The derived neutral densities cover a large longitude range in northern high latitudes from summer to late autumn during 3 Martian years, which fills the gap of the previous observations for the upper atmosphere of Mars. The simulations of the Laboratoire de Météorologie Dynamique Mars global circulation model can be corrected with a simple linear equation to fit the neutral densities derived from the first MGS/RS (Radio Science) data sets (EDS1). The corrected simulations with the same correction parameters as for EDS1 match the derived neutral densities from two other MGS/RS data sets (EDS2 and EDS3) very well. The derived neutral density from EDS3 shows a dust storm effect, which is in accord with the Mars Express (MEX) Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars (SPICAM) measurement. The neutral density derived from the MGS/RS measurements can be used to validate Martian atmospheric models. The method presented in this study can be applied to other radio occultation measurements, such as the results of the Radio Science experiment on board MEX.
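The "simple linear equation" correction amounts to a two-parameter fit between model output and derived densities, reused unchanged on later data sets. The sketch below illustrates this with synthetic numbers; the actual coefficients and densities are not from the paper.

```python
# Hedged sketch of the linear model-to-observation correction described:
# fit once on EDS1, then reuse the same coefficients for EDS2/EDS3.
import numpy as np

rng = np.random.default_rng(2)
rho_derived = rng.uniform(1e-11, 5e-11, 40)      # "observed" densities (kg/m^3)
rho_model = 0.8 * rho_derived + 2e-12 + rng.normal(0, 1e-12, 40)  # GCM output

# Linear correction rho_corrected = a * rho_model + b
a, b = np.polyfit(rho_model, rho_derived, 1)

def correct(rho):
    """Apply the EDS1-fitted correction to any model density."""
    return a * rho + b

print(a, b)
```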
Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation
Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan
2015-01-01
Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example.
Shoemaker, W. Barclay; Sumner, D.M.
2006-01-01
Corrections can be used to estimate actual wetland evapotranspiration (AET) from potential evapotranspiration (PET) as a means to define the hydrology of wetland areas. Many alternate parameterizations for correction coefficients for three PET equations are presented, covering a wide range of possible data-availability scenarios. At nine sites in the wetland Everglades of south Florida, USA, the relatively complex PET Penman equation was corrected to daily total AET with smaller standard errors than the PET simple and Priestley-Taylor equations. The simpler equations, however, required less data (and thus less funding for instrumentation), with the possibility of being corrected to AET with slightly larger, comparable, or even smaller standard errors. Air temperature generally corrected PET simple most effectively to wetland AET, while wetland stage and humidity generally corrected PET Priestley-Taylor and Penman most effectively to wetland AET. Stage was identified for PET Priestley-Taylor and Penman as the data type with the most correction ability at sites that are dry part of each year or dry part of some years. Finally, although surface water generally was readily available at each monitoring site, AET was not occurring at potential rates, as conceptually expected under well-watered conditions. Apparently, factors other than water availability, such as atmospheric and stomatal resistances to vapor transport, also were limiting the PET rate. © 2006, The Society of Wetland Scientists.
NASA Astrophysics Data System (ADS)
Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.
2018-01-01
We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (z_sort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳ 4 h⁻¹ Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
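The core of SORT is compact enough to sketch: draw a sample from the reference dN/dz for the sub-volume and assign it to the uncertain measurements in rank order. The sketch below omits the selection-function weighting and the averaging over overlapping sub-volumes, and all numbers are synthetic.

```python
# Hedged sketch of the rank-preserving reassignment at the heart of SORT.
import numpy as np

def sort_recover(z_uncertain, z_reference, rng):
    """Return recovered redshifts: reference-distributed, rank-preserving."""
    n = len(z_uncertain)
    draw = np.sort(rng.choice(z_reference, size=n, replace=True))
    recovered = np.empty(n)
    recovered[np.argsort(z_uncertain)] = draw   # preserve original rank order
    return recovered

rng = np.random.default_rng(3)
z_true = rng.uniform(0.1, 0.5, 1000)            # reference sample (precise z)
z_noisy = z_true + rng.normal(0, 0.02, 1000)    # uncertain measurements
z_rec = sort_recover(z_noisy, z_true, rng)
print(np.std(z_rec - z_true), "vs raw", np.std(z_noisy - z_true))
```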
Corrected formula for the polarization of second harmonic plasma emission
NASA Technical Reports Server (NTRS)
Melrose, D. B.; Dulk, G. A.; Gary, D. E.
1980-01-01
Corrections for the theory of polarization of second harmonic plasma emission are proposed. The nontransversality of the magnetoionic waves was not taken into account correctly and is here corrected. The corrected and uncorrected results are compared for two simple cases of parallel and isotropic distributions of Langmuir waves. It is found that whereas with the uncorrected formula plausible values of the coronal magnetic fields were obtained from the observed polarization of the second harmonic, the present results imply fields which are stronger by a factor of three to four.
Assimilation of SMOS Retrievals in the Land Information System
NASA Technical Reports Server (NTRS)
Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.
2016-01-01
The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm³ cm⁻³. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation (DA) and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve.
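CDF matching itself is a short quantile-mapping computation. The sketch below reduces the aggregation by land cover or soil type to a single class and uses synthetic climatologies; it illustrates the general technique, not the Land Information System implementation.

```python
# Hedged sketch of CDF matching for retrieval bias correction.
import numpy as np

def cdf_match(retrievals, model_clim, obs_clim):
    """Quantile-map retrievals from the observed to the model climatology."""
    obs_sorted = np.sort(obs_clim)
    mod_sorted = np.sort(model_clim)
    # Empirical CDF value of each retrieval within the observation record
    q = np.searchsorted(obs_sorted, retrievals) / len(obs_sorted)
    q = np.clip(q, 0.0, 1.0 - 1e-9)
    return mod_sorted[(q * len(mod_sorted)).astype(int)]

rng = np.random.default_rng(4)
model = rng.beta(4, 6, 5000) * 0.5          # model soil moisture climatology
obs = np.clip(model + 0.08 + rng.normal(0, 0.02, 5000), 0, 0.5)  # biased obs
print(cdf_match(np.array([0.25, 0.35]), model, obs))
```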
Application of distance correction to ChemCam laser-induced breakdown spectroscopy measurements
Mezzacappa, A.; Melikechi, N.; Cousin, A.; ...
2016-04-04
Laser-induced breakdown spectroscopy (LIBS) provides chemical information from atomic, ionic, and molecular emissions from which geochemical composition can be deciphered. Analysis of LIBS spectra in cases where targets are observed at different distances, as is the case for the ChemCam instrument on the Mars rover Curiosity, which performs analyses at distances between 2 and 7.4 m, is not a simple task. Previously, we showed that spectral distance correction based on a proxy spectroscopic standard created from first-shot dust observations on Mars targets ameliorates the distance bias in multivariate-based elemental-composition predictions of laboratory data. In this work, we correct an expanded set of neutral and ionic spectral emissions for distance bias in the ChemCam data set. By using and testing different selection criteria to generate multiple proxy standards, we find a correction that minimizes the difference in spectral intensity measured at two different distances and increases spectral reproducibility. When the quantitative performance of distance correction is assessed, there is improvement for SiO2, Al2O3, CaO, FeOT, Na2O, K2O, that is, for most of the major rock-forming elements, and for the total major-element weight percent predicted. However, for MgO the method does not provide improvements, while for TiO2 it yields inconsistent results. Additionally, we observed that many emission lines do not behave consistently with distance, evidenced from laboratory analogue measurements and ChemCam data. This limits the effectiveness of the method.
Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
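A useful observation behind such spectrum estimation is that polychromatic transmission is linear in the discretized spectrum weights, so wedge measurements yield a (badly conditioned) linear system. The sketch below solves it with non-negative least squares on a toy attenuation model; the paper's dimension-reduction, regularization, and cross-validation machinery is deliberately omitted, and every number is invented.

```python
# Hedged sketch of effective-spectrum estimation from step-wedge data.
import numpy as np
from scipy.optimize import nnls

energies = np.linspace(20, 120, 51)                  # keV bins (assumed)
mu_water = 0.6 * (30.0 / energies) ** 3 + 0.18       # toy attenuation (1/cm)
thicknesses = np.linspace(0.5, 20.0, 25)             # wedge steps (cm)

# Forward model: T_j = sum_i s_i * exp(-mu_i * t_j), linear in s
A = np.exp(-np.outer(thicknesses, mu_water))

s_true = np.exp(-0.5 * ((energies - 60.0) / 15.0) ** 2)
s_true /= s_true.sum()
T_meas = A @ s_true + np.random.default_rng(5).normal(0, 1e-4, len(thicknesses))

s_est, _ = nnls(A, T_meas)            # ill-conditioned without regularization,
print(float(np.abs(A @ s_est - T_meas).max()))  # as the paper points out
```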
On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation
NASA Astrophysics Data System (ADS)
Ouwerkerk, Ronald; Bottomley, Paul A.
2001-02-01
Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 that ignores effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip-angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine-kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and flip-angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange. Even in systems where metabolite concentrations change, accurate saturation corrections are possible without much loss in SNR.
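The single-exponential model being tested corresponds to the steady-state (Ernst) signal equation, and the dual-angle method inverts a ratio of two such signals for T1. The sketch below implements exactly that exchange-free model with illustrative numbers; it deliberately contains none of the chemical-exchange physics that the paper analyzes.

```python
# Hedged sketch of the exchange-free saturation correction model.
import numpy as np
from scipy.optimize import brentq

def saturation_factor(tr, t1, theta):
    """Steady-state signal relative to M0*sin(theta) (Ernst formula)."""
    e1 = np.exp(-tr / t1)
    return (1.0 - e1) / (1.0 - e1 * np.cos(theta))

def dual_angle_t1(s1, s2, tr, th1, th2):
    """Solve the two-flip-angle signal ratio for T1."""
    f = lambda t1: (saturation_factor(tr, t1, th1) * np.sin(th1)) / \
                   (saturation_factor(tr, t1, th2) * np.sin(th2)) - s1 / s2
    return brentq(f, 1e-3, 60.0)

tr, t1 = 2.0, 4.0                                   # seconds, PCr-like T1
th1, th2 = np.radians(30), np.radians(60)
s1 = saturation_factor(tr, t1, th1) * np.sin(th1)
s2 = saturation_factor(tr, t1, th2) * np.sin(th2)
print(dual_angle_t1(s1, s2, tr, th1, th2))          # recovers ~4.0 s
```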
Experimental Estimating Deflection of a Simple Beam Bridge Model Using Grating Eddy Current Sensors
Lü, Chunfeng; Liu, Weiwen; Zhang, Yongjie; Zhao, Hui
2012-01-01
A novel three-point method using a grating eddy current absolute position sensor (GECS) for bridge deflection estimation is proposed in this paper. Real spatial positions of the measuring points along the span axis are directly used as relative reference points of each other rather than using any other auxiliary static reference points for measuring devices in a conventional method. Every three adjacent measuring points are defined as a measuring unit and a straight connecting bar with a GECS fixed on the center section of it links the two endpoints. In each measuring unit, the displacement of the mid-measuring point relative to the connecting bar measured by the GECS is defined as the relative deflection. Absolute deflections of each measuring point can be calculated from the relative deflections of all the measuring units directly without any correcting approaches. Principles of the three-point method and displacement measurement of the GECS are introduced in detail. Both static and dynamic experiments have been carried out on a simple beam bridge model, which demonstrate that the three-point deflection estimation method using the GECS is effective and offers a reliable way for bridge deflection estimation, especially for long-term monitoring.
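The reconstruction from relative to absolute deflections can be written as a small linear solve: each unit reads r_i = a_i - (a_{i-1} + a_{i+1})/2, and zero deflection at the supports closes the system. The sketch below assumes units centered on every interior measuring point, which is an idealization of the instrument layout rather than the paper's exact configuration.

```python
# Hedged sketch of recovering absolute deflections from unit readings.
import numpy as np

def absolute_deflections(relative):
    """Solve for a_1..a_n given unit readings r_i = a_i - (a_{i-1}+a_{i+1})/2."""
    n = len(relative)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0
        if i > 0:
            A[i, i - 1] = -0.5
        if i < n - 1:
            A[i, i + 1] = -0.5   # boundary terms a_0 = a_{n+1} = 0 drop out
    return np.linalg.solve(A, relative)

# Synthetic check on a parabolic deflection curve
x = np.linspace(0, 1, 7)[1:-1]                 # 5 interior measuring points
a_true = -4.0 * x * (1 - x)                    # simply supported shape
full = np.concatenate(([0.0], a_true, [0.0]))
r = full[1:-1] - 0.5 * (full[:-2] + full[2:])  # what the GECS units measure
print(np.max(np.abs(absolute_deflections(r) - a_true)))  # ~1e-15
```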
Endoscopic-assisted osteotomies for the treatment of craniosynostosis.
Hinojosa, J; Esparza, J; Muñoz, M J
2007-12-01
The development of multidisciplinary units for craniofacial surgery has led to better postoperative results and a considerable decrease in morbidity in the treatment of complex craniofacial patients. Standard correction of craniosynostosis involves calvarial remodeling, often with considerable blood loss that needs to be replaced, and a lengthy hospital stay. The use of minimally invasive techniques for the correction of some of these malformations is widespread and allows the surgeon to minimize the incidence of complications by means of decreased surgical time, blood salvage, and shortened postoperative hospitalization in comparison with conventional craniofacial techniques. Simple and milder craniosynostoses are best approached by endoscopy-assisted osteotomies and render the best results. Extended procedures other than simple suturectomies have been described for more severe cases. Different osteotomies resembling the standard fronto-orbital ones have been developed for the correction, and the use of postoperative cranial orthoses may improve the final cosmetic appearance. Thus, endoscopic-assisted procedures differ from the simple strategy of single-suture resection that rendered insufficient results in the past, and different approaches can be tailored to solve these cases on a case-by-case basis.
Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods
NASA Astrophysics Data System (ADS)
Lemoine, Grady
Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert
Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan 600 phantom, an anthropomorphic chest phantom, and the Catphan 600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan 600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. The scatter-to-primary ratio estimation error on the Catphan 600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter). Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy.
Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K
2017-01-01
Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, M.; Al-Dayeh, L.; Patel, P.
It is well known that even small movements of the head can lead to artifacts in fMRI. Corrections for these movements are usually made by a registration algorithm which accounts for translational and rotational motion of the head under a rigid-body assumption. The brain, however, is not entirely rigid, and images are prone to local deformations due to CSF motion, susceptibility effects, local changes in blood flow, and inhomogeneities in the magnetic and gradient fields. Since nonrigid-body motion is not adequately corrected by approaches relying on simple rotational and translational corrections, we have investigated a general approach where an nth-order polynomial is used to map all images onto a common reference image. The coefficients of the polynomial transformation were determined through minimization of the ratio of the variance to the mean of each pixel. Simulation studies were conducted to validate the technique. Results of experimental studies using the polynomial transformation for 2D and 3D registration show a lower variance-to-mean ratio compared to simple rotational and translational corrections.
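A toy version of such polynomial registration is sketched below: a second-order displacement field warped onto a reference, with a per-frame cost standing in for the joint variance-to-mean criterion. The warp order, optimizer, and cost surrogate are choices of this sketch, not of the paper.

```python
# Hedged sketch of nonrigid registration with a 2nd-order polynomial warp.
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def warp(img, p):
    """Apply a 2nd-order polynomial displacement field to a 2D image."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    xn, yn = x / nx, y / ny                     # normalized coordinates
    dx = p[0] + p[1] * xn + p[2] * yn + p[3] * xn**2 + p[4] * yn**2
    dy = p[5] + p[6] * xn + p[7] * yn + p[8] * xn**2 + p[9] * yn**2
    return map_coordinates(img, [y + dy, x + dx], order=1, mode="nearest")

def register(frames, ref):
    """Warp each frame onto ref; report the stack's variance-to-mean ratio."""
    out = []
    for f in frames:
        cost = lambda p: np.mean((warp(f, p) - ref) ** 2)  # per-frame surrogate
        res = minimize(cost, np.zeros(10), method="Powell")
        out.append(warp(f, res.x))
    stack = np.stack([ref] + out)
    return out, np.mean(stack.var(0) / np.maximum(stack.mean(0), 1e-9))

rng = np.random.default_rng(6)
ref = rng.random((32, 32))
frames = [warp(ref, 0.5 * rng.standard_normal(10)) for _ in range(3)]
aligned, vm = register(frames, ref)
print(vm)
```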
Blending Velocities In Task Space In Computing Robot Motions
NASA Technical Reports Server (NTRS)
Volpe, Richard A.
1995-01-01
Blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In this method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames", which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.
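A minimal form of the velocity-blending idea is sketched below: over a blend window around a via frame, the outgoing segment's velocity is faded out while the incoming one is faded in by a shape function, the cycloidal option being one the abstract names. The window length and vectors are illustrative assumptions.

```python
# Hedged sketch of linear-velocity blending between two task-space segments.
import numpy as np

def cycloidal_blend(s):
    """Smooth 0->1 shape function with zero end slopes, s in [0, 1]."""
    return s - np.sin(2.0 * np.pi * s) / (2.0 * np.pi)

def blended_velocity(v_prev, v_next, t, t_blend):
    """Blend segment velocities over a window of length t_blend."""
    s = np.clip(t / t_blend, 0.0, 1.0)
    w = cycloidal_blend(s)
    return (1.0 - w) * np.asarray(v_prev) + w * np.asarray(v_next)

# Velocity heading into a via frame blends into the next segment's velocity
for t in np.linspace(0, 0.5, 6):
    print(blended_velocity([0.2, 0.0, 0.0], [0.0, 0.1, 0.0], t, 0.5))
```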
NASA Astrophysics Data System (ADS)
Anderson, R.; Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Quezada de Luna, M.; Rieben, R.; Tomov, V.
2017-04-01
In this work we present an FCT-like Maximum-Principle Preserving (MPP) method to solve the transport equation. We use high-order polynomial spaces; in particular, we consider up to 5th-order spaces in two and three dimensions and 23rd-order spaces in one dimension. The method combines the concepts of positive basis functions for discontinuous Galerkin finite element spatial discretization, locally defined solution bounds, element-based flux correction, and non-linear local mass redistribution. We consider a simple 1D problem with non-smooth initial data to explain and understand the behavior of different parts of the method. Convergence tests in space indicate that high-order accuracy is achieved. Numerical results from several benchmarks in two and three dimensions are also reported.
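For readers unfamiliar with FCT, the sketch below shows the classic finite-volume version on 1D linear advection: a bound-preserving upwind update plus Zalesak-limited antidiffusive fluxes. This is far simpler than the high-order DG method of the paper and is included only to make the flux-correction idea concrete.

```python
# Hedged sketch of classic 1D flux-corrected transport (Zalesak limiter).
import numpy as np

def fct_step(u, nu):
    """One MPP step for u_t + u_x = 0 on a periodic grid, nu = dt/dx in (0,1]."""
    up1 = np.roll(u, -1)                     # u_{i+1}; face i+1/2 stored at i
    A = 0.5 * nu * (1.0 - nu) * (up1 - u)    # antidiffusive (LW - upwind) flux
    utd = u - nu * (u - np.roll(u, 1))       # low-order (upwind) update
    umax = np.maximum(np.maximum(np.roll(utd, 1), utd), np.roll(utd, -1))
    umin = np.minimum(np.minimum(np.roll(utd, 1), utd), np.roll(utd, -1))
    Ain = np.roll(A, 1)                      # flux through face i-1/2
    Pp = np.maximum(Ain, 0) - np.minimum(A, 0)   # total antidiffusive influx
    Pm = np.maximum(A, 0) - np.minimum(Ain, 0)   # total antidiffusive outflux
    eps = 1e-15
    Rp = np.minimum(1.0, (umax - utd) / (Pp + eps))
    Rm = np.minimum(1.0, (utd - umin) / (Pm + eps))
    C = np.where(A >= 0, np.minimum(np.roll(Rp, -1), Rm),
                 np.minimum(Rp, np.roll(Rm, -1)))
    return utd - (C * A - np.roll(C * A, 1))

# Advect a step profile once around the domain; bounds [0, 1] are preserved
n, nu = 200, 0.5
u = np.where((np.arange(n) > 50) & (np.arange(n) < 100), 1.0, 0.0)
for _ in range(int(n / nu)):
    u = fct_step(u, nu)
print(u.min(), u.max())
```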
Neonatal Ear Molding: Timing and Technique.
Anstadt, Erin Elizabeth; Johns, Dana Nicole; Kwok, Alvin Chi-Ming; Siddiqi, Faizi; Gociman, Barbu
2016-03-01
The incidence of auricular deformities is believed to be ∼11.5 per 10,000 births, excluding children with microtia. Although not life-threatening, auricular deformities can cause undue distress for patients and their families. Although surgical procedures have traditionally been used to reconstruct congenital auricular deformities, ear molding has been gaining acceptance as an efficacious, noninvasive alternative for the treatment of newborns with ear deformations. We present the successful correction of bilateral Stahl's ear deformity in a newborn through a straightforward, nonsurgical method implemented on the first day of life. The aim of this report is to make pediatric practitioners aware of an effective and simple molding technique appropriate for correction of congenital auricular anomalies. In addition, it stresses the importance of very early initiation of ear cartilage molding for achieving the desired outcome. Copyright © 2016 by the American Academy of Pediatrics.
The use of phosphatidylcholine for correction of localized fat deposits.
Rittes, Patrícia Guedes
2003-01-01
Subjects with localized fat deposits commonly receive suction lipectomy as a cosmetic procedure. A new office procedure for correction of these superficial fat deposits, injection of phosphatidylcholine, was applied in 50 patients. The method consists of using a 30G½ insulin needle to inject about 5 ml (250 mg/5 ml) of phosphatidylcholine into the fat, distributing it evenly over an 80 cm² area. Pre- and posttreatment photographs were taken for technical planning and analysis of the results over the long term. A clear improvement occurred in all patients, with a marked reduction of the fat deposits, no recurrence over a 2-year follow-up period, and no weight gain. The injection of phosphatidylcholine into fat deposits is a simple office procedure that can sometimes postpone or even replace surgery and liposuction.
Analytical approach to chromatic correction in the final focus system of circular colliders
Cai, Yunhai
2016-11-28
Here, a conventional final focus system in particle accelerators is systematically analyzed. We find simple relations between the parameters of the two focus modules in the final telescope. Using the relations, we derive the chromatic Courant-Snyder parameters for the telescope. The parameters scale approximately as (L*/β*y)δ, where L* is the distance from the interaction point to the first quadrupole, β*y the vertical beta function at the interaction point, and δ the relative momentum deviation. Most importantly, we show how to compensate its chromaticity order by order in δ by a traditional correction module flanked by an asymmetric pair of harmonic multipoles. The method enables a circular Higgs collider with 2% momentum aperture and illuminates a path forward to 4% in the future.
Advances in lens implant technology
Kampik, Anselm; Dexl, Alois K.; Zimmermann, Nicole; Glasser, Adrian; Baumeister, Martin; Kohnen, Thomas
2013-01-01
Cataract surgery is one of the oldest and the most frequent outpatient clinic operations in medicine performed worldwide. The clouded human crystalline lens is replaced by an artificial intraocular lens implanted into the capsular bag. During the last six decades, cataract surgery has undergone rapid development from a traumatic, manual surgical procedure with implantation of a simple lens to a minimally invasive intervention increasingly assisted by high technology and a broad variety of implants customized for each patient's individual requirements. This review discusses the major advances in this field and focuses on the main challenge remaining, the treatment of presbyopia. The demand for correction of presbyopia is increasing, reflecting the global growth of the ageing population. Pearls and pitfalls of currently applied methods to correct presbyopia and different approaches under investigation, both in lens implant technology and in surgical technology, are discussed.
Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory
Liu, C.; Liu, J.; Yao, Y. X.; ...
2017-01-16
Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way to minimise the error originating from the approximations in the theory. This conference proceeding reports our recent progress on this key issue: namely, we obtained a simple analytical functional form for the one-electron renormalisation factors and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
Library based x-ray scatter correction for dedicated cone beam breast CT
Shi, Linxi; Karellas, Andrew; Zhu, Lei
2016-01-01
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the Geant4-based Monte Carlo (MC) toolkit. The breast is approximated as a semi-ellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves over existing methods on implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors' approach is effective and stable, and is therefore clinically attractive for CBBCT imaging.
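The runtime portion of the library approach reduces to a lookup and a subtraction. The sketch below selects the precomputed scatter map nearest in breast diameter and subtracts it from the projection; the spatial-translation registration step and the offline Geant4 library generation are only indicated in comments, and the dictionary here is a stand-in for the real library.

```python
# Hedged sketch of the library-based scatter subtraction at runtime.
import numpy as np

def correct_projection(projection, diameter_cm, library):
    """Subtract the library scatter map closest in breast diameter."""
    sizes = sorted(library)
    key = min(sizes, key=lambda s: abs(s - diameter_cm))
    # The paper spatially translates the map to match the clinical
    # projection; that registration step is omitted in this sketch.
    return np.clip(projection - library[key], 0.0, None)

# Toy "library" of scatter maps keyed by breast diameter (cm); the real
# one would come from offline Geant4 runs on semi-ellipsoidal phantoms.
library = {10.0: np.full((64, 64), 0.02), 14.0: np.full((64, 64), 0.04)}
proj = 0.5 + 0.04 * np.ones((64, 64))
print(correct_projection(proj, 13.2, library).mean())
```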
Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.; ...
2017-11-06
The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high-concentration species can yield erroneous results if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g., saturation) and largely precluding the calculation of accurate m/z and intensity values. Here we present an open-source computational method to correct peaks above a defined intensity (saturation) threshold determined by the MS instrumentation, such as the analog-to-digital or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope of each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to recalculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high-concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases for highly saturated species, and dynamic range increased by 1-2 orders of magnitude for peptides in a blood serum sample.
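The envelope-rescaling idea in this abstract can be sketched in a few lines of Python: find the most intense isotopic peak that stays below the saturation threshold and scale the theoretical distribution to it. This is a minimal illustration under assumed inputs, not the published implementation, and the m/z recalculation step is omitted.

```python
import numpy as np

def correct_saturated_envelope(intensities, theoretical, saturation_threshold):
    """Rescale a saturated isotopic envelope using the most intense
    unsaturated isotopic peak as a reference (sketch of the idea only).
    `intensities` are observed peak heights; `theoretical` the expected
    relative abundances of the isotopic distribution."""
    intensities = np.asarray(intensities, dtype=float)
    theoretical = np.asarray(theoretical, dtype=float)
    unsaturated = np.where(intensities < saturation_threshold)[0]
    if unsaturated.size == 0:
        raise ValueError("every isotopic peak is saturated")
    # Most intense isotopic peak that is still below the saturation limit.
    ref = unsaturated[np.argmax(intensities[unsaturated])]
    # Scale the whole theoretical envelope to match the reference peak.
    return theoretical * (intensities[ref] / theoretical[ref])

# Toy example: the monoisotopic peak is clipped near 1e6 counts.
observed = [1.0e6, 8.0e5, 3.6e5]   # first peak saturated
expected = [1.00, 0.55, 0.18]      # theoretical relative abundances
print(correct_saturated_envelope(observed, expected, saturation_threshold=9.5e5))
```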
NASA Astrophysics Data System (ADS)
Feng, Wenqiang; Guo, Zhenlin; Lowengrub, John S.; Wise, Steven M.
2018-01-01
We present a mass-conservative full approximation storage (FAS) multigrid solver for cell-centered finite difference methods on block-structured, locally Cartesian grids. The algorithm is essentially a standard adaptive FAS (AFAS) scheme, but with a simple modification that comes in the form of a mass-conservative correction to the coarse-level force. This correction is facilitated by the creation of a zombie variable, analogous to a ghost variable, but defined on the coarse grid and lying under the fine-grid refinement patch. We show that a number of different types of fine-level ghost-cell interpolation strategies can be used in our framework, including low-order linear interpolation. In our approach, the smoother, prolongation, and restriction operations need never be aware of the mass conservation conditions at the coarse-fine interface. To maintain global mass conservation, we need only modify the usual FAS algorithm by correcting the coarse-level force function at points adjacent to the coarse-fine interface. We demonstrate through simulations that the solver converges geometrically, at a rate that is h-independent, and we show the generality of the solver, applying it to several nonlinear, time-dependent, and multi-dimensional problems. In several tests, we show that second-order asymptotic (h → 0) convergence is observed for the discretizations, provided that (1) at least linear interpolation of the ghost variables is employed, and (2) the mass conservation corrections are applied to the coarse-level force term.
Karbalaie, Abdolamir; Abtahi, Farhad; Fatemi, Alimohammad; Etehadtavakol, Mahnaz; Emrani, Zahra; Erlandsson, Björn-Erik
2017-09-01
Nailfold capillaroscopy is a practical method for identifying and documenting morphological changes in capillaries, which might reveal relevant information about disease and health. Capillaroscopy is harmless, simple, and repeatable. However, there is a lack of established guidelines and instructions for image acquisition as well as for the interpretation of the obtained images, which can lead to various ambiguities. In addition, assessment and interpretation of the acquired images are very subjective. In an attempt to overcome some of these problems, this study introduces a new modified technique for assessing nailfold capillary density. The new method, named elliptic broken line (EBL), extends two previously known methods by defining clear criteria for finding the apex of capillaries in different scenarios using a fitted ellipse. A graphical user interface (GUI) was developed for pre-processing, manual assessment of capillary apexes, and automatic correction of selected apexes based on the 90° rule. Intra- and inter-observer reliability of EBL and corrected EBL is evaluated in this study. Four independent observers familiar with capillaroscopy assessed 200 nailfold videocapillaroscopy images, from healthy subjects and systemic lupus erythematosus patients, in two different sessions. The results show that intra- and inter-observer agreement rose from moderate (ICC=0.691) and good (ICC=0.753) to good (ICC=0.750) and good (ICC=0.801), respectively, after automatic correction of the EBL. This clearly shows the potential of the method to improve the reliability and repeatability of assessment, which motivates further development of an automatic tool for the EBL method. Copyright © 2017 Elsevier Inc. All rights reserved.
"Hook"-calibration of GeneChip-microarrays: theory and algorithm.
Binder, Hans; Preibisch, Stephan
2008-08-29
The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This issue requires the formulation of an appropriate model describing the basic relationship between the probe intensity and the specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip, and, finally, the development of practicable algorithms which judge the quality of a particular hybridization and estimate the expression degree from the intensity values. We present the so-called hook-calibration method, which co-processes the log-difference (delta) and log-sum (sigma) of the perfect match (PM) and mismatch (MM) probe intensities. The MM probes are utilized as an internal reference which is subjected to the same hybridization law as the PM, however with modified characteristics. After sequence-specific affinity correction, the method fits the Langmuir adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook curve characterize the particular hybridization in terms of simple geometric parameters which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM sensitivity gain, and the fraction of absent probes. This graphical summary spans a metric system for expression estimates in natural units, such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e., it separately uses the intensities of each selected chip. The hook method corrects the raw intensities for non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe spots with bound transcripts, and for the sequence-specific binding of specific transcripts. The obtained chip characteristics, in combination with the sensitivity-corrected probe-intensity values, provide expression estimates scaled in natural units, which are given by the binding constants of the particular hybridization.
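As a minimal illustration of the coordinates the hook method works with, the Python sketch below computes the log-difference (delta) and half the log-sum (sigma) for PM/MM probe pairs. The sigma convention shown is an assumption, and the affinity correction and Langmuir fit themselves are beyond this sketch.

```python
import numpy as np

def hook_coordinates(pm, mm):
    """Compute the (delta, sigma) coordinates that the hook method smooths
    and fits with a Langmuir model. Only the coordinate transform is
    shown; affinity correction and the fit are omitted."""
    pm = np.asarray(pm, dtype=float)
    mm = np.asarray(mm, dtype=float)
    delta = np.log10(pm) - np.log10(mm)           # log-difference
    sigma = 0.5 * (np.log10(pm) + np.log10(mm))   # half log-sum
    return delta, sigma

# Toy probe-pair intensities.
pm = np.array([120.0, 900.0, 4000.0])
mm = np.array([100.0, 400.0, 1200.0])
print(hook_coordinates(pm, mm))
```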
Daniell method for power spectral density estimation in atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labuda, Aleksander
An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the method most prevalent in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage, which can obscure the thermal spectrum in the presence of deterministic noise. By significantly reducing spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
NASA Astrophysics Data System (ADS)
Shi, Yongli; Wu, Zhong; Zhi, Kangyi; Xiong, Jun
2018-03-01
In order to realize reliable commutation of brushless DC motors (BLDCMs), this paper proposes a simple approach to detect and correct signal faults of Hall position sensors. First, the time instant of the next jumping edge of the Hall signals is predicted using prior information on the pulse intervals in the last electrical period. Considering the possible errors between the predicted instant and the real one, a confidence interval is set using the predicted value and a suitable tolerance for the next pulse edge. According to the relationship between the real pulse edge and the confidence interval, Hall signals can be judged and signal faults can be corrected. Experimental results for a BLDCM at steady speed demonstrate the effectiveness of the approach.
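A minimal Python sketch of the edge-prediction and confidence-interval test described above; the tolerance value and the way the prediction is formed from the last electrical period are illustrative assumptions.

```python
import numpy as np

def check_hall_edge(edge_times, observed_edge, tolerance):
    """Predict the next Hall-sensor jumping edge from the pulse intervals
    of the previous electrical period and validate the observed edge
    against a confidence interval (illustrative sketch)."""
    # Prediction: last edge plus the mean pulse interval.
    predicted = edge_times[-1] + np.mean(np.diff(edge_times))
    if predicted - tolerance <= observed_edge <= predicted + tolerance:
        return observed_edge          # edge judged valid
    return predicted                  # fault detected: substitute prediction

# Toy usage: regular 1 ms commutation intervals, one glitched edge.
history = np.array([0.000, 0.001, 0.002, 0.003])
print(check_hall_edge(history, observed_edge=0.0045, tolerance=0.0002))  # 0.004
```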
A novel method for determining sex in late term gestational mice based on the external genitalia
Murdaugh, Laura B.; Mendoza-Romero, Haley N.; Fish, Eric W.
2018-01-01
In many experiments using fetal mice, it is necessary to determine the sex of the individual fetus. However, other than genotyping for sex-specific genes, there is no convenient, reliable method of sexing mice between gestational day (GD) 16.5 and GD 18.0. We designed a rapid, relatively simple visual method to determine the sex of mouse fetuses in the GD 16.5 to GD 18.0 range that can be performed as part of a routine morphological assessment. By examining the genitalia for the presence or absence of key features, raters with minimal experience with the method were able to correctly identify the sex of embryos with 99% accuracy, while raters with no experience were 95% accurate. The critical genital features include the presence or absence of a urethral seam or proximal urethral meatus, the shape of the genitalia, and the presence or absence of an area related to the urethral plate. By comparing these morphological features of the external genitalia, we demonstrate a simple, accurate, and fast way to determine the sex of late-stage mouse fetuses. Integrating this method into regular morphological assessments will facilitate the determination of sex differences in fetuses between GD 16.5 and GD 18.0. PMID:29617407
Boore, D.M.; Stephens, C.D.; Joyner, W.B.
2002-01-01
Residual displacements for large earthquakes can sometimes be determined from recordings on modern digital instruments, but baseline offsets of unknown origin make it difficult in many cases to do so. To recover the residual displacement, we suggest tailoring a correction scheme by studying the character of the velocity obtained by integration of the zeroth-order-corrected acceleration and then checking whether the residual displacements are stable when the various parameters in the particular correction scheme are varied. For many seismological and engineering purposes, however, the residual displacements are of lesser importance than ground motions at periods less than about 20 sec. These ground motions are often recoverable with simple baseline correction and low-cut filtering. In this largely empirical study, we illustrate the consequences of various correction schemes, drawing primarily on digital recordings of the 1999 Hector Mine, California, earthquake. We show that with simple processing the displacement waveforms for this event are very similar for stations separated by as much as 20 km. We also show that a strong pulse on the transverse component was radiated from the Hector Mine earthquake and propagated with little distortion to distances exceeding 170 km; this pulse leads to large response spectral amplitudes around 10 sec.
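A generic sketch of the kind of simple processing the abstract refers to, assuming SciPy is available: remove the pre-event mean, apply a low-cut (high-pass) filter, and integrate twice. The corner frequency and filter order are illustrative choices, not the authors' published parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend

def simple_process(acc, dt, corner_hz=0.05):
    """Zeroth-order baseline correction plus low-cut filtering of an
    accelerogram, followed by double integration to displacement."""
    acc = detrend(np.asarray(acc, dtype=float), type="constant")  # remove mean
    # Low-cut (high-pass) Butterworth filter; Wn normalized to Nyquist.
    b, a = butter(4, corner_hz * 2.0 * dt, btype="highpass")
    acc = filtfilt(b, a, acc)
    vel = np.cumsum(acc) * dt     # acceleration -> velocity
    disp = np.cumsum(vel) * dt    # velocity -> displacement
    return disp

# Toy usage: 20 s of synthetic noise sampled at 100 Hz.
rng = np.random.default_rng(0)
print(simple_process(rng.normal(size=2000), dt=0.01)[-5:])
```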
Describing the epidemiology of rheumatic diseases: methodological aspects.
Guillemin, Francis
2012-03-01
Producing descriptive epidemiology data is essential to understand the burden of rheumatic diseases (prevalence) and their dynamics in the population (incidence). No matter how simple such indicators may look, the correct collection of data and the appropriate interpretation of the results face several challenges: distinguishing between indicators, facing the costs of obtaining data, using appropriate definitions, identifying optimal sources of data, choosing among many survey methods, dealing with the precision of estimates, and standardizing results. This study describes the underlying methodological difficulties that must be overcome to make descriptive indicators reliable and interpretable.
Rapid identification of group JK and other corynebacteria with the Minitek system.
Slifkin, M; Gil, G M; Engwall, C
1986-01-01
Forty primary clinical isolates and 50 stock cultures of corynebacteria and coryneform bacteria were tested with the Minitek system (BBL Microbiology Systems, Cockeysville, Md.). The Minitek correctly identified all of these organisms, including JK group isolates, within 12 to 18 h of incubation. The method does not require serum supplements for testing carbohydrate utilization by the bacteria. The Minitek system is an extremely simple and rapid way to identify the JK group, as well as many other corynebacteria, by established identification schemata for these bacteria. PMID:3091632
Water-level measurements in Dauphin Island, Alabama, from the 2013 Hurricane Season
Dickhudt, Patrick J.; Sherwood, Christopher R.; DeWitt, Nancy T.
2015-01-01
This report describes the instrumentation, field measurements, and processing methods used by the U.S. Geological Survey to measure atmospheric pressure, water levels, and waves on Dauphin Island, Alabama, in 2013 as part of the Barrier Island Evolution Research project. Simple, inexpensive pressure sensors mounted in shallow wells were buried in the beach and left in place throughout the hurricane season. Additionally, an atmospheric pressure sensor was mounted on the porch of a private residence to provide a local atmospheric pressure measurement for correcting the submerged pressure records.
Extended Aperture Photometry of K2 RR Lyrae stars
NASA Astrophysics Data System (ADS)
Plachy, Emese; Klagyivik, Péter; Molnár, László; Sódor, Ádám; Szabó, Róbert
2017-10-01
We present the method of Extended Aperture Photometry (EAP), which we applied to K2 RR Lyrae stars. Our aim is to minimize the instrumental variations caused by attitude-control maneuvers by using apertures that cover the positional changes in the field of view and thus contain the stars during the whole observation. We present example light curves that we compare to the light curves from the K2 Systematics Correction (K2SC) pipeline applied to the automated Single Aperture Photometry (SAP) and the Pre-search Data Conditioning Simple Aperture Photometry (PDCSAP) data.
Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources
NASA Astrophysics Data System (ADS)
Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.
2011-05-01
The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.
Novel, simple and fast automated synthesis of 18F-choline in a single Synthera module
NASA Astrophysics Data System (ADS)
Litman, Y.; Pace, P.; Silva, L.; Hormigo, C.; Caro, R.; Gutierrez, H.; Bastianello, M.; Casale, G.
2012-12-01
The aim of this work is to develop a method to produce 18F-fluorocholine in a single Synthera module with high yield, quality, and reproducibility. We give special attention to the details of the drying and distillation procedures. After 5 syntheses, we report a decay-corrected yield of (27 ± 2)% (mean ± S.D.). The radiochemical purity was > 95%, and the other quality control parameters were within specifications. The 18F-fluorocholine product was administered to 17 humans with no observed side effects.
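For readers unfamiliar with decay-corrected yields, the following sketch shows the standard calculation for 18F (half-life about 109.8 min); the activity values in the example are invented for illustration.

```python
import math

def decay_corrected_yield(activity_eob_mbq, activity_product_mbq,
                          elapsed_min, half_life_min=109.77):
    """Decay-corrected radiochemical yield (%): the product activity is
    decay-corrected back to end of bombardment (EOB) before dividing by
    the starting activity. Half-life is that of F-18."""
    corrected = activity_product_mbq * math.exp(
        math.log(2.0) * elapsed_min / half_life_min)
    return 100.0 * corrected / activity_eob_mbq

# Hypothetical synthesis: 37 GBq at EOB, 7.4 GBq of product 40 min later.
print(f"{decay_corrected_yield(37000, 7400, 40):.1f} %")  # ~25.8 %
```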
PyParse: a semiautomated system for scoring spoken recall data.
Solway, Alec; Geller, Aaron S; Sederberg, Per B; Kahana, Michael J
2010-02-01
Studies of human memory often generate data on the sequence and timing of recalled items, but scoring such data using conventional methods is difficult or impossible. We describe a Python-based semiautomated system that greatly simplifies this task. This software, called PyParse, can easily be used in conjunction with many common experiment authoring systems. Scored data is output in a simple ASCII format and can be accessed with the programming language of choice, allowing for the identification of features such as correct responses, prior-list intrusions, extra-list intrusions, and repetitions.
Zhang, Le; Ren, Zhong-Yuan; Wu, Ya-Dong; Li, Nan
2018-01-30
In situ strontium (Sr) isotope analysis of geological samples by laser ablation multiple collector inductively coupled plasma mass spectrometry (LA-MC-ICP-MS) provides useful information about magma mixing, crustal contamination, and crystal residence time. Without chemical separation, many kinds of interfering ions (such as Rb+ and Kr+) fall on the Sr isotope spectrum during laser-ablation Sr isotope analysis. Most previous in situ Sr isotope studies have focused only on Sr-enriched minerals (e.g., plagioclase, calcite). Here we established a simple method for in situ Sr isotope analysis of basaltic glass with Rb/Sr ratios less than 0.14 by LA-MC-ICP-MS. Seven Faraday cups on a Neptune Plus MC-ICP-MS instrument were used to receive the signals at m/z 82, 83, 84, 85, 86, 87, and 88 simultaneously for the Sr isotope analysis of basaltic glass. The isobaric interference of 87Rb was corrected by the peak-stripping method. The instrumental mass fractionation of 87Sr/86Sr was corrected to 86Sr/88Sr = 0.1194 with an exponential law. Finally, the residual analytical biases of 87Sr/86Sr were corrected using the relationship between the deviation of 87Sr/86Sr from the reference values and the measured 87Rb/86Sr. The validity of the protocol presented here was demonstrated by measuring the Sr isotopes of four basaltic glasses, a plagioclase crystal, and a piece of modern coral. The measured 87Sr/86Sr ratios of all these samples agree within 100 ppm with the reference values. In addition, the Sr isotopes of olivine-hosted melt inclusions from the Emeishan large igneous province (LIP) were measured to show the application of our method to real geological samples. A simple but accurate approach for in situ Sr isotope measurement by LA-MC-ICP-MS has been established, which should greatly facilitate the wider application of in situ Sr isotope geochemistry, especially to volcanic rock studies. Copyright © 2017 John Wiley & Sons, Ltd.
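Two of the corrections named in the abstract (peak stripping of the 87Rb isobar and exponential-law mass bias correction normalised to 86Sr/88Sr = 0.1194) can be sketched as follows. The sign convention of the exponential law and the toy beam intensities are assumptions for illustration; only standard isotope masses and the natural 87Rb/85Rb ratio are taken as given, and the result should be checked against the original protocol.

```python
import numpy as np

def sr87_sr86(i85, i86, i87, i88):
    """Peak stripping of 87Rb plus exponential-law mass bias correction
    normalised to 86Sr/88Sr = 0.1194 (one common convention; sketch only)."""
    # Fractionation factor from the Sr normalising ratio.
    beta = np.log(0.1194 / (i86 / i88)) / np.log(85.9093 / 87.9056)
    # Predicted measured 87Rb signal from 85Rb and natural 87Rb/85Rb = 0.3857,
    # back-fractionated with the same exponential law.
    rb87 = i85 * 0.3857 * (84.9118 / 86.9092) ** beta
    # Strip the isobar, then correct the Sr ratio for mass bias.
    return ((i87 - rb87) / i86) * (86.9089 / 85.9093) ** beta

# Toy beam intensities (volts) for a Rb-bearing basaltic glass.
print(round(sr87_sr86(i85=0.05, i86=1.0, i87=0.75, i88=8.5), 5))
```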
NASA Astrophysics Data System (ADS)
Korany, Mohamed A.; Mahgoub, Hoda; Haggag, Rim S.; Ragab, Marwa A. A.; Elmallah, Osama A.
2018-06-01
A green, simple, and cost-effective chemometric UV-Vis spectrophotometric method has been developed and validated for correcting interferences that arise when conducting biowaiver studies. Chemometric manipulation was used to enhance the results of direct absorbance measurements, which suffer from a high incidence of background noise interference at the very low concentrations of the earlier time points of a dissolution profile, using first- and second-derivative (D1 & D2) methods and their corresponding Fourier-function-convoluted methods (D1/FF & D2/FF). The method was applied to a biowaiver study of donepezil hydrochloride (DH) as a representative model by comparing two different dosage forms containing 5 mg DH per tablet, as an application of the developed chemometric method for correcting interferences as well as for assay and dissolution testing of the tablet dosage form. The results showed that the first-derivative technique can be used to enhance the data for the low concentration range of DH (1-8 μg mL-1) in the three different pH dissolution media used to estimate the low drug concentrations dissolved at the early points of the biowaiver study. Furthermore, the results showed similarity in phosphate buffer of pH 6.8 and dissimilarity in the other two pH media. The method was validated according to ICH guidelines and the USP monograph for both the assay (HCl of pH 1.2) and the dissolution study in three pH media (HCl of pH 1.2, acetate buffer of pH 4.5, and phosphate buffer of pH 6.8). Finally, the greenness of the method was assessed using two different techniques: the National Environmental Method Index label and Eco-scale methods. Both techniques ascertained the greenness of the proposed method.
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo
2018-06-01
Rapid and accurate identification and classification of microorganisms are of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of sequenced microbes is complicating correct microbial identification, even in a simple sample, due to the large number of candidates present. To properly disentangle candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 publicly available MS/MS data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software package for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Oxygen isotope exchange with quartz during pyrolysis of silver sulfate and silver nitrate.
Schauer, Andrew J; Kunasek, Shelley A; Sofen, Eric D; Erbland, Joseph; Savarino, Joel; Johnson, Ben W; Amos, Helen M; Shaheen, Robina; Abaunza, Mariana; Jackson, Terri L; Thiemens, Mark H; Alexander, Becky
2012-09-30
Triple oxygen isotopes of sulfate and nitrate are useful metrics for the chemistry of their formation. Existing measurement methods, however, do not account for oxygen atom exchange with quartz during the thermal decomposition of sulfate. We present evidence for oxygen atom exchange, a simple modification to prevent exchange, and a correction for previous measurements. Silver sulfates and silver nitrates with excess (17)O were thermally decomposed in quartz and gold (for sulfate) and quartz and silver (for nitrate) sample containers to O(2) and byproducts in a modified Temperature Conversion/Elemental Analyzer (TC/EA). Helium carries O(2) through purification for isotope-ratio analysis of the three isotopes of oxygen in a Finnigan MAT253 isotope ratio mass spectrometer. The Δ(17)O results show clear oxygen atom exchange from non-zero (17)O-excess reference materials to zero (17)O-excess quartz cup sample containers. Quartz sample containers lower the Δ(17)O values of designer sulfate reference materials and USGS35 nitrate by 15% relative to gold or silver sample containers for quantities of 2-10 µmol O(2). Previous Δ(17)O measurements of sulfate that rely on pyrolysis in a quartz cup have been affected by oxygen exchange. These previous results can be corrected using a simple linear equation (Δ(17)O(gold) = Δ(17)O(quartz) * 1.14 + 0.06). Future pyrolysis of silver sulfate should be conducted in gold capsules or corrected to data obtained from gold capsules to avoid obtaining oxygen isotope exchange-affected data. Copyright © 2012 John Wiley & Sons, Ltd.
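The published linear correction for quartz-cup measurements is simple enough to state as a one-line function; values are in per mil, as in the abstract.

```python
def quartz_to_gold(delta17o_quartz):
    """Linear correction from the abstract for Delta-17O values measured
    after pyrolysis in quartz cups:
    Delta17O(gold) = Delta17O(quartz) * 1.14 + 0.06 (per mil)."""
    return delta17o_quartz * 1.14 + 0.06

print(quartz_to_gold(0.50))  # 0.63 per mil
```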
NASA Astrophysics Data System (ADS)
Novak, A.; Simon, L.; Lotton, P.
2018-04-01
Mechanical transducers, such as shakers, loudspeakers, and compression drivers, that are used as excitation devices to excite acoustical or mechanical nonlinear systems under test are imperfect. Due to their nonlinear behaviour, unwanted contributions appear at their output besides the wanted part of the signal. Since these devices are used to study nonlinear systems, the systems under test should be measured properly by overcoming the influence of the nonlinear excitation device. In this paper, a simple method is presented that corrects the distorted output signal of the excitation device by means of predistortion of its input signal. A periodic signal is applied to the input of the excitation device and, by analysing the output signal of the device, the input signal is modified in such a way that the undesirable spectral components in the output of the excitation device are cancelled out after a few iterations of real-time processing. The experimental results obtained with an electrodynamic shaker show that the spectral purity of the generated acceleration output approaches 100 dB after a few iterations (1 s). This output signal, applied to the system under test, is thus cleaned of the undesirable components produced by the excitation device; this is an important condition for a correct measurement of the nonlinear system under test.
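The iterative predistortion loop can be sketched generically in Python: measure the device output, subtract the error from the input, and repeat. The toy cubic nonlinearity and fixed iteration count are assumptions for illustration; the actual method operates on periodic signals in real time and monitors spectral purity.

```python
import numpy as np

def predistort(apply_device, target, iterations=5):
    """Iteratively predistort the input so the device output matches the
    target: measure the output, subtract the error from the input, repeat."""
    x = target.copy()
    for _ in range(iterations):
        y = apply_device(x)       # drive the device and record its output
        x = x - (y - target)      # cancel the measured distortion
    return x

# Toy device: a mild cubic nonlinearity acting on a sine wave.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
target = np.sin(2 * np.pi * 5 * t)
device = lambda u: u + 0.05 * u ** 3
x = predistort(device, target)
print(np.max(np.abs(device(x) - target)))   # residual distortion ~1e-5 or less
```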
Validity of the t-plot method to assess microporosity in hierarchical micro/mesoporous materials.
Galarneau, Anne; Villemot, François; Rodriguez, Jeremy; Fajula, François; Coasne, Benoit
2014-11-11
The t-plot method is a well-known technique for determining the micro- and/or mesoporous volumes and the specific surface area of a sample by comparison with a reference adsorption isotherm of a nonporous material having the same surface chemistry. In this paper, the validity of the t-plot method is discussed for hierarchical porous materials exhibiting both micro- and mesoporosity. Different hierarchical zeolites with MCM-41-type ordered mesoporosity are prepared using pseudomorphic transformation. For comparison, we also consider simple mechanical mixtures of microporous and mesoporous materials. We first show an intrinsic failure of the t-plot method: it does not capture the fact that, for a given surface chemistry and pressure, the thickness of the film adsorbed in micropores or small mesopores (< 10σ, σ being the diameter of the adsorbate) increases with decreasing pore size (curvature effect). We further show that this effect, which arises because the surface area and, hence, the free energy of the curved gas/liquid interface decrease with increasing film thickness, is captured by the simple thermodynamic model of Derjaguin. The consequences of this drawback for the ability of the t-plot method to estimate the micro- and mesoporous volumes of hierarchical samples are then discussed, and an abacus is given to correct the microporous volume underestimated by the t-plot method.
Web-Based Versus Conventional Training for Medical Students on Infant Gross Motor Screening.
Pusponegoro, Hardiono D; Soebadi, Amanda; Surya, Raymond
2015-12-01
Early detection of developmental abnormalities is important for early intervention. A simple screening method is needed for use by general practitioners, as is an effective and efficient training method. This study aims to evaluate the effectiveness, acceptability, and usability of Web-based training for medical students on a simple gross motor screening method in infants. Fifth-year medical students at the University of Indonesia in Jakarta were randomized into two groups. A Web-based training group received online video modules, discussions, and assessments (at www.schoology.com). A conventional training group received a 1-day live training using the same module. Both groups completed identical pre- and posttests and the User Satisfaction Questionnaire (USQ). The Web-based group also completed the System Usability Scale (SUS). The module was based on a gross motor screening method used in the World Health Organization Multicentre Growth Reference Study. There were 39 and 32 subjects in the Web-based and conventional groups, respectively. Mean pretest versus posttest scores (correct answers out of 20) were 9.05 versus 16.95 (p=0.0001) in the Web-based group and 9.31 versus 16.88 (p=0.0001) in the conventional group. The mean difference between pre- and posttest scores did not differ significantly between the Web-based and conventional groups (mean [standard deviation], 7.56 [3.252] versus 7.90 [5.170]; p=0.741). Both training methods were acceptable based on USQ scores. Based on SUS scores, the Web-based training had good usability. Web-based training is an effective, efficient, and acceptable method for training medical students on simple infant gross motor screening and is as effective as conventional training.
Cohen, Bruce E; Nicholson, Christopher W
2007-05-01
The bunionette, or tailor's bunion, is a lateral prominence of the fifth metatarsal head. Most commonly, bunionettes are the result of a widened 4-5 intermetatarsal angle with associated varus of the metatarsophalangeal joint. When symptomatic, these deformities often respond to nonsurgical treatment methods, such as wider shoes and padding techniques. When these methods are unsuccessful, surgical treatment is based on preoperative radiographs and associated lesions, such as hyperkeratoses. In rare situations, a simple lateral eminence resection is appropriate; however, the risk of recurrence or overresection is high with this technique. Patients with a lateral bow to the fifth metatarsal are treated with a distal chevron-type osteotomy. A widened 4-5 intermetatarsal angle often requires a diaphyseal osteotomy for correction.
NASA Technical Reports Server (NTRS)
Flat, A.; Milnes, A. G.
1978-01-01
In scanning electron microscope (SEM) injection measurements of minority carrier diffusion lengths, some uncertainties of interpretation exist when the response current is nonlinear with distance. This is significant in epitaxial layers where the layer thickness is not large relative to the diffusion length and where there are large surface recombination velocities on the incident and contact surfaces. An image method of analysis is presented for such specimens, along with a method of using the results to correct the observed response in a simple, convenient way. The technique is illustrated with reference to measurements in epitaxial layers of GaAs. The average beam penetration depth may also be estimated from the curve shape.
Unthermal charged massive Hawking radiation from a Reissner-Nordström-de Sitter black hole
NASA Astrophysics Data System (ADS)
Khayrul Hasan, M.
2015-05-01
We investigate the Hawking radiation of massive charged particles from a Reissner-Nordström-de Sitter (RNdS) black hole by Damour-Ruffini's method. We obtain a non-thermal spectrum when the back-reaction of the particles' energy and charge on spacetime is considered. Information can escape from the black hole through this corrected spectrum: the radiation is not exactly thermal and, because the derivation obeys conservation laws, the non-thermal Hawking radiation can carry information out of the black hole. In our work the method is simpler and more explicit, it can be used to address the black hole information loss paradox, and the process satisfies an underlying unitary theory.
Gartzke, J; Jäger, H; Vins, I
1991-01-01
A simple, fast, and reliable liquid chromatographic method for the determination of theophylline in serum and capillary blood after solid-phase extraction is described for therapeutic drug monitoring. The use of capillary blood permits the determination of an individual drug profile and other pharmacokinetic studies in neonates and infants. There were no differences between venous and capillary blood levels, but these values compared poorly with those in serum. An adjustment of the results by correcting for the different volumes of serum and blood using the haematocrit was unsuccessful. Differences in the binding of theophylline to erythrocytes could explain the differences between serum and blood levels of theophylline.
Emissivity correction for interpreting thermal radiation from a terrestrial surface
NASA Technical Reports Server (NTRS)
Sutherland, R. A.; Bartholic, J. F.; Gerber, J. F.
1979-01-01
A general method of accounting for emissivity in making temperature determinations of graybody surfaces from radiometric data is presented. The method differs from previous treatments in that a simple blackbody calibration and graphical approach is used rather than numerical integrations, which require detailed knowledge of an instrument's spectral characteristics. Also, errors caused by approximating instrumental response with the Stefan-Boltzmann law rather than with an appropriately weighted Planck integral are examined. In the 8-14 micron wavelength interval, it is shown that errors are at most on the order of 3°C for the extremes of the earth's temperature and emissivity. For more practical limits, however, errors are less than 0.5°C.
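A minimal sketch of a graybody temperature correction under the Stefan-Boltzmann approximation that the abstract itself examines: the apparent blackbody-equivalent temperature is corrected for emissivity and a reflected sky term. The uniform-sky assumption and the numbers are illustrative.

```python
def graybody_temperature(t_apparent_k, emissivity, t_sky_k=260.0):
    """Recover a graybody surface temperature from an apparent
    (blackbody-equivalent) radiometric temperature, assuming
    W = eps*sigma*T_s^4 + (1-eps)*sigma*T_sky^4 with a uniform sky.
    Illustrative only; the paper's graphical method avoids some of the
    error this Stefan-Boltzmann shortcut introduces."""
    t4 = (t_apparent_k ** 4 - (1.0 - emissivity) * t_sky_k ** 4) / emissivity
    return t4 ** 0.25

# A surface read as 295 K by the radiometer, with emissivity 0.95.
print(round(graybody_temperature(295.0, 0.95), 2))  # ~296.5 K
```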
An Interactive GIS Procedure for Building and Basement Corrections in Urban Microgravity Surveys
NASA Astrophysics Data System (ADS)
Chasseriau, P.; Olivier, R.
2007-12-01
Construction of a new underground railway in Lausanne, a highly urbanized city in Switzerland, was an opportunity to test the feasibility and reliability of microgravity surveys in urban environments. The goal of our microgravity survey was to determine the depth to bedrock along the project corridor. Available drilling information allowed us to verify the density model obtained. The geophysical results also provided spatially exhaustive subsurface information that could not be obtained with drilling methods alone. Gravimetry is one of the rare geophysical methods that can be used in noisy urban environments. An inevitable constraint of this method is the terrain correction. It is not easy to obtain a simple and accurate digital elevation model (DEM) of an urban environment, considering that buildings and basements are not included; however, these structures significantly influence gravity measurements. With software that we have developed, we calculate the influence of buildings and basements in order to correct our gravity data. Our procedure permits the integration of gravity measurements, cadastral information (building typology and geometry), and basement geometry in an Access database that allows interactive determination of the Bouguer anomaly. A geographic information system (GIS) is used to extract building geometries based on cadastral information and to correct for the influence of each building using a simplified architectural style. Basement voids are then introduced into the final DEM using building outlines given by cadastral maps. The depths and altitudes of the basements are measured by visiting them and then linking the results to a regional topographic map. All of these corrections can be calculated before gravity acquisition has begun in order to optimize the design of the survey. The surveys were executed late at night so as to minimize the effects of traffic noise. 160 gravity measurements were carried out before and after digging of the underground tunnel. The difference between the gravimetric values of the two surveys permitted validation of our modelling code.
Correcting length-frequency distributions for imperfect detection
Breton, André R.; Hawkins, John A.; Winkelman, Dana L.
2013-01-01
Sampling gear selects for specific sizes of fish, which may bias the length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed from the population on all other passes. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentages of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
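The partitioning idea can be illustrated with a Horvitz-Thompson-style weighting, where each captured fish contributes the reciprocal of its estimated capture probability to its length bin. This is a sketch of the general principle, not the authors' full Huggins model analysis; the data are invented.

```python
import numpy as np

def adjusted_length_frequency(lengths, capture_probs, bins):
    """Correct a length-frequency distribution for imperfect detection:
    each fish contributes 1/p to its length bin, where p is its estimated
    (length- and occasion-specific) capture probability."""
    lengths = np.asarray(lengths, dtype=float)
    weights = 1.0 / np.asarray(capture_probs, dtype=float)
    counts, _ = np.histogram(lengths, bins=bins)                  # raw counts
    adjusted, _ = np.histogram(lengths, bins=bins, weights=weights)
    return counts, adjusted

# Toy data: small fish are captured far less efficiently than large fish.
lengths = [105, 108, 150, 155, 210, 230]
p = [0.12, 0.15, 0.40, 0.45, 0.70, 0.75]
raw, adj = adjusted_length_frequency(lengths, p, bins=[100, 150, 200, 250])
print(raw, adj.round(1))   # raw [2 2 2], adjusted ~[15.0 4.7 2.8]
```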
Fu, Hai-Yan; Guo, Jun-Wei; Yu, Yong-Jie; Li, He-Dong; Cui, Hua-Peng; Liu, Ping-Ping; Wang, Bing; Wang, Sheng; Lu, Peng
2016-06-24
Peak detection is a critical step in chromatographic data analysis. In the present work, we developed a multi-scale Gaussian smoothing-based strategy for accurate peak extraction. The strategy consists of three stages: background drift correction, peak detection, and peak filtration. Background drift correction was implemented using a moving-window strategy. The new peak detection method is a variant of the approach used by the well-known MassSpecWavelet, i.e., chromatographic peaks are found at local maxima under various smoothing window scales. Peaks can therefore be detected through the ridge lines of maxima across these window scales, and signals that increase or decrease monotonically around the peak position can be treated as part of the peak. Instrumental noise was estimated after peak elimination, and a peak filtration strategy was performed to remove peaks with signal-to-noise ratios smaller than 3. The performance of our method was evaluated using two complex datasets: essential oil samples for quality control obtained from gas chromatography, and tobacco plant samples for metabolic profiling analysis obtained from gas chromatography coupled with mass spectrometry. The results confirmed the soundness of the developed method. Copyright © 2016 Elsevier B.V. All rights reserved.
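A simplified stand-in for the multi-scale ridge idea, assuming SciPy: keep only those raw-signal maxima near which smoothed maxima persist across several Gaussian scales. Drift correction and the S/N-based filtration stage are omitted, and the scale set and vote threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, maximum_filter1d
from scipy.signal import argrelmax

def multiscale_peaks(signal, scales=(1, 2, 4, 8), min_scales=3, tol=3):
    """Keep raw-signal maxima near which smoothed maxima persist across
    several Gaussian scales (a crude ridge-line stand-in, not the
    published algorithm)."""
    signal = np.asarray(signal, dtype=float)
    votes = np.zeros(len(signal))
    for s in scales:
        smoothed = gaussian_filter1d(signal, sigma=s)
        mask = np.zeros(len(signal))
        mask[argrelmax(smoothed)[0]] = 1.0
        # Spread each maximum over +/- tol points to tolerate apex shifts.
        votes += maximum_filter1d(mask, size=2 * tol + 1)
    raw_max = signal == maximum_filter1d(signal, size=2 * tol + 1)
    return np.flatnonzero((votes >= min_scales) & raw_max)

# Toy chromatogram: two Gaussian peaks plus a little noise.
x = np.arange(300)
rng = np.random.default_rng(1)
y = (np.exp(-0.5 * ((x - 90) / 5.0) ** 2)
     + 0.6 * np.exp(-0.5 * ((x - 200) / 8.0) ** 2)
     + 0.02 * rng.normal(size=300))
print(multiscale_peaks(y))   # apex indices near 90 and 200
```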
Quantum networks in divergence-free circuit QED
NASA Astrophysics Data System (ADS)
Parra-Rodriguez, A.; Rico, E.; Solano, E.; Egusquiza, I. L.
2018-04-01
Superconducting circuits are one of the leading quantum platforms for quantum technologies. With growing system complexity, it is of crucial importance to develop scalable circuit models that contain the minimum information required to predict the behaviour of the physical system. Based on microwave engineering methods, divergent and non-divergent Hamiltonian models in circuit quantum electrodynamics have been proposed to explain the dynamics of superconducting quantum networks coupled to infinite-dimensional systems, such as transmission lines and general impedance environments. Here, we study systematically common linear coupling configurations between networks and infinite-dimensional systems. The main result is that the simple Lagrangian models for these configurations present an intrinsic natural length that provides a natural ultraviolet cutoff. This length is due to the unavoidable dressing of the environment modes by the network. In this manner, the coupling parameters between their components correctly manifest their natural decoupling at high frequencies. Furthermore, we show the requirements to correctly separate infinite-dimensional coupled systems in local bases. We also compare our analytical results with other analytical and approximate methods available in the literature. Finally, we propose several applications of these general methods to analogue quantum simulation of multi-spin-boson models in non-perturbative coupling regimes.
Smartphone-Based Hearing Screening in Noisy Environments
Na, Youngmin; Joo, Hyo Sung; Yang, Hyejin; Kang, Soojin; Hong, Sung Hwa; Woo, Jihwan
2014-01-01
It is important and recommended to detect hearing loss as soon as possible. If it is found early, proper treatment may help improve hearing and reduce the negative consequences of hearing loss. In this study, we developed smartphone-based hearing screening methods that can ubiquitously test hearing. However, environmental noise generally results in the loss of ear sensitivity, which causes a hearing threshold shift (HTS). To overcome this limitation in the hearing screening location, we developed a correction algorithm to reduce the HTS effect. A built-in microphone and headphone were calibrated to provide the standard units of measure. The HTSs in the presence of either white or babble noise were systematically investigated to determine the mean HTS as a function of noise level. When the hearing screening application runs, the smartphone automatically measures the environmental noise and provides the HTS value to correct the hearing threshold. A comparison to pure tone audiometry shows that this hearing screening method in the presence of noise could closely estimate the hearing threshold. We expect that the proposed ubiquitous hearing test method could be used as a simple hearing screening tool and could alert the user if they suffer from hearing loss. PMID:24926692
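The correction step can be sketched as a table lookup: subtract the mean HTS expected at the measured ambient noise level from the raw threshold. The HTS-versus-noise table below is a made-up placeholder; the study measured such curves for white and babble noise.

```python
import numpy as np

def corrected_threshold(measured_db_hl, noise_db, hts_table):
    """Subtract the mean hearing-threshold shift (HTS) expected at the
    measured ambient noise level from the raw threshold. The calibration
    table is an invented placeholder."""
    noise_levels, shifts = zip(*sorted(hts_table.items()))
    hts = np.interp(noise_db, noise_levels, shifts)   # interpolated HTS
    return measured_db_hl - hts

# Hypothetical calibration: {ambient noise dB(A): mean HTS in dB}.
table = {30: 0.0, 40: 2.0, 50: 6.0, 60: 12.0}
print(corrected_threshold(35.0, noise_db=45.0, hts_table=table))  # 31.0
```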
Detecting benzoyl peroxide in wheat flour by line-scan macro-scale Raman chemical imaging
NASA Astrophysics Data System (ADS)
Qin, Jianwei; Kim, Moon S.; Chao, Kuanglin; Gonzalez, Maria; Cho, Byoung-Kwan
2017-05-01
Excessive use of benzoyl peroxide (BPO, a bleaching agent) in wheat flour can destroy flour nutrients and cause disease in consumers. A macro-scale Raman chemical imaging method was developed for direct detection of BPO mixed into wheat flour. A 785 nm line laser was used in a line-scan hyperspectral Raman imaging system. Raman images were collected from wheat flour mixed with BPO at eight concentrations (w/w) from 50 to 6,400 ppm. A sample holder (150×100×2 mm³) was used to present a thin layer (2 mm thick) of the powdered sample for image acquisition. A baseline correction method was used to correct the fluctuating fluorescence signals from the wheat flour. To isolate BPO particles from the flour background, a simple thresholding method was applied to the single-band fluorescence-free images at a unique Raman peak wavenumber (i.e., 1001 cm⁻¹) preselected for BPO detection. Chemical images were created to detect and map the BPO particles. The limit of detection for BPO was estimated to be on the order of 50 ppm, which is on the same level as regulatory standards.
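The single-band thresholding step lends itself to a short sketch: pick the band nearest the 1001 cm⁻¹ BPO marker peak in a fluorescence-corrected hypercube and threshold it. The array layout and threshold value are assumptions for illustration.

```python
import numpy as np

def bpo_map(cube, wavenumbers, peak=1001.0, threshold=0.1):
    """Build a binary detection map from the single-band,
    fluorescence-corrected Raman image at the BPO marker peak.
    `cube` is a (rows, cols, bands) hyperspectral array."""
    band = np.argmin(np.abs(np.asarray(wavenumbers) - peak))
    return cube[:, :, band] > threshold

# Toy 3x3 image with 5 bands; one pixel carries a strong marker signal.
wn = [800, 900, 1001, 1100, 1200]
cube = np.zeros((3, 3, 5))
cube[1, 2, 2] = 0.9
print(bpo_map(cube, wn).astype(int))
```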
A new method for incoherent combining of far-field laser beams based on multiple faculae recognition
NASA Astrophysics Data System (ADS)
Ye, Demao; Li, Sichao; Yan, Zhihui; Zhang, Zenan; Liu, Yuan
2018-03-01
Compared to coherent beam combining, incoherent beam combining can deliver a high-power laser beam with high efficiency, simple structure, low cost, and high resistance to thermal damage, and it is easy to realize in engineering. Higher power on target is achieved by incoherent beam combination using multi-channel optical path correction. However, each channel forms its own spot in the far field, and a low overlap ratio between the faculae prevents a high laser power density from being formed. In order to improve the combat effectiveness of the system, it is necessary to overlap the different faculae and thereby increase the energy density on target. Hence, a novel method for incoherent combining of far-field laser beams is presented. The method combines piezoelectric ceramic actuation with an evaluation algorithm for the degree of faculae coincidence, based on high-precision multi-channel optical path correction. The results show that the faculae recognition algorithm has low latency (less than 10 ms), which can meet the needs of practical engineering. Furthermore, the real-time focusing ability on far-field faculae is improved, which is beneficial to the engineering of high-energy laser weapons and other laser jamming systems.
Correcting the SIMPLE Model of Free Recall
ERIC Educational Resources Information Center
Lee, Michael D.; Pooley, James P.
2013-01-01
The scale-invariant memory, perception, and learning (SIMPLE) model developed by Brown, Neath, and Chater (2007) formalizes the theoretical idea that scale invariance is an important organizing principle across numerous cognitive domains and has made an influential contribution to the literature dealing with modeling human memory. In the context…
Compensating for Effects of Humidity on Electronic Noses
NASA Technical Reports Server (NTRS)
Homer, Margie; Ryan, Margaret A.; Manatt, Kenneth; Zhou, Hanying; Manfreda, Allison
2004-01-01
A method of compensating for the effects of humidity on the readouts of electronic noses has been devised and tested. The method is especially appropriate for use in environments in which humidity is not or cannot be controlled, for example, in the vicinity of a chemical spill, which can be accompanied by large local changes in humidity. Heretofore, it has been common practice to treat water vapor as merely another analyte, the concentration of which is determined, along with those of the other analytes, in a computational process based on deconvolution. This practice works well but leaves room for improvement: changes in humidity can give rise to large changes in electronic-nose responses. If corrections for humidity are not made, the large humidity-induced responses may swamp smaller responses associated with low concentrations of analytes. The present method offers an improvement. The underlying concept is simple: one augments an electronic nose with separate humidity and temperature sensors. The outputs of the humidity and temperature sensors are used to generate values that are subtracted from the readings of the other sensors in the electronic nose to correct for the temperature-dependent contributions of humidity to those readings. Hence, in principle, what remains after correction are the contributions of the analytes only. Laboratory experiments on a first-generation electronic nose have shown that this method is effective and improves the success rate of identification of analyte/water mixtures. Work on a second-generation device was in progress at the time of reporting the information for this article.
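A minimal sketch of the subtraction scheme, assuming a simple per-sensor humidity response model with a temperature-dependent term; the article specifies only that humidity and temperature readings generate values to subtract, so the model form and coefficients here are invented for illustration.

```python
import numpy as np

def compensate(sensor_readings, humidity, temperature, coeffs):
    """Subtract each sensor's predicted humidity response, modeled here as
    a per-sensor linear term plus a humidity-temperature cross term
    (an assumed model, not the article's calibration)."""
    humidity_term = coeffs[:, 0] * humidity + coeffs[:, 1] * humidity * temperature
    return np.asarray(sensor_readings) - humidity_term

# Toy nose with 3 sensors: per-sensor [a, b] so response ~ a*H + b*H*T.
coeffs = np.array([[0.02, 0.0005], [0.01, 0.0002], [0.03, 0.0010]])
readings = np.array([1.50, 0.80, 2.40])
print(compensate(readings, humidity=40.0, temperature=25.0, coeffs=coeffs))
```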
The free energy of a reaction coordinate at multiple constraints: a concise formulation
NASA Astrophysics Data System (ADS)
Schlitter, Jürgen; Klähn, Marco
The free energy as a function of the reaction coordinate (rc) is the key quantity for the computation of equilibrium and kinetic quantities. When it is considered as the potential of mean force, the problem is the calculation of the mean force for given values of the rc. We reinvestigate the PMCF (potential of mean constraint force) method, which applies a constraint to the rc to compute the mean force as the mean negative constraint force plus a metric tensor correction. The latter accounts for the constraint imposed on the rc and for possible artefacts due to multiple constraints on other variables, which for practical reasons are often used in numerical simulations. Two main results are obtained that are of theoretical and practical interest. First, the correction term is given a very concise and simple form, which facilitates its interpretation and evaluation. Second, a theorem describes various rcs and possible combinations with constraints that can be used without introducing any correction to the constraint force. The results facilitate the computation of free energy by molecular dynamics simulations.
NASA Astrophysics Data System (ADS)
Maelger, J.; Reinosa, U.; Serreau, J.
2018-04-01
We extend a previous investigation [U. Reinosa et al., Phys. Rev. D 92, 025021 (2015), 10.1103/PhysRevD.92.025021] of the QCD phase diagram with heavy quarks in the context of background field methods by including the two-loop corrections to the background field effective potential. The nonperturbative dynamics in the pure-gauge sector is modeled by a phenomenological gluon mass term in the Landau-DeWitt gauge-fixed action, which results in an improved perturbative expansion. We investigate the phase diagram at nonzero temperature and (real or imaginary) chemical potential. Two-loop corrections yield an improved agreement with lattice data as compared to the leading-order results. We also compare with the results of nonperturbative continuum approaches. We further study the equation of state as well as the thermodynamic stability of the system at two-loop order. Finally, using simple thermodynamic arguments, we show that the behavior of the Polyakov loops as functions of the chemical potential complies with their interpretation in terms of quark and antiquark free energies.
Self-Tuning Adaptive-Controller Using Online Frequency Identification
NASA Technical Reports Server (NTRS)
Chiang, W. W.; Cannon, R. H., Jr.
1985-01-01
A real-time adaptive controller was designed and tested successfully on a fourth-order laboratory dynamic system which features very low structural damping and a noncolocated actuator-sensor pair. The controller, implemented in a digital minicomputer, consists of a state estimator, a set of state feedback gains, and a frequency-locked loop (FLL) for real-time parameter identification. The FLL can detect the closed-loop natural frequency of the system being controlled, calculate the mismatch between a plant parameter and its counterpart in the state estimator, and correct the estimator parameter in real time. The adaptation algorithm can correct the controller error and stabilize the system for more than 50% variation in the plant natural frequency, compared with a 10% stability margin in frequency variation for a fixed-gain controller having the same performance at the nominal plant condition. After it has locked to the correct plant frequency, the adaptive controller works as well as the fixed-gain controller does when there is no parameter mismatch. The very rapid convergence of this adaptive system is demonstrated experimentally, and can also be proven with simple root-locus methods.
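As a rough illustration of the FLL idea (not the Viking or laboratory code), the sketch below estimates the dominant closed-loop frequency from zero crossings of the measured output and nudges the estimator's model frequency toward it; the gain and function names are assumptions.

import numpy as np

def zero_crossing_freq(y, dt):
    # Estimate dominant frequency (Hz) from positive-going zero crossings.
    s = np.signbit(y)
    crossings = np.flatnonzero(s[:-1] & ~s[1:])
    if len(crossings) < 2:
        return None
    period = np.mean(np.diff(crossings)) * dt
    return 1.0 / period

def fll_update(f_model, f_measured, gain=0.2):
    # First-order correction of the estimator's natural frequency.
    return f_model + gain * (f_measured - f_model)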
Research on the Application of Fast-steering Mirror in Stellar Interferometer
NASA Astrophysics Data System (ADS)
Mei, R.; Hu, Z. W.; Xu, T.; Sun, C. S.
2017-07-01
For a stellar interferometer, the fast-steering mirror (FSM) is widely utilized to correct wavefront tilt caused by atmospheric turbulence and internal instrumental vibration due to its high resolution and fast response frequency. In this study, the non-coplanar error between the FSM and actuator deflection axis introduced by manufacture, assembly, and adjustment is analyzed. Via a numerical method, the additional optical path difference (OPD) caused by above factors is studied, and its effects on tracking accuracy of stellar interferometer are also discussed. On the other hand, the starlight parallelism between the beams of two arms is one of the main factors of the loss of fringe visibility. By analyzing the influence of wavefront tilt caused by the atmospheric turbulence on fringe visibility, a simple and efficient real-time correction scheme of starlight parallelism is proposed based on a single array detector. The feasibility of this scheme is demonstrated by laboratory experiment. The results show that starlight parallelism meets the requirement of stellar interferometer in wavefront tilt preliminarily after the correction of fast-steering mirror.
Quantum Loop Expansion to High Orders, Extended Borel Summation, and Comparison with Exact Results
NASA Astrophysics Data System (ADS)
Noreen, Amna; Olaussen, Kåre
2013-07-01
We compare predictions of the quantum loop expansion to (essentially) infinite orders with (essentially) exact results in a simple quantum mechanical model. We find that there are exponentially small corrections to the loop expansion, which cannot be explained by any obvious “instanton”-type corrections. It is not the mathematical occurrence of exponential corrections but their seeming lack of any physical origin which we find surprising and puzzling.
NASA Astrophysics Data System (ADS)
Croft, Stephen; Favalli, Andrea
2017-10-01
Neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates (singles, doubles, and triples) are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher-order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.
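For readers unfamiliar with shift-register counting, the sketch below illustrates how singles and doubles rates are conventionally extracted from detection timestamps using a prompt (R+A) gate and a long-delayed accidentals (A) gate; dead time and gate-utilization factors are deliberately omitted, and the gate settings are typical values assumed for illustration.

import numpy as np

def multiplicity_rates(t, predelay=4.5e-6, gate=64e-6, long_delay=4e-3):
    # t: sorted array of neutron detection timestamps (s).
    t = np.sort(t)
    T = t[-1] - t[0]
    singles = len(t) / T
    def gate_counts(offset):
        # Pulses falling in a window [offset, offset + gate] after each trigger.
        lo = np.searchsorted(t, t + offset)
        hi = np.searchsorted(t, t + offset + gate)
        return hi - lo
    ra = gate_counts(predelay)      # reals + accidentals per trigger
    a = gate_counts(long_delay)     # accidentals only per trigger
    # Net correlated counts per trigger times trigger rate, up to
    # gate-fraction normalization omitted here.
    doubles = singles * (ra.mean() - a.mean())
    return singles, doubles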
Platysma Flap with Z-Plasty for Correction of Post-Thyroidectomy Swallowing Deformity
Jeon, Min Kyeong; Kang, Seok Joo
2013-01-01
Background Recently, the number of thyroid surgery cases has been increasing; consequently, the number of patients who visit plastic surgery departments with a chief complaint of swallowing deformity has also increased. We performed a scar correction technique on post-thyroidectomy swallowing deformity via a platysma flap with Z-plasty and obtained satisfactory aesthetic and functional outcomes. Methods The authors operated on 18 patients who presented, after thyroidectomy, a definitive retraction on the swallowing mechanism as an objective sign of swallowing deformity, or throat or neck discomfort on swallowing, such as a sensation of throat traction, as a subjective sign, from January 2009 to June 2012. The scar tissue that adhered to the subcutaneous tissue layer was completely excised. A platysma flap as mobile interference was applied to remove the continuity of the scar adhesion, and additionally, Z-plasty for prevention of midline platysma banding was performed. Results The follow-up results of the 18 patients indicated that the definitive retraction on the swallowing mechanism was completely removed. Throat or neck discomfort on the swallowing mechanism, such as a sensation of throat traction, was also alleviated in all 18 patients. When preoperative and postoperative Vancouver scar scales were compared, the scale had decreased significantly after surgery (P<0.05). Conclusions Our simple surgical method involved the formation of a platysma flap with Z-plasty as mobile interference for the correction of post-thyroidectomy swallowing deformity. This method resulted in aesthetically and functionally satisfying outcomes. PMID:23898442
Simple wavefront correction framework for two-photon microscopy of in-vivo brain
Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.
2015-01-01
We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763
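A minimal sketch of such a signal-based, guide-star-free optimization is given below, assuming hypothetical set_slm_phase and measure_signal functions for the hardware I/O; a coordinate-wise hill climb over Zernike coefficients is one simple realization of the scheme, not the authors' exact algorithm.

import numpy as np

def optimize_zernikes(set_slm_phase, measure_signal,
                      n_modes=12, step=0.1, n_passes=3):
    # Perturb one Zernike coefficient at a time on the SLM and keep
    # changes that increase the two-photon signal.
    coeffs = np.zeros(n_modes)
    set_slm_phase(coeffs)
    best = measure_signal()
    for _ in range(n_passes):
        for m in range(n_modes):
            for delta in (+step, -step):
                trial = coeffs.copy()
                trial[m] += delta
                set_slm_phase(trial)
                s = measure_signal()
                if s > best:            # keep improving perturbations
                    coeffs, best = trial, s
            set_slm_phase(coeffs)       # restore the current best pattern
    return coeffs, best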
A simple second-order digital phase-locked loop.
NASA Technical Reports Server (NTRS)
Tegnelia, C. R.
1972-01-01
A simple second-order digital phase-locked loop has been designed for the Viking Orbiter 1975 command system. Excluding analog-to-digital conversion, implementation of the loop requires only an adder/subtractor, two registers, and a correctable counter with control logic. The loop considers only the polarity of phase error and corrects system clocks according to a filtered sequence of this polarity. The loop is insensitive to input gain variation, and therefore offers the advantage of stable performance over long life. Predictable performance is guaranteed by extreme reliability of acquisition, yet in the steady state the loop produces only a slight degradation with respect to analog loop performance.
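The loop's logic can be sketched in a few lines; the sketch below is a generic polarity-only (bang-bang) second-order loop in the spirit described, with illustrative gains, not the Viking Orbiter implementation.

def dpll(phase_errors, kp=0.05, ki=0.005):
    # phase_errors: sequence of raw phase-error samples (any scale).
    # Only the polarity of the error is used; a second accumulator
    # (the integral path) makes the loop second order.
    freq_acc = 0.0
    corrections = []
    for e in phase_errors:
        pol = 1.0 if e > 0 else -1.0
        freq_acc += ki * pol             # integral (frequency) path
        corrections.append(kp * pol + freq_acc)
    return corrections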
Newberry Combined Gravity 2016
Kelly Rose
2016-01-22
Newberry combined gravity from Zonge Int'l, processed for the EGS stimulation project at well 55-29. Includes data from both the Davenport 2006 collection and the OSU/4D EGS monitoring 2012 collection. Locations are NAD83, UTM Zone 10 North, meters. Elevation is NAVD88. Gravity is in milligals. Free-air and observed gravity are included, along with the simple Bouguer anomaly and the terrain-corrected Bouguer anomaly. SBA230 means the simple Bouguer anomaly computed at 2.30 g/cc; CBA230 means the terrain-corrected Bouguer anomaly at 2.30 g/cc. The following densities are included (g/cc): 2.00, 2.10, 2.20, 2.30, 2.40, 2.50, 2.67.
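For reference, the listed grids are related by the conventional gravity-reduction formulas (constants in mGal, h in meters, density in g/cc); the helper below is a sketch under those standard conventions, not the Zonge processing code.

# free-air anomaly   FAA = g_obs - g_theoretical + 0.3086 * h
# simple Bouguer     SBA = FAA - 0.04193 * rho * h   (e.g., SBA230: rho = 2.30)
# terrain-corrected  CBA = SBA + terrain_correction

def simple_bouguer(faa_mgal, h_m, rho=2.30):
    # Simple Bouguer anomaly from the free-air anomaly (infinite-slab model).
    return faa_mgal - 0.04193 * rho * h_m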
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shumilin, V. P.; Shumilin, A. V.; Shumilin, N. V., E-mail: vladimirshumilin@yahoo.com
2015-11-15
The paper is devoted to comparison of experimental data with theoretical predictions concerning the dependence of the current of accelerated ions on the operating voltage of a Hall thruster with an anode layer. The error made in the paper published by the authors in Plasma Phys. Rep. 40, 229 (2014) occurred because of a misprint in the Encyclopedia of Low-Temperature Plasma. In the present paper, this error is corrected. It is shown that the simple model proposed in the above-mentioned paper is in qualitative and quantitative agreement with experimental results.
Nature of collective decision-making by simple yes/no decision units.
Hasegawa, Eisuke; Mizumoto, Nobuaki; Kobayashi, Kazuya; Dobata, Shigeto; Yoshimura, Jin; Watanabe, Saori; Murakami, Yuuka; Matsuura, Kenji
2017-10-31
The study of collective decision-making spans various fields such as brain and behavioural sciences, economics, management sciences, and artificial intelligence. Despite these interdisciplinary applications, little is known regarding how a group of simple 'yes/no' units, such as neurons in the brain, can select the best option among multiple options. One prerequisite for achieving such correct choices by the brain is correct evaluation of relative option quality, which enables a collective decision maker to efficiently choose the best option. Here, we applied a sensory discrimination mechanism using yes/no units with differential thresholds to a model for making a collective choice among multiple options. The performance corresponding to the correct choice was shown to be affected by various parameters. High performance can be achieved by tuning the threshold distribution with the options' quality distribution. The number of yes/no units allocated to each option and its variability profoundly affects performance. When this variability is large, a quorum decision becomes superior to a majority decision under some conditions. The general features of this collective decision-making by a group of simple yes/no units revealed in this study suggest that this mechanism may be useful in applications across various fields.
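An illustrative toy simulation of this mechanism (not the authors' exact model) is sketched below: each unit holds a threshold drawn from a tunable distribution and votes yes when an option's noisy perceived quality exceeds it, after which either a majority or a quorum rule picks the option.

import numpy as np

rng = np.random.default_rng(1)

def choose(qualities, units_per_option=50, noise=0.5, quorum=None):
    n_opt = len(qualities)
    thresholds = rng.uniform(0, 10, (n_opt, units_per_option))
    perceived = np.asarray(qualities)[:, None] + rng.normal(
        0, noise, (n_opt, units_per_option))
    yes = (perceived > thresholds).sum(axis=1)   # yes votes per option
    if quorum is not None:                       # quorum decision
        reached = np.flatnonzero(yes >= quorum)
        return reached[0] if reached.size else None
    return int(np.argmax(yes))                   # majority decision

hits = sum(choose([3.0, 5.0, 7.0]) == 2 for _ in range(1000))
print(f"best option chosen in {hits / 10:.1f}% of trials")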
On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; LeMaster, Daniel A.
2017-05-01
We describe a numerical wave propagation method for simulating long-range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted-sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance, in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally and yet has excellent performance in comparison to state-of-the-art benchmark methods.
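The restoration step can be sketched compactly; the snippet below is a generic Wiener filter built from a PSF and an assumed constant signal-to-noise ratio, standing in for the registration-aware PSF model described above.

import numpy as np

def wiener_restore(image, psf, snr=100.0):
    # psf assumed the same shape as the image and centered.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))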
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ödén, Jakob; Zimmerman, Jens; Nowik, Patrik
2015-09-15
Purpose: The quantitative effects of assumptions made in the calculation of stopping-power ratios (SPRs) are investigated, for stoichiometric CT calibration in proton therapy. The assumptions investigated include the use of the Bethe formula without correction terms, Bragg additivity, the choice of I-value for water, and the data source for elemental I-values. Methods: The predictions of the Bethe formula for SPR (no correction terms) were validated against more sophisticated calculations using the SRIM software package for 72 human tissues. A stoichiometric calibration was then performed at our hospital. SPR was calculated for the human tissues using either the assumption of simple Bragg additivity or the Seltzer-Berger rule (as used in ICRU Reports 37 and 49). In each case, the calculation was performed twice: first, by assuming the I-value of water was an experimentally based value of 78 eV (value proposed in Errata and Addenda for ICRU Report 73) and second, by recalculating the I-value theoretically. The discrepancy between predictions using ICRU elemental I-values and the commonly used tables of Janni was also investigated. Results: Errors due to neglecting the correction terms to the Bethe formula were calculated at less than 0.1% for biological tissues. Discrepancies greater than 1%, however, were estimated due to departures from simple Bragg additivity when a fixed I-value for water was imposed. When the I-value for water was calculated in a consistent manner to that for tissue, this disagreement was substantially reduced. The difference between SPR predictions when using Janni's or ICRU tables for I-values was up to 1.6%. Experimental data used for materials of relevance to proton therapy suggest that the ICRU-derived values provide somewhat more accurate results (root-mean-square error: 0.8% versus 1.6%). Conclusions: The conclusions from this study are that (1) the Bethe formula can be safely used for SPR calculations without correction terms; (2) simple Bragg additivity can be reasonably assumed for compound materials; (3) if simple Bragg additivity is assumed, then the I-value for water should be calculated in a consistent manner to that of the tissue of interest (rather than using an experimentally derived value); (4) the ICRU Report 37 I-values may provide a better agreement with experiment than Janni's tables.
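As a point of reference, the correction-free Bethe evaluation underlying conclusion (1) reduces to a ratio of stopping numbers; the sketch below uses illustrative I-values and electron densities, not the paper's calibration data.

import numpy as np

MEC2 = 0.511e6  # electron rest energy, eV

def stopping_number(beta2, I_eV):
    # Bethe stopping number without shell/density/Barkas corrections:
    # L = ln(2 m_e c^2 beta^2 / (I (1 - beta^2))) - beta^2
    return np.log(2 * MEC2 * beta2 / (I_eV * (1 - beta2))) - beta2

def spr(ne_ratio, I_med, beta2, I_water=78.0):
    # ne_ratio: electron density of the medium relative to water.
    return ne_ratio * stopping_number(beta2, I_med) / stopping_number(beta2, I_water)

beta2 = 0.32   # roughly a 200 MeV proton
print(spr(ne_ratio=1.05, I_med=75.0, beta2=beta2))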
Utilizing the virus-induced blocking of apoptosis in an easy baculovirus titration method
Niarchos, Athanasios; Lagoumintzis, George; Poulas, Konstantinos
2015-01-01
Baculovirus-mediated protein expression is a robust experimental technique for producing recombinant higher-eukaryotic proteins because it combines high yields with considerable post-translational modification capabilities. In this expression system, the determination of the titer of recombinant baculovirus stocks is important to achieve the correct multiplicity of infection for effective amplification of the virus and high expression of the target protein. To overcome the drawbacks of existing titration methods (e.g., plaque assay, real-time PCR), we present a simple and reliable assay that uses the ability of baculoviruses to block apoptosis in their host cells to accurately titrate virus samples. Briefly, after incubation with serial dilutions of baculovirus samples, Sf9 cells were UV irradiated and, after apoptosis induction, they were viewed via microscopy; the presence of cluster(s) of infected cells as islets indicated blocked apoptosis. Subsequently, baculovirus titers were calculated through the determination of the 50% endpoint dilution. The method is simple, inexpensive, and does not require unique laboratory equipment, consumables or expertise; moreover, it is versatile enough to be adapted for the titration of every virus species that can block apoptosis in any culturable host cells which undergo apoptosis under specific conditions. PMID:26490731
Monitoring robot actions for error detection and recovery
NASA Technical Reports Server (NTRS)
Gini, M.; Smith, R.
1987-01-01
Reliability is a serious problem in computer-controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real-world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, the researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. A preliminary experiment with a system that they designed and constructed is described.
Tooth loss caused by displaced elastic during simple preprosthetic orthodontic treatment
Dianiskova, Simona; Calzolari, Chiara; Migliorati, Marco; Silvestrini-Biavati, Armando; Isola, Gaetano; Savoldi, Fabio; Dalessandri, Domenico; Paganelli, Corrado
2016-01-01
The use of elastics to close a diastema or correct tooth malpositions can create unintended consequences if not properly controlled. The American Association of Orthodontists recently issued a consumer alert, warning of “a substantial risk for irreparable damage” from a new trend called “do-it-yourself” orthodontics, consisting of patients autonomously using elastics to correct tooth position. The elastics can work their way below the gums and around the roots of the teeth, causing damage to the periodontium and even resulting in tooth loss. The cost of implants to replace these teeth would well exceed the cost of proper orthodontic care. This damage could also occur in a dental office, when a general dentist tries to perform a simplified orthodontic correction of a minor tooth malposition. The present case report describes a case of tooth loss caused by a displaced intraoral elastic, which occurred during a simple preprosthetic orthodontic treatment. PMID:27672645
A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories
NASA Astrophysics Data System (ADS)
Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon
BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for error protection in memory systems. Among the several decoding algorithms for BCH codes, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous in that it corrects errors through simple calculations for a given t value. However, it is problematic when a determinant becomes 0 (division by zero) in the case ν ≠ t. In this paper, the circuit is simplified by a multi-mode hardware architecture that handles ν = 0-3. First, production cost is reduced thanks to the smaller number of gates. Second, the lower power consumption can lengthen the recharging period. The very low cost and simple datapath make our design a good choice as ECC (Error Correction Code/Circuit) in a small-footprint SoC (System on Chip) memory system.
Gerhardt, Natalie; Birkenmeier, Markus; Schwolow, Sebastian; Rohn, Sascha; Weller, Philipp
2018-02-06
This work describes a simple approach for the untargeted profiling of volatile compounds for the authentication of the botanical origins of honey based on resolution-optimized HS-GC-IMS combined with optimized chemometric techniques, namely PCA, LDA, and kNN. A direct comparison of the PCA-LDA models between the HS-GC-IMS and 1H NMR data demonstrated that HS-GC-IMS profiling could be used as a complementary tool to NMR-based profiling of honey samples. Whereas NMR profiling still requires comparatively precise sample preparation, pH adjustment in particular, HS-GC-IMS fingerprinting may be considered an alternative approach for a truly fully automatable, cost-efficient, and in particular highly sensitive method. It was demonstrated that all tested honey samples could be distinguished on the basis of their botanical origins. Loading plots revealed the volatile compounds responsible for the differences among the monofloral honeys. The HS-GC-IMS-based PCA-LDA model was composed of two linear functions of discrimination and 10 selected PCs that discriminated canola, acacia, and honeydew honeys with a predictive accuracy of 98.6%. Application of the LDA model to an external test set of 10 authentic honeys clearly proved the high predictive ability of the model by correctly classifying them into three variety groups with 100% correct classifications. The constructed model presents a simple and efficient method of analysis and may serve as a basis for the authentication of other food types.
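A pipeline of the kind described (PCA followed by LDA with cross-validation) can be assembled in a few lines with scikit-learn; the data below are random placeholders standing in for the HS-GC-IMS intensity matrix, and the 10 components mirror the 10 PCs reported.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))    # placeholder spectra: samples x features
y = np.repeat(["canola", "acacia", "honeydew"], 20)

model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())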
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Adelman, H. M.
1984-01-01
Orbiting spacecraft such as large space antennas have to maintain a highly accurate shape to operate satisfactorily. Such structures require active and passive controls to maintain an accurate shape under a variety of disturbances. Methods for the optimum placement of control actuators for correcting static deformations are described. In particular, attention is focused on the case where control locations have to be selected from a large set of available sites, so that integer programming methods are called for. The effectiveness of three heuristic techniques for obtaining a near-optimal site selection is compared. In addition, efficient reanalysis techniques for the rapid assessment of control effectiveness are presented. Two examples are used to demonstrate the methods: a simple beam structure and a 55-m space-truss parabolic antenna.
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
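The sketch below illustrates the prediction-then-entropy-coding idea with the simple triangular predictor named above; the 8-point Lagrange-optimized predictor would replace triangular_residuals, and the synthetic terrain is an assumption for demonstration.

import numpy as np

def triangular_residuals(z):
    # Predict z[i,j] ~ z[i-1,j] + z[i,j-1] - z[i-1,j-1]; encode the residual.
    pred = z[:-1, 1:] + z[1:, :-1] - z[:-1, :-1]
    return z[1:, 1:] - pred

def entropy_bits(values):
    # Empirical zeroth-order entropy (bits/sample), a lower bound that a
    # Huffman code approaches.
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

z = np.cumsum(np.random.default_rng(0).integers(-2, 3, (64, 64)), axis=0)
print(entropy_bits(z), "->", entropy_bits(triangular_residuals(z)))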
A new approach for modeling gravitational radiation from the inspiral of two neutron stars
NASA Astrophysics Data System (ADS)
Luke, Stephen A.
In this dissertation, a new method of applying the ADM formalism of general relativity to model the gravitational radiation emitted from the realistic inspiral of a neutron star binary is described. A description of the conformally flat condition (CFC) is summarized, and the ADM equations are solved by use of the CFC approach for a neutron star binary. The advantages and limitations of this approach are discussed, and the need for a more accurate improvement to this approach is described. To address this need, a linearized perturbation of the CFC spatial three metric is then introduced. The general relativistic hydrodynamic equations are then allowed to evolve against this basis under the assumption that the first-order corrections to the hydrodynamic variables are negligible compared to their CFC values. As a first approximation, the linear corrections to the conformal factor, lapse function, and shift vector are also assumed to be small compared to the extrinsic curvature and the three metric. A boundary matching method is then introduced as a way of computing the gravitational radiation of this relativistic system without use of the multipole expansion as employed by earlier applications of the CFC approach. It is assumed that at a location far from the source, the three metric is accurately described by a linear correction to Minkowski spacetime. The two polarizations of gravitational radiation can then be computed at that point in terms of the linearized correction to the metric. The evolution equations obtained from the linearized perturbative correction to the CFC approach and the method for recovery of the gravity wave signal are then tested by use of a three-dimensional numerical simulation. This code is used to compute the gravity wave signal emitted a pair of equal mass neutron stars in quasi-stable circular orbits at a point early in their inspiral phase. From this simple numerical analysis, the correct general trend of gravitational radiation is recovered. Comparisons with (5/2) post-Newtonian solutions show a similar gravitational waveform, although inaccuracies are still found to exist from this computation. Finally, several areas for improvement and potential future applications of this technique are discussed.
Flores, David I; Sotelo-Mundo, Rogerio R; Brizuela, Carlos A
2014-01-01
The automatic identification of catalytic residues still remains an important challenge in structural bioinformatics. Sequence-based methods are good alternatives when the query shares a high percentage of identity with a well-annotated enzyme. However, when the homology is not apparent, which occurs with many structures from the structural genomics initiative, structural information should be exploited. A local structural comparison is preferred to a global structural comparison when predicting functional residues. CMASA is a recently proposed method for predicting catalytic residues based on a local structure comparison. The method achieves high accuracy and a high value for the Matthews correlation coefficient. However, point substitutions or a lack of relevant data strongly affect the performance of the method. In the present study, we propose a simple extension to the CMASA method to overcome this difficulty. Extensive computational experiments are shown as proof-of-concept instances, as well as for a few real cases. The results show that the extension performs well when the catalytic site contains mutated residues or when some residues are missing. The proposed modification could correctly predict the catalytic residues of a mutant thymidylate synthase, 1EVF. It also successfully predicted the catalytic residues for 3HRC despite the lack of information for a relevant side-chain atom in the PDB file.
Power corrections in the N -jettiness subtraction scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boughezal, Radja; Liu, Xiaohui; Petriello, Frank
We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $q\bar{q}$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.
Simple Tidal Prism Models Revisited
NASA Astrophysics Data System (ADS)
Luketina, D.
1998-01-01
Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most text books on estuaries. The appeal of this model is its simplicity. However, there are several flaws in the logic behind the model. These flaws are pointed out and a more theoretically correct simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which can not.
NASA Technical Reports Server (NTRS)
1979-01-01
The photos show automobile engines being tested for oxides of nitrogen (NOx) emissions, as required by the Environmental Protection Agency (EPA), at the Research and Engineering Division of Ford Motor Company, Dearborn, Michigan. NASA technical information helped the company develop a means of calculating emissions test results. NOx emission readings vary with relative humidity in the test facility. EPA uses a standard humidity measurement, but the agency allows manufacturers to test under different humidity conditions, then apply a correction factor to adjust the results to the EPA standard. NASA's Dryden Flight Research Center developed analytic equations which provide a simple, computer-programmable method of correcting for humidity variations. A Ford engineer read a NASA Tech Brief describing the Dryden development and requested more detailed information in the form of a technical support package, which NASA routinely supplies to industry on request. Ford's Emissions Test Laboratory now uses the Dryden equations for humidity-adjusted emissions data reported to EPA.
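For context, a commonly cited correction of this type, taken from the U.S. light-duty emissions regulations rather than the Dryden equations themselves, scales measured NOx by a simple humidity factor:

def nox_humidity_factor(H_grains_per_lb):
    # K_H = 1 / (1 - 0.0047 * (H - 75)), with H the absolute humidity in
    # grains of water per pound of dry air and 75 the standard condition.
    return 1.0 / (1.0 - 0.0047 * (H_grains_per_lb - 75.0))

print(nox_humidity_factor(50.0))   # drier than standard -> factor < 1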
Su, Liping; Yan, Hong; Xing, Yongxin; Zhang, Yuhai; Zhu, Baoyi
2016-01-01
We studied 87 cases of children aged 3 to 10 with unilateral amblyopia (of anisometropic, strabismic, or combined type) who achieved good recovery after occlusion therapy. The proportional improvement had a moderate positive correlation with amblyopic eye improvement (p < 0.05) and a negative correlation with residual amblyopia (p < 0.05); the residual amblyopia had no significant correlation with amblyopic eye improvement (p > 0.05). In multivariate analysis, the proportion of the deficit corrected was best in the <5 years group receiving 2 h/d occlusion therapy (p < 0.05). The BCVA of the amblyopic eye and residual amblyopia are simple and direct indicators for clinical application. The proportion-of-the-deficit-corrected method should be graded as the proportion of change in visual acuity with respect to the absolute potential for improvement, and these optimum outcomes can provide powerful evidence for good therapeutic effect.
NASA Technical Reports Server (NTRS)
Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.
1978-01-01
Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.
NASA Astrophysics Data System (ADS)
Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.
2016-06-01
Honey bees play a crucial role in pollination across the world. This paper presents a simple, non-invasive system for detecting pollen-bearing honey bees in surveillance video obtained at the entrance of a hive. The proposed system can be used as a part of a more complex system for tracking and counting honey bees, with remote pollination monitoring as the final goal. The proposed method is executed in real time on embedded systems co-located with a hive. Background subtraction, color segmentation, and morphology methods are used for segmentation of honey bees. Classification into two classes, pollen-bearing honey bees and honey bees without a pollen load, is performed using a nearest-mean classifier with a simple descriptor consisting of color variance and eccentricity features. On an in-house data set we achieved a correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind the results of state-of-the-art image classification methods. That favors the proposed method, particularly given that real-time video transmission to a remote high-performance computing workstation is still an issue, and transferring the obtained parameters of the pollination process is much easier.
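A nearest-mean classifier over the two named features is only a few lines; the sketch below uses synthetic feature values as placeholders for the segmented-bee descriptors.

import numpy as np

class NearestMean:
    # Assign each sample to the class whose training mean is closest.
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.8, 0.6], 0.1, (50, 2)),    # pollen-bearing
               rng.normal([0.4, 0.8], 0.1, (50, 2))])   # no pollen load
y = np.array(["pollen"] * 50 + ["no_pollen"] * 50)
clf = NearestMean().fit(X, y)
print((clf.predict(X) == y).mean())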
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bereau, Tristan, E-mail: bereau@mpip-mainz.mpg.de; Lilienfeld, O. Anatole von
We estimate polarizabilities of atoms in molecules without electron density, using a Voronoi tessellation approach instead of conventional density partitioning schemes. The resulting atomic dispersion coefficients are calculated, as well as many-body dispersion effects on intermolecular potential energies. We also estimate contributions from multipole electrostatics and compare them to dispersion. We assess the performance of the resulting intermolecular interaction model from dispersion and electrostatics for more than 1300 neutral and charged, small organic molecular dimers. Applications to water clusters, the benzene crystal, the anti-cancer drug ellipticine intercalated between two Watson-Crick DNA base pairs, as well as six macro-molecular host-guest complexes highlight the potential of this method and help to identify points of future improvement. The mean absolute error made by the combination of static electrostatics with many-body dispersion reduces at larger distances, while it plateaus for two-body dispersion, in conflict with the common assumption that the simple 1/R^6 correction will yield proper dissociative tails. Overall, the method achieves an accuracy well within conventional molecular force fields while exhibiting a simple parametrization protocol.
Dynamics of a parametrically excited simple pendulum
NASA Astrophysics Data System (ADS)
Depetri, Gabriela I.; Pereira, Felipe A. C.; Marin, Boris; Baptista, Murilo S.; Sartorelli, J. C.
2018-03-01
The dynamics of a parametric simple pendulum submitted to an arbitrary angle of excitation ϕ was investigated experimentally by simulations and analytically. Analytical calculations for the loci of saddle-node bifurcations corresponding to the creation of resonant orbits were performed by applying Melnikov's method. However, this powerful perturbative method cannot be used to predict the existence of odd resonances for a vertical excitation within first order corrections. Yet, we showed that period-3 resonances indeed exist in such a configuration. Two degenerate attractors of different phases, associated with the same loci of saddle-node bifurcations in parameter space, are reported. For tilted excitation, the degeneracy is broken due to an extra torque, which was confirmed by the calculation of two distinct loci of saddle-node bifurcations for each attractor. This behavior persists up to ϕ≈7 π/180 , and for inclinations larger than this, only one attractor is observed. Bifurcation diagrams were constructed experimentally for ϕ=π/8 to demonstrate the existence of self-excited resonances (periods smaller than three) and hidden oscillations (for periods greater than three).
Acoustic equations of state for simple lattice Boltzmann velocity sets.
Viggen, Erlend Magnus
2014-07-01
The lattice Boltzmann (LB) method typically uses an isothermal equation of state. This is not sufficient to simulate a number of acoustic phenomena where the equation of state cannot be approximated as linear and constant. However, it is possible to implement variable equations of state by altering the LB equilibrium distribution. For simple velocity sets with velocity components ξ_{iα} ∈ {-1, 0, 1} for all i, these equilibria necessarily cause error terms in the momentum equation. These error terms are shown to be either correctable or negligible at the cost of further weakening the compressibility. For the D1Q3 velocity set, such an equilibrium distribution is found and shown to be unique. Its sound propagation properties are found for both forced and free waves, with some generality beyond D1Q3. Finally, this equilibrium distribution is applied to a nonlinear acoustics simulation where both mechanisms of nonlinearity are simulated with good results. This represents an improvement on previous such simulations and proves that the compressibility of the method is still sufficiently strong even for nonlinear acoustics.
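For readers new to the method, the sketch below shows a minimal D1Q3 BGK scheme with the standard isothermal equilibrium (c_s^2 = 1/3) propagating a small pressure pulse; the paper's variable-equation-of-state equilibrium would replace f_eq.

import numpy as np

w = np.array([2/3, 1/6, 1/6])      # weights for velocities 0, +1, -1
xi = np.array([0, 1, -1])
cs2 = 1/3

def f_eq(rho, u):
    # Standard isothermal second-order equilibrium distribution.
    cu = np.outer(xi, u) / cs2
    return w[:, None] * rho * (1 + cu + 0.5 * cu**2 - 0.5 * u**2 / cs2)

N, tau = 200, 0.8
rho = 1.0 + 0.01 * np.exp(-((np.arange(N) - N/2) / 10.0)**2)  # pressure pulse
u = np.zeros(N)
f = f_eq(rho, u)

for _ in range(100):
    f += -(f - f_eq(rho, u)) / tau           # BGK collision
    for i, c in enumerate(xi):               # streaming (periodic boundary)
        f[i] = np.roll(f[i], c)
    rho = f.sum(axis=0)                      # recompute moments
    u = (xi @ f) / rho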
Interferometric surface mapping with variable sensitivity.
Jaerisch, W; Makosch, G
1978-03-01
In the photolithographic process presently employed for the production of integrated circuits, sets of correlated masks are used for exposing the photoresist on silicon wafers. The various sets of masks, which are printed in different printing tools, must be aligned correctly with respect to the structures produced on the wafer in previous process steps. Even when perfect alignment is assumed, displacements and distortions of the printed wafer patterns occur. They are caused by imperfections of the printing tools and/or wafer deformations resulting from high-temperature processes. Since the electrical properties of the final integrated circuits, and therefore the manufacturing yield, depend to a great extent on the precision with which such patterns are superimposed, simple and fast overlay and flatness measurements are very important in IC manufacturing. A simple optical interference method for flatness measurements is described which can be used under manufacturing conditions. This method permits testing of surface height variations by nearly grazing light incidence in the absence of a physical reference plane. It can be applied to polished as well as rough surfaces.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
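The flavor of the approach can be conveyed with a toy example; the sketch below replaces fast probability integration with plain Monte Carlo on a simplified response model (cantilever tip deflection with random load and stiffness), and all numbers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
P = rng.normal(1000.0, 150.0, n)   # load, N
E = rng.normal(70e9, 5e9, n)       # Young's modulus, Pa
L, I = 2.0, 8e-6                   # length (m), second moment of area (m^4)

delta = P * L**3 / (3 * E * I)     # simplified deterministic response model
limit = 0.006                      # allowable tip deflection, m
print("P(delta > limit) ~", (delta > limit).mean())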
Kidwell, Mallory C.; Lazarević, Ljiljana B.; Baranski, Erica; Piechowski, Sarah; Falkenberg, Lina-Sophia; Sonnleitner, Carina; Fiedler, Susann; Nosek, Brian A.
2016-01-01
Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories. PMID:27171007
NASA Astrophysics Data System (ADS)
Golmakani, M. E.; Malikan, M.; Sadraee Far, M. N.; Majidi, H. R.
2018-06-01
This paper presents a formulation based on simple first-order shear deformation theory (S-FSDT) for large deflection and buckling of orthotropic single-layered graphene sheets (SLGSs). The S-FSDT has several advantages over the classical plate theory (CPT) and the conventional FSDT: it requires no shear correction factor, contains fewer unknowns than the existing FSDT, and bears strong similarities to the CPT. Governing equations and boundary conditions are derived based on Hamilton's principle using the nonlocal differential constitutive relations of Eringen and the von Kármán geometrical model. Numerical results are obtained using the differential quadrature (DQ) method and the Newton-Raphson iterative scheme. Finally, some comparison studies are carried out to show the high accuracy and reliability of the present formulations compared to the nonlocal CPT and FSDT for different thicknesses, elastic foundations and nonlocal parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega-Carrillo, Hector Rene; Manzanares-Acuna, Eduardo; Hernandez-Davila, Victor Martin
131I is widely used in the diagnosis and treatment of patients. If the patient is pregnant, the 131I present in the thyroid becomes a source of constant exposure of other organs and the fetus. In this study, the absorbed dose in the uterus of a 3-months-pregnant woman with 131I in her thyroid gland has been calculated. The dose was determined using Monte Carlo methods, in which a detailed model of the woman was developed. The dose was also calculated using a simple procedure that was refined to include photon attenuation in the woman's organs and body. To verify these results, an experiment was carried out using a neck phantom with 131I. Comparing the results, it was found that the simple calculation tends to overestimate the absorbed dose; after applying the corrections for photon attenuation in the body and organs, the dose is 0.14 times the Monte Carlo estimate.
Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices
NASA Astrophysics Data System (ADS)
Passemier, Damien; McKay, Matthew R.; Chen, Yang
2015-07-01
Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.
25+ Years of the Hubble Space Telescope and a Simple Error That Cost Millions
ERIC Educational Resources Information Center
Shakerin, Said
2016-01-01
A simple mistake in properly setting up a measuring device caused millions of dollars to be spent in correcting the initial optical failure of the Hubble Space Telescope (HST). This short article is intended as a lesson for a physics laboratory and discussion of errors in measurement.
SU-F-I-59: Quality Assurance Phantom for PET/CT Alignment and Attenuation Correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, T; Hamacher, K
2016-06-15
Purpose: This study utilizes a commercial PET/CT phantom to investigate two specific properties of a PET/CT system: the alignment accuracy of PET images with those from CT used for attenuation correction and the accuracy of this correction in PET images. Methods: A commercial PET/CT phantom consisting of three aluminum rods, two long central cylinders containing uniform activity, and attenuating materials such as air, water, bone and iodine contrast was scanned using a standard PET/CT protocol. Images reconstructed with 2 mm slice thickness and a 512 by 512 matrix were obtained. The center of each aluminum rod in the PET and CT images was compared to evaluate alignment accuracy. ROIs were drawn on transaxial images of the central rods at each section of attenuating material to determine the corrected activity (in BQML). BQML values were graphed as a function of slice number to provide a visual representation of the attenuation correction throughout the whole phantom. Results: Alignment accuracy is high between the PET and CT images. The maximum deviation between the two in the axial plane is less than 1.5 mm, which is less than the width of a single pixel. BQML values measured along different sections of the large central rods are similar among the different attenuating materials except iodine contrast. Deviation of BQML values in the air and bone sections from the water section is less than 1%. Conclusion: Accurate alignment of PET and CT images is critical to ensure proper calculation and application of CT-based attenuation correction. This study presents a simple and quick method to evaluate the two with a single acquisition. As the phantom also includes spheres of increasing diameter, this could serve as a straightforward means to annually evaluate the status of a modern PET/CT system.
Second-order singular perturbative theory for gravitational lenses
NASA Astrophysics Data System (ADS)
Alard, C.
2018-03-01
The extension of the singular perturbative approach to the second order is presented in this paper. The general expansion to the second order is derived. The second-order expansion is considered as a small correction to the first-order expansion. Using this approach, it is demonstrated that in practice the second-order expansion is reducible to a first-order expansion via a re-definition of the first-order perturbative fields. Even if in usual applications the second-order correction is small, the reducibility of the second-order expansion to the first-order expansion indicates a potential degeneracy issue. In general, this degeneracy is hard to break. A useful and simple second-order approximation is the thin source approximation, which offers a direct estimation of the correction. The practical application of the corrections derived in this paper is illustrated by using an elliptical NFW lens model. The second-order perturbative expansion provides a noticeable improvement, even for the simplest case of the thin source approximation. To conclude, it is clear that for accurate modelling of gravitational lenses using the perturbative method the second-order perturbative expansion should be considered. In particular, an evaluation of the degeneracy due to the second-order term should be performed, for which the thin source approximation is particularly useful.
Insights into Inpatients with Poor Vision: A High Value Proposition
Press, Valerie G.; Matthiesen, Madeleine I.; Ranadive, Alisha; Hariprasad, Seenu M.; Meltzer, David O.; Arora, Vineet M.
2015-01-01
Background Vision impairment is an under-recognized risk factor for adverse events among hospitalized patients, yet vision is neither routinely tested nor documented for inpatients. Low-cost ($8 and up) non-prescription 'readers' may be a simple, high-value intervention to improve inpatients' vision. We aimed to study the initial feasibility and efficacy of screening and correcting inpatients' vision. Methods From June 2012 through January 2014, we tested whether non-prescription lenses corrected the vision of eligible participants who failed a vision screen (Snellen chart) administered by research assistants (RAs). Descriptive statistics and tests of comparison, including t-tests and chi-squared tests, were used when appropriate. All analyses were performed using Stata version 12 (StataCorps, College Station, TX). Results Over 800 participants' vision was screened (n=853). Older (≥65 years; 56%) participants were more likely to have insufficient vision than younger (<65 years; 28%; p<0.001) participants. Non-prescription readers corrected the vision of the majority of eligible participants (82%, 95/116). Discussion Among an easily identified sub-group of inpatients with poor vision, low-cost 'readers' successfully corrected most participants' vision. Hospitalists and other clinicians working in the inpatient setting can play an important role in identifying opportunities to provide high-value care related to patients' vision. PMID:25755206
Song, Jong-Won; Hirao, Kimihiko
2015-10-14
Since their advent in 1993, hybrid functionals have become a central quantum chemical tool for calculating the energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of hybrid functionals has been further amplified by the resulting improved performance for orbital energies, excitation energies, non-linear optical properties, barrier heights, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for computing the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error-function operator, reduces computational time dramatically (e.g., about 14-fold acceleration in a periodic-boundary-condition calculation of diamond) and enables lower scaling with system size, while maintaining the improved features of long-range corrected density functional theory.
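For orientation, the standard long-range correction partitions the Coulomb operator with the error function, and the operator proposed here replaces the long-range factor with a two-Gaussian mimic. The form below is a schematic sketch only: the coefficients c_i and exponents \alpha_i are placeholders, not the published parameterization.

    \[
    \frac{1}{r_{12}} \;=\;
    \underbrace{\frac{1-\operatorname{erf}(\mu r_{12})}{r_{12}}}_{\text{short range: DFT exchange}}
    \;+\;
    \underbrace{\frac{\operatorname{erf}(\mu r_{12})}{r_{12}}}_{\text{long range: HF exchange}},
    \qquad
    \operatorname{erf}(\mu r_{12}) \;\approx\; 1-\sum_{i=1}^{2} c_i\, e^{-\alpha_i r_{12}^{2}} .
    \]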
Laparoscopic repair of perforated peptic ulcer: patch versus simple closure.
Abd Ellatif, M E; Salama, A F; Elezaby, A F; El-Kaffas, H F; Hassan, A; Magdy, A; Abdallah, E; El-Morsy, G
2013-01-01
Laparoscopic correction of perforated peptic ulcer (PPU) has become an accepted way of management, and patch omentoplasty remained for decades the main method of repair. The goal of the present study was to evaluate whether laparoscopic simple repair of PPU is as safe as patch omentoplasty. From June 2005 to December 2012, 179 consecutive patients with PPU were treated by laparoscopic repair at our centers and enrolled in this multi-center retrospective study; the chart review was conducted in December 2012. Group I (patch group) included patients treated with standard patch omentoplasty (n = 108), while Group II (non-patch group) included patients who received simple repair without a patch (n = 71). Operative time was significantly shorter in group II (non-patch) (p = 0.01). No patient was converted to laparotomy. There was no difference in age, gender, ASA score, surgical risk (Boey's) score, or incidence of co-morbidities. Both groups were comparable in terms of hospital stay, time to resume oral intake, postoperative complications and surgical outcomes. Laparoscopic simple repair of PPU is a safe procedure compared with traditional patch omentoplasty in the presence of certain selection criteria. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank
2018-02-12
Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while the power laws of critical phenomena are derived asymptotically under the condition of infinite observations, real-world observations are finite; finite-size effects force a power-law distribution into an exponential decay, which manifests as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as almost Zipf's-law-like can be conceptualized as the imperfect rendition of the Pareto power-law distribution under real-world finite-size effects; this holds regardless of advances in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern NGS (next-generation sequencing) datasets: an in-house dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. Firstly, finite-size effects cause the deviations of sequencing count data from Zipf's law and create reproducibility issues in sequencing experiments. Secondly, they manifest as heteroskedasticity among experimental replicates, undermining statistical analysis. Surprisingly, a straightforward power-law correction that restores the distorted distribution to a single exponent value can dramatically reduce data heteroskedasticity, yielding an instant increase in signal-to-noise ratio of 50% and in statistical/detection sensitivity of up to 30%, regardless of the downstream mapping and normalization methods. Most importantly, the power-law correction improves concordance in significant calls among different normalization methods of a data series by 22% on average. When presented with a higher sequencing depth (a 4-fold difference), the improvement in concordance is asymmetrical (32% for the higher sequencing depth versus 13% for the lower) and demonstrates that the simple power-law correction can increase significant detection at higher sequencing depths. Finally, the correction dramatically sharpens the statistical conclusions and highlights the metastatic potential of the NUGC3 cell line relative to AGS in our dilution analysis. Finite-size effects due to undersampling generally plague transcript count data with reproducibility issues, but they can be minimized through a simple power-law correction of the count distribution. This distribution correction has direct implications for the biological interpretation of the study and the rigor of the scientific findings. This article was reviewed by Oliviero Carugo, Thomas Dandekar and Sandor Pongor.
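The paper's own correction procedure is not reproduced in the abstract, but any such workflow starts by estimating the tail exponent of the count distribution. A minimal sketch, assuming counts in a NumPy array, using the standard maximum-likelihood (Hill-type) estimator in the continuous approximation of Clauset et al. (2009):

    import numpy as np

    def hill_exponent(counts, xmin=1.0):
        """ML estimate of the Pareto tail exponent alpha for counts >= xmin
        (continuous approximation; Clauset et al. 2009)."""
        tail = counts[counts >= xmin].astype(float)
        return 1.0 + tail.size / np.sum(np.log(tail / xmin))

    # Hypothetical usage with simulated Zipf-like counts:
    rng = np.random.default_rng(0)
    counts = rng.zipf(a=2.0, size=100_000)
    print(hill_exponent(counts, xmin=10))   # should come out near 2.0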
An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization
NASA Astrophysics Data System (ADS)
Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc
2002-09-01
A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, computed from the normalized first derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra, and the results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and its straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME (Automated phase Correction based on Minimization of Entropy).
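A minimal re-implementation sketch of the idea (ours, not the authors' Matlab code; the published algorithm also adds a penalty term for negative peaks, omitted here): phase the complex spectrum with zero- and first-order terms, form the normalized absolute first derivative of the real part, and minimize its Shannon entropy.

    import numpy as np
    from scipy.optimize import minimize

    def entropy_objective(phases, spectrum):
        """Shannon entropy of the normalized |derivative| of the phased real part."""
        ph0, ph1 = phases
        n = spectrum.size
        phase = ph0 + ph1 * np.arange(n) / n          # zero- + first-order phase
        real = (spectrum * np.exp(1j * phase)).real
        h = np.abs(np.diff(real))
        h = h / h.sum()                               # normalize to a distribution
        h = h[h > 0]
        return -np.sum(h * np.log(h))

    def auto_phase(spectrum):
        """Return (ph0, ph1) minimizing the entropy of the phased spectrum."""
        res = minimize(entropy_objective, x0=[0.0, 0.0],
                       args=(spectrum,), method='Nelder-Mead')
        return res.x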
Performance test and image correction of CMOS image sensor in radiation environment
NASA Astrophysics Data System (ADS)
Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang
2016-09-01
CMOS image sensors rival CCDs in radiation resistance and require only simple drive signals, so they are widely applied in high-energy radiation environments such as space optical imaging and video monitoring of nuclear power equipment. However, the silicon in CMOS image sensors suffers total-ionizing-dose effects under high-energy radiation, which degrades sensor indicators such as signal-to-noise ratio (SNR), non-uniformity (NU) and the number of bad pixels (BP). The radiation environment for the test experiments was generated by a 60Co γ-ray source. A camera module based on the CMV2000 image sensor from CMOSIS Inc. was chosen as the research object, and the experiments were performed at a dose rate of 20 krad/h. In the test experiments, the output signals of the sensor pixels were measured at different total doses. Data analysis showed that with accumulating irradiation dose, the SNR of the image sensor decreased, the NU increased, and the number of bad pixels increased. Correction of these indicators was necessary, as they are the main factors affecting image quality. An image-processing algorithm combining a local-threshold bad-pixel method with NU correction based on the non-local means (NLM) method was applied to the experimental data. The results showed that this correction can effectively suppress bad pixels, improve the SNR, and reduce the NU.
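The abstract names the two ingredients (a local-threshold bad-pixel method and NLM-based non-uniformity correction) without giving details; the following sketch combines standard library routines in that spirit, with the threshold factor k chosen arbitrarily rather than taken from the paper.

    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.restoration import denoise_nl_means

    def correct_frame(img, k=5.0):
        """Replace bad pixels (local outliers) with the local median,
        then smooth residual non-uniformity with non-local means."""
        img = img.astype(float)
        med = median_filter(img, size=3)
        resid = img - med
        bad = np.abs(resid) > k * resid.std()   # local-threshold detection
        img[bad] = med[bad]                     # bad-pixel replacement
        return denoise_nl_means(img, patch_size=5, patch_distance=6,
                                h=0.8 * img.std(), preserve_range=True)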
NASA Astrophysics Data System (ADS)
Johnson, Fiona; Sharma, Ashish
2011-04-01
Empirical scaling approaches for constructing rainfall scenarios from general circulation model (GCM) simulations are commonly used in water resources climate change impact assessments. However, these approaches have a number of limitations, not the least of which is that they cannot account for changes in variability or persistence at annual and longer time scales. Bias correction of GCM rainfall projections offers an attractive alternative to scaling methods as it has similar advantages to scaling in that it is computationally simple, can consider multiple GCM outputs, and can be easily applied to different regions or climatic regimes. In addition, it also allows for interannual variability to evolve according to the GCM simulations, which provides additional scenarios for risk assessments. This paper compares two scaling and four bias correction approaches for estimating changes in future rainfall over Australia and for a case study for water supply from the Warragamba catchment, located near Sydney, Australia. A validation of the various rainfall estimation procedures is conducted on the basis of the latter half of the observational rainfall record. It was found that the method leading to the lowest prediction errors varies depending on the rainfall statistic of interest. The flexibility of bias correction approaches in matching rainfall parameters at different frequencies is demonstrated. The results also indicate that for Australia, the scaling approaches lead to smaller estimates of uncertainty associated with changes to interannual variability for the period 2070-2099 compared to the bias correction approaches. These changes are also highlighted using the case study for the Warragamba Dam catchment.
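The abstract does not name the four bias-correction variants compared; as a sketch of the general class, the following minimal empirical quantile mapping (an illustrative method, not necessarily one of the four in the paper) replaces each future GCM rainfall value with the observed value at the same quantile of the historical GCM distribution.

    import numpy as np

    def quantile_map(gcm_future, gcm_hist, obs_hist):
        """Empirical quantile mapping: map each future GCM value to the
        observed value at its quantile in the historical GCM distribution."""
        q = np.searchsorted(np.sort(gcm_hist), gcm_future) / len(gcm_hist)
        q = np.clip(q, 0.0, 1.0)
        return np.quantile(obs_hist, q)

    # Hypothetical usage with daily rainfall series (mm/day):
    # corrected = quantile_map(gcm_2070_2099, gcm_1950_2000, obs_1950_2000)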
NASA Astrophysics Data System (ADS)
Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy
2016-11-01
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well-tested water model and a new united-atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues), but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit when comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
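A pK_a correction of this kind commonly takes the Henderson-Hasselbalch form relating the pH-dependent distribution coefficient D to the neutral-species partition coefficient P; whether this exact form was used in the paper is an assumption on our part.

    \[
    \log D_{\text{pH}} \;=\; \log P \;-\; \log\!\bigl(1+10^{\,\text{pH}-\mathrm{p}K_a}\bigr)
    \quad (\text{monoprotic acid}),
    \qquad
    \log D_{\text{pH}} \;=\; \log P \;-\; \log\!\bigl(1+10^{\,\mathrm{p}K_a-\text{pH}}\bigr)
    \quad (\text{monoprotic base}).
    \]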
Non-contact method of search and analysis of pulsating vessels
NASA Astrophysics Data System (ADS)
Avtomonov, Yuri N.; Tsoy, Maria O.; Postnov, Dmitry E.
2018-04-01
Despite the variety of existing methods for recording the human pulse and the solid history of their development, there is still considerable interest in this topic. The development of new non-contact methods based on advanced image processing has caused a new wave of interest in the issue. We present a simple but quite effective method for analyzing the mechanical pulsations of blood vessels lying close to the surface of the skin. Our technique is a modification of imaging (or remote) photoplethysmography (i-PPG). We supplemented this method with a laser light source, which made it possible to use other ways of searching for the candidate pulsation zone. During testing of the method, several series of experiments were carried out with both artificial oscillating objects and the target signal source (a human wrist). The obtained results show that our method allows correct interpretation of complex data. To summarize, we proposed and tested an alternative method for the search and analysis of pulsating vessels.
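As an illustrative sketch of the generic i-PPG signal path this method builds on (our assumptions, not the authors' implementation: a 0.7-4 Hz pass band and a frame rate well above 8 fps): average the green channel over a skin ROI frame by frame, band-pass around plausible pulse frequencies, and take the dominant spectral peak.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def pulse_rate_bpm(frames, fps, roi):
        """frames: (n, H, W, 3) RGB video; roi: boolean (H, W) skin mask."""
        sig = np.array([f[..., 1][roi].mean() for f in frames])  # green channel
        b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype='band')
        filt = filtfilt(b, a, sig - sig.mean())                  # 42-240 bpm band
        spec = np.abs(np.fft.rfft(filt))
        freqs = np.fft.rfftfreq(filt.size, d=1.0 / fps)
        return 60.0 * freqs[spec.argmax()]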
Improved Kalman Filter Method for Measurement Noise Reduction in Multi Sensor RFID Systems
Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon
2011-01-01
Recently, the range of available Radio Frequency Identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from the various sensors. In a multi-sensing environment, noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. The performance of a Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm, and choosing correct R and Q values is one of the most important design factors. For this reason, we propose an improved Kalman filter with an enhanced noise-reduction ability. Only the measurement noise covariance is considered, because the system architecture is simple, and this covariance is adjusted by a neural network. With this method, more accurate data can be obtained with smart RFID tags. In simulation, the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less Mean Squared Error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified in experiments. PMID:22346641
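A minimal scalar sketch of the idea follows; since the abstract does not describe the neural network, the adaptation of R below uses a simple innovation-based update as a stand-in for the network's adjustment.

    import numpy as np

    def adaptive_kalman(z, q=1e-4, r0=1.0, alpha=0.05):
        """Scalar Kalman filter whose measurement noise covariance R is
        adapted from the innovation sequence (stand-in for the NN update)."""
        x, p, r = z[0], 1.0, r0
        out = []
        for zk in z:
            p = p + q                       # predict (random-walk state model)
            innov = zk - x
            r = (1 - alpha) * r + alpha * max(innov**2 - p, 1e-9)  # adapt R
            k = p / (p + r)                 # Kalman gain
            x = x + k * innov               # update state
            p = (1 - k) * p
            out.append(x)
        return np.array(out)

    # Hypothetical usage: smooth a noisy temperature-sensor trace.
    # smoothed = adaptive_kalman(noisy_readings)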
Correction of accessory axillary breast tissue without visible scar.
Kim, Young Soo
2004-01-01
Various methods for correction of accessory axillary breast tissue have been proposed, including simple excision, diamond-shaped excision, a Y-V technique, and lipoplasty. We present an effective method for correction of a prominent axillary mound that combines lipoplasty with excision of accessory breast tissue along the axillary transverse line. Preoperative markings included the incision within the natural wrinkle line of the axillary fold and demarcation of the areas in which lipoplasty and excision were to be performed. After lipoplasty, deep dissection was performed to isolate and remove accessory breast tissue and excess fat tissue. A compression dressing was applied for 1 to 2 weeks postoperatively, and patients were instructed to wear a sports bra for 1 to 2 months after removal of the dressing. We treated 7 patients using this procedure between October 1999 and March 2003. No major postoperative complications were detected and recurrence was not noted during the follow-up periods. Aesthetic results were satisfactory. We believe that a procedure combining lipoplasty and excision provides numerous advantages as a surgical option in treating a prominent axillary mound. The main advantage is that the final scar is laid in the natural axillary fold, rendering scars less conspicuous and eliminating the need to remove excess skin. The one disadvantage was that elevation of the skin flap via small, remote incisions initially produced surgical difficulties, but these were overcome with experience.
Rapid correction of electron microprobe data for multicomponent metallic systems
NASA Technical Reports Server (NTRS)
Gupta, K. P.; Sivakumar, R.
1973-01-01
This paper describes an empirical relation for the correction of electron microprobe data for multicomponent metallic systems. It evaluates the empirical correction parameter α for each element in a binary alloy system using a modification of Colby's MAGIC III computer program and outlines a simple and quick way of correcting the probe data. The technique has been tested on a number of multicomponent metallic systems, and the agreement with results obtained using theoretical expressions is found to be excellent. Limitations and suitability of the relation are discussed, and a model calculation is presented in the Appendix.
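The relation itself is not reproduced in the abstract; for orientation, empirical alpha-factor corrections of this kind are typically built on the hyperbolic Bence-Albee form for a binary system, shown here as a representative example rather than the paper's exact relation:

    \[
    \frac{C_A}{K_A} \;=\; \alpha_{AB} + \bigl(1-\alpha_{AB}\bigr)\,C_A ,
    \]

where C_A is the weight fraction of element A, K_A its measured intensity ratio relative to a pure-element standard, and α_AB the empirical correction parameter; multicomponent schemes generalize this with concentration-weighted averages of the binary α's.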
Gurka, Matthew J; Kuperminc, Michelle N; Busby, Marjorie G; Bennis, Jacey A; Grossberg, Richard I; Houlihan, Christine M; Stevenson, Richard D; Henderson, Richard C
2010-02-01
To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I-V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. Slaughter's equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat -9.6/100 [SD 6.2]; 95% confidence interval [CI] -11.0 to -8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI -1.0 to 1.3) than existing equations. A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP.
Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Varnai, Tamas
2014-01-01
A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing-angle dependence of the reflectance enhancement near clouds, suggesting the physics of the model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend the model to include cloud-surface interaction using the Poisson model for broken clouds. We find that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, cloud optical depth, cloud fraction, and cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.
TMSEG: Novel prediction of transmembrane helices.
Bernhofer, Michael; Kloppmann, Edda; Reeb, Jonas; Rost, Burkhard
2016-11-01
Transmembrane proteins (TMPs) are important drug targets because they are essential for signaling, regulation, and transport. Despite important breakthroughs, experimental structure determination remains challenging for TMPs. Various methods have bridged the gap by predicting transmembrane helices (TMHs), but room for improvement remains. Here, we present TMSEG, a novel method identifying TMPs and accurately predicting their TMHs and their topology. The method combines machine learning with empirical filters. Testing it on a non-redundant dataset of 41 TMPs and 285 soluble proteins, and applying strict performance measures, TMSEG outperformed the state-of-the-art in our hands. TMSEG correctly distinguished helical TMPs from other proteins with a sensitivity of 98 ± 2% and a false positive rate as low as 3 ± 1%. Individual TMHs were predicted with a precision of 87 ± 3% and recall of 84 ± 3%. Furthermore, in 63 ± 6% of helical TMPs the placement of all TMHs and their inside/outside topology was correctly predicted. Two main features distinguish TMSEG from other methods. First, the errors in finding all helical TMPs in an organism are significantly reduced; in human, for example, this leads to 200 and 1600 fewer misclassifications compared to the second and third best methods available, and 4400 fewer mistakes than a simple hydrophobicity-based method. Second, TMSEG provides an add-on improvement for any existing method to benefit from. Proteins 2016; 84:1706-1716. © 2016 Wiley Periodicals, Inc.
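For contrast, the "simple hydrophobicity-based method" used as a baseline above can be sketched in a few lines. This is a generic Kyte-Doolittle sliding-window predictor; the window length and threshold are conventional choices, not values from the paper.

    # Kyte-Doolittle hydropathy values
    KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9,
          'A': 1.8, 'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3,
          'P': -1.6, 'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5,
          'K': -3.9, 'R': -4.5}

    def tmh_windows(seq, win=19, threshold=1.6):
        """Flag candidate transmembrane helices: window start positions whose
        mean Kyte-Doolittle hydropathy exceeds the threshold."""
        scores = [sum(KD[c] for c in seq[i:i + win]) / win
                  for i in range(len(seq) - win + 1)]
        return [i for i, s in enumerate(scores) if s > threshold]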
Efficient quantum pseudorandomness with simple graph states
NASA Astrophysics Data System (ADS)
Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian
2018-02-01
Measurement-based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement-based quantum computing, closely related to the brickwork state.
Electrode effects in dielectric spectroscopy of colloidal suspensions
NASA Astrophysics Data System (ADS)
Cirkel, P. A.; van der Ploeg, J. P. M.; Koper, G. J. M.
1997-02-01
We present a simple model to account for electrode polarization in colloidal suspensions. Apart from correctly predicting the ω^{-3/2} dependence of the dielectric permittivity at low frequencies ω, the model provides an explicit dependence of the effect on electrode spacing. The predictions are tested for the sodium bis(2-ethylhexyl) sulfosuccinate (AOT) water-in-oil microemulsion with iso-octane as the continuous phase. In particular, the dependence of electrode polarization effects on electrode spacing has been measured and is found to be in accordance with the model prediction. Methods to reduce or account for electrode polarization are briefly discussed.
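Written out, the stated low-frequency behaviour corresponds schematically to the form below; this compact expression is our paraphrase of the stated scaling, with A an amplitude depending on the sample and electrode spacing, not an equation quoted from the paper.

    \[
    \varepsilon'(\omega) \;\simeq\; \varepsilon_{\infty} + A\,\omega^{-3/2},
    \qquad \omega \to 0 .
    \]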
Coupled-cluster treatment of molecular strong-field ionization
NASA Astrophysics Data System (ADS)
Jagau, Thomas-C.
2018-05-01
Ionization rates and Stark shifts of H2, CO, O2, H2O, and CH4 in static electric fields have been computed with coupled-cluster methods in a basis set of atom-centered Gaussian functions with a complex-scaled exponent. Consideration of electron correlation is found to be of great importance even for a qualitatively correct description of the dependence of ionization rates and Stark shifts on the strength and orientation of the external field. The analysis of the second moments of the molecular charge distribution suggests a simple criterion for distinguishing tunnel and barrier suppression ionization in polyatomic molecules.