Sample records for ratio-based method

  1. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction in which observed UCRs are used as an independent variable in regression models has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
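
    As a rough illustration of the contrast the abstract draws, the sketch below applies both corrections to synthetic data: the ratio correction simply divides the analyte by creatinine, while a hedged model-based alternative regresses log(analyte) on log(UCR) and sex. All variable names and the toy data are hypothetical, and the regression is only a minimal stand-in for the models used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    ucr = rng.lognormal(mean=0.0, sigma=0.4, size=n)        # urinary creatinine (hypothetical units)
    sex = rng.integers(0, 2, size=n)                        # 0 = female, 1 = male
    analyte = rng.lognormal(mean=0.2 * sex, sigma=0.5, size=n) * ucr   # toy analyte levels

    # Ratio-based correction: analyte divided by creatinine.
    ratio_corrected = analyte / ucr

    # Model-based correction: regress log(analyte) on log(UCR) and sex, so that
    # factors other than hydration can also be adjusted for.
    X = np.column_stack([np.ones(n), np.log(ucr), sex])
    beta, *_ = np.linalg.lstsq(X, np.log(analyte), rcond=None)

    # Male/female geometric-mean ratios under the two corrections.
    gm = lambda x: np.exp(np.mean(np.log(x)))
    print("ratio-based GM ratio (M/F):", gm(ratio_corrected[sex == 1]) / gm(ratio_corrected[sex == 0]))
    print("model-based GM ratio (M/F):", np.exp(beta[2]))   # adjusted ratio from the sex coefficient
    ```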

  2. Retrieving the aerosol lidar ratio profile by combining ground- and space-based elastic lidars.

    PubMed

    Mao, Feiyue; Gong, Wei; Ma, Yingying

    2012-02-15

    The aerosol lidar ratio is a key parameter for the retrieval of aerosol optical properties from elastic lidar, and it varies considerably for aerosols with different chemical and physical properties. We proposed a method for retrieving the aerosol lidar ratio profile by combining simultaneous ground- and space-based elastic lidars. The method was tested with a simulated case and a real case at the 532 nm wavelength. The results demonstrated that our method is robust and can obtain accurate lidar ratio and extinction coefficient profiles. Our method can be useful for determining the local and global lidar ratio and validating space-based lidar datasets.

  3. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts; some of them are "true zeros", indicating that the drug-adverse event pairs cannot occur, and these are distinguished from the remaining zero counts, which simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation and maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
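
    The following sketch shows the generic modelling ingredient only: fitting a zero-inflated Poisson by maximum likelihood and comparing it to a plain Poisson fit with a likelihood ratio statistic. It is not the authors' signal-detection statistic, and the toy counts, parameterisation, and optimiser choice are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson

    counts = np.array([0] * 75 + [1, 1, 2, 3, 1, 2, 4, 1, 2])   # toy report counts

    def zip_negloglik(params, y):
        """Negative log-likelihood of a ZIP(pi, lam) model on unbounded parameters."""
        logit_pi, log_lam = params
        pi = 1.0 / (1.0 + np.exp(-logit_pi))
        lam = np.exp(log_lam)
        p0 = pi + (1.0 - pi) * np.exp(-lam)                      # P(Y = 0)
        ll_zero = np.log(p0) * np.sum(y == 0)
        ll_pos = np.sum(np.log(1.0 - pi) + poisson.logpmf(y[y > 0], lam))
        return -(ll_zero + ll_pos)

    # Plain Poisson MLE: lambda-hat is the sample mean.
    lam_hat = counts.mean()
    ll_poisson = poisson.logpmf(counts, lam_hat).sum()

    # ZIP MLE by numerical optimisation.
    res = minimize(zip_negloglik, x0=[0.0, np.log(lam_hat + 1e-6)], args=(counts,))
    ll_zip = -res.fun

    # Likelihood ratio statistic; because pi = 0 sits on the parameter boundary,
    # the reference distribution is a 50:50 mixture of 0 and chi-square(1).
    print("LRT statistic:", 2.0 * (ll_zip - ll_poisson))
    ```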

  4. 13C-Labeled Gluconate Tracing as a Direct and Accurate Method for Determining the Pentose Phosphate Pathway Split Ratio in Penicillium chrysogenum

    PubMed Central

    Kleijn, Roelco J.; van Winden, Wouter A.; Ras, Cor; van Gulik, Walter M.; Schipper, Dick; Heijnen, Joseph J.

    2006-01-01

    In this study we developed a new method for accurately determining the pentose phosphate pathway (PPP) split ratio, an important metabolic parameter in the primary metabolism of a cell. This method is based on simultaneous feeding of unlabeled glucose and trace amounts of [U-13C]gluconate, followed by measurement of the mass isotopomers of the intracellular metabolites surrounding the 6-phosphogluconate node. The gluconate tracer method was used with a penicillin G-producing chemostat culture of the filamentous fungus Penicillium chrysogenum. For comparison, a 13C-labeling-based metabolic flux analysis (MFA) was performed for glycolysis and the PPP of P. chrysogenum. For the first time mass isotopomer measurements of 13C-labeled primary metabolites are reported for P. chrysogenum and used for a 13C-based MFA. Estimation of the PPP split ratio of P. chrysogenum at a growth rate of 0.02 h−1 yielded comparable values for the gluconate tracer method and the 13C-based MFA method, 51.8% and 51.1%, respectively. A sensitivity analysis of the estimated PPP split ratios showed that the 95% confidence interval was almost threefold smaller for the gluconate tracer method than for the 13C-based MFA method (40.0 to 63.5% and 46.0 to 56.5%, respectively). From these results we concluded that the gluconate tracer method permits accurate determination of the PPP split ratio but provides no information about the remaining cellular metabolism, while the 13C-based MFA method permits estimation of multiple fluxes but provides a less accurate estimate of the PPP split ratio. PMID:16820467

  5. Ground-based remote sensing of HDO/H2O ratio profiles: introduction and validation of an innovative retrieval approach

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Hase, F.; Blumenstock, T.

    2006-10-01

    We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.

  6. Ground-based remote sensing of HDO/H2O ratio profiles: introduction and validation of an innovative retrieval approach

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Hase, F.; Blumenstock, T.

    2006-06-01

    We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.

  7. Aerosol characteristics inversion based on the improved lidar ratio profile with the ground-based rotational Raman-Mie lidar

    NASA Astrophysics Data System (ADS)

    Ji, Hongzhu; Zhang, Yinchao; Chen, Siying; Chen, He; Guo, Pan

    2018-06-01

    An iterative method, based on a derived inverse relationship between the atmospheric backscatter coefficient and the aerosol lidar ratio, is proposed to invert the lidar ratio profile and aerosol extinction coefficient. The feasibility of this method is investigated theoretically and experimentally. Simulation results show that the inversion accuracy of aerosol optical properties for the iterative method can be improved in the near-surface aerosol layer and the optically thick layer. Experimentally, as a result of the reduced insufficiency error and incoherence error, aerosol optical properties can be obtained with higher accuracy in the near-surface region and the region of numerical derivative distortion. In addition, the particle component can be distinguished roughly based on this improved lidar ratio profile.

  8. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors

    PubMed Central

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant in one spinning circle and the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the preprocessed magnetic sensor data based on the least-square method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than does the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high spin environment, which is a clearly superior calculation approach, and can be applied to the actual projectile environment disturbance. PMID:27213389

  9. Significance of parametric spectral ratio methods in detection and recognition of whispered speech

    NASA Astrophysics Data System (ADS)

    Mathur, Arpit; Reddy, Shankar M.; Hegde, Rajesh M.

    2012-12-01

    In this article the significance of a new parametric spectral ratio method that can be used to detect whispered speech segments within normally phonated speech is described. Adaptation methods based on the maximum likelihood linear regression (MLLR) are then used to realize a mismatched train-test style speech recognition system. This proposed parametric spectral ratio method computes a ratio spectrum of the linear prediction (LP) and the minimum variance distortion-less response (MVDR) methods. The smoothed ratio spectrum is then used to detect whispered segments of speech within neutral speech segments effectively. The proposed LP-MVDR ratio method exhibits robustness at different SNRs as indicated by the whisper diarization experiments conducted on the CHAINS and the cell phone whispered speech corpus. The proposed method also performs reasonably better than the conventional methods for whisper detection. In order to integrate the proposed whisper detection method into a conventional speech recognition engine with minimal changes, adaptation methods based on the MLLR are used herein. The hidden Markov models corresponding to neutral mode speech are adapted to the whispered mode speech data in the whispered regions as detected by the proposed ratio method. The performance of this method is first evaluated on whispered speech data from the CHAINS corpus. The second set of experiments are conducted on the cell phone corpus of whispered speech. This corpus is collected using a set up that is used commercially for handling public transactions. The proposed whisper speech recognition system exhibits reasonably better performance when compared to several conventional methods. The results shown indicate the possibility of a whispered speech recognition system for cell phone based transactions.

  10. Statistical method evaluation for differentially methylated CpGs in base resolution next-generation DNA sequencing data.

    PubMed

    Zhang, Yun; Baheti, Saurabh; Sun, Zhifu

    2018-05-01

    High-throughput bisulfite methylation sequencing such as reduced representation bisulfite sequencing (RRBS), Agilent SureSelect Human Methyl-Seq (Methyl-seq) or whole-genome bisulfite sequencing is commonly used for base resolution methylome research. These data are represented either by the ratio of methylated cytosine versus total coverage at a CpG site or by the numbers of methylated and unmethylated cytosines. Multiple statistical methods can be used to detect differentially methylated CpGs (DMCs) between conditions, and these methods are often the basis for the next step of differentially methylated region identification. The ratio data have the flexibility of fitting many linear models, whereas the raw count data take coverage information into account. There is an array of options in each datatype for DMC detection; however, it is not clear which is the optimal statistical method. In this study, we systematically evaluated four statistical methods on methylation ratio data and four methods on count-based data and compared their performances with regard to type I error control, sensitivity and specificity of DMC detection and computational resource demands using real RRBS data along with simulation. Our results show that the ratio-based tests are generally more conservative (less sensitive) than the count-based tests. However, some count-based methods have high false-positive rates and should be avoided. The beta-binomial model gives a good balance between sensitivity and specificity and is the preferred method. Selection of methods in different settings, signal versus noise and sample size estimation are also discussed.
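
    As a minimal illustration of the two data representations, the sketch below applies a ratio-based test (t-test on per-sample methylation fractions) and a count-based test (Fisher's exact test on pooled counts) to one hypothetical CpG; the study itself benchmarks more sophisticated tests (e.g. beta-binomial models) in each family.

    ```python
    import numpy as np
    from scipy import stats

    # (methylated reads, total reads) per sample for one CpG; values are made up.
    group_a = [(18, 20), (14, 19), (25, 30), (9, 12)]
    group_b = [(6, 21), (10, 28), (7, 15), (11, 26)]

    # Ratio-based: test the per-sample methylation fractions.
    ratios_a = np.array([m / t for m, t in group_a])
    ratios_b = np.array([m / t for m, t in group_b])
    t_stat, p_ratio = stats.ttest_ind(ratios_a, ratios_b)

    # Count-based: pool reads and test the 2x2 table of methylated vs unmethylated.
    ma, ta = sum(m for m, _ in group_a), sum(t for _, t in group_a)
    mb, tb = sum(m for m, _ in group_b), sum(t for _, t in group_b)
    _, p_count = stats.fisher_exact([[ma, ta - ma], [mb, tb - mb]])

    print(f"ratio-based p = {p_ratio:.4g}, count-based p = {p_count:.4g}")
    ```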

  11. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame should be kept in a proper range. In this paper we use an improved warping-based motion representation model, and propose a Gaussian-based multi-path optimization method to get a smooth path and obtain a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform of adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method can deal well with parallax, as we calculate the space-time correlation of adjacent grid cells, and then a Gaussian kernel is used to weight the motion of adjacent grid cells. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which have casual jitter and parallax, and achieve good results.

  12. Comparative study between recent methods manipulating ratio spectra and classical methods based on two-wavelength selection for the determination of binary mixture of antazoline hydrochloride and tetryzoline hydrochloride

    NASA Astrophysics Data System (ADS)

    Abdel-Halim, Lamia M.; Abd-El Rahman, Mohamed K.; Ramadan, Nesrin K.; EL Sanabary, Hoda F. A.; Salem, Maissa Y.

    2016-04-01

    A comparative study was developed between two classical spectrophotometric methods (dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (ratio difference method and first derivative of ratio spectra method) for simultaneous determination of Antazoline hydrochloride (AN) and Tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation and in the presence of benzalkonium chloride as a preservative without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug in a way so that the difference in absorbance at those two wavelengths is zero for the other drug. While Vierordt's method, is based upon measuring the absorbance and the absorptivity values of the two drugs at their λmax (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution in the corresponding Vierordt's equation. Recent methods manipulating ratio spectra depend on either measuring the difference in amplitudes of ratio spectra between 255.5 and 269.5 nm for AN and 220.0 and 273.0 nm for TZ in case of ratio difference method or computing first derivative of the ratio spectra for each drug then measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ in case of first derivative of ratio spectrophotometry. The specificity of the developed methods was investigated by analyzing different laboratory prepared mixtures of the two drugs. All methods were applied successfully for the determination of the selected drugs in their combined dosage form proving that the classical spectrophotometric methods can still be used successfully in analysis of binary mixture using minimal data manipulation rather than recent methods which require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability are found to be within the acceptable limits. Statistical studies showed that the methods can be competitively applied in quality control laboratories.
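
    The ratio difference idea can be illustrated numerically with synthetic spectra: dividing a mixture spectrum by the spectrum of one component turns that component's contribution into a constant, so an amplitude difference between two wavelengths isolates the other component. The Gaussian band shapes and concentrations below are made up and are not the published AN/TZ data.

    ```python
    import numpy as np

    wl = np.linspace(200, 320, 601)                     # wavelength grid, nm
    gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
    a_an = 1.0 * gauss(248, 18)                         # absorptivity of component 1 (per ug/mL)
    a_tz = 0.8 * gauss(219, 15)                         # absorptivity of component 2 (per ug/mL)

    c_an, c_tz = 6.0, 9.0                               # "unknown" concentrations, ug/mL
    mixture = c_an * a_an + c_tz * a_tz                 # Beer's law: absorbances are additive

    divisor = a_tz                                      # spectrum of a TZ standard
    ratio = mixture / divisor                           # = c_an * a_an/a_tz + c_tz (constant)

    l1, l2 = np.argmin(np.abs(wl - 255.5)), np.argmin(np.abs(wl - 269.5))
    delta_mix = ratio[l1] - ratio[l2]                   # the constant c_tz term cancels
    delta_unit = (a_an / divisor)[l1] - (a_an / divisor)[l2]   # same difference for 1 ug/mL of AN
    print("recovered c_AN =", delta_mix / delta_unit)   # ~6.0
    ```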

  13. A new method to measure Bowen ratios using high-resolution vertical dry and wet bulb temperature profiles

    NASA Astrophysics Data System (ADS)

    Euser, T.; Luxemburg, W. M. J.; Everson, C. S.; Mengistu, M. G.; Clulow, A. D.; Bastiaanssen, W. G. M.

    2014-06-01

    The Bowen ratio surface energy balance method is a relatively simple method to determine the latent heat flux and the actual land surface evaporation. The Bowen ratio method is based on the measurement of air temperature and vapour pressure gradients. If these measurements are performed at only two heights, correctness of data becomes critical. In this paper we present the concept of a new measurement method to estimate the Bowen ratio based on vertical dry and wet bulb temperature profiles with high spatial resolution. A short field experiment with distributed temperature sensing (DTS) in a fibre optic cable with 13 measurement points in the vertical was undertaken. A dry and a wetted section of a fibre optic cable were suspended on a 6 m high tower installed over a sugar beet trial plot near Pietermaritzburg (South Africa). Using the DTS cable as a psychrometer, a near continuous observation of vapour pressure and air temperature at 0.20 m intervals was established. These data allowed the computation of the Bowen ratio with a high spatial and temporal precision. The daytime latent and sensible heat fluxes were estimated by combining the Bowen ratio values from the DTS-based system with independent measurements of net radiation and soil heat flux. The sensible heat flux, which is the relevant term to evaluate, derived from the DTS-based Bowen ratio (BR-DTS) was compared with that derived from co-located eddy covariance (R2 = 0.91), surface layer scintillometer (R2 = 0.81) and surface renewal (R2 = 0.86) systems. By using multiple measurement points instead of two, more confidence in the derived Bowen ratio values is obtained.
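
    The energy-balance arithmetic that the DTS-derived Bowen ratio feeds into can be shown in a few lines; the psychrometric constant and the sample gradients below are illustrative values, not field data from the paper.

    ```python
    # Bowen ratio partitioning of available energy: beta = gamma * dT / de,
    # LE = (Rn - G) / (1 + beta), H = beta * LE.
    gamma = 0.066          # psychrometric constant, kPa/K
    d_temp = 1.2           # dry-bulb temperature difference between two heights, K
    d_vap = 0.35           # vapour pressure difference between the same heights, kPa
    rn, g = 480.0, 60.0    # net radiation and soil heat flux, W/m^2

    bowen = gamma * d_temp / d_vap            # beta = H / LE
    latent = (rn - g) / (1.0 + bowen)         # latent heat flux LE, W/m^2
    sensible = bowen * latent                 # sensible heat flux H, W/m^2
    print(f"beta = {bowen:.2f}, LE = {latent:.0f} W/m^2, H = {sensible:.0f} W/m^2")
    ```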

  14. Dynamic Responses of Flexible Cylinders with Low Mass Ratio

    NASA Astrophysics Data System (ADS)

    Olaoye, Abiodun; Wang, Zhicheng; Triantafyllou, Michael

    2017-11-01

    Flexible cylinders with low mass ratios such as composite risers are attractive in the offshore industry because they require lower top tension and are less likely to buckle under self-weight compared to steel risers. However, their relatively low stiffness characteristics make them more vulnerable to vortex induced vibrations. Additionally, numerical investigation of the dynamic responses of such structures based on realistic conditions is limited by high Reynolds number, complex sheared flow profile, large aspect ratio and low mass ratio challenges. In the framework of Fourier spectral/hp element method, the current technique employs entropy-viscosity method (EVM) based large-eddy simulation approach for flow solver and fictitious added mass method for structure solver. The combination of both methods can handle fluid-structure interaction problems at high Reynolds number with low mass ratio. A validation of the numerical approach is provided by comparison with experiments.

  15. New decision criteria for selecting delta check methods based on the ratio of the delta difference to the width of the reference range can be generally applicable for each clinical chemistry test item.

    PubMed

    Park, Sang Hyuk; Kim, So-Young; Lee, Woochang; Chun, Sail; Min, Won-Ki

    2012-09-01

    Many laboratories use 4 delta check methods: delta difference, delta percent change, rate difference, and rate percent change. However, guidelines regarding decision criteria for selecting delta check methods have not yet been provided. We present new decision criteria for selecting delta check methods for each clinical chemistry test item. We collected 811,920 and 669,750 paired (present and previous) test results for 27 clinical chemistry test items from inpatients and outpatients, respectively. We devised new decision criteria for the selection of delta check methods based on the ratio of the delta difference to the width of the reference range (DD/RR). Delta check methods based on these criteria were compared with those based on the CV% of the absolute delta difference (ADD) as well as those reported in 2 previous studies. The delta check methods suggested by new decision criteria based on the DD/RR ratio corresponded well with those based on the CV% of the ADD except for only 2 items each in inpatients and outpatients. Delta check methods based on the DD/RR ratio also corresponded with those suggested in the 2 previous studies, except for 1 and 7 items in inpatients and outpatients, respectively. The DD/RR method appears to yield more feasible and intuitive selection criteria and can easily explain changes in the results by reflecting both the biological variation of the test item and the clinical characteristics of patients in each laboratory. We suggest this as a measure to determine delta check methods.
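
    A minimal sketch of the proposed decision quantity, the delta difference scaled by the reference-range width (DD/RR); the example values and any flagging threshold are hypothetical.

    ```python
    def dd_rr(current, previous, ref_low, ref_high):
        """Delta difference divided by the width of the reference range."""
        return (current - previous) / (ref_high - ref_low)

    # Example: serum potassium with a reference range of 3.5-5.1 mmol/L.
    print(dd_rr(current=5.6, previous=4.1, ref_low=3.5, ref_high=5.1))  # ~0.94
    ```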

  16. Simultaneous determination of mebeverine hydrochloride and chlordiazepoxide in their binary mixture using novel univariate spectrophotometric methods via different manipulation pathways.

    PubMed

    Lotfy, Hayam M; Fayez, Yasmin M; Michael, Adel M; Nessim, Christine K

    2016-02-15

    Smart, sensitive, simple and accurate spectrophotometric methods were developed and validated for the quantitative determination of a binary mixture of mebeverine hydrochloride (MVH) and chlordiazepoxide (CDZ) without prior separation steps via different manipulating pathways. These pathways were applied either on zero order absorption spectra namely, absorbance subtraction (AS) or based on the recovered zero order absorption spectra via a decoding technique namely, derivative transformation (DT) or via ratio spectra namely, ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), spectrum subtraction (SS), constant multiplication (CM) and constant value (CV) methods. The manipulation steps applied on the ratio spectra are namely, ratio difference (RD) and amplitude modulation (AM) methods or applying a derivative to these ratio spectra namely, derivative ratio (DD(1)) or second derivative (D(2)). Finally, the pathway based on the ratio spectra of derivative spectra is namely, derivative subtraction (DS). The specificity of the developed methods was investigated by analyzing the laboratory mixtures and was successfully applied for their combined dosage form. The proposed methods were validated according to ICH guidelines. These methods exhibited linearity in the range of 2-28μg/mL for mebeverine hydrochloride and 1-12μg/mL for chlordiazepoxide. The obtained results were statistically compared with those of the official methods using Student t-test, F-test, and one way ANOVA, showing no significant difference with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Simultaneous determination of mebeverine hydrochloride and chlordiazepoxide in their binary mixture using novel univariate spectrophotometric methods via different manipulation pathways

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Fayez, Yasmin M.; Michael, Adel M.; Nessim, Christine K.

    2016-02-01

    Smart, sensitive, simple and accurate spectrophotometric methods were developed and validated for the quantitative determination of a binary mixture of mebeverine hydrochloride (MVH) and chlordiazepoxide (CDZ) without prior separation steps via different manipulating pathways. These pathways were applied either on zero order absorption spectra namely, absorbance subtraction (AS) or based on the recovered zero order absorption spectra via a decoding technique namely, derivative transformation (DT) or via ratio spectra namely, ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), spectrum subtraction (SS), constant multiplication (CM) and constant value (CV) methods. The manipulation steps applied on the ratio spectra are namely, ratio difference (RD) and amplitude modulation (AM) methods or applying a derivative to these ratio spectra namely, derivative ratio (DD1) or second derivative (D2). Finally, the pathway based on the ratio spectra of derivative spectra is namely, derivative subtraction (DS). The specificity of the developed methods was investigated by analyzing the laboratory mixtures and was successfully applied for their combined dosage form. The proposed methods were validated according to ICH guidelines. These methods exhibited linearity in the range of 2-28 μg/mL for mebeverine hydrochloride and 1-12 μg/mL for chlordiazepoxide. The obtained results were statistically compared with those of the official methods using Student t-test, F-test, and one way ANOVA, showing no significant difference with respect to accuracy and precision.

  18. Auxiliary drying to prevent pattern collapse in high aspect ratio nanostructures

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Zhou, Jie; Xiong, Ying; Zhang, Xiaobo; Tian, Yangchao

    2011-07-01

    Many defects are generated in densely packed high aspect ratio structures during nanofabrication. Pattern collapse is one of the serious problems that may arise, mainly due to the capillary force during drying after the rinsing process. In this paper, a method of auxiliary drying is presented to prevent pattern collapse in high aspect ratio nanostructures by adding an auxiliary substrate as a reinforcing rib to restrict deformation and to balance the capillary force. The principle of the method is presented based on the analysis of pattern collapse. A finite element method is then applied to analyze the deformation of the resist beams caused by the surface tension using the ANSYS software, and the effect of the nanostructure's length-to-width ratio is simulated and analyzed. Finally, the possible range of applications based on the proposed method is discussed. Our results show that the aspect ratio may be increased 2.6 times without pattern collapse; furthermore, this method can be widely used in the removal of solvents in micro- and nanofabrication.

  19. Auxiliary drying to prevent pattern collapse in high aspect ratio nanostructures.

    PubMed

    Liu, Gang; Zhou, Jie; Xiong, Ying; Zhang, Xiaobo; Tian, Yangchao

    2011-07-29

    Many defects are generated in densely packed high aspect ratio structures during nanofabrication. Pattern collapse is one of the serious problems that may arise, mainly due to the capillary force during drying after the rinsing process. In this paper, a method of auxiliary drying is presented to prevent pattern collapse in high aspect ratio nanostructures by adding an auxiliary substrate as a reinforcing rib to restrict deformation and to balance the capillary force. The principle of the method is presented based on the analysis of pattern collapse. A finite element method is then applied to analyze the deformation of the resist beams caused by the surface tension using the ANSYS software, and the effect of the nanostructure's length-to-width ratio is simulated and analyzed. Finally, the possible range of applications based on the proposed method is discussed. Our results show that the aspect ratio may be increased 2.6 times without pattern collapse; furthermore, this method can be widely used in the removal of solvents in micro- and nanofabrication.

  20. DNA Base-Calling from a Nanopore Using a Viterbi Algorithm

    PubMed Central

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-01-01

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (∼98%), even with a poor signal/noise ratio. PMID:22677395
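
    A generic log-space Viterbi decoder of the kind such base-calling rests on is sketched below with a three-state toy HMM; in the paper the hidden states are 3-bp words with nanopore-current emission models, which are not reproduced here.

    ```python
    import numpy as np

    def viterbi(log_start, log_trans, log_emit, observations):
        """Return the most probable state path for a sequence of observation indices."""
        n_states = log_start.shape[0]
        T = len(observations)
        score = np.full((T, n_states), -np.inf)
        back = np.zeros((T, n_states), dtype=int)
        score[0] = log_start + log_emit[:, observations[0]]
        for t in range(1, T):
            cand = score[t - 1][:, None] + log_trans          # cand[i, j]: come from i, go to j
            back[t] = np.argmax(cand, axis=0)                 # best predecessor for each state j
            score[t] = cand[back[t], np.arange(n_states)] + log_emit[:, observations[t]]
        path = [int(np.argmax(score[-1]))]                    # best final state
        for t in range(T - 1, 0, -1):                         # trace the path backwards
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    start = np.log([0.4, 0.3, 0.3])
    trans = np.log([[0.7, 0.2, 0.1], [0.1, 0.7, 0.2], [0.2, 0.1, 0.7]])
    emit  = np.log([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])   # rows: states, cols: symbols
    obs = [0, 0, 1, 1, 2, 2, 1]
    print(viterbi(start, trans, emit, obs))
    ```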

  1. Possible incorporation of petroleum-based carbons in biochemicals produced by bioprocess--biomass carbon ratio measured by accelerator mass spectrometry.

    PubMed

    Kunioka, Masao

    2010-06-01

    The biomass carbon ratios of biochemicals related to biomass have been reviewed. Commercial products from biomass were explained. The biomass carbon ratios of biochemical compounds were measured by accelerator mass spectrometry (AMS) based on the (14)C concentration of carbons in the compounds. This measuring method uses the mechanism that biomass carbons include a very low level of (14)C and petroleum carbons do not include (14)C similar to the carbon dating measuring method. It was confirmed that there were some biochemicals synthesized from petroleum-based carbons. This AMS method has a high accuracy with a small standard deviation and can be applied to plastic products.

  2. Investigating the impact of the properties of pilot points on calibration of groundwater models: case study of a karst catchment in Rote Island, Indonesia

    NASA Astrophysics Data System (ADS)

    Klaas, Dua K. S. Y.; Imteaz, Monzur Alam

    2017-09-01

    A robust configuration of pilot points in the parameterisation step of a model is crucial to accurately obtain a satisfactory model performance. However, the recommendations provided by the majority of recent researchers on pilot-point use are considered somewhat impractical. In this study, a practical approach is proposed for using pilot-point properties (i.e. number, distance and distribution method) in the calibration step of a groundwater model. For the first time, the relative distance-area ratio (d/A) and head-zonation-based (HZB) method are introduced, to assign pilot points into the model domain by incorporating a user-friendly zone ratio. This study provides some insights into the trade-off between maximising and restricting the number of pilot points, and offers a relative basis for selecting the pilot-point properties and distribution method in the development of a physically based groundwater model. The grid-based (GB) method is found to perform comparably better than the HZB method in terms of model performance and computational time. When using the GB method, this study recommends a distance-area ratio of 0.05, a distance-x-grid length ratio (d/Xgrid) of 0.10, and a distance-y-grid length ratio (d/Ygrid) of 0.20.

  3. Measuring signal-to-noise ratio automatically

    NASA Technical Reports Server (NTRS)

    Bergman, L. A.; Johnston, A. R.

    1980-01-01

    Automated method of measuring signal-to-noise ratio in digital communication channels is more precise and 100 times faster than previous methods used. Method based on bit-error-rate (BER) measurement can be used with cable, microwave radio, or optical links.
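
    One way such a BER-based measurement can work, assuming coherent BPSK over AWGN (an assumption, not a detail from the NASA report), is to invert the theoretical BER curve to recover Eb/N0 from a measured error rate.

    ```python
    import numpy as np
    from scipy.special import erfc, erfcinv

    def ber_bpsk(ebn0_db):
        """Theoretical BER of coherent BPSK over AWGN at a given Eb/N0 in dB."""
        return 0.5 * erfc(np.sqrt(10 ** (ebn0_db / 10.0)))

    def ebn0_from_ber(ber):
        """Invert the BPSK BER curve: Eb/N0 (dB) implied by a measured bit-error rate."""
        return 10 * np.log10(erfcinv(2.0 * ber) ** 2)

    measured_ber = 1e-5                                           # illustrative measurement
    est = ebn0_from_ber(measured_ber)
    print("estimated Eb/N0 =", round(est, 2), "dB")
    print("check: BER at that Eb/N0 =", ber_bpsk(est))
    ```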

  4. Development of a Pancake-Making Method for a Batter-Based Product

    USDA-ARS?s Scientific Manuscript database

    Cake and pancake are major batter-based products made with soft wheat flour. A standardized baking method for high-ratio cake has been widely used for evaluating the cake-baking performance of soft wheat flour. Chlorinated flour is used to make high-ratio cake, and the cake formula contains relative...

  5. LASER BEAMS: On alternative methods for measuring the radius and propagation ratio of axially symmetric laser beams

    NASA Astrophysics Data System (ADS)

    Dementjev, Aleksandr S.; Jovaisa, A.; Silko, Galina; Ciegis, Raimondas

    2005-11-01

    Based on the developed efficient numerical methods for calculating the propagation of light beams, the alternative methods for measuring the beam radius and propagation ratio proposed in the international standard ISO 11146 are analysed. The specific calculations of the alternative beam propagation ratios M_i^2 performed for a number of test beams with a complicated spatial structure showed that the correlation coefficients c_i used in the international standard do not establish the universal one-to-one relation between the alternative propagation ratios M_i^2 and invariant propagation ratios M_σ^2 found by the method of moments.

  6. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon

    2015-09-15

    Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method compared to the preference for the convolution-based method was statistically meaningful (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.
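
    The composition-ratio step described above can be sketched as a simple linear interpolation between pure adipose and pure glandular attenuation values; the coefficients used below are placeholders, not values from the paper.

    ```python
    import numpy as np

    MU_ADIPOSE = 0.046    # 1/mm at some reference energy (illustrative, not from the paper)
    MU_GLANDULAR = 0.060  # 1/mm at the same energy (illustrative)

    def glandular_fraction(mu_voxel):
        """Glandular-tissue fraction implied by a voxel's reconstructed attenuation coefficient."""
        frac = (mu_voxel - MU_ADIPOSE) / (MU_GLANDULAR - MU_ADIPOSE)
        return np.clip(frac, 0.0, 1.0)

    recon = np.array([0.047, 0.053, 0.059])        # reconstructed voxel values
    print(glandular_fraction(recon))               # per-voxel composition ratio
    ```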

  7. Vision Based Obstacle Detection in Uav Imaging

    NASA Astrophysics Data System (ADS)

    Badrloo, S.; Varshosaz, M.

    2017-08-01

    Detecting and preventing collisions with obstacles is crucial in UAV navigation and control. Most of the common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which try to enlarge the obstacle by approaching it. Recent research in this field has concentrated on matching SIFT points, along with the SIFT size-ratio factor and the area-ratio of convex hulls in two consecutive frames, to detect obstacles. This method is not able to distinguish between near and far obstacles or obstacles in complex environments, and is sensitive to wrongly matched points. In order to solve the above-mentioned problems, this research calculates the dist-ratio of matched points. Then, each point is investigated to distinguish between far and close obstacles. The results demonstrated the high efficiency of the proposed method in complex environments.

  8. Polarization ratio property and material classification method in passive millimeter wave polarimetric imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Yayun; Qi, Bo; Liu, Siyuan; Hu, Fei; Gui, Liangqi; Peng, Xiaohui

    2016-10-01

    Polarimetric measurements can provide additional information as compared to unpolarized ones. In this paper, linear polarization ratio (LPR) is created to be a feature discriminator. The LPR properties of several materials are investigated using Fresnel theory. The theoretical results show that LPR is sensitive to the material type (metal or dielectric). Then a linear polarization ratio-based (LPR-based) method is presented to distinguish between metal and dielectric materials. In order to apply this method to practical applications, the optimal range of incident angle have been discussed. The typical outdoor experiments including various objects such as aluminum plate, grass, concrete, soil and wood, have been conducted to validate the presented classification method.
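
    A hedged sketch of why a polarization ratio separates metals from dielectrics, using the standard Fresnel power reflectivities; the permittivities and the particular ratio printed (R_h/R_v) are illustrative assumptions rather than the paper's exact LPR definition.

    ```python
    import numpy as np

    def fresnel_reflectivities(eps, theta_deg):
        """Power reflectivities (R_h, R_v) of a flat surface with relative permittivity eps."""
        th = np.deg2rad(theta_deg)
        root = np.sqrt(eps - np.sin(th) ** 2 + 0j)
        r_h = (np.cos(th) - root) / (np.cos(th) + root)            # horizontal (TE) polarization
        r_v = (eps * np.cos(th) - root) / (eps * np.cos(th) + root)  # vertical (TM) polarization
        return abs(r_h) ** 2, abs(r_v) ** 2

    theta = 50.0   # incidence angle, degrees (illustrative)
    for name, eps in [("dry-soil-like dielectric", 4.0 + 0.5j), ("metal-like conductor", 1e4 + 1e4j)]:
        rh, rv = fresnel_reflectivities(eps, theta)
        print(f"{name}: R_h/R_v = {rh / rv:.2f}")   # ~1 for the conductor, >> 1 for the dielectric
    ```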

  9. Comparative study between recent methods manipulating ratio spectra and classical methods based on two-wavelength selection for the determination of binary mixture of antazoline hydrochloride and tetryzoline hydrochloride.

    PubMed

    Abdel-Halim, Lamia M; Abd-El Rahman, Mohamed K; Ramadan, Nesrin K; El Sanabary, Hoda F A; Salem, Maissa Y

    2016-04-15

    A comparative study was developed between two classical spectrophotometric methods (dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (ratio difference method and first derivative of ratio spectra method) for simultaneous determination of Antazoline hydrochloride (AN) and Tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation and in the presence of benzalkonium chloride as a preservative without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug in a way so that the difference in absorbance at those two wavelengths is zero for the other drug. While Vierordt's method, is based upon measuring the absorbance and the absorptivity values of the two drugs at their λ(max) (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution in the corresponding Vierordt's equation. Recent methods manipulating ratio spectra depend on either measuring the difference in amplitudes of ratio spectra between 255.5 and 269.5 nm for AN and 220.0 and 273.0 nm for TZ in case of ratio difference method or computing first derivative of the ratio spectra for each drug then measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ in case of first derivative of ratio spectrophotometry. The specificity of the developed methods was investigated by analyzing different laboratory prepared mixtures of the two drugs. All methods were applied successfully for the determination of the selected drugs in their combined dosage form proving that the classical spectrophotometric methods can still be used successfully in analysis of binary mixture using minimal data manipulation rather than recent methods which require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability are found to be within the acceptable limits. Statistical studies showed that the methods can be competitively applied in quality control laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. DNA base-calling from a nanopore using a Viterbi algorithm.

    PubMed

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-05-16

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (~98%), even with a poor signal/noise ratio. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  11. Determining the slag fraction, water/binder ratio and degree of hydration in hardened cement pastes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yio, M.H.N., E-mail: marcus.yio11@imperial.ac.uk; Phelan, J.C.; Wong, H.S.

    2014-02-15

    A method for determining the original mix composition of hardened slag-blended cement-based materials based on analysis of backscattered electron images combined with loss on ignition measurements is presented. The method does not require comparison to reference standards or prior knowledge of the composition of the binders used. Therefore, it is well-suited for application to real structures. The method is also able to calculate the degrees of reaction of slag and cement. Results obtained from an experimental study involving sixty samples with a wide range of water/binder (w/b) ratios (0.30 to 0.50), slag/binder ratios (0 to 0.6) and curing ages (3 days to 1 year) show that the method is very promising. The mean absolute errors for the estimated slag, water and cement contents (kg/m³), w/b and s/b ratios were 9.1%, 1.5%, 2.5%, 4.7% and 8.7%, respectively. 91% of the estimated w/b ratios were within 0.036 of the actual values. -- Highlights: •A new method for estimating w/b ratio and slag content in cement pastes is proposed. •The method is also able to calculate the degrees of reaction of slag and cement. •Reference standards or prior knowledge of the binder composition are not required. •The method was tested on samples with varying w/b ratios and slag content.

  12. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models; and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of tests errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.

  13. The Sequential Probability Ratio Test and Binary Item Response Models

    ERIC Educational Resources Information Center

    Nydick, Steven W.

    2014-01-01

    The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
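
    A minimal SPRT sketch under a Rasch model is given below; the item difficulties, responses, and the two ability values bracketing the cut score are illustrative, not taken from the article.

    ```python
    import numpy as np

    def rasch_p(theta, b):
        """Probability of a correct response under the Rasch model."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def sprt(responses, difficulties, theta_low, theta_high, alpha=0.05, beta=0.05):
        """Wald SPRT: accumulate the log-likelihood ratio and compare it to the boundaries."""
        upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
        llr = 0.0
        for x, b in zip(responses, difficulties):
            p1, p0 = rasch_p(theta_high, b), rasch_p(theta_low, b)
            llr += np.log(p1 / p0) if x == 1 else np.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "classify above cut score"
            if llr <= lower:
                return "classify below cut score"
        return "undecided - administer more items"

    difficulties = np.array([-0.5, 0.0, 0.3, -0.2, 0.8, 0.1, -0.4, 0.6])
    responses = np.array([1, 1, 0, 1, 1, 1, 1, 0])
    print(sprt(responses, difficulties, theta_low=-0.3, theta_high=0.3))
    ```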

  14. Multiratio fusion change detection with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.

    2017-04-01

    A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.

  15. Spectral ratio method for measuring emissivity

    USGS Publications Warehouse

    Watson, K.

    1992-01-01

    The spectral ratio method is based on the concept that although the spectral radiances are very sensitive to small changes in temperature, the ratios are not. Only an approximate estimate of temperature is required; thus, for example, we can determine the emissivity ratio to an accuracy of 1% with a temperature estimate that is only accurate to 12.5 K. Selecting the maximum value of the channel brightness temperatures is an unbiased estimate. Laboratory and field spectral data are easily converted into spectral ratio plots. The ratio method is limited by system signal:noise and spectral band-width. The images can appear quite noisy because ratios enhance high frequencies and may require spatial filtering. Atmospheric effects tend to rescale the ratios and require using an atmospheric model or a calibration site. © 1992.
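
    The robustness of emissivity ratios to temperature error can be checked directly with the Planck function; the wavelengths, emissivities, and the deliberately biased temperature below are made-up test values.

    ```python
    import numpy as np

    H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann constants (SI)

    def planck(wl, T):
        """Spectral radiance (W sr^-1 m^-3) at wavelength wl (m) and temperature T (K)."""
        return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * K * T)) - 1.0)

    wl = np.array([10.5e-6, 11.5e-6])             # two thermal channels
    true_T, true_eps = 300.0, np.array([0.95, 0.80])
    radiance = true_eps * planck(wl, true_T)      # what the sensor would measure (no atmosphere)

    T_est = 295.0                                 # deliberately wrong temperature estimate (-5 K)
    eps_est = radiance / planck(wl, T_est)        # channel emissivities inferred with the wrong T
    print("true emissivity ratio:", true_eps[0] / true_eps[1])
    print("ratio from biased T:  ", eps_est[0] / eps_est[1])   # changes only slightly
    ```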

  16. Dosimetry and prescription in liver radioembolization with 90Y microspheres: 3D calculation of tumor-to-liver ratio from global 99mTc-MAA SPECT information

    NASA Astrophysics Data System (ADS)

    Mañeru, Fernando; Abós, Dolores; Bragado, Laura; Fuentemilla, Naiara; Caudepón, Fernando; Pellejero, Santiago; Miquelez, Santiago; Rubio, Anastasio; Goñi, Elena; Hernández-Vitoria, Araceli

    2017-12-01

    Dosimetry in liver radioembolization with 90Y microspheres is a fundamental tool, both for the optimization of each treatment and for improving knowledge of the treatment effects in the tissues. Different options are available for estimating the administered activity and the tumor/organ dose, among them the so-called partition method. The key factor in the partition method is the tumor/normal tissue activity uptake ratio (T/N), which is obtained by a single-photon emission computed tomography (SPECT) scan during a pre-treatment simulation. The less clear the distinction between healthy and tumor parenchyma within the liver, the more difficult it becomes to estimate the T/N ratio; therefore the use of the method is limited. This study presents a methodology to calculate the T/N ratio using global information from the SPECT. The T/N ratio is estimated by establishing uptake thresholds consistent with previously performed volumetry. This dose calculation method was validated against 3D voxel dosimetry, and was also compared with the standard partition method based on freehand regions of interest (ROI) outlining on SPECT slices. Both comparisons were done on a sample of 20 actual cases of hepatocellular carcinoma treated with resin microspheres. The proposed method and the voxel dosimetry method yield similar results, while the ROI-based method tends to over-estimate the dose to normal tissues. In addition, the variability associated with the ROI-based method is more extreme than the other methods. The proposed method is simpler than either the ROI or voxel dosimetry approaches and avoids the subjectivity associated with the manual selection of regions.

  17. Automation of Classical QEEG Trending Methods for Early Detection of Delayed Cerebral Ischemia: More Work to Do.

    PubMed

    Wickering, Ellis; Gaspard, Nicolas; Zafar, Sahar; Moura, Valdery J; Biswal, Siddharth; Bechek, Sophia; O'Connor, Kathryn; Rosenthal, Eric S; Westover, M Brandon

    2016-06-01

    The purpose of this study is to evaluate automated implementations of continuous EEG monitoring-based detection of delayed cerebral ischemia based on methods used in classical retrospective studies. We studied 95 patients with either Fisher 3 or Hunt Hess 4 to 5 aneurysmal subarachnoid hemorrhage who were admitted to the Neurosciences ICU and underwent continuous EEG monitoring. We implemented several variations of two classical algorithms for automated detection of delayed cerebral ischemia based on decreases in alpha-delta ratio and relative alpha variability. Of 95 patients, 43 (45%) developed delayed cerebral ischemia. Our automated implementation of the classical alpha-delta ratio-based trending method resulted in a sensitivity and specificity (Se,Sp) of (80,27)%, compared with the values of (100,76)% reported in the classic study using similar methods in a nonautomated fashion. Our automated implementation of the classical relative alpha variability-based trending method yielded (Se,Sp) values of (65,43)%, compared with (100,46)% reported in the classic study using nonautomated analysis. Our findings suggest that improved methods to detect decreases in alpha-delta ratio and relative alpha variability are needed before an automated EEG-based early delayed cerebral ischemia detection system is ready for clinical use.
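
    A sketch of the alpha-delta ratio computation that such trending methods build on, using a Welch periodogram; the synthetic signal, band edges, and epoch length are placeholders, not the study's processing pipeline.

    ```python
    import numpy as np
    from scipy.signal import welch

    def alpha_delta_ratio(x, fs):
        """Alpha (8-13 Hz) over delta (1-4 Hz) band power for one EEG epoch."""
        f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
        delta = pxx[(f >= 1) & (f < 4)].sum()
        alpha = pxx[(f >= 8) & (f < 13)].sum()
        return alpha / delta

    fs = 200.0
    t = np.arange(0, 10, 1 / fs)
    # Synthetic epoch: a 10 Hz alpha component, a 2 Hz delta component, and noise.
    eeg = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.randn(t.size)
    print("ADR =", round(alpha_delta_ratio(eeg, fs), 2))
    ```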

  18. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled receive signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. In the Pilot-Guided estimation method, the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference of the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computation complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
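
    The pilot-guided estimate described above reduces to two moments over the known ASM symbols; the BPSK mapping and channel values in the sketch are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    amp_true, sigma_true = 0.9, 0.7                 # illustrative channel amplitude and noise std
    asm_bits = rng.integers(0, 2, size=256)
    s = 1.0 - 2.0 * asm_bits                        # known BPSK pilot symbols (+1/-1), an assumption
    r = amp_true * s + sigma_true * rng.standard_normal(s.size)

    amp_hat = np.mean(r * s)                        # ML amplitude: mean inner product with the ASM
    var_hat = np.mean(r ** 2) - amp_hat ** 2        # noise variance: mean square minus amplitude^2
    combining_ratio = amp_hat / var_hat
    print(f"amplitude ~ {amp_hat:.3f}, variance ~ {var_hat:.3f}, ratio ~ {combining_ratio:.3f}")
    # Scaled soft decisions for the decoder would then be, e.g., 2 * combining_ratio * r.
    ```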

  19. A Feature Selection Method Based on Fisher's Discriminant Ratio for Text Sentiment Classification

    NASA Astrophysics Data System (ADS)

    Wang, Suge; Li, Deyu; Wei, Yingjie; Li, Hongxia

    With the rapid growth of e-commerce, product reviews on the Web have become an important information source for customers' decision making when they intend to buy some product. As the reviews are often too many for customers to go through, how to automatically classify them into different sentiment orientation categories (i.e. positive/negative) has become a research problem. In this paper, based on Fisher's discriminant ratio, an effective feature selection method is proposed for product review text sentiment classification. In order to verify the validity of the proposed method, we compared it with methods based on information gain and mutual information, respectively, while a support vector machine is adopted as the classifier. In this paper, 6 subexperiments are conducted by combining different feature selection methods with 2 kinds of candidate feature sets. On 1006 car review documents, the experimental results indicate that Fisher's discriminant ratio based on word frequency estimation has the best performance, with an F value of 83.3%, when the candidate features are the words which appear in both positive and negative texts.
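
    As a concrete illustration of the scoring step, the sketch below computes a per-feature Fisher's discriminant ratio for a two-class (positive/negative) term matrix and keeps the top-scoring features. It is a generic formulation, not the paper's exact estimator; the function names and the variance-floor constant are assumptions.

      import numpy as np

      def fisher_discriminant_ratio(X_pos, X_neg):
          """Per-feature Fisher's discriminant ratio for two-class data.

          X_pos, X_neg: (n_docs, n_features) term-frequency matrices for the
          positive and negative review sets; larger scores indicate features
          that separate the two sentiment classes better.
          """
          X_pos = np.asarray(X_pos, float)
          X_neg = np.asarray(X_neg, float)
          mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
          var_p, var_n = X_pos.var(axis=0), X_neg.var(axis=0)
          eps = 1e-12  # guard against zero within-class variance
          return (mu_p - mu_n) ** 2 / (var_p + var_n + eps)

      def select_top_k(X_pos, X_neg, k=500):
          """Indices of the k highest-scoring candidate features."""
          scores = fisher_discriminant_ratio(X_pos, X_neg)
          return np.argsort(scores)[::-1][:k]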

  20. Manipulating Ratio Spectra for the Spectrophotometric Analysis of Diclofenac Sodium and Pantoprazole Sodium in Laboratory Mixtures and Tablet Formulation

    PubMed Central

    Bhatt, Nejal M.; Chavada, Vijay D.; Sanyal, Mallika; Shrivastav, Pranav S.

    2014-01-01

    Objective. Three sensitive, selective, and precise spectrophotometric methods based on manipulation of ratio spectra have been developed and validated for the determination of diclofenac sodium and pantoprazole sodium. Materials and Methods. The first method is based on ratio spectra peak-to-peak measurement using the amplitudes at 251 and 318 nm; the second method involves the first derivative of the ratio spectra (Δλ = 4 nm) using the peak amplitudes at 326.0 nm for diclofenac sodium and 337.0 nm for pantoprazole sodium. The third is the method of mean centering of ratio spectra using the values at 318.0 nm for both analytes. Results. All three methods were linear over the concentration range of 2.0–24.0 μg/mL for diclofenac sodium and 2.0–20.0 μg/mL for pantoprazole sodium. The methods were validated according to the ICH guidelines, and accuracy, precision, repeatability, and robustness were found to be within the acceptable limits. The results of single-factor ANOVA analysis indicated that there is no significant difference among the developed methods. Conclusions. The developed methods provided simple resolution of this binary combination from laboratory mixtures and pharmaceutical preparations and can be conveniently adopted for routine quality control analysis. PMID:24701171

  1. New non-invasive method for early detection of metabolic syndrome in the working population.

    PubMed

    Romero-Saldaña, Manuel; Fuentes-Jiménez, Francisco J; Vaquero-Abellán, Manuel; Álvarez-Fernández, Carlos; Molina-Recio, Guillermo; López-Miranda, José

    2016-12-01

    We proposed a new method for the early detection of metabolic syndrome in the working population, free of biomarkers (non-invasive) and based on anthropometric variables, and validated it in a new working population. Prevalence studies and diagnostic test accuracy studies to determine the anthropometric variables associated with metabolic syndrome, as well as the screening validity of the new method proposed, were carried out between 2013 and 2015 on 636 and 550 workers, respectively. The anthropometric variables analysed were: blood pressure, body mass index, waist circumference, waist-height ratio, body fat percentage and waist-hip ratio. We performed a multivariate logistic regression analysis and obtained receiver operating curves to determine the predictive ability of the variables. The new method for the early detection of metabolic syndrome we present is based on a decision tree using chi-squared automatic interaction detection methodology. The overall prevalence of metabolic syndrome was 14.9%. The area under the curve for waist-height ratio and waist circumference was 0.91 and 0.90, respectively. The anthropometric variables associated with metabolic syndrome in the adjusted model were waist-height ratio, body mass index, blood pressure and body fat percentage. The decision tree was configured from the waist-height ratio (⩾0.55) and hypertension (blood pressure ⩾128/85 mmHg), with a sensitivity of 91.6% and a specificity of 95.7%. The early detection of metabolic syndrome in a healthy population is possible through non-invasive methods, based on anthropometric indicators such as waist-height ratio and blood pressure. This method has a high degree of predictive validity and its use can be recommended in any healthcare context. © The European Society of Cardiology 2016.
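
    A screening rule of this form is easy to state in code. The sketch below is a simplified reading of the reported tree as a single AND-rule; the thresholds come from the abstract, but the rule structure, the OR interpretation of the 128/85 mmHg cut-off, and the function name are assumptions made for illustration.

      def metabolic_syndrome_screen(waist_cm, height_cm, systolic_mmHg, diastolic_mmHg):
          """Non-invasive screening sketch based on the reported thresholds.

          Flags a subject as screen-positive when the waist-to-height ratio is
          >= 0.55 and blood pressure meets or exceeds 128/85 mmHg (interpreted
          here as systolic >= 128 or diastolic >= 85).
          """
          waist_height_ratio = waist_cm / height_cm
          hypertensive = systolic_mmHg >= 128 or diastolic_mmHg >= 85
          return waist_height_ratio >= 0.55 and hypertensive

      # Hypothetical example: 102 cm waist, 175 cm height, BP 130/82 mmHg.
      print(metabolic_syndrome_screen(102, 175, 130, 82))  # True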

  2. Systems Biology and Ratio-Based, Real-Time Disease Surveillance.

    PubMed

    Fair, J M; Rivas, A L

    2015-08-01

    Most infectious disease surveillance methods are not well suited to early detection. To address this limitation, here we evaluated a ratio- and Systems Biology-based method that does not require prior knowledge of the identity of an infective agent. Using a reference group of birds experimentally infected with West Nile virus (WNV) and a problem group of unknown health status (except that they were WNV-negative and displayed inflammation), both groups were followed over 22 days and tested with a system that analyses blood leucocyte ratios. To test the ability of the method to discriminate small data sets, both the reference group (n = 5) and the problem group (n = 4) were small. The questions of interest were as follows: (i) whether individuals presenting inflammation (disease-positive or D+) can be distinguished from non-inflamed (disease-negative or D-) birds, (ii) whether two or more D+ stages can be detected and (iii) whether sample size influences detection. Within the problem group, the ratio-based method distinguished the following: (i) three (one D- and two D+) data classes; (ii) two (early and late) inflammatory stages; (iii) fast versus regular or slow responders; and (iv) individuals that recovered from those that remained inflamed. Because ratios differed in larger magnitudes (up to 48 times larger) than percentages, it is suggested that data patterns are likely to be recognized when disease surveillance methods are designed to measure inflammation and utilize ratios. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.

  3. Concurrent topological design of composite structures and materials containing multiple phases of distinct Poisson's ratios

    NASA Astrophysics Data System (ADS)

    Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min

    2018-04-01

    Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme of the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint of the total mass.

  4. 77 FR 46447 - Proposed Fair Market Rents for the Housing Choice Voucher Program and Moderate Rehabilitation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-03

    ... ratios is very similar to the method used when the bedroom ratios were based on 2000 decennial census...'' section. There are two methods for submitting public comments. 1. Submission of Comments by Mail. Comments... submitted through one of the two methods specified above. Again, all submissions must refer to the docket...

  5. An Empirical Comparison of DDF Detection Methods for Understanding the Causes of DIF in Multiple-Choice Items

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Talley, Anna E.

    2015-01-01

    This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…

  6. Modular continuous wavelet processing of biosignals: extracting heart rate and oxygen saturation from a video signal

    PubMed Central

    2016-01-01

    A novel method of extracting heart rate and oxygen saturation from a video-based biosignal is described. The method comprises a novel modular continuous wavelet transform approach which includes: performing the transform, undertaking running wavelet archetyping to enhance the pulse information, extraction of the pulse ridge time–frequency information [and thus a heart rate (HRvid) signal], creation of a wavelet ratio surface, projection of the pulse ridge onto the ratio surface to determine the ratio of ratios from which a saturation trending signal is derived, and calibrating this signal to provide an absolute saturation signal (SvidO2). The method is illustrated through its application to a video photoplethysmogram acquired during a porcine model of acute desaturation. The modular continuous wavelet transform-based approach is advocated by the author as a powerful methodology to deal with noisy, non-stationary biosignals in general. PMID:27382479

  7. [Detection of Weak Speech Signals from Strong Noise Background Based on Adaptive Stochastic Resonance].

    PubMed

    Lu, Huanhuan; Wang, Fuzhong; Zhang, Huichun

    2016-04-01

    Traditional speech detection methods regard the noise as a jamming signal to be filtered out, but under a strong noise background these methods lose part of the original speech signal while eliminating the noise. Stochastic resonance can use noise energy to amplify a weak signal and suppress the noise. Based on stochastic resonance theory, a new method for extracting weak speech signals using adaptive stochastic resonance is proposed. This method, combined with twice sampling, realizes the detection of weak speech signals in strong noise. The system parameters a and b are adjusted adaptively by evaluating the signal-to-noise ratio of the output signal, and the weak speech signal is then optimally detected. Experimental simulation analysis showed that, under a strong noise background, the output signal-to-noise ratio increased from the initial value of -7 dB to about 0.86 dB, a signal-to-noise ratio gain of 7.86 dB. This method markedly raises the signal-to-noise ratio of the output speech signals, which offers a new approach to detecting weak speech signals in strong noise environments.

  8. Analysis of multicrystal pump–probe data sets. I. Expressions for the RATIO model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, Bertrand; Coppens, Philip

    2014-08-30

    The RATIO method in time-resolved crystallography [Coppens et al. (2009). J. Synchrotron Rad. 16, 226–230] was developed for use with Laue pump–probe diffraction data to avoid complex corrections due to wavelength dependence of the intensities. The application of the RATIO method in processing/analysis prior to structure refinement requires an appropriate ratio model for modeling the light response. The assessment of the accuracy of pump–probe time-resolved structure refinements based on the observed ratios was discussed in a previous paper. In the current paper, a detailed ratio model is discussed, taking into account both geometric and thermal light-induced changes.

  9. Combining evidence using likelihood ratios in writer verification

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory

    2013-01-01

    Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of input evidences is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparison, shows greater flexibility of the proposed method.
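
    For reference, the likelihood ratio described above can be written as follows (the notation is chosen here for illustration, with E the observed evidence, S the known source, H_id the identification hypothesis and H_ex the exclusion hypothesis):

      LR = P(E, S | H_id) / P(E, S | H_ex)

    Values well above 1 favour identification and values well below 1 favour exclusion; the weighting and discounting schemes discussed in the record modify how multiple input evidences contribute to this single ratio.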

  10. Flow Cytometric Human Leukocyte Antigen-B27 Typing with Stored Samples for Batch Testing

    PubMed Central

    Seo, Bo Young

    2013-01-01

    Background Flow cytometry (FC) HLA-B27 typing is still used extensively for the diagnosis of spondyloarthropathies. If patient blood samples are stored for a prolonged duration, this testing can be performed in a batch manner, and in-house cellular controls could easily be procured. In this study, we investigated various methods of storing patient blood samples. Methods We compared four storage methods: three methods of analyzing lymphocytes (whole blood stored at room temperature, frozen mononuclear cells, and frozen white blood cells [WBCs] after lysing red blood cells [RBCs]), and one method using frozen platelets (FPLT). We used three ratios associated with mean fluorescence intensities (MFI) for HLA-B27 assignment: the B27 MFI ratio (sample/control) for HLA-B27 fluorescein-5-isothiocyanate (FITC); the B7 MFI ratio for HLA-B7 phycoerythrin (PE); and the ratio of these two ratios, the B7/B27 ratio. Results Comparing the B27 MFI ratios of each storage method for the HLA-B27+ samples and the B7/B27 ratios for the HLA-B7+ samples revealed that FPLT was the best of the four methods. FPLT had a sensitivity of 100% and a specificity of 99.3% for HLA-B27 assignment in DNA-typed samples (N=164) when the two criteria, namely, B27 MFI ratio >4.0 and B7/B27 ratio <1.5, were used. Conclusions The FPLT method was found to offer a simple, economical, and accurate method of FC HLA-B27 typing using stored patient samples. If stored samples are used, this method has the potential to replace the standard FC typing method when used in combination with a complementary DNA-based method. PMID:23667843

  11. The ratio method: A new tool to study one-neutron halo nuclei

    DOE PAGES

    Capel, Pierre; Johnson, R. C.; Nunes, F. M.

    2013-10-02

    Recently a new observable to study halo nuclei was introduced, based on the ratio between breakup and elastic angular cross sections. This new observable is shown by the analysis of specific reactions to be independent of the reaction mechanism and to provide nuclear-structure information of the projectile. Here we explore the details of this ratio method, including the sensitivity to binding energy and angular momentum of the projectile. We also study the reliability of the method with breakup energy. Lastly, we provide guidelines and specific examples for experimentalists who wish to apply this method.

  12. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
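
    Method 1, the classical Monte Carlo evaluation, is the easiest of the three to sketch. The Python fragment below is a generic illustration (the function names, the (count + 1)/(n_sim + 1) correction and the toy statistic are assumptions, not part of the Stata package): it estimates an exact-test p-value by simulating the test statistic under the null.

      import numpy as np

      def monte_carlo_pvalue(observed_stat, simulate_stat, n_sim=10000, seed=0):
          """Classical Monte Carlo p-value for an exact test.

          simulate_stat(rng) must return one test-statistic value drawn under
          the null hypothesis; larger statistics are taken as more extreme.
          """
          rng = np.random.default_rng(seed)
          null_stats = np.array([simulate_stat(rng) for _ in range(n_sim)])
          return (np.sum(null_stats >= observed_stat) + 1) / (n_sim + 1)

      # Toy example: absolute mean difference of two samples of size 20
      # under a common N(0, 1) null.
      def sim(rng):
          return abs(rng.normal(size=20).mean() - rng.normal(size=20).mean())

      print(monte_carlo_pvalue(0.8, sim))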

  13. Improved neutron-gamma discrimination for a 3He neutron detector using subspace learning methods

    DOE PAGES

    Wang, C. L.; Funk, L. L.; Riedel, R. A.; ...

    2017-02-10

    ³He gas based neutron linear-position-sensitive detectors (LPSDs) have been applied for many neutron scattering instruments. Traditional Pulse-Height Analysis (PHA) for Neutron-Gamma Discrimination (NGD) resulted in neutron-gamma efficiency ratios on the order of 10⁵–10⁶. The NGD ratios of ³He detectors need to be improved for even better scientific results from neutron scattering. Digital Signal Processing (DSP) analyses of waveforms were proposed for obtaining better NGD ratios, based on features extracted from rise-time, pulse amplitude, charge integration, a simplified Wiener filter, and the cross-correlation between individual and template waveforms of neutron and gamma events. Fisher linear discriminant analysis (FLDA) and three multivariate analyses (MVAs) of the features were performed. The NGD ratios are improved by about 10²–10³ times compared with the traditional PHA method. Finally, our results indicate the NGD capabilities of ³He tube detectors can be significantly improved with subspace-learning based methods, which may result in a reduced data-collection time and better data quality for further data reduction.

  14. Determination of water pH using absorption-based optical sensors: evaluation of different calculation methods

    NASA Astrophysics Data System (ADS)

    Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin

    2017-02-01

    Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.

  15. Combining tracer flux ratio methodology with low-flying aircraft measurements to estimate dairy farm CH4 emissions

    NASA Astrophysics Data System (ADS)

    Daube, C.; Conley, S.; Faloona, I. C.; Yacovitch, T. I.; Roscioli, J. R.; Morris, M.; Curry, J.; Arndt, C.; Herndon, S. C.

    2017-12-01

    Livestock activity, enteric fermentation of feed and anaerobic digestion of waste, contributes significantly to the methane budget of the United States (EPA, 2016). Studies question the reported magnitude of these methane sources (Miller et al., 2013), calling for more detailed research of agricultural animals (Hristov, 2014). Tracer flux ratio is an attractive experimental method to bring to this problem because it does not rely on estimates of atmospheric dispersion. Collection of data occurred during one week at two dairy farms in central California (June 2016). Each farm varied in size, layout, head count, and general operation. The tracer flux ratio method involves releasing ethane on-site with a known flow rate to serve as a tracer gas. Downwind mixed enhancements in ethane (from the tracer) and methane (from the dairy) were measured, and their ratio used to infer the unknown methane emission rate from the farm. An instrumented van drove transects downwind of each farm on public roads while tracer gases were released on-site, employing the tracer flux ratio methodology to assess simultaneous methane and tracer gas plumes. Flying circles around each farm, a small instrumented aircraft made measurements to perform a mass balance evaluation of methane gas. In the course of these two different methane quantification techniques, we were able to validate yet a third method: tracer flux ratio measured via aircraft. Ground-based tracer release rates were applied to the aircraft-observed methane-to-ethane ratios, yielding whole-site methane emission rates. Never before has the tracer flux ratio method been executed with aircraft measurements. Estimates from this new application closely resemble results from the standard ground-based technique to within their respective uncertainties. Incorporating this new dimension into the tracer flux ratio methodology provides additional context for local plume dynamics and validation of both ground and flight-based data.
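
    The core arithmetic of the tracer flux ratio method is a single proportionality. The sketch below states it in Python on a molar basis; the function name, the unit choices and the example numbers are illustrative assumptions rather than values from the study.

      def tracer_flux_ratio_emission(q_tracer_mol_s, delta_ch4_ppb, delta_c2h6_ppb):
          """Whole-site CH4 emission rate from the tracer flux ratio method.

          q_tracer_mol_s : known ethane release rate (mol/s).
          delta_ch4_ppb, delta_c2h6_ppb : background-subtracted downwind
              enhancements integrated across the plume transect.
          Returns the inferred methane emission rate in mol/s; multiplying by
          the CH4 molar mass (about 16.04 g/mol) converts it to a mass rate.
          """
          return q_tracer_mol_s * (delta_ch4_ppb / delta_c2h6_ppb)

      # Hypothetical numbers: 0.02 mol/s ethane released, plume-integrated
      # enhancements of 600 ppb CH4 against 40 ppb C2H6.
      print(tracer_flux_ratio_emission(0.02, 600.0, 40.0))  # 0.3 mol/s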

  16. Rapid and high-resolution stable isotopic measurement of biogenic accretionary carbonate using an online CO2 laser ablation system: Standardization of the analytical protocol.

    PubMed

    Sreemany, Arpita; Bera, Melinda Kumar; Sarkar, Anindya

    2017-12-30

    The elaborate sampling and analytical protocol associated with conventional dual-inlet isotope ratio mass spectrometry has long hindered high-resolution climate studies from biogenic accretionary carbonates. Laser-based on-line systems, in comparison, produce rapid data, but suffer from unresolvable matrix effects. It is, therefore, necessary to resolve these matrix effects to take advantage of the automated laser-based method. Two marine bivalve shells (one aragonite and one calcite) and one fish otolith (aragonite) were first analysed using a CO₂ laser ablation system attached to a continuous flow isotope ratio mass spectrometer under different experimental conditions (different laser power, sample untreated vs vacuum roasted). The shells and the otolith were then micro-drilled and the isotopic compositions of the powders were measured in a dual-inlet isotope ratio mass spectrometer following the conventional acid digestion method. The vacuum-roasted samples (both aragonite and calcite) produced mean isotopic ratios (with a reproducibility of ±0.2 ‰ for both δ¹⁸O and δ¹³C values) almost identical to the values obtained using the conventional acid digestion method. As the isotopic ratios of the acid-digested samples fall within the analytical precision (±0.2 ‰) of the laser ablation system, this suggests the usefulness of the method for studying the biogenic accretionary carbonate matrix. When using laser-based continuous flow isotope ratio mass spectrometry for high-resolution isotopic measurements of biogenic carbonates, the employment of a vacuum-roasting step will reduce the matrix effect. This method will be of immense help to geologists and sclerochronologists in exploring short-term changes in climatic parameters (e.g. seasonality) in geological times. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Innovative spectrophotometric methods for simultaneous estimation of the novel two-drug combination: Sacubitril/Valsartan through two manipulation approaches and a comparative statistical study

    NASA Astrophysics Data System (ADS)

    Eissa, Maya S.; Abou Al Alamein, Amal M.

    2018-03-01

    Different innovative spectrophotometric methods were introduced for the first time for simultaneous quantification of sacubitril/valsartan in their binary mixture and in their combined dosage form without prior separation, through two manipulation approaches. These approaches were developed based either on two-wavelength selection in zero-order absorption spectra, namely dual wavelength method (DWL) at 226 nm and 275 nm for valsartan, induced dual wavelength method (IDW) at 226 nm and 254 nm for sacubitril, and advanced absorbance subtraction (AAS) based on their iso-absorptive point at 246 nm (λiso) and 261 nm (sacubitril shows equal absorbance values at the two selected wavelengths), or on ratio spectra using their normalized spectra, namely ratio difference spectrophotometric method (RD) at 225 nm and 264 nm for both of them in their ratio spectra, first derivative of ratio spectra (DR1) at 232 nm for valsartan and 239 nm for sacubitril, and mean centering of ratio spectra (MCR) at 260 nm for both of them. Both sacubitril and valsartan showed linearity upon application of these methods in the range of 2.5-25.0 μg/mL. The developed spectrophotometric methods were successfully applied to the analysis of their combined tablet dosage form ENTRESTO™. The adopted spectrophotometric methods were also validated according to ICH guidelines. The results obtained from the proposed methods were statistically compared to a reported HPLC method using Student's t-test and the F-test, and a comparative study was also developed with one-way ANOVA, showing no statistically significant difference with respect to precision and accuracy.

  18. New methods for the assessment of accommodative convergence.

    PubMed

    Asakawa, Ken; Ishikawa, Hitoshi; Shoji, Nobuyuki

    2009-01-01

    The authors introduced a new objective method for measuring horizontal eye movements based on the first Purkinje image with the use of infrared charge-coupled device (CCD) cameras and compared stimulus accommodative convergence to accommodation (AC/A) ratios as determined by a standard gradient method. The study included 20 patients, 5 to 9 years old, who had intermittent exotropia (10 eyes) and accommodative esotropia (10 eyes). Measurement of horizontal eye movements in millimeters (mm), based on the first Purkinje image, was obtained with a TriIRIS C9000 instrument (Hamamatsu Photonics K.K., Hamamatsu, Japan). The stimulus AC/A ratio was determined with the far gradient method. The average values of horizontal eye movements (mm) and eye deviation (Delta) (a) before and (b) after an accommodative stimulus of 3.00 diopters (D) were calculated with the following formula: horizontal eye movements (mm/D) and stimulus AC/A ratio (Delta/D) = (b - a)/3. The average values of the horizontal eye movements and the stimulus AC/A ratio were 0.5 mm/D and 3.8 Delta/D, respectively. Correlation analysis showed a strong positive correlation between these two parameters (r = 0.92). Moreover, horizontal eye movements are directly proportional to the AC/A ratio measured with the gradient method. The methods used in this study allow objective recordings of accommodative convergence to be obtained in many clinical situations. Copyright 2009, SLACK Incorporated.

  19. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater

    NASA Astrophysics Data System (ADS)

    Riad, Safaa M.; Salem, Hesham; Elbalkiny, Heba T.; Khattab, Fatma I.

    2015-04-01

    Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in wastewater samples collected from different sites, either production wastewater or livestock wastewater, after their solid phase extraction using OASIS HLB cartridges. In the univariate methods OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT). The technique starts with the ratio subtraction method followed by the ratio difference method for determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results are statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05.

  20. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater.

    PubMed

    Riad, Safaa M; Salem, Hesham; Elbalkiny, Heba T; Khattab, Fatma I

    2015-04-05

    Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in wastewater samples collected from different sites, either production wastewater or livestock wastewater, after their solid phase extraction using OASIS HLB cartridges. In the univariate methods OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT). The technique starts with the ratio subtraction method followed by the ratio difference method for determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results are statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p=0.05. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. On the use of thick-airfoil theory to design airfoil families in which thickness and lift are varied independently

    NASA Technical Reports Server (NTRS)

    Barger, R. L.

    1974-01-01

    A method has been developed for designing families of airfoils in which the members of a family have the same basic type of pressure distribution but vary in thickness ratio or lift, or both. Thickness ratio and lift may be prescribed independently. The method, which is based on the Theodorsen thick-airfoil theory, permits moderate variations from the basic shape on which the family is based.

  2. Two smart spectrophotometric methods for the simultaneous estimation of Simvastatin and Ezetimibe in combined dosage form

    NASA Astrophysics Data System (ADS)

    Magdy, Nancy; Ayad, Miriam F.

    2015-02-01

    Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed dose combination products without prior separation. The first method depends on a new chemometrics-assisted ratio spectra derivative method using moving window polynomial least square fitting method (Savitzky-Golay filters). The second method is based on a simple modification for the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.

  3. Contrasting RCC, RVU, and ABC for managed care decisions. A case study compares three widely used costing methods and finds one superior.

    PubMed

    West, T D; Balas, E A; West, D A

    1996-08-01

    To obtain the cost data needed to improve managed care decisions and negotiate profitable capitation contracts, most healthcare provider organizations use one of three costing methods: the ratio-of-costs-to-charges method, the relative value unit method, or the activity-based costing method. Although the ratio-of-costs-to-charges method is used by a majority of provider organizations, a case study that applied these three methods in a renal dialysis clinic found that the activity-based costing method provided the most accurate cost data. By using this costing method, healthcare financial managers can obtain the data needed to make optimal decisions regarding resource allocation and cost containment, thus assuring the long-term financial viability of their organizations.

  4. A theoretically based determination of bowen-ratio fetch requirements

    USGS Publications Warehouse

    Stannard, D.I.

    1997-01-01

    Determination of fetch requirements for accurate Bowen-ratio measurements of latent- and sensible-heat fluxes is more involved than for eddy-correlation measurements because Bowen-ratio sensors are located at two heights, rather than just one. A simple solution to the diffusion equation is used to derive an expression for Bowen-ratio fetch requirements, downwind of a step change in surface fluxes. These requirements are then compared to eddy-correlation fetch requirements based on the same diffusion equation solution. When the eddy-correlation and upper Bowen-ratio sensor heights are equal, and the available energy upwind and downwind of the step change is constant, the Bowen-ratio method requires less fetch than does eddy correlation. Differences in fetch requirements between the two methods are greatest over relatively smooth surfaces. Bowen-ratio fetch can be reduced significantly by lowering the lower sensor, as well as the upper sensor. The Bowen-ratio fetch model was tested using data from a field experiment where multiple Bowen-ratio systems were deployed simultaneously at various fetches and heights above a field of bermudagrass. Initial comparisons were poor, but improved greatly when the model was modified (and operated numerically) to account for the large roughness of the upwind cotton field.
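
    For context on the measurement itself, the Bowen-ratio energy-balance partitioning that the two-height sensors support can be written compactly. The Python sketch below is a generic textbook formulation (the psychrometric constant, variable names and example values are assumptions, not part of this fetch study).

      def bowen_ratio_fluxes(rn, g, dT, de, gamma=0.066):
          """Partition available energy using the Bowen ratio (a minimal sketch).

          rn, g  : net radiation and soil heat flux (W m^-2).
          dT, de : temperature (K) and vapor-pressure (kPa) differences between
                   the upper and lower sensor heights.
          gamma  : psychrometric constant (kPa K^-1); 0.066 is a typical
                   near-sea-level value.
          Returns (latent_heat_flux, sensible_heat_flux) in W m^-2.
          """
          beta = gamma * dT / de        # Bowen ratio, H / LE
          le = (rn - g) / (1.0 + beta)  # latent heat flux from energy balance
          h = beta * le                 # sensible heat flux
          return le, h

      # Hypothetical midday values: Rn = 500, G = 50 W m^-2,
      # dT = 0.4 K and de = 0.12 kPa between the two heights.
      print(bowen_ratio_fluxes(500, 50, 0.4, 0.12))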

  5. The adjusting factor method for weight-scaling truckloads of mixed hardwood sawlogs

    Treesearch

    Edward L. Adams

    1976-01-01

    A new method of weight-scaling truckloads of mixed hardwood sawlogs systematically adjusts for changes in the weight/volume ratio of logs coming into a sawmill. It uses a conversion factor based on the running average of weight/volume ratios of randomly selected sample loads. A test of the method indicated that over a period of time the weight-scaled volume should...

  6. Mean centering of double divisor ratio spectra, a novel spectrophotometric method for analysis of ternary mixtures

    NASA Astrophysics Data System (ADS)

    Hassan, Said A.; Elzanfaly, Eman S.; Salem, Maissa Y.; El-Zeany, Badr A.

    2016-01-01

    A novel spectrophotometric method was developed for determination of ternary mixtures without previous separation, showing significant advantages over conventional methods. The new method is based on mean centering of double divisor ratio spectra. The mathematical explanation of the procedure is illustrated. The method was evaluated by determination of model ternary mixture and by the determination of Amlodipine (AML), Aliskiren (ALI) and Hydrochlorothiazide (HCT) in laboratory prepared mixtures and in a commercial pharmaceutical preparation. For proper presentation of the advantages and applicability of the new method, a comparative study was established between the new mean centering of double divisor ratio spectra (MCDD) and two similar methods used for analysis of ternary mixtures, namely mean centering (MC) and double divisor of ratio spectra-derivative spectrophotometry (DDRS-DS). The method was also compared with a reported one for analysis of the pharmaceutical preparation. The method was validated according to the ICH guidelines and accuracy, precision, repeatability and robustness were found to be within the acceptable limits.
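
    The core signal-processing step of the MCDD method is easy to picture with arrays. The numpy sketch below is only a schematic of that step (the function name and the calibration details are assumptions): it divides a mixture spectrum by the sum of two standard spectra (the double divisor) and mean-centres the resulting ratio spectrum, whose amplitude at a selected wavelength then tracks the third component against standards.

      import numpy as np

      def mean_centered_double_divisor(mixture, standard_a, standard_b):
          """Mean-centred double-divisor ratio spectrum (schematic only).

          mixture, standard_a, standard_b: absorbance spectra sampled on the
          same wavelength grid; the double divisor is the sum of the two
          standard spectra of the interfering components.
          """
          divisor = np.asarray(standard_a, float) + np.asarray(standard_b, float)
          ratio = np.asarray(mixture, float) / divisor
          return ratio - ratio.mean()  # mean-centring step

      # Calibration proceeds by applying the same transform to standards of
      # the third component and reading amplitudes at a chosen wavelength.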

  7. Extinction-ratio-independent electrical method for measuring chirp parameters of Mach-Zehnder modulators using frequency-shifted heterodyne.

    PubMed

    Zhang, Shangjian; Wang, Heng; Zou, Xinhai; Zhang, Yali; Lu, Rongguo; Liu, Yong

    2015-06-15

    An extinction-ratio-independent electrical method is proposed for measuring chirp parameters of Mach-Zehnder electric-optic intensity modulators based on frequency-shifted optical heterodyne. The method utilizes the electrical spectrum analysis of the heterodyne products between the intensity modulated optical signal and the frequency-shifted optical carrier, and achieves the intrinsic chirp parameters measurement at microwave region with high-frequency resolution and wide-frequency range for the Mach-Zehnder modulator with a finite extinction ratio. Moreover, the proposed method avoids calibrating the responsivity fluctuation of the photodiode in spite of the involved photodetection. Chirp parameters as a function of modulation frequency are experimentally measured and compared to those with the conventional optical spectrum analysis method. Our method enables an extinction-ratio-independent and calibration-free electrical measurement of Mach-Zehnder intensity modulators by using the high-resolution frequency-shifted heterodyne technique.

  8. Establishment of a Method for Measuring Antioxidant Capacity in Urine, Based on Oxidation Reduction Potential and Redox Couple I2/KI

    PubMed Central

    Cao, Tinghui; He, Min; Bai, Tianyu

    2016-01-01

    Objectives. To establish a new method for determination of the antioxidant capacity of human urine based on the redox couple I2/KI and to evaluate the redox status of healthy and diseased individuals. Methods. The method was based on the linear relationship between oxidation reduction potential (ORP) and the logarithm of the concentration ratio of I2/KI. The ORP of a solution with a known concentration ratio of I2/KI will change when reacted with urine. To determine the accuracy of the method, both vitamin C and urine were reacted separately with the I2/KI solution. The new method was compared with the traditional method of iodine titration and then used to measure the antioxidant capacity of urine samples from 30 diabetic patients and 30 healthy subjects. Results. A linear relationship was found between the logarithm of the concentration ratio of I2/KI and ORP (R² = 0.998). Both vitamin C and urine concentration showed a linear relationship with ORP (R² = 0.994 and 0.986, resp.). The precision of the method was in the acceptable range and the results of the two methods had a linear correlation (R² = 0.987). Differences in ORP values between the diabetic group and the control group were statistically significant (P < 0.05). Conclusions. A new method for measuring the antioxidant capacity of clinical urine has been established. PMID:28115919

  9. Comparison and applicability of landslide susceptibility models based on landslide ratio-based logistic regression, frequency ratio, weight of evidence, and instability index methods in an extreme rainfall event

    NASA Astrophysics Data System (ADS)

    Wu, Chunhung

    2016-04-01

    Few studies have discussed the applicability of statistical landslide susceptibility (LS) models for extreme rainfall-induced landslide events. This research focuses on the comparison and applicability of LS models based on four methods, including the landslide ratio-based logistic regression (LRBLR), frequency ratio (FR), weight of evidence (WOE), and instability index (II) methods, in an extreme rainfall-induced landslide case. The landslide inventory in the Chishan river watershed, Southwestern Taiwan, after 2009 Typhoon Morakot is the main material in this research. The Chishan river watershed is a tributary watershed of the Kaoping river watershed, which is a landslide- and erosion-prone watershed with an annual average suspended load of 3.6×10⁷ MT/yr (ranking 11th in the world). Typhoon Morakot struck Southern Taiwan from Aug. 6-10 in 2009 and dumped nearly 2,000 mm of rainfall in the Chishan river watershed. The 24-hour, 48-hour, and 72-hour accumulated rainfall in the Chishan river watershed exceeded the 200-year return period accumulated rainfall. 2,389 landslide polygons in the Chishan river watershed were extracted from SPOT 5 images after 2009 Typhoon Morakot. The total landslide area is around 33.5 km², equal to a landslide ratio of 4.1%. The main landslide types based on Varnes' (1978) classification are rotational and translational slides. The two characteristics of this extreme rainfall-induced landslide event are the dense landslide distribution and the large share of downslope landslide areas owing to headward erosion and bank erosion during the flooding processes. The area of downslope landslides in the Chishan river watershed after 2009 Typhoon Morakot is 3.2 times larger than that of upslope landslide areas. The prediction accuracy of LS models based on the LRBLR, FR, WOE, and II methods has been proven to exceed 70%. The model performance and applicability of the four models in a landslide-prone watershed with a dense distribution of rainfall-induced landslides are therefore of interest. Eight landslide-related factors, including elevation, slope, aspect, geology, accumulated rainfall during 2009 Typhoon Morakot, landuse, distance to the fault, and distance to the rivers, were considered in this research. The research builds the LS maps based on the four methods and compares their differences. The average LS value from each method is 0.27 for LRBLR, 0.368 for FR, 0.553 for WOE, and 0.498 for II. A correlation analysis was conducted to identify similarities between the four LS maps. The correlation coefficients are 0.913, 0.829, 0.930, 0.756, 0.729, and 0.652 for LRBLR vs FR, LRBLR vs WOE, FR vs WOE, LRBLR vs II, FR vs II, and WOE vs II, respectively. The research compares the model performance of the four LS maps by calculating the AUC value (area under the ROC curve) and the ACR value (average correct-predicted ratio). The AUC values of the LS maps based on the LRBLR, FR, WOE, and II methods are 0.819, 0.819, 0.822 and 0.785. The ACR values of the LS maps based on the LRBLR, FR, WOE, and II methods are 75.1%, 73.7%, 68.4%, and 64.2%. The results indicate that the model performance based on the LRBLR method in an extreme rainfall-induced landslide event is better than that based on the other three methods.
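
    Of the four approaches, the frequency ratio (FR) method has the most transparent arithmetic, so a generic sketch may help fix ideas. The Python fragment below is not taken from the study; the function name and data layout are assumptions. It computes, for one causal factor, the ratio of each class's share of landslide pixels to its share of the whole watershed; the susceptibility index of a pixel is then the sum of its FR values over all factors.

      import numpy as np

      def frequency_ratio(factor_class, landslide_mask):
          """Frequency-ratio weights for one landslide-related factor.

          factor_class  : integer class label per pixel (e.g. slope class).
          landslide_mask: boolean per pixel, True where a landslide occurred.
          FR(class) = (landslide pixels in class / all landslide pixels)
                    / (pixels in class / all pixels).
          """
          factor_class = np.asarray(factor_class).ravel()
          landslide_mask = np.asarray(landslide_mask, bool).ravel()
          total_slides = max(int(landslide_mask.sum()), 1)
          fr = {}
          for c in np.unique(factor_class):
              in_class = factor_class == c
              p_slide = landslide_mask[in_class].sum() / total_slides
              p_area = in_class.sum() / factor_class.size
              fr[int(c)] = p_slide / p_area
          return fr

      # The LS index of a pixel is the sum of its FR weights over all eight
      # factors (elevation, slope, aspect, geology, rainfall, landuse, ...).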

  10. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    PubMed

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
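
    The pseudo multiple replica idea can be sketched generically for any linear reconstruction. The fragment below is an illustrative reading of the approach, not the authors' implementation: it synthesizes correlated coil noise from the prescan covariance, pushes noise-only data through the reconstruction many times, and takes the pixel-wise standard deviation as the noise map; dividing a signal image by this map gives an SNR map, and comparing accelerated with unaccelerated SNR maps (scaled by the square root of the acceleration factor) yields the g-factor.

      import numpy as np

      def pseudo_replica_noise_map(recon, noise_cov, kspace_shape, n_rep=128, seed=0):
          """Monte Carlo noise map for a linear parallel-imaging reconstruction.

          recon     : callable mapping multicoil k-space of shape
                      (n_coils, *kspace_shape) to an image array.
          noise_cov : (n_coils, n_coils) receiver noise covariance from a prescan.
          """
          rng = np.random.default_rng(seed)
          chol = np.linalg.cholesky(noise_cov)  # imposes the coil noise correlation
          n_coils = noise_cov.shape[0]
          n_samples = int(np.prod(kspace_shape))
          replicas = []
          for _ in range(n_rep):
              white = (rng.normal(size=(n_coils, n_samples)) +
                       1j * rng.normal(size=(n_coils, n_samples))) / np.sqrt(2.0)
              noise_k = (chol @ white).reshape((n_coils,) + tuple(kspace_shape))
              replicas.append(recon(noise_k))   # reconstruct noise-only data
          return np.std(np.stack(replicas), axis=0)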

  11. Determination of dye/protein ratios in a labeling reaction between a cyanine dye and bovine serum albumin by micellar electrokinetic chromatography using a diode laser-induced fluorescence detection.

    PubMed

    Jing, Peng; Kaneta, Takashi; Imasaka, Totaro

    2002-08-01

    The degree of labeling, i.e., the dye/protein ratio (D/P), is important for characterizing properties of dye labeling with proteins. A method for the determination of this ratio between a fluorescent cyanine dye and bovine serum albumin (BSA), based on the separation of the labeling mixture using micellar electrokinetic chromatography with diode laser-induced fluorescence detection, is described. Two methods for the determination of D/P were examined in this study. In these methods, a hydrolysis product and impurities, which are usually unfavorable compounds that are best excluded for protein analysis, were utilized to determine the amounts of dye bound to BSA. One is a direct method in which a ratio of the peak area of BSA to the total peak area of all the products produced in the labeling reaction was used for determining the average number of dye molecules bound to a single BSA molecule. The other is an indirect determination, which is based on diminution of all peak areas related to the products except for the labeled BSA. These methods were directly compared by means of a spectrophotometric method. The experimental results show that the indirect method is both reliable and sensitive. Therefore, D/P values can be determined at trace levels using the indirect method.

  12. Innovative spectrophotometric methods for simultaneous estimation of the novel two-drug combination: Sacubitril/Valsartan through two manipulation approaches and a comparative statistical study.

    PubMed

    Eissa, Maya S; Abou Al Alamein, Amal M

    2018-03-15

    Different innovative spectrophotometric methods were introduced for the first time for simultaneous quantification of sacubitril/valsartan in their binary mixture and in their combined dosage form without prior separation, through two manipulation approaches. These approaches were developed based either on two-wavelength selection in zero-order absorption spectra, namely dual wavelength method (DWL) at 226 nm and 275 nm for valsartan, induced dual wavelength method (IDW) at 226 nm and 254 nm for sacubitril and advanced absorbance subtraction (AAS) based on their iso-absorptive point at 246 nm (λiso) and 261 nm (sacubitril shows equal absorbance values at the two selected wavelengths), or on ratio spectra using their normalized spectra, namely ratio difference spectrophotometric method (RD) at 225 nm and 264 nm for both of them in their ratio spectra, first derivative of ratio spectra (DR1) at 232 nm for valsartan and 239 nm for sacubitril and mean centering of ratio spectra (MCR) at 260 nm for both of them. Both sacubitril and valsartan showed linearity upon application of these methods in the range of 2.5-25.0 μg/mL. The developed spectrophotometric methods were successfully applied to the analysis of their combined tablet dosage form ENTRESTO™. The adopted spectrophotometric methods were also validated according to ICH guidelines. The results obtained from the proposed methods were statistically compared to a reported HPLC method using Student's t-test and the F-test, and a comparative study was also developed with one-way ANOVA, showing no statistically significant difference with respect to precision and accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Four Derivative Spectrophotometric Methods for the Simultaneous Determination of Carmoisine and Ponceau 4R in Drinks and Comparison with High Performance Liquid Chromatography

    PubMed Central

    Turak, Fatma; Dinç, Mithat; Dülger, Öznur; Özgür, Mahmure Ustun

    2014-01-01

    Four simple, rapid, and accurate spectrophotometric methods were developed for the simultaneous determination of two food colorants, Carmoisine (E122) and Ponceau 4R (E124), in their binary mixtures and soft drinks. The first method is based on recording the first derivative curves and determining each component using the zero-crossing technique. The second method uses the first derivative of ratio spectra. The ratio spectra are obtained by dividing the absorption spectra of the binary mixture by that of one of the components. The third method, the derivative differential procedure, is based on the measurement of difference absorptivities, derivatized in first order, of a solution of drink samples in 0.1 N NaOH relative to that of an equimolar solution in 0.1 N HCl at wavelengths of 366 and 451 nm for Carmoisine and Ponceau 4R, respectively. The last method, based on the compensation technique, is presented for derivative spectrophotometric determination of E122 and E124 mixtures with overlapping spectra. By using ratios of the derivative maxima, exact compensation of either component in the mixture can be achieved, followed by its determination. These proposed methods have been successfully applied to the binary mixtures and soft drinks, and the results were statistically compared with the reference HPLC method (NMKL 130). PMID:24672549

  14. Techniques and results of nongame bird monitoring in North America

    USGS Publications Warehouse

    Robbins, C.S.; Bystrak, D.; Geissler, P.H.; Oelke, H.

    1980-01-01

    Long-term bird population trends based on accumulated ratios (proportional change) sometimes give a very misleading view of population change. Alternate methods of representing population change, based on the weighted means for the individual years, avoid the dangers of using ratios. Some advantages and disadvantages of various weighting techniques are discussed.

  15. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained from PCA applied over a previously composed continuous-spectra learning matrix. The parametric method, however, uses an ANN to filter out the baseline. Previous studies have demonstrated that this method is one of the most effective for baseline removal. The evaluation of both methods was carried out by using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition to demonstrating the utility of the proposed methods and comparing them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the one based on ANN both in terms of performance and simplicity. © The Author(s) 2016.
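
    The PCA branch can be pictured with a short numpy sketch. This is only an illustrative reading of the idea (function names and the plain least-squares projection are assumptions; a practical implementation would restrict or regularize the fit so that emission features are not absorbed into the baseline): baseline-only training spectra supply a low-dimensional basis, and each measured spectrum is projected onto that basis to estimate and subtract its baseline.

      import numpy as np

      def pca_baseline_basis(baseline_spectra, n_components=3):
          """Baseline basis from PCA over a matrix of baseline-only spectra.

          baseline_spectra: (n_samples, n_wavelengths) learning matrix.
          Returns a (n_components + 1, n_wavelengths) basis whose first row
          is the mean baseline.
          """
          X = np.asarray(baseline_spectra, float)
          Xc = X - X.mean(axis=0)
          _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
          return np.vstack([X.mean(axis=0), Vt[:n_components]])

      def remove_baseline(spectrum, basis):
          """Least-squares projection onto the baseline basis, then subtraction."""
          s = np.asarray(spectrum, float)
          coeffs, *_ = np.linalg.lstsq(basis.T, s, rcond=None)
          baseline = basis.T @ coeffs
          return s - baseline, baseline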

  16. Inlet noise suppressor design method based upon the distribution of acoustic power with mode cutoff ratio

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1976-01-01

    A liner design for noise suppressors with outer wall treatment such as in an engine inlet is presented which potentially circumvents the problems of resolution in modal measurement. The method is based on the fact that the modal optimum impedance and the maximum possible sound power attenuation at this optimum can be expressed as functions of cutoff ratio alone. Modes with similar cutoff ratios propagate similarly in the duct and in addition propagate similarly to the far field. Thus there is no need to determine the acoustic power carried by these modes individually, and they can be grouped together as one entity. With the optimum impedance and maximum attenuation specified as functions of cutoff ratio, the off-optimum liner performance can be estimated using an approximate attenuation equation.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, C. L.; Funk, L. L.; Riedel, R. A.

    ³He gas based neutron linear-position-sensitive detectors (LPSDs) have been applied for many neutron scattering instruments. Traditional Pulse-Height Analysis (PHA) for Neutron-Gamma Discrimination (NGD) resulted in neutron-gamma efficiency ratios on the order of 10⁵–10⁶. The NGD ratios of ³He detectors need to be improved for even better scientific results from neutron scattering. Digital Signal Processing (DSP) analyses of waveforms were proposed for obtaining better NGD ratios, based on features extracted from rise-time, pulse amplitude, charge integration, a simplified Wiener filter, and the cross-correlation between individual and template waveforms of neutron and gamma events. Fisher linear discriminant analysis (FLDA) and three multivariate analyses (MVAs) of the features were performed. The NGD ratios are improved by about 10²–10³ times compared with the traditional PHA method. Finally, our results indicate the NGD capabilities of ³He tube detectors can be significantly improved with subspace-learning based methods, which may result in a reduced data-collection time and better data quality for further data reduction.

  18. Determining the multi-scale hedge ratios of stock index futures using the lower partial moments method

    NASA Astrophysics Data System (ADS)

    Dai, Jun; Zhou, Haigang; Zhao, Shaoquan

    2017-01-01

    This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. These parametric methods are then compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by a parametric hedging model based on the features of the sequence distributions. In addition, if minimum-LPM is selected as the hedge target, the hedging periods, degree of risk aversion, and target returns can affect the multi-scale hedge ratios and hedge efficiency, respectively.
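
    To make the target concrete, the sketch below defines a sample-based lower partial moment and a brute-force search for the hedge ratio that minimizes it. This is a nonparametric illustration under assumed names and a simple grid; the study itself works with parametric distributions fitted to wavelet-decomposed series rather than raw sample moments.

      import numpy as np

      def lower_partial_moment(returns, target=0.0, order=2):
          """Sample estimate of LPM_n(target) = E[ max(target - R, 0)^n ]."""
          shortfall = np.maximum(target - np.asarray(returns, float), 0.0)
          return np.mean(shortfall ** order)

      def min_lpm_hedge_ratio(spot_ret, futures_ret, target=0.0, order=2,
                              grid=np.linspace(0.0, 1.5, 301)):
          """Hedge ratio h minimizing the LPM of the hedged return spot - h * futures."""
          spot_ret = np.asarray(spot_ret, float)
          futures_ret = np.asarray(futures_ret, float)
          lpms = [lower_partial_moment(spot_ret - h * futures_ret, target, order)
                  for h in grid]
          return float(grid[int(np.argmin(lpms))])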

  19. A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures

    NASA Astrophysics Data System (ADS)

    Youssef, Rasha M.; Maher, Hadir M.

    2008-10-01

    A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and derivative ratio spectra-zero-crossing method (DRSZ). Linearity, validation, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.

  20. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  1. Multi-Frequency Signal Detection Based on Frequency Exchange and Re-Scaling Stochastic Resonance and Its Application to Weak Fault Diagnosis.

    PubMed

    Liu, Jinjun; Leng, Yonggang; Lai, Zhihui; Fan, Shengbo

    2018-04-25

    Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal as well as its second and higher harmonics frequencies tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of "sampling ratio" which is the ratio of the sampling frequency to the frequency of target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method.

  2. Contact angle adjustment in equation-of-state-based pseudopotential model

    NASA Astrophysics Data System (ADS)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  3. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
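
    For orientation, the classical two-sample change-in-ratio estimator (a special case, not the generalized effort-based model of this record) can be written in a few lines; it assumes only that the subclass proportions p1 and p2 are observed before and after a known removal.

        def cir_estimate(p1, p2, removals_x, removals_total):
            # classical two-sample change-in-ratio estimate of initial population size
            # p1, p2: proportion of subclass x observed before and after the removals
            # removals_x: subclass-x animals removed; removals_total: all animals removed
            if p1 == p2:
                raise ValueError("subclass proportions must change between surveys")
            return (removals_x - p2 * removals_total) / (p1 - p2)

        # e.g. subclass x falls from 40% to 25% of sightings after 300 of the
        # 500 removed animals were of subclass x  ->  roughly 1167 animals initially
        print(cir_estimate(0.40, 0.25, 300, 500))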

  4. Using stable isotopes to monitor forms of sulfur during desulfurization processes: A quick screening method

    USGS Publications Warehouse

    Liu, Chao-Li; Hackley, Keith C.; Coleman, D.D.; Kruse, C.W.

    1987-01-01

    A method using stable isotope ratio analysis to monitor the reactivity of sulfur forms in coal during thermal and chemical desulfurization processes has been developed at the Illinois State Geological Survey. The method is based upon the fact that a significant difference exists in some coals between the 34S/32S ratios of the pyritic and organic sulfur. A screening method for determining the suitability of coal samples for use in isotope ratio analysis is described. Making these special coals available from coal sample programs would assist research groups in sorting out the complex sulfur chemistry which accompanies thermal and chemical processing of high sulfur coals. © 1987.

  5. Cellulose I crystallinity determination using FT-Raman spectroscopy : univariate and multivariate methods

    Treesearch

    Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph

    2010-01-01

    Two new methods based on FT–Raman spectroscopy, one simple, based on band intensity ratio, and the other using a partial least squares (PLS) regression model, are proposed to determine cellulose I crystallinity. In the simple method, crystallinity in cellulose I samples was determined based on univariate regression that was first developed using the Raman band...

  6. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2010-01-01

    When facing a conjunction between space objects, decision makers must choose whether to maneuver for collision avoidance or not. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method, and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
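
    The decision procedure referred to above is the classical Wald sequential probability ratio test. The sketch below shows only its generic skeleton, with thresholds derived from chosen false-alarm (alpha) and missed-detection (beta) rates; the conjunction-specific log-likelihood-ratio increments are assumed to be supplied by the caller and are not modeled here.

        import math

        def wald_sprt(llr_increments, alpha=0.01, beta=0.01):
            # accumulate log-likelihood ratios until a Wald threshold is crossed
            upper = math.log((1 - beta) / alpha)   # accept H1 (e.g. maneuver)
            lower = math.log(beta / (1 - alpha))   # accept H0 (e.g. no maneuver)
            s = 0.0
            for inc in llr_increments:
                s += inc
                if s >= upper:
                    return "H1"
                if s <= lower:
                    return "H0"
            return "continue"                      # keep collecting data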

  7. Comprehensive analysis of yeast metabolite GC x GC-TOFMS data: combining discovery-mode and deconvolution chemometric software.

    PubMed

    Mohler, Rachel E; Dombek, Kenneth M; Hoggard, Jamin C; Pierce, Karisa M; Young, Elton T; Synovec, Robert E

    2007-08-01

    The first extensive study of yeast metabolite GC x GC-TOFMS data from cells grown under fermenting, R, and respiring, DR, conditions is reported. In this study, recently developed chemometric software for use with three-dimensional instrumentation data was implemented, using a statistically-based Fisher ratio method. The Fisher ratio method is fully automated and will rapidly reduce the data to pinpoint two-dimensional chromatographic peaks differentiating sample types while utilizing all the mass channels. The effect of lowering the Fisher ratio threshold on peak identification was studied. At the lowest threshold (just above the noise level), 73 metabolite peaks were identified, nearly three-fold greater than the number of previously reported metabolite peaks identified (26). In addition to the 73 identified metabolites, 81 unknown metabolites were also located. A Parallel Factor Analysis graphical user interface (PARAFAC GUI) was applied to selected mass channels to obtain a concentration ratio, for each metabolite under the two growth conditions. Of the 73 known metabolites identified by the Fisher ratio method, 54 were statistically changing to the 95% confidence limit between the DR and R conditions according to the rigorous Student's t-test. PARAFAC determined the concentration ratio and provided a fully-deconvoluted (i.e. mathematically resolved) mass spectrum for each of the metabolites. The combination of the Fisher ratio method with the PARAFAC GUI provides high-throughput software for discovery-based metabolomics research, and is novel for GC x GC-TOFMS data due to the use of the entire data set in the analysis (640 MB x 70 runs, double precision floating point).
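
    One common form of the Fisher ratio calculation (between-class variance over pooled within-class variance, computed independently for every variable, e.g. every chromatographic point and mass channel) can be sketched as follows; this is illustrative only and does not reproduce the cited software.

        import numpy as np

        def fisher_ratios(class_a, class_b):
            # class_a, class_b: arrays of shape (replicates, variables)
            mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)
            grand = np.vstack([class_a, class_b]).mean(axis=0)
            n_a, n_b = len(class_a), len(class_b)
            between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
            within = (((class_a - mean_a) ** 2).sum(axis=0)
                      + ((class_b - mean_b) ** 2).sum(axis=0)) / (n_a + n_b - 2)
            return between / within    # large values flag class-differentiating peaks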

  8. Fully automated atlas-based method for prescribing 3D PRESS MR spectroscopic imaging: Toward robust and reproducible metabolite measurements in human brain.

    PubMed

    Bian, Wei; Li, Yan; Crane, Jason C; Nelson, Sarah J

    2018-02-01

    To implement a fully automated atlas-based method for prescribing 3D PRESS MR spectroscopic imaging (MRSI). The PRESS selected volume and outer-volume suppression bands were predefined on the MNI152 standard template image. The template image was aligned to the subject T1-weighted image during a scan, and the resulting transformation was then applied to the predefined prescription. To evaluate the method, H-1 MRSI data were obtained in repeat scan sessions from 20 healthy volunteers. In each session, datasets were acquired twice without repositioning. The overlap ratio of the prescribed volume in the two sessions was calculated and the reproducibility of inter- and intrasession metabolite peak height and area ratios was measured by the coefficient of variation (CoV). The CoVs from intra- and intersession were compared by a paired t-test. The average overlap ratio of the automatically prescribed selection volumes between two sessions was 97.8%. The average voxel-based intersession CoVs were less than 0.124 and 0.163 for peak height and area ratios, respectively. Paired t-test showed no significant difference between the intra- and intersession CoVs. The proposed method provides a time efficient method to prescribe 3D PRESS MRSI with reproducible imaging positioning and metabolite measurements. Magn Reson Med 79:636-642, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  9. Detection of testosterone administration based on the carbon isotope ratio profiling of endogenous steroids: international reference populations of professional soccer players

    PubMed Central

    Strahm, E; Emery, C; Saugy, M; Dvorak, J; Saudan, C

    2009-01-01

    Background and objectives: The determination of the carbon isotope ratio in androgen metabolites has been previously shown to be a reliable, direct method to detect testosterone misuse in the context of antidoping testing. Here, the variability in the 13C/12C ratios in urinary steroids in a widely heterogeneous cohort of professional soccer players residing in different countries (Argentina, Italy, Japan, South Africa, Switzerland and Uganda) is examined. Methods: Carbon isotope ratios of selected androgens in urine specimens were determined using gas chromatography/combustion/isotope ratio mass spectrometry (GC-C-IRMS). Results: Urinary steroids in Italian and Swiss populations were found to be enriched in 13C relative to other groups, reflecting higher consumption of C3 plants in these two countries. Importantly, detection criteria based on the difference in the carbon isotope ratio of androsterone and pregnanediol for each population were found to be well below the established threshold value for positive cases. Conclusions: The results obtained with the tested diet groups highlight the importance of adapting the criteria if one wishes to increase the sensitivity of exogenous testosterone detection. In addition, confirmatory tests might be rendered more efficient by combining isotope ratio mass spectrometry with refined interpretation criteria for positivity and subject-based profiling of steroids. PMID:19549614

  10. An improved algorithm of mask image dodging for aerial image

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi

    2011-12-01

    The technology of Mask image dodging based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are in common use, but both have their own defects. For example, the difference method can keep the brightness of the whole image uniform but is deficient in local contrast, while the ratio method works better in local contrast but sometimes makes the dark areas of the original image too bright. To remove the defects of the two methods effectively, this paper proposes a balanced solution based on a study of both methods. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method but also avoids the deficiencies of the two algorithms.
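
    The difference and ratio dodging operations discussed above can be sketched as follows, with a Gaussian low-pass filter standing in for the Fourier-domain mask that estimates the low-frequency background; the balanced combination proposed by the paper is not reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def mask_dodge(image, sigma=50.0, mode="difference"):
            img = image.astype(np.float64)
            background = gaussian_filter(img, sigma)   # stand-in for the Mask estimate
            target = background.mean()
            if mode == "difference":   # uniform brightness, weaker local contrast
                out = img - background + target
            else:                      # "ratio": better local contrast, but dark
                out = img / (background + 1e-6) * target   # areas may become too bright
            return np.clip(out, 0, 255)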

  11. An Alternative Method for Computing Unit Costs and Productivity Ratios. AIR 1984 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Winstead, Wayland H.; And Others

    An alternative measure for evaluating the performance of academic departments was studied. A comparison was made with the traditional manner for computing unit costs and productivity ratios: prorating the salary and effort of each faculty member to each course level based on the personal mix of courses taught. The alternative method used averaging…

  12. Improved neutron-gamma discrimination for a 6Li-glass neutron detector using digital signal analysis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, C. L., E-mail: wangc@ornl.gov; Riedel, R. A.

    2016-01-15

    A 6Li-glass scintillator (GS20) based neutron Anger camera was developed for time-of-flight single-crystal diffraction instruments at Spallation Neutron Source. Traditional Pulse-Height Analysis (PHA) for Neutron-Gamma Discrimination (NGD) resulted in the neutron-gamma efficiency ratio (defined as NGD ratio) on the order of 10^4. The NGD ratios of Anger cameras need to be improved for broader applications including neutron reflectometers. For this purpose, six digital signal analysis methods of individual waveforms acquired from photomultiplier tubes were proposed using (i) charge integration, (ii) pulse-amplitude histograms, (iii) power spectrum analysis combined with the maximum pulse-amplitude, (iv) two event parameters (a1, b0) obtained from a Wiener filter, (v) an effective amplitude (m) obtained from an adaptive least-mean-square filter, and (vi) a cross-correlation coefficient between individual and reference waveforms. The NGD ratios are about 70 times those from the traditional PHA method. Our results indicate the NGD capabilities of neutron Anger cameras based on GS20 scintillators can be significantly improved with digital signal analysis methods.
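
    As a small illustration of feature (vi), a normalized cross-correlation coefficient between an individual pulse and a reference (template) waveform can be computed as in the sketch below; thresholding such a coefficient is one simple way to separate neutron-like from gamma-like events, though the full discrimination in the record combines several features.

        import numpy as np

        def xcorr_coefficient(waveform, template):
            # zero-mean, unit-norm cross-correlation at zero lag
            w = waveform - waveform.mean()
            t = template - template.mean()
            return float(np.dot(w, t) / (np.linalg.norm(w) * np.linalg.norm(t) + 1e-12))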

  13. Sense and Avoid Safety Analysis for Remotely Operated Unmanned Aircraft in the National Airspace System. Version 5

    NASA Technical Reports Server (NTRS)

    Carreno, Victor

    2006-01-01

    This document describes a method to demonstrate that a UAS, operating in the NAS, can avoid collisions with an equivalent level of safety compared to a manned aircraft. The method is based on the calculation of a collision probability for a UAS, the calculation of a collision probability for a baseline manned aircraft, and the calculation of a risk ratio given by: Risk Ratio = P(collision_UAS)/P(collision_manned). A UAS will achieve an equivalent level of safety for collision risk if the Risk Ratio is less than or equal to one. Calculation of the probability of collision for UAS and manned aircraft is accomplished through event/fault trees.

  14. Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation.

    PubMed

    Sim, K S; Norhisham, S

    2016-11-01

    A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The estimation of SNR value based on NLLSR method is compared with the three existing methods of nearest neighbourhood, first-order interpolation and the combination of both nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method is able to produce better estimation accuracy as compared to the other three existing methods. According to the SNR results obtained from the experiment, the NLLSR method is able to produce approximately less than 1% of SNR error difference as compared to the other three existing methods. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  15. Insar Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (quad)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping error is a common error in InSAR processing and will seriously degrade the accuracy of the monitoring results. Based on a gross-error detection method, quasi-accurate detection (QUAD), a method for the automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress the unwrapping error when the proportion of unwrapping errors is low, and that the two methods can complement each other when the proportion of unwrapping errors is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can correct phase unwrapping errors successfully in practical applications.

  16. Ratioing methods for in-flight response calibration of space-based spectro-radiometers, operating in the solar spectral region

    NASA Astrophysics Data System (ADS)

    Lobb, Dan

    2017-11-01

    One of the most significant problems for space-based spectro-radiometer systems, observing Earth from space in the solar spectral band (UV through short-wave IR), is the achievement of the required absolute radiometric accuracy. Classical methods, for example using one or more sun-illuminated diffusers as reflectance standards, do not generally provide means for monitoring degradation of the in-flight reference after pre-flight characterisation. Ratioing methods have been proposed that provide monitoring of the degradation of solar attenuators in flight, thus in principle allowing much higher confidence in absolute response calibration. Two example methods are described. It is shown that such systems can be designed with relatively small size and without significant additions to the complexity of the flight hardware.

  17. Spiking cortical model based non-local means method for despeckling multiframe optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Gu, Yameng; Zhang, Xuming

    2017-05-01

    Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).

  18. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular feature extraction and prediction method in current use. This method showed an accuracy of 65.7%. However, the proposed method predicts the novel data with improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method.

  19. n-Gram-Based Text Compression.

    PubMed

    Nguyen, Vu H; Nguyen, Hien T; Duong, Hieu N; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods.
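
    A toy sketch of the greedy longest-match encoding step described above follows; the dictionary structures are assumed to be supplied, and the packing of each code into two to four bytes is only indicated by a comment.

        def encode(words, dictionaries):
            # dictionaries: maps n (5..1) to a dict from the space-joined n-gram
            # to an integer code; unknown words fall back to literals
            codes, i = [], 0
            while i < len(words):
                for n in range(5, 0, -1):                  # prefer the longest n-gram
                    gram = " ".join(words[i:i + n])
                    if i + n <= len(words) and gram in dictionaries.get(n, {}):
                        codes.append((n, dictionaries[n][gram]))  # later packed into 2-4 bytes
                        i += n
                        break
                else:
                    codes.append((0, words[i]))            # literal fallback
                    i += 1
            return codes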

  20. n-Gram-Based Text Compression

    PubMed Central

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods. PMID:27965708

  1. Spectrophotometric simultaneous determination of Rabeprazole Sodium and Itopride Hydrochloride in capsule dosage form

    NASA Astrophysics Data System (ADS)

    Sabnis, Shweta S.; Dhavale, Nilesh D.; Jadhav, Vijay. Y.; Gandhi, Santosh V.

    2008-03-01

    A new simple, economical, rapid, precise and accurate method for simultaneous determination of rabeprazole sodium and itopride hydrochloride in capsule dosage form has been developed. The method is based on ratio spectra derivative spectrophotometry. The amplitudes in the first derivative of the corresponding ratio spectra at 231 nm (minima) and 260 nm were selected to determine rabeprazole sodium and itopride hydrochloride, respectively. The method was validated with respect to linearity, precision and accuracy.
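
    The ratio-spectra first-derivative step described above can be sketched in a few lines; the 231 nm reading is the value quoted in the record, and the calibration that converts the derivative amplitude to a concentration is only indicated in a comment.

        import numpy as np

        def ratio_first_derivative(mixture, divisor_standard, wavelengths):
            # divide the mixture spectrum by the standard spectrum of the other
            # drug, then differentiate the ratio spectrum with respect to wavelength
            ratio = mixture / np.clip(divisor_standard, 1e-6, None)
            return np.gradient(ratio, wavelengths)

        # The derivative amplitude at the selected wavelength (e.g. the 231 nm
        # minimum) is then converted to concentration via a calibration line
        # built from pure standards.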

  2. Spectrophotometric simultaneous determination of rabeprazole sodium and itopride hydrochloride in capsule dosage form.

    PubMed

    Sabnis, Shweta S; Dhavale, Nilesh D; Jadhav, Vijay Y; Gandhi, Santosh V

    2008-03-01

    A new simple, economical, rapid, precise and accurate method for simultaneous determination of rabeprazole sodium and itopride hydrochloride in capsule dosage form has been developed. The method is based on ratio spectra derivative spectrophotometry. The amplitudes in the first derivative of the corresponding ratio spectra at 231 nm (minima) and 260 nm were selected to determine rabeprazole sodium and itopride hydrochloride, respectively. The method was validated with respect to linearity, precision and accuracy.

  3. Invariant Imbedded T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios

    NASA Technical Reports Server (NTRS)

    Pelissier, Craig; Kuo, Kwo-Sen; Clune, Thomas; Adams, Ian; Munchak, Stephen

    2017-01-01

    The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-Matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors absent of any significant symmetry, for which volume integral methods such as those based on Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases when aspect ratios become extreme, such as needle-like particles with large height to diameter values, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation for both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss details. Our intent is to release the combined EBCM and IITM+SOV software to the community under an open source license.

  4. Invariant Imbedding T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios

    NASA Astrophysics Data System (ADS)

    Pelissier, C.; Clune, T.; Kuo, K. S.; Munchak, S. J.; Adams, I. S.

    2017-12-01

    The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-Matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors absent of any significant symmetry, for which volume integral methods such as those based on Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases when aspect ratios become extreme, such as needle-like particles with large height to diameter values, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation for both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss details. Our intent is to release the combined EBCM and IITM+SOV software to the community under an open source license.

  5. Analytical and experimental study of axisymmetric truncated plug nozzle flow fields

    NASA Technical Reports Server (NTRS)

    Muller, T. J.; Sule, W. P.; Fanning, A. E.; Giel, T. V.; Galanga, F. L.

    1972-01-01

    Experimental and analytical investigation of the flow field and base pressure of internal-external-expansion truncated plug nozzles are discussed. Experimental results for two axisymmetric, conical plug-cylindrical shroud, truncated plug nozzles are presented for both open and closed wake operations. These results include extensive optical and pressure data covering nozzle flow field and base pressure characteristics, diffuser effects, lip shock strength, Mach disc behaviour, and the recompression and reverse flow regions. Transonic experiments for a special planar transonic section are presented. An extension of the analytical method of Hall and Mueller to include the internal shock wave from the shroud exit is presented for closed wake operation. Results of this analysis include effects on the flow field and base pressure of ambient pressure ratio, nozzle geometry, and the ratio of specific heats. Static thrust is presented as a function of ambient pressure ratio and nozzle geometry. A new transonic solution method is also presented.

  6. Three-dimensional, position-sensitive radiation detection

    DOEpatents

    He, Zhong; Zhang, Feng

    2010-04-06

    Disclosed herein is a method of determining a characteristic of radiation detected by a radiation detector via a multiple-pixel event having a plurality of radiation interactions. The method includes determining a cathode-to-anode signal ratio for a selected interaction of the plurality of radiation interactions based on electron drift time data for the selected interaction, and determining the radiation characteristic for the multiple-pixel event based on both the cathode-to-anode signal ratio and the electron drift time data. In some embodiments, the method further includes determining a correction factor for the radiation characteristic based on an interaction depth of the plurality of radiation interactions, a lateral distance between the selected interaction and a further interaction of the plurality of radiation interactions, and the lateral positioning of the plurality of radiation interactions.

  7. Internal performance characteristics of short convergent-divergent exhaust nozzles designed by the method of characteristics

    NASA Technical Reports Server (NTRS)

    Krull, H George; Beale, William T

    1956-01-01

    Internal performance data on a short exhaust nozzle designed by the method of characteristics were obtained over a range of pressure ratios from 1.5 to 22. The peak thrust coefficient was not affected by a shortened divergent section, but it occurred at lower pressure ratios due to reduction in expansion ratio. This nozzle contour based on characteristics solution gave higher thrust coefficients than a conical convergent-divergent nozzle of equivalent length. Abrupt-inlet sections permitted a reduction in nozzle length without a thrust-coefficient reduction.

  8. Combining matched and unmatched control groups in case-control studies.

    PubMed

    le Cessie, Saskia; Nagelkerke, Nico; Rosendaal, Frits R; van Stralen, Karlijn J; Pomp, Elisabeth R; van Houwelingen, Hans C

    2008-11-15

    Multiple control groups in case-control studies are used to control for different sources of confounding. For example, cases can be contrasted with matched controls to adjust for multiple genetic or unknown lifestyle factors and simultaneously contrasted with an unmatched population-based control group. Inclusion of different control groups for a single exposure analysis yields several estimates of the odds ratio, all using only part of the data. Here the authors introduce an easy way to combine odds ratios from several case-control analyses with the same cases. The approach is based upon methods used for meta-analysis but takes into account the fact that the same cases are used and that the estimated odds ratios are therefore correlated. Two ways of estimating this correlation are discussed: sandwich methodology and the bootstrap. Confidence intervals for the pooled estimates and a test for checking whether the odds ratios in the separate case-control studies differ significantly are derived. The performance of the method is studied by simulation and by applying the methods to a large study on risk factors for thrombosis, the MEGA Study (1999-2004), wherein cases with first venous thrombosis were included with a matched control group of partners and an unmatched population-based control group.
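
    Assuming the covariance matrix of the correlated log odds ratios has already been estimated (e.g. by the sandwich or bootstrap approaches mentioned above), the pooled estimate is the standard generalized-least-squares combination sketched below; this illustrates the idea rather than the authors' exact implementation.

        import numpy as np

        def pool_correlated_log_ors(log_ors, cov):
            # GLS pooling of correlated log odds ratios that share the same cases
            b = np.asarray(log_ors, dtype=float)
            v_inv = np.linalg.inv(np.asarray(cov, dtype=float))
            ones = np.ones_like(b)
            var_pooled = 1.0 / (ones @ v_inv @ ones)
            pooled = var_pooled * (ones @ v_inv @ b)
            return pooled, var_pooled      # pooled log OR and its variance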

  9. Estimation of S/G ratio in woods using 1064 nm FT-Raman spectroscopy

    Treesearch

    Umesh P. Agarwal; Sally A. Ralph; Dharshana Padmakshan; Sarah Liu; Steven D. Karlen; Cliff Foster; John Ralph

    2015-01-01

    Two simple methods based on the 370 cm-1 Raman band intensity were developed for estimation of syringyl-to-guaiacyl (S/G) ratio in woods. The methods, in principle, are representative of the whole cell wall lignin and not just the portion of lignin that gets cleaved to release monomers, for example, during certain S/G chemical analyses. As such,...

  10. A more powerful test based on ratio distribution for retention noninferiority hypothesis.

    PubMed

    Deng, Ling; Chen, Gang

    2013-03-11

    Rothmann et al. (2003) proposed a method for the statistical inference of fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time to event endpoint. One of the major concerns using this method in the design of an NI trial is that with a limited sample size, the power of the study is usually very low. This makes an NI trial not applicable particularly when using time to event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test, that the observed control effect is always positive, that is, the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced if using the proposed ratio test for a fraction retention NI hypothesis.

  11. Robust feature matching via support-line voting and affine-invariant ratios

    NASA Astrophysics Data System (ADS)

    Li, Jiayuan; Hu, Qingwu; Ai, Mingyao; Zhong, Ruofei

    2017-10-01

    Robust image matching is crucial for many applications of remote sensing and photogrammetry, such as image fusion, image registration, and change detection. In this paper, we propose a robust feature matching method based on support-line voting and affine-invariant ratios. We first use popular feature matching algorithms, such as SIFT, to obtain a set of initial matches. A support-line descriptor based on multiple adaptive binning gradient histograms is subsequently applied in the support-line voting stage to filter outliers. In addition, we use affine-invariant ratios computed by a two-line structure to refine the matching results and estimate the local affine transformation. The local affine model is more robust to distortions caused by elevation differences than the global affine transformation, especially for high-resolution remote sensing images and UAV images. Thus, the proposed method is suitable for both rigid and non-rigid image matching problems. Finally, we extract as many high-precision correspondences as possible based on the local affine extension and build a grid-wise affine model for remote sensing image registration. We compare the proposed method with six state-of-the-art algorithms on several data sets and show that our method significantly outperforms the other methods. The proposed method achieves 94.46% average precision on 15 challenging remote sensing image pairs, while the second-best method, RANSAC, only achieves 70.3%. In addition, the number of detected correct matches of the proposed method is approximately four times the number of initial SIFT matches.

  12. Proportional-scanning-phase method to suppress the vibrational noise in nonisotope dual-atom-interferometer-based weak-equivalence-principle-test experiments

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhong, Jiaqi; Song, Hongwei; Zhu, Lei; Wang, Jin; Zhan, Mingsheng

    2014-08-01

    Vibrational noise is one of the most important noise sources limiting the performance of non-isotope atom-interferometer (AI) based weak-equivalence-principle (WEP) test experiments. By analyzing the vibration-induced phases, we find that, although the induced phases are not completely common, their ratio is always a constant at every experimental data point, which is not fully utilized in the traditional elliptic curve-fitting method. From this point, we propose a strategy that can greatly suppress the vibration-induced phase noise by stabilizing the Raman laser frequencies at high precision and controlling the scanning-phase ratio. The noise rejection ratio can be as high as 10^15 with arbitrary dual-species AIs. Our method provides a Lissajous curve, and the shape of the curve indicates the breakdown of the weak-equivalence-principle signal. We then derive an estimator for the differential phase of the Lissajous curve. This strategy could be helpful in extending the candidate atomic species for high-precision AI-based WEP-test experiments.

  13. Four in vivo g-ratio-weighted imaging methods: Comparability and repeatability at the group level.

    PubMed

    Ellerbrock, Isabel; Mohammadi, Siawoosh

    2018-01-01

    A recent method, denoted in vivo g-ratio-weighted imaging, has related the microscopic g-ratio, only accessible by ex vivo histology, to noninvasive MRI markers for the fiber volume fraction (FVF) and myelin volume fraction (MVF). Different MRI markers have been proposed for g-ratio weighted imaging, leaving open the question which combination of imaging markers is optimal. To address this question, the repeatability and comparability of four g-ratio methods based on different combinations of, respectively, two imaging markers for FVF (tract-fiber density, TFD, and neurite orientation dispersion and density imaging, NODDI) and two imaging markers for MVF (magnetization transfer saturation rate, MT, and, from proton density maps, macromolecular tissue volume, MTV) were tested in a scan-rescan experiment in two groups. Moreover, it was tested how the repeatability and comparability were affected by two key processing steps, namely the masking of unreliable voxels (e.g., due to partial volume effects) at the group level and the calibration value used to link MRI markers to MVF (and FVF). Our data showed that repeatability and comparability depend largely on the marker for the FVF (NODDI outperformed TFD), and that they were improved by masking. Overall, the g-ratio method based on NODDI and MT showed the highest repeatability (90%) and lowest variability between groups (3.5%). Finally, our results indicate that the calibration procedure is crucial, for example, calibration to a lower g-ratio value (g = 0.6) than the commonly used one (g = 0.7) can change not only repeatability and comparability but also the reported dependency on the FVF imaging marker. Hum Brain Mapp 39:24-41, 2018. © 2017 Wiley Periodicals, Inc.
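
    For orientation, g-ratio-weighted imaging usually maps the two volume fractions to an aggregate g-ratio as g = sqrt(1 - MVF/FVF); the sketch below assumes this standard aggregate formulation is the one intended here.

        import numpy as np

        def g_ratio(mvf, fvf):
            # aggregate g-ratio from myelin (MVF) and fiber (FVF) volume fraction maps
            avf = fvf - mvf                                   # axon volume fraction
            return np.sqrt(avf / np.clip(fvf, 1e-9, None))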

  14. Ratio of sequential chromatograms for quantitative analysis and peak deconvolution: Application to standard addition method and process monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.

    1990-08-01

    This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been base line corrected. This ratio chromatogram provides a robust means for the identification and the quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and is used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated by using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
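
    A simplified sketch of the variance-weighted ratio-value calculation follows; the first-order variance propagation used for the weights is an assumption of this sketch, not necessarily the weighting adopted by the authors.

        import numpy as np

        def ratio_value(chrom1, chrom2, region, noise_var):
            # point-by-point ratio of two baseline-corrected chromatograms over the
            # indices in `region` (pure elution of one analyte), averaged with
            # weights inversely proportional to the propagated ratio variance
            r = chrom2[region] / chrom1[region]
            var_r = noise_var * (1.0 + r ** 2) / chrom1[region] ** 2
            w = 1.0 / var_r
            return float(np.sum(w * r) / np.sum(w))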

  15. Simultaneous determination of the brand new two-drug combination for the treatment of hepatitis C: Sofosbuvir/ledipasvir using smart spectrophotometric methods manipulating ratio spectra

    NASA Astrophysics Data System (ADS)

    Eissa, Maya S.

    2017-08-01

    In this work, various sensitive and selective spectrophotometric methods were first introduced for the simultaneous determination of sofosbuvir and ledipasvir in their binary mixture without preliminary separation. Ledipasvir was determined simply by zero-order spectrophotometric method at its λmax = 333.0 nm in a linear range of 2.5-30.0 μg/ml without any interference of sofosbuvir even in low or high concentrations and with mean percentage recovery of 100.05 ± 0.632. Sofosbuvir can be quantitatively estimated by one of the following smart spectrophotometric methods based on ratio spectra developed for the resolution of the overlapped spectra of their binary mixture; ratio difference spectrophotometric method (RD) by computing the difference between the amplitudes of sofosbuvir ratio spectra at 228 nm and 270 nm, first derivative (DD1) of ratio spectra by measuring the sum of amplitude of trough and peak at 265 nm and 277 nm, respectively, ratio subtraction (RS) spectrophotometric method in which sofosbuvir can be successfully determined at its λmax = 261.0 nm and mean centering (MC) of ratio spectra by measuring the mean centering values at 270 nm. All of the above mentioned spectrophotometric methods can estimate sofosbuvir in a linear range of 7.5-90.0 μg/ml with mean percentage recoveries of 100.57 ± 0.810, 99.92 ± 0.759, 99.51 ± 0.475 and 100.75 ± 0.672, respectively. These methods were successfully applied to the analysis of their combined dosage form and bulk powder. The adopted methods were also validated as per ICH guidelines and statistically compared to an in-house HPLC method.

  16. Simultaneous determination of the brand new two-drug combination for the treatment of hepatitis C: Sofosbuvir/ledipasvir using smart spectrophotometric methods manipulating ratio spectra.

    PubMed

    Eissa, Maya S

    2017-08-05

    In this work, various sensitive and selective spectrophotometric methods were first introduced for the simultaneous determination of sofosbuvir and ledipasvir in their binary mixture without preliminary separation. Ledipasvir was determined simply by zero-order spectrophotometric method at its λmax = 333.0 nm in a linear range of 2.5-30.0 μg/ml without any interference of sofosbuvir even in low or high concentrations and with mean percentage recovery of 100.05 ± 0.632. Sofosbuvir can be quantitatively estimated by one of the following smart spectrophotometric methods based on ratio spectra developed for the resolution of the overlapped spectra of their binary mixture; ratio difference spectrophotometric method (RD) by computing the difference between the amplitudes of sofosbuvir ratio spectra at 228 nm and 270 nm, first derivative (DD1) of ratio spectra by measuring the sum of amplitude of trough and peak at 265 nm and 277 nm, respectively, ratio subtraction (RS) spectrophotometric method in which sofosbuvir can be successfully determined at its λmax = 261.0 nm and mean centering (MC) of ratio spectra by measuring the mean centering values at 270 nm. All of the above mentioned spectrophotometric methods can estimate sofosbuvir in a linear range of 7.5-90.0 μg/ml with mean percentage recoveries of 100.57 ± 0.810, 99.92 ± 0.759, 99.51 ± 0.475 and 100.75 ± 0.672, respectively. These methods were successfully applied to the analysis of their combined dosage form and bulk powder. The adopted methods were also validated as per ICH guidelines and statistically compared to an in-house HPLC method. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Practical in-situ determination of ortho-para hydrogen ratios via fiber-optic based Raman spectroscopy

    DOE PAGES

    Sutherland, Liese-Marie; Knudson, James N.; Mocko, Michal; ...

    2015-12-17

    An experiment was designed and developed to prototype a fiber-optic-based laser system, which measures the ratio of ortho-hydrogen to para-hydrogen in an operating neutron moderator system at the Los Alamos Neutron Science Center (LANSCE) spallation neutron source. Preliminary measurements resulted in an ortho to para ratio of 3.06:1, which is within acceptable agreement with the previously published ratio. As a result, the successful demonstration of Raman Spectroscopy for this measurement is expected to lead to a practical method that can be applied for similar in-situ measurements at operating neutron spallation sources.

  18. Practical in-situ determination of ortho-para hydrogen ratios via fiber-optic based Raman spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutherland, Liese-Marie; Knudson, James N.; Mocko, Michal

    An experiment was designed and developed to prototype a fiber-optic-based laser system, which measures the ratio of ortho-hydrogen to para-hydrogen in an operating neutron moderator system at the Los Alamos Neutron Science Center (LANSCE) spallation neutron source. Preliminary measurements resulted in an ortho to para ratio of 3.06:1, which is within acceptable agreement with the previously published ratio. As a result, the successful demonstration of Raman Spectroscopy for this measurement is expected to lead to a practical method that can be applied for similar in-situ measurements at operating neutron spallation sources.

  19. Multi-Frequency Signal Detection Based on Frequency Exchange and Re-Scaling Stochastic Resonance and Its Application to Weak Fault Diagnosis

    PubMed Central

    Leng, Yonggang; Fan, Shengbo

    2018-01-01

    Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal as well as its second and higher harmonics frequencies tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of "sampling ratio" which is the ratio of the sampling frequency to the frequency of target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method. PMID:29693577

  20. Advancing Physically-Based Flow Simulations of Alluvial Systems Through Atmospheric Noble Gases and the Novel 37Ar Tracer Method

    NASA Astrophysics Data System (ADS)

    Schilling, Oliver S.; Gerber, Christoph; Partington, Daniel J.; Purtschert, Roland; Brennwald, Matthias S.; Kipfer, Rolf; Hunkeler, Daniel; Brunner, Philip

    2017-12-01

    To provide a sound understanding of the sources, pathways, and residence times of groundwater in alluvial river-aquifer systems, a combined multitracer and modeling experiment was carried out in an important alluvial drinking water wellfield in Switzerland. 222Rn, 3H/3He, atmospheric noble gases, and the novel 37Ar method were used to quantify residence times and mixing ratios of water from different sources. With a half-life of 35.1 days, 37Ar made it possible to close a critical observational time gap between 222Rn and 3H/3He for residence times of weeks to months. Covering the entire range of residence times of groundwater in alluvial systems revealed that, to quantify the fractions of water from different sources in such systems, atmospheric noble gases and helium isotopes are tracers suited for end-member mixing analysis. A comparison between the tracer-based mixing ratios and mixing ratios simulated with a fully-integrated, physically-based flow model showed that models which are only calibrated against hydraulic heads cannot reliably reproduce mixing ratios or residence times of alluvial river-aquifer systems. However, the tracer-based mixing ratios allowed the identification of an appropriate flow model parametrization. Consequently, for alluvial systems, we recommend the combination of multitracer studies that cover all relevant residence times with fully-coupled, physically-based flow modeling to better characterize the complex interactions of river-aquifer systems.

  1. Scene-based nonuniformity correction for airborne point target detection systems.

    PubMed

    Zhou, Dabiao; Wang, Dejiang; Huo, Lijun; Liu, Rang; Jia, Ping

    2017-06-26

    Images acquired by airborne infrared search and track (IRST) systems are often characterized by nonuniform noise. In this paper, a scene-based nonuniformity correction method for infrared focal-plane arrays (FPAs) is proposed based on the constant statistics of the received radiation ratios of adjacent pixels. The gain of each pixel is computed recursively based on the ratios between adjacent pixels, which are estimated through a median operation. Then, an elaborate mathematical model describing the error propagation, derived from random noise and the recursive calculation procedure, is established. The proposed method maintains the characteristics of traditional methods in calibrating the whole electro-optics chain, in compensating for temporal drifts, and in not preserving the radiometric accuracy of the system. Moreover, the proposed method is robust since the frame number is the only variant, and is suitable for real-time applications owing to its low computational complexity and simplicity of implementation. The experimental results, on different scenes from a proof-of-concept point target detection system with a long-wave Sofradir FPA, demonstrate the compelling performance of the proposed method.
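
    A rough sketch of the constant-statistics gain estimation for a single detector row follows; the temporal median of adjacent-pixel ratios and the recursive chaining of gains mirror the description above, while the paper's error-propagation model and offset handling are omitted.

        import numpy as np

        def estimate_gains(frames):
            # frames: array of shape (n_frames, n_pixels) from a scanning sequence
            # ratio of adjacent pixels, estimated as the temporal median of frame-wise ratios
            ratios = np.median(frames[:, 1:] / (frames[:, :-1] + 1e-9), axis=0)
            gains = np.concatenate(([1.0], np.cumprod(ratios)))   # chain from pixel 0
            return gains / gains.mean()                           # normalise mean gain to 1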

  2. Estimation of selected streamflow statistics for a network of low-flow partial-record stations in areas affected by Base Realignment and Closure (BRAC) in Maryland

    USGS Publications Warehouse

    Ries, Kernell G.; Eng, Ken

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 7.0 to 90.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate obtained from the individual methods.
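
    The flow-ratio transfer described above reduces to a very small calculation. The sketch below is a minimal Python illustration, assuming paired measurements at the partial-record station and the index streamgage; the function name and the example numbers are hypothetical, not values from the report.

```python
import numpy as np

def flow_ratio_estimate(partial_record_flows, index_concurrent_flows, index_statistic):
    """Transfer a streamflow statistic from an index streamgage to a
    low-flow partial-record station using the average flow ratio.

    partial_record_flows   : flows measured at the partial-record station
    index_concurrent_flows : concurrent daily flows at the index streamgage
    index_statistic        : the statistic (e.g., 7-day, 10-year low flow) at the index gage
    """
    q_pr = np.asarray(partial_record_flows, dtype=float)
    q_ix = np.asarray(index_concurrent_flows, dtype=float)
    ratios = q_pr / q_ix                      # one ratio per measurement pair
    return ratios.mean() * index_statistic    # transferred statistic

# Hypothetical example: three measurement pairs and a 7Q10 of 2.4 cfs at the index gage
print(flow_ratio_estimate([1.2, 0.9, 1.5], [2.0, 1.6, 2.4], 2.4))
```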

  3. Arbitrary-ratio power splitter based on nonlinear multimode interference coupler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajaldini, Mehdi; Jafri, Mohd Zubir Mat

    2015-04-24

    We propose an ultra-compact multimode interference (MMI) power splitter based on nonlinear effects, simulated using nonlinear modal propagation analysis (NMPA) in combination with the finite difference method (FDM), to achieve a free choice of splitting ratio. Conventional multimode interference power splitters can only obtain a few discrete ratios. The power splitting ratio may be adjusted continuously by varying the input power with a tunable laser. In fact, using an ultra-compact MMI with a simple structure that is launched by a tunable nonlinear input solves the problem of arbitrary splitting ratios in integrated photonic circuits. Silicon on insulator (SOI) is used as the material of choice because of its high refractive-index contrast and centrosymmetric properties. The high-resolution images at the end of the multimode waveguide in the simulated power splitter show a high power balance, whereas access to a free choice of splitting ratio is not possible under the linear regime in the proposed length range except by changing the dimensions for each ratio. The compact dimensions and ideal performance of the device are established according to optimized parameters. The proposed regime can be extended to the design of M×N arbitrary-ratio power splitters for programmable logic devices in all-optical digital signal processing. The results of this study indicate that nonlinear modal propagation analysis solves the miniaturization problem for all-optical devices based on MMI couplers to achieve multiple functions in a compact planar integrated circuit, and also overcomes the limitations of previously proposed methods for nonlinear MMI.

  4. The elastic ratio: introducing curvature into ratio-based image segmentation.

    PubMed

    Schoenemann, Thomas; Masnou, Simon; Cremers, Daniel

    2011-09-01

    We present the first ratio-based image segmentation method that allows imposing curvature regularity of the region boundary. Our approach is a generalization of the ratio framework pioneered by Jermyn and Ishikawa so as to allow penalty functions that take into account the local curvature of the curve. The key idea is to cast the segmentation problem as one of finding cyclic paths of minimal ratio in a graph where each graph node represents a line segment. Among ratios whose discrete counterparts can be globally minimized with our approach, we focus in particular on the elastic ratio [Formula: see text] that depends, given an image I, on the oriented boundary C of the segmented region candidate. Minimizing this ratio amounts to finding a curve, neither small nor too curvy, through which the brightness flux is maximal. We prove the existence of minimizers for this criterion among continuous curves with mild regularity assumptions. We also prove that the discrete minimizers provided by our graph-based algorithm converge, as the resolution increases, to continuous minimizers. In contrast to most existing segmentation methods with computable and meaningful, i.e., nondegenerate, global optima, the proposed approach is fully unsupervised in the sense that it does not require any kind of user input such as seed nodes. Numerical experiments demonstrate that curvature regularity allows substantial improvement of the quality of segmentations. Furthermore, our results allow drawing conclusions about global optima of a parameterization-independent version of the snakes functional: the proposed algorithm allows determining parameter values where the functional has a meaningful solution and simultaneously provides the corresponding global solution.

  5. System and method for determining an ammonia generation rate in a three-way catalyst

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Min; Perry, Kevin L; Kim, Chang H

    A system according to the principles of the present disclosure includes a rate determination module, a storage level determination module, and an air/fuel ratio control module. The rate determination module determines an ammonia generation rate in a three-way catalyst based on a reaction efficiency and a reactant level. The storage level determination module determines an ammonia storage level in a selective catalytic reduction (SCR) catalyst positioned downstream from the three-way catalyst based on the ammonia generation rate. The air/fuel ratio control module controls an air/fuel ratio of an engine based on the ammonia storage level.

  6. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    NASA Astrophysics Data System (ADS)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

    Stock evaluation has always been an interesting problem for investors. In this paper, a comparison of the efficiency of stocks of companies listed on Bursa Malaysia was made through the application of the estimation method of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of listed companies in Bursa Malaysia in terms of financial ratios in order to evaluate stock performance. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs and outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks as well as rank them completely within the construction and materials sector. The analysis employed Alirezaee and Afsharian's model, in which the original Charnes, Cooper and Rhodes (CCR) formulation with the assumption of Constant Return to Scale (CRS) still holds. This method of ranking the relative efficiency of decision making units (DMUs) is value-added by the Balance Index. The data of interest were for the year 2015, and the population of the research includes companies accepted in the stock market in the construction and materials sector (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.
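
    As a rough illustration of how a relative efficiency score can be computed from such input and output ratios, the following sketch sets up the input-oriented CCR (constant returns to scale) multiplier linear program with SciPy. It shows only the generic CCR formulation, not the Alirezaee and Afsharian extension with the Balance Index; the data and the assignment of ratios to inputs and outputs are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0, eps=1e-6):
    """Input-oriented CCR (constant returns to scale) efficiency of DMU j0.

    X : (n_dmu, n_inputs) array of input ratios (e.g., debt to equity)
    Y : (n_dmu, n_outputs) array of output ratios (e.g., return on equity)
    Returns the efficiency score in (0, 1].
    """
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: output weights u (length s) followed by input weights v (length m).
    c = np.concatenate([-Y[j0], np.zeros(m)])           # maximize u.y0  ->  minimize -u.y0
    A_ub = np.hstack([Y, -X])                           # u.y_j - v.x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None]   # v.x0 = 1 (normalization)
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(eps, None)] * (s + m), method="highs")
    return -res.fun

# Hypothetical data: 4 companies, 2 input ratios, 2 output ratios
X = np.array([[0.8, 1.2], [1.0, 0.9], [0.6, 1.5], [1.1, 1.0]])
Y = np.array([[0.15, 0.10], [0.12, 0.14], [0.20, 0.08], [0.09, 0.11]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(len(X))])
```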

  7. Determination of cellulose I crystallinity by FT-Raman spectroscopy

    Treesearch

    Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph

    2009-01-01

    Two new methods based on FT-Raman spectroscopy, one simple, based on band intensity ratio, and the other, using a partial least-squares (PLS) regression model, are proposed to determine cellulose I crystallinity. In the simple method, crystallinity in semicrystalline cellulose I samples was determined based on univariate regression that was first developed using the...

  8. Confidence intervals for predicting lumber strength properties based on ratios of percentiles from two Weibull populations.

    Treesearch

    Richard A. Johnson; James W. Evans; David W. Green

    2003-01-01

    Ratios of strength properties of lumber are commonly used to calculate property values for standards. Although originally proposed in terms of means, ratios are being applied without regard to position in the distribution. It is now known that lumber strength properties are generally not normally distributed. Therefore, nonparametric methods are often used to derive...

  9. Spectrophotometric resolution of the severely overlapped spectra of clotrimazole with dexamethasone in cream dosage form by mathematical manipulation steps.

    PubMed

    Lotfy, Hayam Mahmoud; Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Shehata, Mostafa Abd El-Atty

    2018-09-05

    Several spectrophotometric techniques were recently conducted for the determination of binary mixtures of clotrimazole (CLT) and dexamethasone acetate (DA) without any separation procedure. The methods were based on generating ratio spectra of the mixture and then applying simple mathematical manipulations. The zero-order absorption spectra of both drugs could be obtained by the constant center (CC) method. The concentrations of both CLT and DA could be obtained by the constant value via amplitude difference (CV-AD) method, which depends on ratio spectra; by the ratio difference (RD) method, in which the difference between the amplitudes at two wavelengths (ΔP) on the ratio spectra eliminates the contribution of the interfering substance and yields the concentration of the other drug; and by the first derivative of the ratio spectra (DD1) method, in which the derivative of the ratio spectra determines the drug of interest without any interference from the other one. The concentration of DA could also be measured, after graphical manipulation, using the novel advanced concentration value (ACV) method. Calibration graphs were linear in the range of 75-550 μg/mL for CLT and 2-20 μg/mL for DA. The proposed methods were successfully applied for the simultaneous determination of the two drugs in synthetic mixtures and in their combined dosage form, Mycuten-D cream. The results obtained were compared statistically to each other and to the official methods. Copyright © 2018 Elsevier B.V. All rights reserved.
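
    To make the ratio-difference step described above concrete, here is a minimal NumPy sketch: the mixture spectrum is divided by a standard spectrum of the interfering drug, and the amplitude difference between two wavelengths of the resulting ratio spectrum is regressed against concentration. The divisor, wavelength indices, and calibration numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ratio_difference(mixture_spectrum, divisor_spectrum, idx1, idx2):
    """Ratio-difference (RD) amplitude for one mixture spectrum.

    Dividing the mixture spectrum by a standard spectrum of the interfering
    drug turns the interferent's contribution into a constant, so the
    difference between two points of the ratio spectrum depends only on the
    analyte of interest.
    """
    ratio = np.asarray(mixture_spectrum, float) / np.asarray(divisor_spectrum, float)
    return ratio[idx1] - ratio[idx2]

# Hypothetical calibration: amplitude differences of pure-analyte standards
# (divided by the same divisor) regressed against their known concentrations.
conc = np.array([2.0, 5.0, 10.0, 15.0, 20.0])          # micrograms per mL (illustrative)
dP = np.array([0.041, 0.102, 0.205, 0.309, 0.410])     # illustrative amplitude differences
slope, intercept = np.polyfit(dP, conc, 1)
unknown_dP = 0.150
print(slope * unknown_dP + intercept)                   # predicted concentration
```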

  10. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    NASA Astrophysics Data System (ADS)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for the extraction of vital signs of human beings from video recordings. IPPG technology, with advantages such as non-contact measurement, low cost and easy operation, has become a research hot spot in the field of biomedicine. However, the noise disturbance caused by non-microarterial areas cannot be removed because of the uneven distribution of micro-arteries and the different signal strengths of each region, which results in a low signal-to-noise ratio of IPPG signals and low accuracy of heart rate. In this paper, we propose a method of improving the signal-to-noise ratio of camera-based IPPG signals from each sub-region of the face using a weighted average. First, we obtain the regions of interest (ROI) of a subject's face from the camera. Second, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60×60 pixel blocks. Third, the weight of the PPG signal of each sub-region is calculated based on the signal-to-noise ratio of that sub-region. Finally, we combine the IPPG signals from all the tracked ROIs using the weighted average. Compared with existing approaches, the results show that the proposed method yields modest but significant improvements in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
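
    A minimal NumPy sketch of the SNR-weighted combination step described above; the normalization of the weights, the array shapes, and the example numbers are assumptions for illustration rather than the authors' exact implementation.

```python
import numpy as np

def snr_weighted_ippg(region_signals, region_snrs):
    """Combine the IPPG traces of facial sub-regions using SNR-based weights.

    region_signals : (n_regions, n_samples) array, one PPG trace per 60x60 block
    region_snrs    : (n_regions,) estimated signal-to-noise ratio of each block
    """
    signals = np.asarray(region_signals, float)
    snrs = np.asarray(region_snrs, float)
    weights = snrs / snrs.sum()      # regions with cleaner signals count more
    return weights @ signals         # weighted-average trace

# Hypothetical example: three sub-regions, 5 samples each
signals = np.array([[0.1, 0.3, 0.2, 0.4, 0.1],
                    [0.2, 0.5, 0.3, 0.6, 0.2],
                    [0.0, 0.1, 0.1, 0.2, 0.0]])
print(snr_weighted_ippg(signals, [3.0, 5.0, 1.0]))
```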

  11. A Hybrid Seismic Inversion Method for VP/VS Ratio and Its Application to Gas Identification

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Zhang, Hongbing; Han, Feilong; Xiao, Wei; Shang, Zuoping

    2018-03-01

    The ratio of compressional wave velocity to shear wave velocity (VP/VS ratio) has established itself as one of the most important parameters in identifying gas reservoirs. However, considering that the seismic inversion process is highly non-linear and the geological conditions encountered may be complex, a direct estimation of the VP/VS ratio from pre-stack seismic data remains a challenging task. In this paper, we propose a hybrid seismic inversion method to estimate the VP/VS ratio directly. In this method, post- and pre-stack inversions are combined, with the pre-stack inversion for the VP/VS ratio driven by the post-stack inversion results (i.e., VP and density). In particular, the VP/VS ratio is considered as a model parameter and is directly inverted from the pre-stack inversion based on the exact Zoeppritz equation. Moreover, an anisotropic Markov random field is employed in order to regularise the inversion process as well as to incorporate information about geological structures (boundaries). Aided by the proposed hybrid inversion strategy, the directional weighting coefficients incorporated in the anisotropic Markov random field neighbourhoods are quantitatively calculated by the anisotropic diffusion method. A synthetic test demonstrates the effectiveness of the proposed inversion method. In particular, given the low quality of the pre-stack data and the high heterogeneity of the target layers in the field data, the proposed inversion method reveals a detailed model of the VP/VS ratio that can successfully identify the gas-bearing zones.

  12. Automated segmentation of ultrasonic breast lesions using statistical texture classification and active contour based on probability distance.

    PubMed

    Liu, Bo; Cheng, H D; Huang, Jianhua; Tian, Jiawei; Liu, Jiafeng; Tang, Xianglong

    2009-08-01

    Because of its complicated structure, low signal/noise ratio, low contrast and blurry boundaries, fully automated segmentation of a breast ultrasound (BUS) image is a difficult task. In this paper, a novel segmentation method for BUS images without human intervention is proposed. Unlike most published approaches, the proposed method handles the segmentation problem by using a two-step strategy: ROI generation and ROI segmentation. First, a well-trained texture classifier categorizes the tissues into different classes, and the background knowledge rules are used for selecting the regions of interest (ROIs) from them. Second, a novel probability distance-based active contour model is applied for segmenting the ROIs and finding the accurate positions of the breast tumors. The active contour model combines both global statistical information and local edge information, using a level set approach. The proposed segmentation method was performed on 103 BUS images (48 benign and 55 malignant). To validate the performance, the results were compared with the corresponding tumor regions marked by an experienced radiologist. Three error metrics, true-positive ratio (TP), false-negative ratio (FN) and false-positive ratio (FP) were used for measuring the performance of the proposed method. The final results (TP = 91.31%, FN = 8.69% and FP = 7.26%) demonstrate that the proposed method can segment BUS images efficiently, quickly and automatically.

  13. A high-performance liquid chromatography-electronic circular dichroism online method for assessing the absolute enantiomeric excess and conversion ratio of asymmetric reactions

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Wang, Mingchao; Li, Li; Yin, Dali

    2017-03-01

    Asymmetric reactions often need to be evaluated during the synthesis of chiral compounds. However, traditional evaluation methods require the isolation of the individual enantiomers, which is tedious and time-consuming. Thus, it is desirable to develop simple, practical online detection methods. We developed a method based on high-performance liquid chromatography-electronic circular dichroism (HPLC-ECD) that simultaneously analyzes the material conversion ratio and the absolute optical purity of each enantiomer. In particular, only a reverse-phase C18 column, instead of a chiral column, is required in our method because the ECD measurement provides a g-factor that describes the ratio of the enantiomers in the mixture. We used our method to analyze the asymmetric hydrosilylation of β-enamino esters, and we discuss the advantages, feasibility, and effectiveness of this new methodology.

  14. Time-frequency domain SNR estimation and its application in seismic data processing

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen

    2014-08-01

    Based on an approach estimating frequency domain signal-to-noise ratio (FSNR), we propose a method to evaluate time-frequency domain signal-to-noise ratio (TFSNR). This method adopts short-time Fourier transform (STFT) to estimate instantaneous power spectrum of signal and noise, and thus uses their ratio to compute TFSNR. Unlike FSNR describing the variation of SNR with frequency only, TFSNR depicts the variation of SNR with time and frequency, and thus better handles non-stationary seismic data. By considering TFSNR, we develop methods to improve the effects of inverse Q filtering and high frequency noise attenuation in seismic data processing. Inverse Q filtering considering TFSNR can better solve the problem of amplitude amplification of noise. The high frequency noise attenuation method considering TFSNR, different from other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples of synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
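
    A rough Python sketch of a time-frequency SNR estimate built from the short-time Fourier transform, assuming the noise power spectrum is estimated from a pre-arrival noise window and treated as stationary; the paper's estimator of the signal and noise spectra may differ in detail.

```python
import numpy as np
from scipy.signal import stft

def tfsnr(trace, noise_window, fs=500, nperseg=128):
    """Rough time-frequency SNR estimate for a seismic trace.

    The noise power spectrum is taken from a window before the first arrival
    and assumed stationary; the ratio of instantaneous signal power to that
    noise power gives SNR as a function of time and frequency.
    """
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    _, _, Zn = stft(noise_window, fs=fs, nperseg=nperseg)
    noise_power = np.mean(np.abs(Zn) ** 2, axis=1, keepdims=True)   # average over time
    signal_power = np.abs(Z) ** 2
    return f, t, signal_power / (noise_power + 1e-12)               # SNR(f, t)

# Hypothetical synthetic: a noise-only pre-arrival window plus a noisy sinusoidal "signal"
rng = np.random.default_rng(0)
fs = 500
noise = 0.1 * rng.standard_normal(fs)                               # 1 s of noise
sig = np.sin(2 * np.pi * 30 * np.arange(2 * fs) / fs) + 0.1 * rng.standard_normal(2 * fs)
f, t, snr = tfsnr(sig, noise, fs=fs)
print(snr.shape)
```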

  15. Development of algorithms for detecting citrus canker based on hyperspectral reflectance imaging.

    PubMed

    Li, Jiangbo; Rao, Xiuqin; Ying, Yibin

    2012-01-15

    Automated discrimination of fruit with canker from other fruit with normal surfaces and different types of peel defects has become a helpful task for enhancing the competitiveness and profitability of the citrus industry. Over the last several years, hyperspectral imaging technology has received increasing attention in the agricultural product inspection field. This paper studied the feasibility of classifying citrus canker against other peel conditions, including normal surfaces and nine peel defects, by hyperspectral imaging. A combination algorithm based on principal component analysis and the two-band ratio (Q(687/630)) method was proposed. Since fewer wavelengths are desired in order to develop a rapid multispectral imaging system, the canker classification performance of the two-band ratio (Q(687/630)) method alone was also evaluated. The proposed combination approach and the two-band ratio method alone resulted in overall classification accuracies of 99.5% and 84.5% for training set samples, and 98.2% and 82.9% for test set samples, respectively. The proposed combination approach was more efficient for classifying canker against various conditions under reflectance hyperspectral imagery. However, the two-band ratio (Q(687/630)) method alone also demonstrated effectiveness in discriminating citrus canker from normal fruit and other peel diseases, except for copper burn and anthracnose. Copyright © 2011 Society of Chemical Industry.
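
    A minimal sketch of a two-band ratio classifier of the kind described above; the band indices and the decision threshold are illustrative assumptions, since in the study the threshold would be derived from training samples.

```python
import numpy as np

def band_ratio_classifier(cube, band_687_idx, band_630_idx, threshold=1.1):
    """Classify pixels of a hyperspectral cube with the two-band ratio Q(687/630).

    cube : (rows, cols, bands) reflectance cube
    The band indices and the threshold are illustrative; a real system would
    pick the threshold from labeled training pixels.
    """
    q = cube[:, :, band_687_idx] / (cube[:, :, band_630_idx] + 1e-9)
    return q > threshold          # boolean mask of suspected canker pixels

# Hypothetical 4x4 cube with 3 bands
rng = np.random.default_rng(1)
cube = rng.uniform(0.2, 0.8, size=(4, 4, 3))
print(band_ratio_classifier(cube, band_687_idx=2, band_630_idx=1))
```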

  16. Using Approximate Bayesian Computation to infer sex ratios from acoustic data.

    PubMed

    Lehnen, Lisa; Schorcht, Wigbert; Karst, Inken; Biedermann, Martin; Kerth, Gerald; Puechmaille, Sebastien J

    2018-01-01

    Population sex ratios are of high ecological relevance, but are challenging to determine in species lacking conspicuous external cues indicating their sex. Acoustic sexing is an option if vocalizations differ between sexes, but is precluded by overlapping distributions of the values of male and female vocalizations in many species. A method allowing the inference of sex ratios despite such an overlap will therefore greatly increase the information extractable from acoustic data. To meet this demand, we developed a novel approach using Approximate Bayesian Computation (ABC) to infer the sex ratio of populations from acoustic data. Additionally, parameters characterizing the male and female distributions of acoustic values (mean and standard deviation) are inferred. This information is then used to probabilistically assign a sex to a single acoustic signal. We furthermore develop a simpler means of sex ratio estimation based on the exclusion of calls from the overlap zone. Applying our methods to simulated data demonstrates that the sex ratio and the acoustic parameter characteristics of males and females are reliably inferred by the ABC approach. Applying both the ABC and the exclusion method to empirical datasets (echolocation calls recorded in colonies of lesser horseshoe bats, Rhinolophus hipposideros) provides sex ratios similar to those obtained by molecular sexing. Our methods aim to facilitate evidence-based conservation and to benefit scientists investigating ecological or conservation questions related to sex- or group-specific behaviour across a wide range of organisms emitting acoustic signals. The developed methodology is non-invasive, low-cost and time-efficient, thus allowing the study of many sites and individuals. We provide an R-script for the easy application of the method and discuss potential future extensions and fields of application. The script can be easily adapted to account for numerous biological systems by adjusting the type and number of groups to be distinguished (e.g. age, social rank, cryptic species) and the acoustic parameters investigated.
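
    A minimal rejection-ABC sketch of the idea described above, written in Python and independent of the authors' R script. It is simplified so that only the sex ratio is inferred while the male and female call-parameter distributions are assumed known; the priors, summary statistics, and numbers are illustrative, and the published method also infers the group means and standard deviations.

```python
import numpy as np

def abc_sex_ratio(observed, n_sims=5000, accept_frac=0.02, seed=0):
    """Rejection-ABC sketch for the proportion of males in an acoustic dataset.

    For brevity the male and female call-parameter distributions are assumed
    known (overlapping normals); only the sex ratio carries a prior here.
    """
    rng = np.random.default_rng(seed)
    obs_stats = np.array([observed.mean(), observed.std()])
    n = observed.size
    draws, dists = [], []
    for _ in range(n_sims):
        p_male = rng.uniform(0.0, 1.0)                       # prior on the sex ratio
        is_male = rng.random(n) < p_male
        sim = np.where(is_male, rng.normal(110.0, 3.0, n),   # "male" call frequency (kHz)
                                rng.normal(113.0, 3.0, n))   # "female" call frequency (kHz)
        sim_stats = np.array([sim.mean(), sim.std()])
        draws.append(p_male)
        dists.append(np.linalg.norm(sim_stats - obs_stats))
    draws, dists = np.array(draws), np.array(dists)
    keep = dists <= np.quantile(dists, accept_frac)          # keep the closest simulations
    return draws[keep]

# Hypothetical "observed" colony with a true male proportion of 0.4
rng = np.random.default_rng(42)
true_male = rng.random(300) < 0.4
obs = np.where(true_male, rng.normal(110.0, 3.0, 300), rng.normal(113.0, 3.0, 300))
posterior = abc_sex_ratio(obs)
print(round(posterior.mean(), 2))
```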

  17. Real-Time Counting People in Crowded Areas by Using Local Empirical Templates and Density Ratios

    NASA Astrophysics Data System (ADS)

    Hung, Dao-Huu; Hsu, Gee-Sern; Chung, Sheng-Luen; Saito, Hideo

    In this paper, a fast and automated method of counting pedestrians in crowded areas is proposed, with three contributions. First, we propose Local Empirical Templates (LET), which are able to outline the foregrounds typically made by single pedestrians in a scene. LET are extracted by clustering foregrounds of single pedestrians with similar silhouette features; this process is done automatically for unknown scenes. Second, the density ratio is produced by comparing the size of a group foreground, made by a group of pedestrians, to that of the appropriate LET captured in the same image patch. Because of the local scale normalization between sizes, the density ratio appears to have a bound closely related to the number of pedestrians who induce the group foreground. Finally, to extract the bounds of density ratios for groups with different numbers of pedestrians, we propose a simulation based on 3D human models in which camera viewpoints and pedestrians' proximity are easily manipulated. We collect hundreds of typical occluded-people patterns with distinct degrees of human proximity and under a variety of camera viewpoints. Distributions of density ratios with respect to the number of pedestrians are built from the computed density ratios of these patterns in order to extract density ratio bounds. The simulation is performed in an offline learning phase to extract the bounds from the distributions, which are then used to count pedestrians in online settings. We find that the bounds seem to be invariant to camera viewpoint and human proximity. The performance of the proposed method is evaluated on our collected videos and the PETS 2009 datasets. For our collected videos, with a resolution of 320×240, the method runs in real time with good accuracy at a frame rate of around 30 fps, and consumes a small amount of computing resources. For the PETS 2009 datasets, the proposed method achieves results competitive with other methods tested on the same datasets [1], [2].

  18. Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method

    PubMed Central

    Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2012-01-01

    Objective Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Materials and Methods Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. Results A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. Conclusion The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials. PMID:22778560

  19. First derivative ratio spectrophotometric, HPTLC-densitometric, and HPLC determination of nicergoline in presence of its hydrolysis-induced degradation product.

    PubMed

    Ahmad, Abdel Kader S; Kawy, M Abdel; Nebsen, M

    2002-10-15

    Three methods are presented for the determination of Nicergoline in the presence of its hydrolysis-induced degradation product. The first method was based on measurement of the first derivative of the ratio spectra amplitude of Nicergoline at 291 nm. The second method was based on separation of Nicergoline from its degradation product followed by densitometric measurement of the spots at 287 nm; the separation was carried out on HPTLC silica gel F(254) plates, using methanol-ethyl acetate-glacial acetic acid (5:7:3, v/v/v) as mobile phase. The third method was based on high performance liquid chromatographic (HPLC) separation and determination of Nicergoline from its degradation product on a reversed-phase Nucleosil C(18) column using a mobile phase of methanol-water-glacial acetic acid (80:20:0.1, v/v/v) with UV detection at 280 nm. Chlorpromazine hydrochloride was used as internal standard. Laboratory-prepared mixtures containing different percentages of the degradation product were analysed by the proposed methods and satisfactory results were obtained. These methods have been successfully applied to the analysis of Nicergoline in Sermion tablets. The validities of these methods were ascertained by applying the standard addition technique; the mean percentage recovery ± R.S.D.% was found to be 99.47 ± 0.752, 100.01 ± 0.940, and 99.75 ± 0.740 for the first derivative of ratio spectra method, the HPTLC method and the HPLC method, respectively. The proposed methods were statistically compared with the manufacturer's HPLC method of analysis of Nicergoline and no significant difference was found with respect to either precision or accuracy. They have the advantage of being stability indicating and can therefore be used for routine analysis of the drug in quality control laboratories. Copyright 2002 Elsevier Science B.V.

  20. New method of extracting information of arterial oxygen saturation based on ∑|Δ|

    NASA Astrophysics Data System (ADS)

    Dai, Wenting; Lin, Ling; Li, Gang

    2017-04-01

    Noninvasive detection of oxygen saturation with near-infrared spectroscopy has been widely used in clinics. In order to further enhance its detection precision and reliability, this paper proposes a time-domain absolute difference summation (∑|Δ|) method based on the dynamic spectrum. In this method, the ratios of the absolute differences between two differential sampling points, taken at the same moment on the logarithmic photoplethysmography signals of red and infrared light, are obtained in turn, yielding a ratio sequence that is screened with a statistical method. Finally, the summation of the screened ratio sequence is used as the oxygen saturation coefficient Q. We collected 120 reference samples of SpO2 and compared the results of two methods, ∑|Δ| and peak-peak. The average root-mean-square errors of the two methods were 3.02% and 6.80%, respectively, in 20 randomly selected cases. In addition, the average variance of Q for the 10 samples obtained by the new method was reduced to 22.77% of that obtained by the peak-peak method. Compared with the commercial product, the new method gives more accurate results. Theoretical and experimental analysis indicates that the application of the ∑|Δ| method could enhance the precision and reliability of oxygen saturation detection in real time.
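
    The sketch below follows a literal reading of the description above: ratios of absolute differences of the logarithmic red and infrared signals are formed, screened, and summed into the coefficient Q. The screening rule (keep ratios within two standard deviations of their mean) and the synthetic signals are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def sigma_abs_delta_Q(red_ppg, ir_ppg, n_sd=2.0):
    """Oxygen-saturation coefficient Q from summed, screened ratios of
    absolute differences of the logarithmic red and infrared PPG signals.
    The screening rule here is illustrative only.
    """
    d_red = np.abs(np.diff(np.log(np.asarray(red_ppg, float))))
    d_ir = np.abs(np.diff(np.log(np.asarray(ir_ppg, float))))
    ratios = d_red / (d_ir + 1e-12)
    mu, sd = ratios.mean(), ratios.std()
    screened = ratios[np.abs(ratios - mu) <= n_sd * sd]   # statistical screening
    return screened.sum()                                  # coefficient Q

# Hypothetical red/infrared photoplethysmography traces
t = np.linspace(0, 5, 500)
red = 2.0 + 0.05 * np.sin(2 * np.pi * 1.2 * t)
ir = 2.5 + 0.08 * np.sin(2 * np.pi * 1.2 * t)
print(round(sigma_abs_delta_Q(red, ir), 3))
```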

  2. Study on the oxidative stability of poly α-olefin aviation lubricating base oil using PDSC method

    NASA Astrophysics Data System (ADS)

    Wu, N.; Fei, Y. W.; Yang, H. W.; Wang, Y. M.; Zong, Z. M.

    2016-08-01

    The oxidation stability of domestic and imported poly α-olefin (PAO) aviation lubricating base oils was studied by pressurized differential scanning calorimetry, using the initial oxidation temperature as the test criterion. The effects of anti-oxidants were investigated, and the best ratio of antioxidants was determined.

  3. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion

    PubMed Central

    2017-01-01

    Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method. PMID:28558002
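
    A generic sketch of likelihood ratio-based score fusion, assuming each feature's score is modeled with a Gaussian per class; this is not the paper's exact formulation, and the training data and feature counts are hypothetical.

```python
import numpy as np

def llr_fusion(scores, class1_train, class0_train):
    """Fuse per-feature scores with a sum of log-likelihood ratios.

    Each feature's score is modeled with one Gaussian per class (parameters
    estimated from training scores); the fused statistic is the sum of the
    per-feature log-likelihood ratios. Positive values favor class 1.
    """
    def log_gauss(x, mu, sd):
        return -0.5 * np.log(2 * np.pi * sd ** 2) - (x - mu) ** 2 / (2 * sd ** 2)

    mu1, sd1 = class1_train.mean(axis=0), class1_train.std(axis=0) + 1e-9
    mu0, sd0 = class0_train.mean(axis=0), class0_train.std(axis=0) + 1e-9
    llr = log_gauss(scores, mu1, sd1) - log_gauss(scores, mu0, sd0)
    return llr.sum(axis=-1)

# Hypothetical training scores (100 trials x 4 features per class) and one test trial
rng = np.random.default_rng(3)
c1 = rng.normal(1.0, 1.0, size=(100, 4))
c0 = rng.normal(-1.0, 1.0, size=(100, 4))
test = rng.normal(1.0, 1.0, size=4)
print(llr_fusion(test, c1, c0) > 0)   # True if the fused evidence favors class 1
```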

  4. The telecommunications and data acquisition report

    NASA Technical Reports Server (NTRS)

    Renzetti, N. A. (Editor)

    1981-01-01

    Developments in Earth-based radio technology as applied to the Deep Space Network are reported. Topics include radio astronomy and spacecraft tracking networks. Telemetric methods and instrumentation are described. Station control and system technology for space communication is discussed. Special emphasis is placed on network data processing.

  5. Epidemiology of Multiple Sclerosis in Austria.

    PubMed

    Salhofer-Polanyi, Sabine; Cetin, Hakan; Leutmezer, Fritz; Baumgartner, Anna; Blechinger, Stephan; Dal-Bianco, Assunta; Altmann, Patrick; Bajer-Kornek, Barbara; Rommer, Paulus; Guger, Michael; Leitner-Bohn, Doris; Reichardt, Berthold; Alasti, Farideh; Temsch, Wilhelm; Stamm, Tanja

    2017-01-01

    To assess the incidence rate and prevalence ratio of multiple sclerosis (MS) in Austria. Hospital discharge diagnoses and MS-specific immunomodulatory treatment prescriptions from public health insurances, covering 98% of Austrian citizens with health insurance, were used to extrapolate incidence and prevalence numbers based on the capture-recapture method. A total of 1,392,629 medication prescriptions and 40,956 hospitalizations were extracted from the 2 data sources, leading to a total of 13,205 patients. The incidence rate and prevalence ratio of MS in Austria based on the capture-recapture method were 19.5/100,000 person-years (95% CI 14.3-24.7) and 158.9/100,000 (95% CI 141.2-175.9), respectively. The female to male ratio was 1.6 for incidence and 2.2 for prevalence. Incidence rates and prevalence ratios of MS in our study are within the upper range of comparable studies across many European countries as well as the United States. © 2017 S. Karger AG, Basel.
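
    For intuition, the two-source capture-recapture idea can be illustrated with the textbook Chapman estimator; the study's extrapolation may involve further adjustments, and the counts below are hypothetical.

```python
def chapman_estimate(n_source1, n_source2, n_both):
    """Two-source capture-recapture (Chapman) estimate of the total number of cases.

    n_source1 : patients found in the first source (e.g., hospital diagnoses)
    n_source2 : patients found in the second source (e.g., prescriptions)
    n_both    : patients appearing in both sources
    This is the textbook two-source estimator only, not the study's full procedure.
    """
    return (n_source1 + 1) * (n_source2 + 1) / (n_both + 1) - 1

# Hypothetical counts from two overlapping registers
print(round(chapman_estimate(9000, 11000, 7000)))
```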

  6. Mean individual muscle activities and ratios of total muscle activities in a selective muscle strengthening experiment: the effects of lower limb muscle activity based on mediolateral slope angles during a one-leg stance.

    PubMed

    Lee, Sang-Yeol

    2016-09-01

    [Purpose] The purpose of this study was to provide basic data for research on selective muscle strengthening by identifying mean muscle activities and calculating muscle ratios for use in developing strengthening methods. [Subjects and Methods] Twenty-one healthy volunteers were included in this study. Muscle activity was measured during a one-leg stance under 6 conditions of slope angle: 0°, 5°, 10°, 15°, 20°, and 25°. The data used in the analysis were root mean square and % total muscle activity values. [Results] There were significant differences in the root mean square of the gluteus medius, the hamstring, and the medial gastrocnemius muscles. There were significant differences in % total muscle activity of the medial gastrocnemius. [Conclusion] Future studies aimed at developing selective muscle strengthening methods are likely to yield more effective results by using muscle activity ratios based on electromyography data.

  7. Mapping Quantitative Traits in Unselected Families: Algorithms and Examples

    PubMed Central

    Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David

    2009-01-01

    Linkage analysis has been widely used to identify from family data genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016

  8. Change-in-ratio estimators for populations with more than two subclasses

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1991-01-01

    Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
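
    For context, the classic two-sample, two-subclass change-in-ratio estimator, which assumes equal sampling probabilities for the subclasses, can be written in a few lines; the models introduced in the paper relax exactly this assumption and handle more subclasses, and the harvest numbers below are hypothetical.

```python
def cir_abundance(p1, p2, removed_x, removed_total):
    """Classic two-sample, two-subclass change-in-ratio abundance estimate.

    p1, p2        : observed proportions of subclass x before and after removal
    removed_x     : number of subclass-x individuals removed
    removed_total : total number of individuals removed
    Assumes equal sampling probabilities for the subclasses, the assumption
    the paper's new models are designed to weaken.
    """
    return (removed_x - p2 * removed_total) / (p1 - p2)

# Hypothetical harvest: the subclass-x proportion drops from 0.40 to 0.25
# after removing 300 animals, 200 of which belonged to subclass x.
print(round(cir_abundance(0.40, 0.25, 200, 300)))   # initial population estimate
```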

  9. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    PubMed Central

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978

  10. Epidemiologic research using probabilistic outcome definitions.

    PubMed

    Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S

    2015-01-01

    Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the traditional method, whether using one predictive variable or both predictive variables to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect is equal to 1.0 and when it is not. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. It has a major advantage over the conventional method in that it provides unbiased estimates of risk ratios and is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.

  11. General method for eliminating wave reflection in 2D photonic crystal waveguides by introducing extra scatterers based on interference cancellation of waves

    NASA Astrophysics Data System (ADS)

    Huang, Hao; Ouyang, Zhengbiao

    2018-01-01

    We propose a general method for eliminating the reflection of waves in two-dimensional photonic crystal waveguides (2D-PCWs), a kind of 2D material, by introducing extra scatterers inside the 2D-PCWs. The intrinsic reflection in 2D-PCWs is compensated by the backward-scattered waves from these scatterers, so that the overall reflection is greatly reduced and the insertion loss is improved accordingly. We first present the basic theory of the compensation method. Then, as a demonstration, we give four examples of extremely-low-reflection, high-transmission 90° bent 2D-PCWs created according to the proposed method. In the four examples, it is demonstrated by the plane-wave expansion method and the finite-difference time-domain method that the 90° bent 2D-PCWs can have a transmission ratio greater than 90% over a wide range of operating frequencies, and the highest transmission ratio can be greater than 99.95% with a return loss higher than 43 dB, better than that in other typical 90° bent 2D-PCWs. With our method, the bent 2D-PCWs can be optimized to obtain a high transmission ratio at different operating wavelengths. As a further application of this method, a waveguide-based optical bridge for light crossing is presented, showing an optimum return loss of 46.85 dB, a transmission ratio of 99.95%, and isolation rates greater than 41.77 dB. The proposed method also provides a useful way of improving conventional waveguides made of cables, fibers, or metal walls in the optical, infrared, terahertz, and microwave bands.

  12. An alternative approach to characterize nonlinear site effects

    USGS Publications Warehouse

    Zhang, R.R.; Hartzell, S.; Liang, J.; Hu, Y.

    2005-01-01

    This paper examines the rationale of a method of nonstationary processing and analysis, referred to as the Hilbert-Huang transform (HHT), for its application to a recording-based approach to quantifying the influence of soil nonlinearity on site response. In particular, this paper first summarizes symptoms of soil nonlinearity shown in earthquake recordings, reviews the Fourier-based approach to characterizing nonlinearity, and offers justifications for the HHT in addressing nonlinearity issues. This study then uses the HHT method to analyze synthetic data and recordings from the 1964 Niigata and 2001 Nisqually earthquakes. In doing so, the HHT-based site response is defined as the ratio of marginal Hilbert amplitude spectra, as an alternative to the Fourier-based response, which is the ratio of Fourier amplitude spectra. Using the Fourier-based approach to site response as a reference, this study shows that the alternative HHT-based approach is effective in characterizing soil nonlinearity and nonlinear site response.

  13. Capillary-valve-based fabrication of ion-selective membrane junction for electrokinetic sample preconcentration in PDMS chip.

    PubMed

    Liu, Vincent; Song, Yong-Ak; Han, Jongyoon

    2010-06-07

    In this paper, we report a novel method for fabricating ion-selective membranes in poly(dimethylsiloxane) (PDMS)/glass-based microfluidic preconcentrators. Based on the concept of capillary valves, this fabrication method involves filling a lithographically patterned junction between two microchannels with an ion-selective material such as Nafion resin; subsequent curing results in a high aspect-ratio membrane for use in electrokinetic sample preconcentration. To demonstrate the concentration performance of this high-aspect-ratio, ion-selective membrane, we integrated the preconcentrator with a surface-based immunoassay for R-Phycoerythrin (RPE). Using a 1x PBS buffer system, the preconcentrator-enhanced immunoassay showed an approximately 100x improvement in sensitivity within 30 min. This is the first time that an electrokinetic microfluidic preconcentrator based on ion concentration polarization (ICP) has been used in high ionic strength buffer solutions to enhance the sensitivity of a surface-based immunoassay.

  14. A social activity and physical contact-based routing algorithm in mobile opportunistic networks for emergency response to sudden disasters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoming; Lin, Yaguang; Zhang, Shanshan; Cai, Zhipeng

    2017-05-01

    Sudden disasters such as earthquakes, floods and hurricanes necessitate the employment of communication networks to carry out emergency response activities. Routing has a significant impact on the functionality, performance and flexibility of communication networks. In this article, the routing problem is studied considering the delivery ratio of messages, the overhead ratio of messages and the average delay of messages in mobile opportunistic networks (MONs) for enterprise-level emergency response communications in sudden disaster scenarios. Unlike traditional routing methods for MONs, this article presents a new two-stage spreading and forwarding dynamic routing algorithm based on the proposed social activity degree and physical contact factor for mobile customers. A new modelling method for describing the dynamic evolution of the topology structure of a MON is first proposed. Then a multi-copy spreading strategy based on the social activity degree of nodes and a single-copy forwarding strategy based on the physical contact factor between nodes are designed. Compared with the most relevant routing algorithms, such as Epidemic, Prophet, Labelled-sim, Dlife-comm and Distribute-sim, the proposed routing algorithm can significantly increase the delivery ratio of messages and decrease the overhead ratio and average delay of messages.

  15. Comparative study on DuPont analysis and DEA models for measuring stock performance using financial ratio

    NASA Astrophysics Data System (ADS)

    Arsad, Roslah; Shaari, Siti Nabilah Mohd; Isa, Zaidi

    2017-11-01

    Determining stock performance using financial ratios is challenging for many investors and researchers. Financial ratios can indicate the strengths and weaknesses of a company's stock performance. There are five categories of financial ratios, namely liquidity, efficiency, leverage, profitability and market ratios. It is important to interpret the ratios correctly for proper financial decision making. The purpose of this study is to compare the performance of listed companies in Bursa Malaysia using Data Envelopment Analysis (DEA) and DuPont analysis models. The study was conducted in 2015 and involved 116 consumer products companies listed in Bursa Malaysia. The estimation method of Data Envelopment Analysis computes the efficiency scores and ranks the companies accordingly. The Alirezaee and Afsharian method of analysis, based on Charnes, Cooper and Rhodes (CCR) with the assumption of Constant Return to Scale (CRS), is employed. DuPont analysis is a traditional tool for measuring the operating performance of companies. In this study, DuPont analysis is used to evaluate three different aspects: profitability, efficiency of asset utilization and financial leverage. Return on Equity (ROE) is also calculated in the DuPont analysis. This study finds that the two analysis models provide different rankings of the selected samples. Hypothesis testing based on Pearson's correlation indicates that there is no correlation between the rankings produced by DEA and DuPont analysis. The DEA ranking model proposed by Alirezaee and Afsharian is unstable; the method cannot provide a complete ranking because the values of the Balance Index are equal and zero.
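
    The DuPont side of the comparison rests on a simple identity, ROE = net profit margin × asset turnover × equity multiplier, which can be sketched as follows; the company figures are hypothetical.

```python
def dupont_roe(net_income, revenue, total_assets, shareholder_equity):
    """Three-factor DuPont decomposition of return on equity.

    ROE = net profit margin x asset turnover x equity multiplier.
    """
    net_profit_margin = net_income / revenue
    asset_turnover = revenue / total_assets
    equity_multiplier = total_assets / shareholder_equity
    roe = net_profit_margin * asset_turnover * equity_multiplier
    return roe, net_profit_margin, asset_turnover, equity_multiplier

# Hypothetical company figures (in millions)
print(dupont_roe(net_income=120.0, revenue=950.0,
                 total_assets=1400.0, shareholder_equity=600.0))
```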

  16. Investigation of Heat and Mass Transfer and Irreversibility Phenomena Within a Three-Dimensional Tilted Enclosure for Different Shapes

    NASA Astrophysics Data System (ADS)

    Oueslati, F.; Ben-Beya, B.

    2018-01-01

    Three-dimensional thermosolutal natural convection and entropy generation within an inclined enclosure is investigated in the current study. A numerical method based on the finite volume method and a full multigrid technique is implemented to solve the governing equations. Effects of various parameters, namely, the aspect ratio, buoyancy ratio, and tilt angle on the flow patterns and entropy generation are predicted and discussed.

  17. A simple distillation method to extract bromine from natural water and salt samples for isotope analysis by multi-collector inductively coupled plasma mass spectrometry.

    PubMed

    Eggenkamp, H G M; Louvat, P

    2018-04-30

    In natural samples bromine is present in trace amounts, and measurement of stable Br isotopes necessitates its separation from the matrix. Most methods described previously need large samples or samples with high Br/Cl ratios. The use of metals as reagents, proposed in previous Br distillation methods, must be avoided for multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) analyses, because of risk of cross-contamination, since the instrument is also used to measure stable isotopes of metals. Dedicated to water and evaporite samples with low Br/Cl ratios, the proposed method is a simple distillation that separates bromide from chloride for isotopic analyses by MC-ICP-MS. It is based on the difference in oxidation potential between chloride and bromide in the presence of nitric acid. The sample is mixed with dilute (1:5) nitric acid in a distillation flask and heated over a candle flame for 10 min. The distillate (bromine) is trapped in an ammonia solution and reduced to bromide. Chloride is only distilled to a very small extent. The obtained solution can be measured directly by MC-ICP-MS for stable Br isotopes. The method was tested for a variety of volumes, ammonia concentrations, pH values and distillation times and compared with the classic ion-exchange chromatography method. The method more efficiently separates Br from Cl, so that samples with lower Br/Cl ratios can be analysed, with Br isotope data in agreement with those obtained by previous methods. Unlike other Br extraction methods based on oxidation, the distillation method presented here does not use any metallic ion for redox reactions that could contaminate the mass spectrometer. It is efficient in separating Br from samples with low Br/Cl ratios. The method ensures reproducible recovery yields and a long-term reproducibility of ±0.11‰ (1 standard deviation). The distillation method was successfully applied to samples with low Br/Cl ratios and low Br amounts (down to 20 μg). Copyright © 2018 John Wiley & Sons, Ltd.

  18. A Comparison of Methods for Computing the Residual Resistivity Ratio of High-Purity Niobium

    PubMed Central

    Splett, J. D.; Vecchia, D. F.; Goodrich, L. F.

    2011-01-01

    We compare methods for estimating the residual resistivity ratio (RRR) of high-purity niobium and investigate the effects of using different functional models. RRR is typically defined as the ratio of the electrical resistances measured at 273 K (the ice point) and 4.2 K (the boiling point of helium at standard atmospheric pressure). However, pure niobium is superconducting below about 9.3 K, so the low-temperature resistance is defined as the normal-state (i.e., non-superconducting state) resistance extrapolated to 4.2 K and zero magnetic field. Thus, the estimated value of RRR depends significantly on the model used for extrapolation. We examine three models for extrapolation based on temperature versus resistance, two models for extrapolation based on magnetic field versus resistance, and a new model based on the Kohler relationship that can be applied to combined temperature and field data. We also investigate the possibility of re-defining RRR so that the quantity is not dependent on extrapolation. PMID:26989580

  19. Comparative Study of Novel Ratio Spectra and Isoabsorptive Point Based Spectrophotometric Methods: Application on a Binary Mixture of Ascorbic Acid and Rutin.

    PubMed

    Darwish, Hany W; Bakheit, Ahmed H; Naguib, Ibrahim A

    2016-01-01

    This paper presents novel methods for spectrophotometric determination of ascorbic acid (AA) in presence of rutin (RU) (coformulated drug) in their combined pharmaceutical formulation. The seven methods are ratio difference (RD), isoabsorptive_RD (Iso_RD), amplitude summation (A_Sum), isoabsorptive point, first derivative of the ratio spectra ((1)DD), mean centering (MCN), and ratio subtraction (RS). On the other hand, RU was determined directly by measuring the absorbance at 358 nm in addition to the two novel Iso_RD and A_Sum methods. The work introduced in this paper aims to compare these different methods, showing the advantages for each and making a comparison of analysis results. The calibration curve is linear over the concentration range of 4-50 μg/mL for AA and RU. The results show the high performance of proposed methods for the analysis of the binary mixture. The optimum assay conditions were established and the proposed methods were successfully applied for the assay of the two drugs in laboratory prepared mixtures and combined pharmaceutical tablets with excellent recoveries. No interference was observed from common pharmaceutical additives.

  1. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of the additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm enhances the contrast-to-noise ratio of the object reconstruction significantly compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) is also discussed for the three reconstruction algorithms. This noise suppression imaging technique will have great applications in remote-sensing and security areas.
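
    The abstract does not spell out its contrast-to-noise ratio definition or the thresholding step, so the sketch below uses one widely used CNR formula and a simple pre-reconstruction threshold on the bucket-detector samples as assumptions; masks, threshold values, and names are illustrative only.

```python
import numpy as np

def contrast_to_noise_ratio(image, object_mask, background_mask):
    """One common CNR definition for a reconstructed ghost image:
    (mean inside object - mean in background) / sqrt(var_in + var_out)."""
    g_in = image[object_mask]
    g_out = image[background_mask]
    return (g_in.mean() - g_out.mean()) / np.sqrt(g_in.var() + g_out.var())

def threshold_bucket_signals(bucket_values, threshold):
    """Illustrative noise suppression: zero out bucket-detector samples
    below a chosen threshold before the compressive-sensing recovery."""
    b = np.asarray(bucket_values, dtype=float).copy()
    b[b < threshold] = 0.0
    return b
```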

  2. High extinction ratio integrated optical modulator for quantum telecommunication systems

    NASA Astrophysics Data System (ADS)

    Tronev, A.; Parfenov, M.; Agruzov, P.; Ilichev, I.; Shamray, A.

    2018-01-01

    A method for increasing the extinction ratio of integrated optical Mach-Zehnder modulators based on LiNbO3 via the photorefractive effect is proposed. The influence of the photorefractive effect on the X- and Y-splitters of intensity modulators is experimentally studied. An increase in the modulator extinction ratio by 17 dB (from 30 to 47 dB) is obtained. It is shown that fabricated modulators with a high extinction ratio are important for quantum key distribution systems.

  3. Inverse metal-assisted chemical etching produces smooth high aspect ratio InP nanostructures.

    PubMed

    Kim, Seung Hyun; Mohseni, Parsian K; Song, Yi; Ishihara, Tatsumi; Li, Xiuling

    2015-01-14

    Creating high aspect ratio (AR) nanostructures by top-down fabrication without surface damage remains challenging for III-V semiconductors. Here, we demonstrate uniform, array-based InP nanostructures with lateral dimensions as small as sub-20 nm and AR > 35 using inverse metal-assisted chemical etching (I-MacEtch) in hydrogen peroxide (H2O2) and sulfuric acid (H2SO4), a purely solution-based yet anisotropic etching method. The mechanism of I-MacEtch, in contrast to regular MacEtch, is explored through surface characterization. Unique to I-MacEtch, the sidewall etching profile is remarkably smooth, independent of metal pattern edge roughness. The capability of this simple method to create various InP nanostructures, including high AR fins, can potentially enable the aggressive scaling of InP based transistors and optoelectronic devices with better performance and at lower cost than conventional etching methods.

  4. System and method for controlling an engine based on ammonia storage in multiple selective catalytic reduction catalysts

    DOEpatents

    Sun, MIn; Perry, Kevin L.

    2015-11-20

    A system according to the principles of the present disclosure includes a storage estimation module and an air/fuel ratio control module. The storage estimation module estimates a first amount of ammonia stored in a first selective catalytic reduction (SCR) catalyst and estimates a second amount of ammonia stored in a second SCR catalyst. The air/fuel ratio control module controls an air/fuel ratio of an engine based on the first amount, the second amount, and a temperature of a substrate disposed in the second SCR catalyst.

  5. Asymmetric design for Compound Elliptical Concentrators (CEC) and its geometric flux implications

    NASA Astrophysics Data System (ADS)

    Jiang, Lun; Winston, Roland

    2015-08-01

    The asymmetric compound elliptical concentrator (CEC) has been a less discussed subject in the nonimaging optics community. The conventional way of understanding an ideal concentrator is based on maximizing the concentration ratio for a uniform acceptance angle. Although such an angle does not exist in the case of the CEC, the thermodynamic laws still hold and we can produce concentrators with the maximum concentration ratio allowed by them. Here we restate the problem and use the string method to solve this general problem. Building on the solution, we can discover groups of such ideal concentrators using the geometric flux field, or flowline, method.

  6. Generation of Turbulent Inflow Conditions for Pipe Flow via an Annular Ribbed Turbulator

    NASA Astrophysics Data System (ADS)

    Moallemi, Nima; Brinkerhoff, Joshua

    2016-11-01

    The generation of turbulent inflow conditions adds significant computational expense to direct numerical simulations (DNS) of turbulent pipe flows. Typical approaches involve introducing boxes of isotropic turbulence to the velocity field at the inlet of the pipe. In the present study, an alternative method is proposed that incurs a lower computational cost and allows the anisotropy observed in pipe turbulence to be physically captured. The method is based on a periodic DNS of a ribbed turbulator upstream of the inlet boundary of the pipe. The Reynolds number based on the bulk velocity and pipe diameter is 5300 and the blockage ratio (BR) is 0.06 based on the rib height and pipe diameter. The pitch ratio is defined as the ratio of rib streamwise spacing to rib height and is varied between 1.7 and 5.0. The generation of turbulent flow structures downstream of the ribbed turbulator is identified and discussed. The suitability of this method for accurate representation of turbulent inflow conditions is assessed through comparison of the turbulent mean properties, fluctuations, Reynolds stress profiles, and spectra with published pipe flow DNS studies. The DNS results achieve excellent agreement with the numerical and experimental data available in the literature.

  7. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    PubMed

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  8. Investigating Measurement Invariance in Computer-Based Personality Testing: The Impact of Using Anchor Items on Effect Size Indices

    ERIC Educational Resources Information Center

    Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N.

    2015-01-01

    A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…

  9. A Novel Method for Preparing Auxetic Foam from Closed-cell Polymer Foam Based on Steam Penetration and Condensation (SPC) Process.

    PubMed

    Fan, Donglei; Li, Minggang; Qiu, Jian; Xing, Haiping; Jiang, Zhiwei; Tang, Tao

    2018-05-31

    Auxetic materials are a class of materials possessing a negative Poisson's ratio. Here we establish a novel method for preparing auxetic foam from closed-cell polymer foam based on a steam penetration and condensation (SPC) process. Using polyethylene (PE) closed-cell foam as an example, the resultant foams treated by the SPC process present a negative Poisson's ratio during stretching and compression testing. The effect of steam treatment temperature and time on the conversion efficiency to negative Poisson's ratio foam is investigated, and the mechanism by which the SPC method forms the re-entrant structure is discussed. The results indicate that the presence of enough steam within the cells is a critical factor for the negative Poisson's ratio conversion in the SPC process. The pressure difference caused by steam condensation is the driving force for the conversion from conventional closed-cell foam to negative Poisson's ratio foam. Furthermore, the applicability of the SPC process for fabricating auxetic foam is studied by replacing the PE foam with polyvinyl chloride (PVC) foam with a closed-cell structure or replacing the water steam with ethanol steam. The results verify the universality of the SPC process for fabricating auxetic foams from conventional foams with closed-cell structure. In addition, we explored the potential application of the obtained auxetic foams in the fabrication of shape memory polymer materials.

  10. Development of an extraction method for perchlorate in soils.

    PubMed

    Cañas, Jaclyn E; Patel, Rashila; Tian, Kang; Anderson, Todd A

    2006-03-01

    Perchlorate originates as a contaminant in the environment from its use in solid rocket fuels and munitions. The current US EPA methods for perchlorate determination via ion chromatography using conductivity detection do not include recommendations for the extraction of perchlorate from soil. This study evaluated and identified appropriate conditions for the extraction of perchlorate from clay loam, loamy sand, and sandy soils. Based on the results of this evaluation, soils should be extracted in a dry, ground (mortar and pestle) state with Milli-Q water at a 1:1 soil-to-water ratio and diluted no more than 5-fold before analysis. When sandy soils were extracted in this manner, the calculated method detection limit was 3.5 microg kg(-1). The findings of this study have aided in the establishment of a standardized extraction method for perchlorate in soil.

  11. Identifying the critical financial ratios for stocks evaluation: A fuzzy delphi approach

    NASA Astrophysics Data System (ADS)

    Mokhtar, Mazura; Shuib, Adibah; Mohamad, Daud

    2014-12-01

    Stocks evaluation has always been an interesting and challenging problem for both researchers and practitioners. Generally, the evaluation can be made based on a set of financial ratios. Nevertheless, there are a variety of financial ratios that can be considered and if all ratios in the set are placed into the evaluation process, data collection would be more difficult and time consuming. Thus, the objective of this paper is to identify the most important financial ratios upon which to focus in order to evaluate the stock's performance. For this purpose, a survey was carried out using an approach which is based on an expert judgement, namely the Fuzzy Delphi Method (FDM). The results of this study indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share and debt to equity are the most important ratios.

  12. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

    The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated based on the region of interest (ROI), which is set 20 mm inside the outer contour defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but sometimes we could not define the ROI of the non-specific binding concentration (reference region) and calculate the SBR appropriately with a 50% threshold. Therefore, we sought a new method for determining the reference region when calculating the SBR. We used data from 20 patients who had undergone DAT imaging in our hospital to calculate the non-specific binding concentration by the following methods: the threshold used to define the reference region was fixed at specific values (the fixing method), or the reference region was visually optimized by an examiner at every examination (the visual optimization method). First, we assessed the reference region of each method visually, and afterward we quantitatively compared the SBR calculated based on each method. In the visual assessment, the scores of the fixing method at 30% and the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The values of SBR showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
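
    The full Tossici-Bolt SBR uses total striatal counts and volumes; the sketch below is a simplified, concentration-based version that only illustrates the role of the reference-region threshold discussed above. The 30% threshold default, the omission of the 20 mm inward shrink, and all names are simplifying assumptions.

```python
import numpy as np

def simplified_sbr(volume, striatum_mask, threshold_fraction=0.30):
    """Simplified specific binding ratio: (striatal - reference) / reference
    concentration. The reference region is approximated as all voxels above
    a fraction of the image maximum (brain contour), excluding the striatal
    VOI; the actual procedure also moves the contour 20 mm inward."""
    brain = volume > threshold_fraction * volume.max()
    reference = brain & ~striatum_mask
    c_ref = volume[reference].mean()
    c_striatum = volume[striatum_mask].mean()
    return (c_striatum - c_ref) / c_ref
```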

  13. Improved neutron-gamma discrimination for a 6Li-glass neutron detector using digital signal analysis methods

    DOE PAGES

    Wang, Cai -Lin; Riedel, Richard A.

    2016-01-14

    A 6Li-glass scintillator (GS20) based neutron Anger camera was developed for time-of-flight single-crystal diffraction instruments at SNS. Traditional pulse-height analysis (PHA) for neutron-gamma discrimination (NGD) resulted in a neutron-gamma efficiency ratio (defined as the NGD ratio) on the order of 10^4. The NGD ratios of Anger cameras need to be improved for broader applications including neutron reflectometers. For this purpose, five digital signal analysis methods for individual waveforms from PMTs were proposed, using: i) the pulse-amplitude histogram; ii) power spectrum analysis combined with the maximum pulse amplitude; iii) two event parameters (a1, b0) obtained from a Wiener filter; iv) an effective amplitude (m) obtained from an adaptive least-mean-square (LMS) filter; and v) a cross-correlation (CC) coefficient between an individual waveform and a reference. The NGD ratios can be 1-10^2 times those from the traditional PHA method. A brighter scintillator, GS2, has a better NGD ratio than GS20, but lower neutron detection efficiency. The ultimate NGD ratio is related to the ambient, high-energy background events. Moreover, our results indicate the NGD capability of neutron Anger cameras can be improved using digital signal analysis methods and brighter neutron scintillators.
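
    Method v) above compares each digitized waveform to a reference pulse shape. A minimal sketch of that idea, assuming a Pearson correlation coefficient and a placeholder decision threshold (the paper's tuned cutoff is not given here):

```python
import numpy as np

def cc_coefficient(waveform, reference):
    """Correlation coefficient between a digitized PMT waveform and a
    reference neutron pulse shape of the same length."""
    return np.corrcoef(waveform, reference)[0, 1]

def is_neutron(waveform, reference, cc_threshold=0.9):
    """Flag an event as neutron-like if its shape matches the reference;
    the 0.9 cutoff is a placeholder, not the paper's optimized value."""
    return cc_coefficient(waveform, reference) >= cc_threshold
```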

  14. Experimental investigation of the effects of compound angle holes on film cooling effectiveness and heat transfer performance using a transient liquid crystal thermometry technique

    NASA Astrophysics Data System (ADS)

    Seager, David J.; Liburdy, James A.

    1997-11-01

    To further understand the effect of both compound angle holes and hole shaping on film cooling, detailed heat transfer measurements were obtained using a hue-based thermochromic liquid crystal method. The data were analyzed to measure both the full-surface adiabatic effectiveness and the heat transfer coefficient. The compound angles that were evaluated consist of holes that were aligned 0 degrees, 45 degrees, 60 degrees and 90 degrees to the main cross flow direction. Hole shaping variations from the traditional cylindrical shaped hole include forward diffused and laterally diffused hole geometries. The geometric parameters selected were a length to diameter ratio of 3.0 and an inclination angle of 35 degrees. A density ratio of 1.55 was obtained for all tests. For each set of conditions the blowing ratio was varied to be 0.88, 1.25, and 1.88. Adiabatic effectiveness was obtained using a steady state test, while an actively heated surface was used to determine the heat transfer coefficient using a transient method. The experimental method provides a unique way of analyzing a three-temperature heat transfer problem by providing detailed surface transport properties. Based on these results for the different hole geometries at each blowing ratio, conclusions are drawn relative to the effects of compound angle holes on the overall film cooling performance.

  15. Trainable multiscript orientation detection

    NASA Astrophysics Data System (ADS)

    Van Beusekom, Joost; Rangoni, Yves; Breuel, Thomas M.

    2010-01-01

    Detecting the correct orientation of document images is an important step in large scale digitization processes, as most subsequent document analysis and optical character recognition methods assume an upright position of the document page. Many methods have been proposed to solve the problem, most of which are based on ascender to descender ratio computation. Unfortunately, this cannot be used for scripts having neither descenders nor ascenders. Therefore, we present a trainable method using character similarity to compute the correct orientation. A connected component based distance measure is computed to compare the characters of the document image to characters whose orientation is known. This allows the orientation for which the distance is lowest to be detected as the correct orientation. Training is easily achieved by exchanging the reference characters for characters of the script to be analyzed. Evaluation of the proposed approach showed accuracy above 99% for Latin and Japanese script from the public UW-III and UW-II datasets. An accuracy of 98.9% was obtained for Fraktur on a non-public dataset. Comparison of the proposed method with two methods using ascender/descender ratio based orientation detection shows a significant improvement.

  16. A modified artificial immune system based pattern recognition approach -- an application to clinic diagnostics

    PubMed Central

    Zhao, Weixiang; Davis, Cristina E.

    2011-01-01

    Objective This paper introduces a modified artificial immune system (AIS)-based pattern recognition method to enhance the recognition ability of the existing conventional AIS-based classification approach and demonstrates the superiority of the proposed new AIS-based method via two case studies of breast cancer diagnosis. Methods and materials Conventionally, the AIS approach is often coupled with the k nearest neighbor (k-NN) algorithm to form a classification method called AIS-kNN. In this paper we discuss the basic principle and possible problems of this conventional approach, and propose a new approach where AIS is integrated with the radial basis function – partial least square regression (AIS-RBFPLS). Additionally, both AIS-based approaches are compared with two classical and powerful machine learning methods, back-propagation neural network (BPNN) and orthogonal radial basis function network (Ortho-RBF network). Results The diagnosis results show that: (1) both AIS-kNN and AIS-RBFPLS proved to be good machine learning methods for clinical diagnosis, but the proposed AIS-RBFPLS generated an even lower misclassification ratio, especially in the cases where the conventional AIS-kNN approach generated poor classification results because of possible improper AIS parameters. For example, based upon the AIS memory cells of "replacement threshold = 0.3", the average misclassification ratios of the two approaches for study 1 are 3.36% (AIS-RBFPLS) and 9.07% (AIS-kNN), and the misclassification ratios for study 2 are 19.18% (AIS-RBFPLS) and 28.36% (AIS-kNN); (2) the proposed AIS-RBFPLS presented its robustness in terms of the AIS-created memory cells, showing a smaller standard deviation of the results from the multiple trials than AIS-kNN. For example, using the result from the first set of AIS memory cells as an example, the standard deviations of the misclassification ratios for study 1 are 0.45% (AIS-RBFPLS) and 8.71% (AIS-kNN) and those for study 2 are 0.49% (AIS-RBFPLS) and 6.61% (AIS-kNN); and (3) the proposed AIS-RBFPLS classification approach also yielded better diagnosis results than the two classical neural network approaches of BPNN and Ortho-RBF network. Conclusion In summary, this paper proposed a new machine learning method for complex systems by integrating the AIS system with RBFPLS. This new method demonstrates its satisfactory effect on classification accuracy for clinical diagnosis, and also indicates its wide potential applications to other diagnosis and detection problems. PMID:21515033

  17. Radiometric calibration of SPOT 2 HRV - A comparison of three methods

    NASA Technical Reports Server (NTRS)

    Biggar, Stuart F.; Dinguirard, Magdeleine C.; Gellman, David I.; Henry, Patrice; Jackson, Ray D.; Moran, M. S.; Slater, Philip N.

    1991-01-01

    Three methods for determining an absolute radiometric calibration of a spacecraft optical sensor are compared. They are the well-known reflectance-based and radiance-based methods and a new method based on measurements of the ratio of diffuse-to-global irradiance at the ground. The latter will be described in detail and the comparison of the three approaches will be made with reference to the SPOT-2 HRV cameras for a field campaign 1990-06-19 through 1990-06-24 at the White Sands Missile Range in New Mexico.

  18. Application of the ultrametric distance to portfolio taxonomy. Critical approach and comparison with other methods

    NASA Astrophysics Data System (ADS)

    Skórnik-Pokarowska, Urszula; Orłowski, Arkadiusz

    2004-12-01

    We calculate the ultrametric distance between the pairs of stocks that belong to the same portfolio. The ultrametric distance allows us to distinguish groups of shares that are related. In this way, we can construct a portfolio taxonomy that can be used for constructing an efficient portfolio. We also construct a portfolio taxonomy based not only on stock prices but also on economic indices such as the liquidity ratio, debt ratio and sales profitability ratio. We show that a good investment strategy can be obtained by applying the so-called Constant Rebalanced Portfolio strategy to the portfolio chosen by the taxonomy method.
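
    As a sketch of the standard correlation-based construction this kind of study typically relies on (assuming a matrix of return correlations as input; the paper's exact data are not reproduced here), the distances d_ij = sqrt(2(1 - rho_ij)) can be turned into a subdominant ultrametric via single-linkage clustering:

```python
import numpy as np
from scipy.cluster.hierarchy import cophenet, linkage
from scipy.spatial.distance import squareform

def ultrametric_from_correlations(corr):
    """Map correlations to distances d_ij = sqrt(2 * (1 - rho_ij)), then
    take single-linkage cophenetic distances, which form the subdominant
    ultrametric commonly used for portfolio taxonomy."""
    d = np.sqrt(2.0 * (1.0 - np.asarray(corr, dtype=float)))
    np.fill_diagonal(d, 0.0)
    condensed = squareform(d, checks=False)
    tree = linkage(condensed, method="single")
    _, ultra = cophenet(tree, condensed)
    return squareform(ultra)

# Hypothetical correlations among four stocks:
corr = np.array([[1.0, 0.8, 0.3, 0.2],
                 [0.8, 1.0, 0.4, 0.1],
                 [0.3, 0.4, 1.0, 0.6],
                 [0.2, 0.1, 0.6, 1.0]])
print(ultrametric_from_correlations(corr))
```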

  19. A quantitative index of intracranial cerebrospinal fluid distribution in normal pressure hydrocephalus using an MRI-based processing technique.

    PubMed

    Tsunoda, A; Mitsuoka, H; Sato, K; Kanayama, S

    2000-06-01

    Our purpose was to quantify the intracranial cerebrospinal fluid (CSF) volume components using an original MRI-based segmentation technique and to investigate whether a CSF volume index is useful for the diagnosis of normal pressure hydrocephalus (NPH). We studied 59 subjects: 16 patients with NPH, 14 young and 13 elderly normal volunteers, and 16 patients with cerebrovascular disease. Images were acquired on a 1.5-T system, using a 3D-fast asymmetrical spin-echo (FASE) method. A region-growing method (RGM) was used to extract the CSF spaces from the FASE images. Ventricular volume (VV) and intracranial CSF volume (ICV) were measured, and a VV/ICV ratio was calculated. Mean VV and the VV/ICV ratio were higher in the NPH group than in the other groups, and the differences were statistically significant, whereas the mean ICV value in the NPH group was not significantly increased. Of the 16 patients in the NPH group, 13 had VV/ICV ratios above 30%. In contrast, no subject in the other groups had a VV/ICV ratio higher than 30%. We conclude that these CSF volume parameters, especially the VV/ICV ratio, are useful for the diagnosis of NPH.
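
    A minimal sketch of the volume-index computation, assuming boolean ventricle and whole-CSF masks already produced by the region-growing segmentation; the mask names and the voxel size are illustrative.

```python
import numpy as np

def vv_icv_ratio(ventricle_mask, csf_mask, voxel_volume_ml=1.0):
    """Ventricular volume (VV) as a fraction of total intracranial CSF
    volume (ICV), computed from boolean segmentation masks. The voxel size
    cancels in the ratio but is kept to report absolute volumes."""
    vv = ventricle_mask.sum() * voxel_volume_ml
    icv = csf_mask.sum() * voxel_volume_ml
    ratio = vv / icv
    return vv, icv, ratio, ratio > 0.30   # >30% was suggestive of NPH here
```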

  20. A method to estimate weight and dimensions of large and small gas turbine engines

    NASA Technical Reports Server (NTRS)

    Onat, E.; Klees, G. W.

    1979-01-01

    A computerized method was developed to estimate the weight and envelope dimensions of large and small gas turbine engines to within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub tip ratio, etc. The development and justification of the method selected, and the various methods of analysis, are discussed.

  1. Resolution of overlapped spectra for the determination of ternary mixture using different and modified spectrophotometric methods

    NASA Astrophysics Data System (ADS)

    Moussa, Bahia Abbas; El-Zaher, Asmaa Ahmed; Mahrouse, Marianne Alphonse; Ahmed, Maha Said

    2016-08-01

    Four new spectrophotometric methods were developed and applied to resolve the overlapped spectra of a ternary mixture of [aliskiren hemifumarate (ALS)-amlodipine besylate (AM)-hydrochlorothiazide (HCT)] and to determine the three drugs in pure form and in combined dosage form. Method A depends on simultaneous determination of ALS, AM and HCT using principal component regression and partial least squares chemometric methods. In Method B, a modified isosbestic spectrophotometric method was applied for the determination of the total concentration of ALS and HCT by measuring the absorbance at 274.5 nm (isosbestic point, Aiso). On the other hand, the concentration of HCT in the ternary mixture with ALS and AM could be calculated without interference using a first derivative spectrophotometric method by measuring the amplitude at 279 nm (zero crossing of ALS and zero value of AM). Thus, the content of ALS was calculated by subtraction. Method C, double divisor first derivative ratio spectrophotometry (double divisor 1DD method), was based on the principle that, for the determination of one drug, the ratio spectra are obtained by dividing the absorption spectra of its different concentrations by the sum of the absorption spectra of the other two drugs as a double divisor. The first derivatives of the obtained ratio spectra were then recorded using the appropriate smoothing factor. The amplitudes at 291 nm, 380 nm and 274.5 nm were selected for the determination of ALS, AM and HCT in their ternary mixture, respectively. Method D was based on mean centering of ratio spectra. The mean centered values at 287, 295.5 and 269 nm were recorded and used for the determination of ALS, AM and HCT, respectively. The developed methods were validated according to ICH guidelines and proved to be accurate, precise and selective. Satisfactory results were obtained by applying the proposed methods to the analysis of the pharmaceutical dosage form.

  2. Resolution of overlapped spectra for the determination of ternary mixture using different and modified spectrophotometric methods.

    PubMed

    Moussa, Bahia Abbas; El-Zaher, Asmaa Ahmed; Mahrouse, Marianne Alphonse; Ahmed, Maha Said

    2016-08-05

    Four new spectrophotometric methods were developed and applied to resolve the overlapped spectra of a ternary mixture of [aliskiren hemifumarate (ALS)-amlodipine besylate (AM)-hydrochlorothiazide (HCT)] and to determine the three drugs in pure form and in combined dosage form. Method A depends on simultaneous determination of ALS, AM and HCT using principal component regression and partial least squares chemometric methods. In Method B, a modified isosbestic spectrophotometric method was applied for the determination of the total concentration of ALS and HCT by measuring the absorbance at 274.5 nm (isosbestic point, Aiso). On the other hand, the concentration of HCT in the ternary mixture with ALS and AM could be calculated without interference using a first derivative spectrophotometric method by measuring the amplitude at 279 nm (zero crossing of ALS and zero value of AM). Thus, the content of ALS was calculated by subtraction. Method C, double divisor first derivative ratio spectrophotometry (double divisor (1)DD method), was based on the principle that, for the determination of one drug, the ratio spectra are obtained by dividing the absorption spectra of its different concentrations by the sum of the absorption spectra of the other two drugs as a double divisor. The first derivatives of the obtained ratio spectra were then recorded using the appropriate smoothing factor. The amplitudes at 291 nm, 380 nm and 274.5 nm were selected for the determination of ALS, AM and HCT in their ternary mixture, respectively. Method D was based on mean centering of ratio spectra. The mean centered values at 287, 295.5 and 269 nm were recorded and used for the determination of ALS, AM and HCT, respectively. The developed methods were validated according to ICH guidelines and proved to be accurate, precise and selective. Satisfactory results were obtained by applying the proposed methods to the analysis of the pharmaceutical dosage form. Copyright © 2016. Published by Elsevier B.V.

  3. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO) optimized support vector machine (SVM) is proposed. The SVM is extended to a nonlinear, multi-class classifier, particle swarm optimization is used to optimize the SVM multi-classification model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard support vector machine and the genetic-algorithm-optimized support vector machine, demonstrating that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
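
    A rough sketch of the overall pipeline, assuming the three ratios are the commonly used IEC-style ratios C2H2/C2H4, CH4/H2 and C2H4/C2H6, and substituting a plain grid search for the paper's PSO parameter optimization; the gas data, labels, and hyperparameter grid are all hypothetical.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

def dga_ratio_features(gas_ppm):
    """Columns of gas_ppm: H2, CH4, C2H6, C2H4, C2H2 (ppm). Returns the
    three ratios C2H2/C2H4, CH4/H2, C2H4/C2H6 as classifier features."""
    h2, ch4, c2h6, c2h4, c2h2 = (gas_ppm[:, i] for i in range(5))
    eps = 1e-9
    return np.column_stack([c2h2 / (c2h4 + eps),
                            ch4 / (h2 + eps),
                            c2h4 / (c2h6 + eps)])

# Hypothetical training data: gas concentrations and four fault classes.
rng = np.random.default_rng(0)
X = dga_ratio_features(np.abs(rng.normal(50.0, 20.0, size=(60, 5))))
y = rng.integers(0, 4, size=60)

# Grid search stands in here for the PSO search over (C, gamma).
grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5).fit(X, y)
print(search.best_params_,
      cross_val_score(search.best_estimator_, X, y, cv=5).mean())
```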

  4. Comparison of Peak-area Ratios and Percentage Peak Area Derived from HPLC-evaporative Light Scattering and Refractive Index Detectors for Palm Oil and its Fractions.

    PubMed

    Ping, Bonnie Tay Yen; Aziz, Haliza Abdul; Idris, Zainab

    2018-01-01

    High-Performance Liquid Chromatography (HPLC) methods with evaporative light scattering (ELS) and refractive index (RI) detectors are used by the local palm oil industry to monitor the TAG profiles of palm oil and its fractions. The quantitation method used is based on area normalization of the TAG components, expressed as percentage area. Although not frequently used, peak-area ratios based on TAG profiles are a possible qualitative method for characterizing the TAG of palm oil and its fractions. This paper aims to compare these two detectors in terms of peak-area ratio, percentage peak area composition, and TAG elution profiles. The triacylglycerol (TAG) composition of palm oil and its fractions was analysed under similar HPLC conditions, i.e. mobile phase and column. However, different sample concentrations were used for the detectors while remaining within the linearity limits of the detectors. These concentrations also gave a good baseline-resolved separation for all the TAG components. The results of the ELSD method's percentage area composition for the TAGs of palm oil and its fractions differed from those of the RID. This indicates an unequal response of the TAGs of palm oil and its fractions to the ELSD, which also affects the peak-area ratios. They were found not to be equivalent to those obtained using the HPLC-RID. The ELSD method showed a better baseline separation for the TAG components, with a more stable baseline compared with the corresponding HPLC-RID. In conclusion, the percentage area compositions and peak-area ratios for palm oil and its fractions as derived from HPLC-ELSD and RID were not equivalent due to the different responses of TAG components to the ELSD detector. The HPLC-RID has better accuracy for percentage area composition and peak-area ratio because the TAG components respond equally to that detector.
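
    For reference, the two quantitation measures compared above reduce to simple arithmetic on integrated peak areas; the sketch below shows both, with the peak areas and the choice of reference peak as placeholders.

```python
import numpy as np

def percent_area_and_ratios(peak_areas, reference_index=0):
    """Area-normalized composition (percentage peak area) and peak-area
    ratios relative to a chosen reference TAG peak."""
    areas = np.asarray(peak_areas, dtype=float)
    percent = 100.0 * areas / areas.sum()
    ratios = areas / areas[reference_index]
    return percent, ratios

# Hypothetical TAG peak areas from one chromatogram:
print(percent_area_and_ratios([120.0, 340.0, 210.0, 95.0]))
```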

  5. Quantitative determination of zopiclone and its impurity by four different spectrophotometric methods.

    PubMed

    Abdelrahman, Maha M; Naguib, Ibrahim A; El Ghobashy, Mohamed R; Ali, Nesma A

    2015-02-25

    Four simple, sensitive and selective spectrophotometric methods are presented for the determination of Zopiclone (ZPC) and its impurity, one of its degradation products, namely 2-amino-5-chloropyridine (ACP). Method A is dual wavelength spectrophotometry, where two wavelengths (252 and 301 nm for ZPC, and 238 and 261 nm for ACP) were selected for each component in such a way that the difference in absorbance is zero for the second one. Method B is an isoabsorptive ratio method combining the isoabsorptive point (259.8 nm) in the ratio spectrum, using ACP as a divisor, and the ratio difference for a single-step determination of both components. Method C is a third derivative (D(3)) spectrophotometric method which allows determination of both ZPC at 283.6 nm and ACP at 251.6 nm without interference from each other. Method D is based on measuring the peak amplitude of the first derivative of the ratio spectra (DD(1)) at 263.2 nm for ZPC and 252 nm for ACP. The suggested methods were validated according to ICH guidelines and can be applied for routine analysis in quality control laboratories. Statistical analysis of the results obtained from the proposed methods and those obtained from the reported method has been carried out, revealing high accuracy and good precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Polarization splitter based on interference effects in all-solid photonic crystal fibers.

    PubMed

    Mao, Dong; Guan, Chunying; Yuan, Libo

    2010-07-01

    We propose a novel kind of polarization splitter in all-solid photonic crystal fibers based on the mode interference effects. Both the full-vector finite-element method and the semi-vector three-dimensional beam propagation method are employed to design and analyze the characteristics of the splitter. Numerical simulations show that x-polarized and y-polarized modes are split entirely along with 6.8 mm long propagation. An extinction ratio of more than 20 dB and a crosstalk of less than -20 dB are obtained within the wavelength range of 1.541-1.556 microm. The extinction ratio and the crosstalk at 1.55 microm are 28.9 and -29.0 dB for x polarization, while the extinction ratio and the crosstalk at 1.55 microm are 29.9 and -29.8 dB for y polarization, respectively.

  7. System and method for controlling ammonia levels in a selective catalytic reduction catalyst using a nitrogen oxide sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    A system according to the principles of the present disclosure includes an air/fuel ratio determination module and an emission level determination module. The air/fuel ratio determination module determines an air/fuel ratio based on input from an air/fuel ratio sensor positioned downstream from a three-way catalyst that is positioned upstream from a selective catalytic reduction (SCR) catalyst. The emission level determination module selects one of a predetermined value and an input based on the air/fuel ratio. The input is received from a nitrogen oxide sensor positioned downstream from the three-way catalyst. The emission level determination module determines an ammonia level based on the one of the predetermined value and the input received from the nitrogen oxide sensor.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Berry, M. L.; Grieme, M.

    We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distance (ROSD) method. Compared with the triangulation-based method, the advantages of the ROSD method are multi-fold: i) source location estimates based on four detectors improve accuracy, ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue, and iii) ROSD produces a unique source location estimate as opposed to two real roots (if any) in triangulation, and obviates the need to identify phantom roots during clustering.
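
    The closed-form ROSD solution itself is not reproduced in this summary; as a sketch of the underlying inverse-square-law idea (counts c_i proportional to 1/d_i^2, so c_i d_i^2 = c_j d_j^2 at the true source position), a generic least-squares solve over detector pairs is shown below with hypothetical detector positions and counts.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import least_squares

def locate_source(detector_xy, counts):
    """Inverse-square-law localization sketch: residuals
    counts_i * d_i**2 - counts_j * d_j**2 vanish at the source position.
    The ROSD method of the abstract solves an equivalent system in closed
    form using four detectors; this is only a numerical stand-in."""
    detector_xy = np.asarray(detector_xy, dtype=float)
    counts = np.asarray(counts, dtype=float)

    def residuals(p):
        d2 = ((detector_xy - p) ** 2).sum(axis=1)
        return [counts[i] * d2[i] - counts[j] * d2[j]
                for i, j in combinations(range(len(counts)), 2)]

    return least_squares(residuals, x0=detector_xy.mean(axis=0)).x

# Hypothetical detectors and noiseless counts from a source at (3, 2):
dets = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
cts = 1.0e5 / ((dets - np.array([3.0, 2.0])) ** 2).sum(axis=1)
print(locate_source(dets, cts))
```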

  9. Spectrophotometric Method for the Determination of Two Coformulated Drugs with Highly Different Concentrations. Application on Vildagliptin and Metformin Hydrochloride

    NASA Astrophysics Data System (ADS)

    Zaazaa, H. E.; Elzanfaly, E. S.; Soudi, A. T.; Salem, M. Y.

    2016-03-01

    A new, smart, simple, validated spectrophotometric method was developed for the determination of two drugs, one of which is present at a very low concentration compared to the other. The method is based on spiking and dilution followed by simple mathematical manipulation of the absorbance spectra. This method was applied to the determination of a binary mixture of vildagliptin and metformin hydrochloride in the ratio 50:850, in laboratory prepared mixtures containing both drugs in this ratio and in a pharmaceutical dosage form, with good recoveries. The developed method was validated according to ICH guidelines and can be used for routine quality control testing.

  10. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    PubMed

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along some specified routes, which limits its application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than that of a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care, such as abnormal gait recognition and fall risk assessment.

  11. Evaluation method for the potential functionome harbored in the genome and metagenome.

    PubMed

    Takami, Hideto; Taniguchi, Takeaki; Moriya, Yuki; Kuwahara, Tomomi; Kanehisa, Minoru; Goto, Susumu

    2012-12-12

    One of the main goals of genomic analysis is to elucidate the comprehensive functions (functionome) in individual organisms or a whole community in various environments. However, a standard evaluation method for discerning the functional potentials harbored within the genome or metagenome has not yet been established. We have developed a new evaluation method for the potential functionome, based on the completion ratio of Kyoto Encyclopedia of Genes and Genomes (KEGG) functional modules. Distribution of the completion ratio of the KEGG functional modules in 768 prokaryotic species varied greatly with the kind of module, and all modules primarily fell into 4 patterns (universal, restricted, diversified and non-prokaryotic modules), indicating the universal and unique nature of each module, and also the versatility of the KEGG Orthology (KO) identifiers mapped to each one. The module completion ratio in 8 phenotypically different bacilli revealed that some modules were shared only in phenotypically similar species. Metagenomes of human gut microbiomes from 13 healthy individuals previously determined by the Sanger method were analyzed based on the module completion ratio. Results led to new discoveries in the nutritional preferences of gut microbes, believed to be one of the mutualistic representations of gut microbiomes to avoid nutritional competition with the host. The method developed in this study could characterize the functionome harbored in genomes and metagenomes. As this method also provided taxonomical information from KEGG modules as well as the gene hosts constructing the modules, interpretation of completion profiles was simplified and we could identify the complementarity between biochemical functions in human hosts and the nutritional preferences in human gut microbiomes. Thus, our method has the potential to be a powerful tool for comparative functional analysis in genomics and metagenomics, able to target unknown environments containing various uncultivable microbes within unidentified phyla.
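
    A minimal sketch of the module completion ratio, assuming each module is represented as a list of reaction steps and each step as a set of alternative KO identifiers; the module contents and KO identifiers below are invented for illustration only.

```python
def module_completion_ratio(module_steps, genome_kos):
    """Fraction of reaction steps in a KEGG-style module for which at least
    one of the step's alternative KO identifiers is annotated in the genome
    (or metagenome) under evaluation."""
    filled = sum(1 for alternatives in module_steps if alternatives & genome_kos)
    return filled / len(module_steps)

# Hypothetical 4-step module; each step lists alternative KOs.
module = [{"K00001"}, {"K00002", "K00003"}, {"K00004"}, {"K00005"}]
kos = {"K00001", "K00003", "K00005"}
print(module_completion_ratio(module, kos))   # 3 of 4 steps filled -> 0.75
```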

  12. Detection of abrupt changes in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1984-01-01

    Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple filter-based techniques and residual-based method and the multiple model and generalized likelihood ratio methods are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure and robustness to model uncertainty are discussed.

  13. Liquid-crystals electro-optic modulator based on electrohydrodynamic effects.

    PubMed

    Muriel, M A; Martin-Pereda, J A

    1980-11-01

    A new method of light modulation is reported. This method is based on the electro-optical properties of nematic materials and on the use of a new wedge structure. The advantages of this structure are the possibility of modulating nonpolarized light and the improved signal-to-noise ratio. The highest modulating frequency obtained is 25 kHz.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvador Palau, A.; Eder, S. D., E-mail: sabrina.eder@uib.no; Kaltenbacher, T.

    Time-of-flight (TOF) is a standard experimental technique for determining, among others, the speed ratio S (velocity spread) of a molecular beam. The speed ratio is a measure for the monochromaticity of the beam and an accurate determination of S is crucial for various applications, for example, for characterising chromatic aberrations in focussing experiments related to helium microscopy or for precise measurements of surface phonons and surface structures in molecular beam scattering experiments. For both of these applications, it is desirable to have as high a speed ratio as possible. Molecular beam TOF measurements are typically performed by chopping the beam using a rotating chopper with one or more slit openings. The TOF spectra are evaluated using a standard deconvolution method. However, for higher speed ratios, this method is very sensitive to errors related to the determination of the slit width and the beam diameter. The exact sensitivity depends on the beam diameter, the number of slits, the chopper radius, and the chopper rotation frequency. We present a modified method suitable for the evaluation of TOF measurements of high speed ratio beams. The modified method is based on a systematic variation of the chopper convolution parameters so that a set of independent measurements that can be fitted with an appropriate function are obtained. We show that with this modified method, it is possible to reduce the error by typically one order of magnitude compared to the standard method.

  15. Image Quality Analysis and Optical Performance Requirement for Micromirror-Based Lissajous Scanning Displays

    PubMed Central

    Du, Weiqi; Zhang, Gaofei; Ye, Liangchen

    2016-01-01

    Micromirror-based scanning displays have been the focus of a variety of applications. Lissajous scanning displays have advantages in terms of power consumption; however, the image quality is not good enough. The main reason for this is the varying size and the contrast ratio of pixels at different positions of the image. In this paper, the Lissajous scanning trajectory is analyzed and a new method based on the diamond pixel is introduced to Lissajous displays. The optical performance of micromirrors is discussed. A display system demonstrator is built, and tests of resolution and contrast ratio are conducted. The test results show that the new Lissajous scanning method can be used in displays by using diamond pixels and image quality remains stable at different positions. PMID:27187390

  16. Image Quality Analysis and Optical Performance Requirement for Micromirror-Based Lissajous Scanning Displays.

    PubMed

    Du, Weiqi; Zhang, Gaofei; Ye, Liangchen

    2016-05-11

    Micromirror-based scanning displays have been the focus of a variety of applications. Lissajous scanning displays have advantages in terms of power consumption; however, the image quality is not good enough. The main reason for this is the varying size and the contrast ratio of pixels at different positions of the image. In this paper, the Lissajous scanning trajectory is analyzed and a new method based on the diamond pixel is introduced to Lissajous displays. The optical performance of micromirrors is discussed. A display system demonstrator is built, and tests of resolution and contrast ratio are conducted. The test results show that the new Lissajous scanning method can be used in displays by using diamond pixels and image quality remains stable at different positions.

  17. Estimation of average burnup of damaged fuels loaded in Fukushima Dai-ichi reactors by using the 134Cs/137Cs ratio method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endo, T.; Sato, S.; Yamamoto, A.

    2012-07-01

    Average burnup of damaged fuels loaded in the Fukushima Dai-ichi reactors is estimated using the 134Cs/137Cs ratio method for measured radioactivities of 134Cs and 137Cs in contaminated soils within the range of 100 km from the Fukushima Dai-ichi nuclear power plants. As a result, the measured 134Cs/137Cs ratio from the contaminated soil is 0.996 ± 0.07 as of March 11, 2011. Based on the 134Cs/137Cs ratio method, the estimated burnup of the damaged fuels is approximately 17.2 ± 1.5 [GWd/tHM]. It is noted that the numerical results of various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same evaluated values of the 134Cs/137Cs ratio with the same evaluated nuclear data library (ENDF/B-VII.0). The void fraction effect in the depletion calculation has a major impact on the 134Cs/137Cs ratio compared with the differences between JENDL-4.0 and ENDF/B-VII.0. (authors)
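
    Because 134Cs (half-life about 2.06 years) decays far faster than 137Cs (about 30.1 years), a measured activity ratio must be decay-corrected back to the reference date before it can be compared with burnup-code predictions. A sketch of that correction, with approximate half-lives and an invented measurement as assumptions:

```python
import numpy as np

HALF_LIFE_Y = {"Cs134": 2.065, "Cs137": 30.08}   # approximate values, years

def ratio_at_reference_date(measured_ratio, years_since_reference):
    """Decay-correct a measured 134Cs/137Cs activity ratio back to a
    reference date such as 2011-03-11, using R(t0) = R(t) * exp((lam134 -
    lam137) * dt), since the faster 134Cs decay lowers the ratio over time."""
    lam134 = np.log(2.0) / HALF_LIFE_Y["Cs134"]
    lam137 = np.log(2.0) / HALF_LIFE_Y["Cs137"]
    return measured_ratio * np.exp((lam134 - lam137) * years_since_reference)

# Hypothetical soil sample measured one year after the reference date:
print(ratio_at_reference_date(0.71, 1.0))   # roughly 0.97 at the reference date
```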

  18. Soil classification based on cone penetration test (CPT) data in Western Central Java

    NASA Astrophysics Data System (ADS)

    Apriyono, Arwan; Yanto, Santoso, Purwanto Bekti; Sumiyanto

    2018-03-01

    This study presents a modified friction ratio range for soil classification, i.e. gravel, sand, silt & clay, and peat, using CPT data in Western Central Java. The CPT data were obtained solely from the Soil Mechanics Laboratory of Jenderal Soedirman University and cover more than 300 sites within the study area. About 197 records were retained after the data filtering process. The IDW method was employed to interpolate friction ratio values onto a regular grid of points for soil classification map generation. The soil classification map was generated and presented using QGIS software. In addition, the soil classification map with respect to the modified friction ratio range was validated using 10% of the total measurements. The result shows that silt and clay dominate the soil type in the study area, which is in agreement with two popular methods, namely Begemann and Vos. However, the modified friction ratio range produces 85% similarity with laboratory measurements, whereas the Begemann and Vos methods yield 70% similarity. In addition, the modified friction ratio range can effectively distinguish fine and coarse grains, and is thus useful for soil classification and subsequently for landslide analysis. Therefore, the modified friction ratio range proposed in this study can be used to identify soil type for mountainous tropical regions.
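
    The mapping step above rests on inverse-distance-weighted interpolation of point friction ratios onto a grid; a generic IDW sketch follows (the power parameter, coordinates, and values below are placeholders, not the study's data).

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation of friction-ratio values
    measured at CPT sites onto arbitrary query points."""
    xy_known = np.asarray(xy_known, dtype=float)
    xy_query = np.asarray(xy_query, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty(len(xy_query))
    for k, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-12):              # query coincides with a sample point
            out[k] = values[np.argmin(d)]
            continue
        w = 1.0 / d ** power
        out[k] = np.sum(w * values) / np.sum(w)
    return out

# Hypothetical CPT sites (x, y) with friction ratios (%), one query point:
sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
rf = [1.2, 2.5, 3.1, 4.0]
print(idw_interpolate(sites, rf, [(0.5, 0.5)]))
```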

  19. Prediction of protein subcellular localization by weighted gene ontology terms.

    PubMed

    Chi, Sang-Mun

    2010-08-27

    We develop a new weighting approach of gene ontology (GO) terms for predicting protein subcellular localization. The weights of individual GO terms, corresponding to their contribution to the prediction algorithm, are determined by the term-weighting methods used in text categorization. We evaluate several term-weighting methods, which are based on inverse document frequency, information gain, gain ratio, odds ratio, and chi-square and its variants. Additionally, we propose a new term-weighting method based on the logarithmic transformation of chi-square. The proposed term-weighting method performs better than other term-weighting methods, and also outperforms state-of-the-art subcellular prediction methods. Our proposed method achieves 98.1%, 99.3%, 98.1%, 98.1%, and 95.9% overall accuracies for the animal BaCelLo independent dataset (IDS), fungal BaCelLo IDS, animal Höglund IDS, fungal Höglund IDS, and PLOC dataset, respectively. Furthermore, the close correlation between high-weighted GO terms and subcellular localizations suggests that our proposed method appropriately weights GO terms according to their relevance to the localizations. Copyright 2010 Elsevier Inc. All rights reserved.

  20. Predicting Consistency of Meningioma by Magnetic Resonance Imaging

    PubMed Central

    Smith, Kyle A.; Leever, John D.; Chamoun, Roukoz B.

    2015-01-01

    Objective Meningioma consistency is important because it affects the difficulty of surgery. To predict preoperative consistency, several methods have been proposed; however, they lack objectivity and reproducibility. We propose a new method for prediction based on tumor to cerebellar peduncle T2-weighted imaging intensity (TCTI) ratios. Design The magnetic resonance (MR) images of 20 consecutive patients were evaluated preoperatively. An intraoperative consistency scale was applied to these lesions prospectively by the operating surgeon based on Cavitron Ultrasonic Surgical Aspirator (Valleylab, Boulder, Colorado, United States) intensity. Tumors were classified as A, very soft; B, soft/intermediate; or C, fibrous. Using T2-weighted MR sequence, the TCTI ratio was calculated. Tumor consistency grades and TCTI ratios were then correlated. Results Of the 20 tumors evaluated prospectively, 7 were classified as very soft, 9 as soft/intermediate, and 4 as fibrous. TCTI ratios for fibrous tumors were all ≤ 1; very soft tumors were ≥ 1.8, except for one outlier of 1.66; and soft/intermediate tumors were > 1 to < 1.8. Conclusion We propose a method using quantifiable region-of-interest TCTIs as a uniform and reproducible way to predict tumor consistency. The intraoperative consistency was graded in an objective and clinically significant way and could lead to more efficient tumor resection. PMID:26225306
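
    A minimal sketch of the TCTI grading, assuming region-of-interest masks for the tumor and the cerebellar peduncle on the T2-weighted image and using the cutoffs quoted in the abstract; the mask handling and names are illustrative.

```python
import numpy as np

def tcti_ratio_and_consistency(t2_image, tumor_mask, peduncle_mask):
    """Tumor to cerebellar-peduncle T2 intensity (TCTI) ratio from ROI
    means, graded with the cutoffs reported in the study."""
    ratio = t2_image[tumor_mask].mean() / t2_image[peduncle_mask].mean()
    if ratio <= 1.0:
        grade = "C: fibrous"
    elif ratio >= 1.8:
        grade = "A: very soft"
    else:
        grade = "B: soft/intermediate"
    return ratio, grade
```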

  1. Intensity ratio to improve black hole assessment in multiple sclerosis.

    PubMed

    Adusumilli, Gautam; Trinkaus, Kathryn; Sun, Peng; Lancia, Samantha; Viox, Jeffrey D; Wen, Jie; Naismith, Robert T; Cross, Anne H

    2018-01-01

    Improved imaging methods are critical to assess neurodegeneration and remyelination in multiple sclerosis. Chronic hypointensities observed on T1-weighted brain MRI, "persistent black holes," reflect severe focal tissue damage. Present measures consist of determining persistent black holes numbers and volumes, but do not quantitate severity of individual lesions. Develop a method to differentiate black and gray holes and estimate the severity of individual multiple sclerosis lesions using standard magnetic resonance imaging. 38 multiple sclerosis patients contributed images. Intensities of lesions on T1-weighted scans were assessed relative to cerebrospinal fluid intensity using commercial software. Magnetization transfer imaging, diffusion tensor imaging and clinical testing were performed to assess associations with T1w intensity-based measures. Intensity-based assessments of T1w hypointensities were reproducible and achieved > 90% concordance with expert rater determinations of "black" and "gray" holes. Intensity ratio values correlated with magnetization transfer ratios (R = 0.473) and diffusion tensor imaging metrics (R values ranging from 0.283 to -0.531) that have been associated with demyelination and axon loss. Intensity ratio values incorporated into T1w hypointensity volumes correlated with clinical measures of cognition. This method of determining the degree of hypointensity within multiple sclerosis lesions can add information to conventional imaging. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Estimating the signal-to-noise ratio of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Curran, Paul J.; Dungan, Jennifer L.

    1988-01-01

    To make the best use of narrowband airborne visible/infrared imaging spectrometer (AVIRIS) data, an investigator needs to know the ratio of signal to random variability or noise (signal-to-noise ratio or SNR). The signal is land cover dependent and varies with both wavelength and atmospheric absorption; random noise comprises sensor noise and intrapixel variability (i.e., variability within a pixel). The three existing methods for estimating the SNR are inadequate, since typical laboratory methods inflate while dark current and image methods deflate the SNR. A new procedure is proposed called the geostatistical method. It is based on the removal of periodic noise by notch filtering in the frequency domain and the isolation of sensor noise and intrapixel variability using the semi-variogram. This procedure was applied easily and successfully to five sets of AVIRIS data from the 1987 flying season and could be applied to remotely sensed data from broadband sensors.
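
    A rough sketch of the geostatistical idea, assuming a one-dimensional transect of pixel values from a fairly homogeneous area: the empirical semivariogram is extrapolated linearly to lag zero, the intercept (nugget) is read as the noise variance, and SNR is the mean signal over its square root. The lag range, the linear extrapolation, and all names are simplifying assumptions rather than the authors' exact procedure.

```python
import numpy as np

def semivariogram_1d(transect, max_lag=10):
    """Empirical semivariance gamma(h) = 0.5 * mean((z(x+h) - z(x))**2)
    along a one-dimensional transect of pixel values."""
    z = np.asarray(transect, dtype=float)
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, gamma

def geostatistical_snr(transect, fit_lags=5):
    """Extrapolate the first few semivariogram points to lag zero; take the
    intercept (nugget) as noise variance and return mean(signal)/sqrt(nugget)."""
    lags, gamma = semivariogram_1d(transect, max_lag=fit_lags)
    slope, intercept = np.polyfit(lags, gamma, 1)
    nugget = max(float(intercept), 1e-12)
    return float(np.mean(transect)) / np.sqrt(nugget)
```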

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batchelor, D.B.; Carreras, B.A.; Hirshman, S.P.

    Significant progress has been made in the development of new modest-size compact stellarator devices that could test optimization principles for the design of a more attractive reactor. These are 3 and 4 field period low-aspect-ratio quasi-omnigenous (QO) stellarators based on an optimization method that targets improved confinement, stability, ease of coil design, low-aspect-ratio, and low bootstrap current.

  4. Predictions of Crystal Structure Based on Radius Ratio: How Reliable Are They?

    ERIC Educational Resources Information Center

    Nathan, Lawrence C.

    1985-01-01

    Discussion of crystalline solids in undergraduate curricula often includes the use of radius ratio rules as a method for predicting which type of crystal structure is likely to be adopted by a given ionic compound. Examines this topic, establishing more definitive guidelines for the use and reliability of the rules. (JN)

  5. Planning For Retirement: Using Income Replacement Ratios in Setting Retirement Income Objectives.

    ERIC Educational Resources Information Center

    Palmer, Bruce A.

    1993-01-01

    This paper presents a method for higher education faculty and staff to assess pension plan objectives by determining a retirement income replacement ratio to maintain the salary-based preretirement standard of living. The paper describes the RETIRE Project which researches income replacement using the federal government's annual "Consumer…

  6. Remote pedestrians detection at night time in FIR Image using contrast filtering and locally projected region based CNN

    NASA Astrophysics Data System (ADS)

    Kim, Taehwan; Kim, Sungho

    2017-02-01

    This paper presents a novel method to detect remote pedestrians. After producing a human-temperature-based brightness enhancement image from the temperature data input, we generate the regions of interest (ROIs) with a multiscale contrast filtering based approach that includes a biased hysteresis threshold and clustering together with the remote pedestrian's height, pixel area, and central position information. Afterwards, we conduct local vertical and horizontal projection based ROI refinement and weak aspect ratio based ROI limitation to solve the problem of region expansion in the contrast filtering stage. Finally, we detect the remote pedestrians by validating the final ROIs using transfer learning with convolutional neural network (CNN) features, followed by non-maximal suppression (NMS) with a strong aspect ratio limitation to improve the detection performance. In the experimental results, we confirmed that the proposed contrast filtering and locally projected region based CNN (CFLP-CNN) outperforms the baseline method by 8% in terms of log-averaged miss rate. The proposed method also provides better regions that are suitably adjusted to the shape and appearance of remote pedestrians, which allows it to detect pedestrians missed by the baseline approach and helps detect pedestrians by splitting a group of people into individual persons.

  7. Study on the coal mixing ratio optimization for a power plant

    NASA Astrophysics Data System (ADS)

    Jin, Y. A.; Cheng, J. W.; Bai, Q.; Li, W. B.

    2017-12-01

    For coal-fired power plants, the application of blended coal combustion has been a great issue due to the shortage and rising prices of high-rank coal. This paper describes the optimization of blending methods between Xing'an lignite coal, Shaltala lignite coal, Ura lignite coal, and Inner Mongolia bituminous coal. The multi-objective decision-making method based on fuzzy mathematics was used to determine the optimal blending ratio to improve the power plant coal-fired economy.

  8. Gain Switching for a Detection System to Accommodate a Newly Developed MALDI-Based Quantification Method

    NASA Astrophysics Data System (ADS)

    Ahn, Sung Hee; Hyeon, Taeghwan; Kim, Myung Soo; Moon, Jeong Hee

    2017-09-01

    In matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF), matrix-derived ions are routinely deflected away to avoid problems with ion detection. This, however, limits the use of a quantification method that utilizes the analyte-to-matrix ion abundance ratio. In this work, we will show that it is possible to measure this ratio by a minor instrumental modification of a simple form of MALDI-TOF. This involves detector gain switching.

  9. An equivalent method for optimization of particle tuned mass damper based on experimental parametric study

    NASA Astrophysics Data System (ADS)

    Lu, Zheng; Chen, Xiaoyi; Zhou, Ying

    2018-04-01

    A particle tuned mass damper (PTMD) is a creative combination of a widely used tuned mass damper (TMD) and an efficient particle damper (PD) in the vibration control area. The performance of a one-storey steel frame with an attached PTMD is investigated through free vibration and shaking table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effects is investigated, and it is shown that the attenuation level depends significantly on the filling ratio of particles. Based on the experimental parametric study, some guidelines for optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, and it shows satisfactory agreement between the simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.

  10. Improvable method for Halon 1301 concentration measurement based on infrared absorption

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Lu, Song; Guan, Yu

    2015-09-01

    Halon 1301 has attracted much interest because of its pervasive use as an effective fire suppressant agent in aircraft-related fires, and the study of fire suppressant agent concentration measurement is of particular interest. In this work, a Halon 1301 concentration measurement method based on the Beer-Lambert law is developed. IR light is transmitted through the mixed gas, and the light intensity with and without the agent present is measured. The intensity ratio is a function of the volume percentage of Halon 1301, and the voltage output of the detector is proportional to the light intensity. As such, the relationship between the volume percentage and the voltage ratio can be established. The concentration measurement system shows a relative error of less than ±2.50% and a full-scale error within 1.20%. This work also discusses the effect of temperature and relative humidity (RH) on the calibration. The experimental results for the voltage ratio versus Halon 1301 volume percentage relationship show that the voltage ratio drops significantly as temperature rises from 25 to 100 °C, and it decreases as RH rises from 0% to 100%.
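
    As a rough illustration of the measurement principle only, the sketch below inverts the Beer-Lambert law for the agent volume fraction, treating the detector voltage ratio as a stand-in for the transmitted-to-incident intensity ratio; the absorption coefficient, path length, and voltages are hypothetical calibration values, not figures from the paper.

      import numpy as np

      def halon_volume_fraction(v_signal, v_reference, alpha, path_length_cm):
          """
          Invert the Beer-Lambert law for the agent volume fraction.
          v_signal / v_reference stands in for I / I0 because the detector voltage
          is proportional to the transmitted IR intensity.  alpha is an assumed
          effective absorption coefficient (per cm per unit volume fraction)
          obtained from calibration.
          """
          ratio = v_signal / v_reference
          return -np.log(ratio) / (alpha * path_length_cm)

      # hypothetical calibration values: ~4.5% Halon 1301 by volume
      print(halon_volume_fraction(v_signal=3.2, v_reference=4.0, alpha=0.5, path_length_cm=10.0))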

  11. Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods

    Treesearch

    Zhaodi Guo; Jingyun Fang; Yude Pan; Richard Birdsey

    2010-01-01

    Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...

  12. Studying aerosol light scattering based on aspect ratio distribution observed by fluorescence microscope.

    PubMed

    Li, Li; Zheng, Xu; Li, Zhengqiang; Li, Zhanhua; Dubovik, Oleg; Chen, Xingfeng; Wendisch, Manfred

    2017-08-07

    Particle shape is crucial to the properties of light scattered by atmospheric aerosol particles. A method of fluorescence microscopy direct observation was introduced to determine the aspect ratio distribution of aerosol particles. The result is comparable with that of the electron microscopic analysis. The measured aspect ratio distribution has been successfully applied in modeling light scattering and further in simulation of polarization measurements of the sun/sky radiometer. These efforts are expected to improve shape retrieval from skylight polarization by using directly measured aspect ratio distribution.

  13. Multi-Satellite Estimates of Land-Surface Properties for Determination of Energy and Water Budgets

    NASA Technical Reports Server (NTRS)

    Menzel, W. Paul; Rabin, Robert M.; Neale, Christopher M. U.; Gallo, Kevin; Diak, George R.

    1998-01-01

    Using the WETNET database, existing methods for the estimation of surface wetness from SSM/I data have been assessed and further developed. A physical-statistical method for optimal estimation of daily surface heat flux and Bowen ratio on the mesoscale has been developed and tested. This method is based on observations of daytime planetary boundary layer (PBL) growth from operational rawinsondes and the daytime land-surface temperature amplitude from Geostationary Operational Environmental Satellites (GOES). The mesoscale patterns of these heat fluxes have been compared with an AVHRR-based vegetation index and surface wetness (separately estimated from SSM/I and in situ observations). Cases of the 1988 Midwest drought and a surface/atmosphere moisture gradient (dry line) in the southern Plains were studied. The analyses revealed significant variations in sensible heat flux (S0) and Bowen ratio (B0) associated with vegetation cover and antecedent precipitation. Relationships for surface heat flux (and Bowen ratio) from antecedent precipitation and vegetation index have been developed and compared to other findings. Results from this project are reported in the following reviewed literature.

  14. Random walk-percolation-based modeling of two-phase flow in porous media: Breakthrough time and net to gross ratio estimation

    NASA Astrophysics Data System (ADS)

    Ganjeh-Ghazvini, Mostafa; Masihi, Mohsen; Ghaedi, Mojtaba

    2014-07-01

    Fluid flow modeling in porous media has many applications in waste treatment, hydrology and petroleum engineering. In any geological model, flow behavior is controlled by multiple properties. These properties must be known in advance of common flow simulations. When uncertainties are present, deterministic modeling often produces poor results. Percolation and Random Walk (RW) methods have recently been used in flow modeling. Their stochastic basis is useful in dealing with uncertainty problems. They are also useful in finding the relationship between porous media descriptions and flow behavior. This paper employs a simple methodology based on random walk and percolation techniques. The method is applied to a well-defined model reservoir in which the breakthrough time distributions are estimated. The results of this method and the conventional simulation are then compared. The effect of the net to gross ratio on the breakthrough time distribution is studied in terms of Shannon entropy. Use of the entropy plot allows one to assign the appropriate net to gross ratio to any porous medium.
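
    A minimal sketch of the entropy measure mentioned above: the Shannon entropy of a histogram of simulated breakthrough times. The lognormal samples stand in for random-walk results at two hypothetical net-to-gross ratios and are not data from the study.

      import numpy as np

      def breakthrough_entropy(breakthrough_times, n_bins=20):
          """
          Shannon entropy (in bits) of a breakthrough-time distribution,
          estimated from a histogram of Monte Carlo random-walk results.
          """
          counts, _ = np.histogram(breakthrough_times, bins=n_bins)
          p = counts[counts > 0] / counts.sum()
          return float(-(p * np.log2(p)).sum())

      # hypothetical breakthrough times (arbitrary units) for two net-to-gross ratios
      rng = np.random.default_rng(0)
      high_ntg = rng.lognormal(mean=3.0, sigma=0.3, size=5000)   # well-connected medium
      low_ntg = rng.lognormal(mean=3.5, sigma=0.9, size=5000)    # poorly connected medium
      print(breakthrough_entropy(high_ntg), breakthrough_entropy(low_ntg))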

  15. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuseppe Palmiotti

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios, and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.

  16. Determining the effective system damping of highway bridges.

    DOT National Transportation Integrated Search

    2009-06-01

    This project investigates four methods for modeling modal damping ratios of short-span and isolated : concrete bridges subjected to strong ground motion, which can be used for bridge seismic analysis : and design based on the response spectrum method...

  17. [Comparison study on subjective and objective measurements of the accommodative convergence to accommodation ratio].

    PubMed

    Xu, Jing-jing; Xu, Dan; Huang, Tao; Jiang, Jian; Lü, Fan

    2012-05-01

    To compare the accommodative convergence to accommodation (AC/A) ratios measured by objective and subjective methods, and to explore the differences and their related factors. Forty young volunteers were measured with an eye tracker to obtain the amount of convergence when fixating on targets at 100 cm, 50 cm, 33 cm and 25 cm, and with an infrared auto-refractor to obtain the corresponding accommodative responses. AC/A ratios based on these two measurements were compared with the calculated and gradient AC/A ratios from Von Graefe tests. The mean stimulus AC/A ratio measured by eye tracker was higher than the calculated and gradient AC/A ratios obtained by the Von Graefe method (P = 0.003, 0.001). There was a statistically significant correlation (r = 0.871, P = 0.000) and difference (P = 0.000) between the stimulus AC/A ratio and the response AC/A ratio, both measured by eye tracker, and the difference tended to be greater at higher AC/A ratios. The objective AC/A ratio is usually higher than the clinical subjective measurement because of a greater proximal effect. The response AC/A ratio measured objectively may more realistically reveal the mutual effect and relationship between accommodation and convergence, and it appears to be a more credible parameter for monitoring the progression of myopia in clinics.

  18. Rapid Estimation of Astaxanthin and the Carotenoid-to-Chlorophyll Ratio in the Green Microalga Chromochloris zofingiensis Using Flow Cytometry.

    PubMed

    Chen, Junhui; Wei, Dong; Pohnert, Georg

    2017-07-19

    The green microalga Chromochloris zofingiensis can accumulate significant amounts of valuable carotenoids, mainly natural astaxanthin, a product with applications in functional food, cosmetics, nutraceuticals, and with potential therapeutic value in cardiovascular and neurological diseases. To optimize the production of astaxanthin, it is essential to monitor the content of astaxanthin in algal cells during cultivation. The widely used HPLC (high-performance liquid chromatography) method for quantitative astaxanthin determination is time-consuming and laborious. In the present work, we present a method using flow cytometry (FCM) for in vivo determination of the astaxanthin content and the carotenoid-to-chlorophyll ratio (Car/Chl) in mixotrophic C. zofingiensis. The method is based on the assessment of fluorescent characteristics of cellular pigments. The mean fluorescence intensity (MFI) of living cells was determined by FCM to monitor pigment formation based on the correlation between MFI detected in particular channels (FL1: 533 ± 15 nm; FL2: 585 ± 20 nm; FL3: >670 nm) and pigment content in algal cells. Through correlation and regression analysis, a linear relationship was observed between MFI in FL2 (band-pass filter, emission at 585 nm in FCM) and astaxanthin content (in HPLC) and applied for predicting astaxanthin content. With similar procedures, the relationships between MFI in different channels and Car/Chl ratio in mixotrophic C. zofingiensis were also determined. Car/Chl ratios could be estimated by the ratios of MFI (FL1/FL3, FL2/FL3). FCM is thus a highly efficient and feasible method for rapid estimation of astaxanthin content in the green microalga C. zofingiensis. The rapid FCM method is complementary to the current HPLC method, especially for rapid evaluation and prediction of astaxanthin formation as it is required during the high-throughput culture in the laboratory and mass cultivation in industry.

  19. Rapid Estimation of Astaxanthin and the Carotenoid-to-Chlorophyll Ratio in the Green Microalga Chromochloris zofingiensis Using Flow Cytometry

    PubMed Central

    Chen, Junhui; Pohnert, Georg

    2017-01-01

    The green microalga Chromochloris zofingiensis can accumulate significant amounts of valuable carotenoids, mainly natural astaxanthin, a product with applications in functional food, cosmetics, nutraceuticals, and with potential therapeutic value in cardiovascular and neurological diseases. To optimize the production of astaxanthin, it is essential to monitor the content of astaxanthin in algal cells during cultivation. The widely used HPLC (high-performance liquid chromatography) method for quantitative astaxanthin determination is time-consuming and laborious. In the present work, we present a method using flow cytometry (FCM) for in vivo determination of the astaxanthin content and the carotenoid-to-chlorophyll ratio (Car/Chl) in mixotrophic C. zofingiensis. The method is based on the assessment of fluorescent characteristics of cellular pigments. The mean fluorescence intensity (MFI) of living cells was determined by FCM to monitor pigment formation based on the correlation between MFI detected in particular channels (FL1: 533 ± 15 nm; FL2: 585 ± 20 nm; FL3: >670 nm) and pigment content in algal cells. Through correlation and regression analysis, a linear relationship was observed between MFI in FL2 (band-pass filter, emission at 585 nm in FCM) and astaxanthin content (in HPLC) and applied for predicting astaxanthin content. With similar procedures, the relationships between MFI in different channels and Car/Chl ratio in mixotrophic C. zofingiensis were also determined. Car/Chl ratios could be estimated by the ratios of MFI (FL1/FL3, FL2/FL3). FCM is thus a highly efficient and feasible method for rapid estimation of astaxanthin content in the green microalga C. zofingiensis. The rapid FCM method is complementary to the current HPLC method, especially for rapid evaluation and prediction of astaxanthin formation as it is required during the high-throughput culture in the laboratory and mass cultivation in industry. PMID:28753934

  20. Using DNA fingerprints to infer familial relationships within NHANES III households

    PubMed Central

    Katki, Hormuzd A.; Sanders, Christopher L.; Graubard, Barry I.; Bergen, Andrew W.

    2009-01-01

    Developing, targeting, and evaluating genomic strategies for population-based disease prevention require population-based data. In response to this urgent need, genotyping has been conducted within the Third National Health and Nutrition Examination Survey (NHANES III), the nationally representative household-interview health survey in the U.S. However, before these genetic analyses can occur, family relationships within households must be accurately ascertained. Unfortunately, reported family relationships within NHANES III households based on questionnaire data are incomplete and inconclusive with regard to the actual biological relatedness of family members. We inferred family relationships within households using DNA fingerprints (Identifiler®) that contain the DNA loci used by law enforcement agencies for forensic identification of individuals. However, the performance of these loci for relationship inference is not well understood. We evaluated two competing statistical methods for relationship inference on pairs of household members: an exact likelihood ratio relying on allele frequencies and an Identical By State (IBS) likelihood ratio that only requires matching alleles. We modified these methods to account for genotyping errors and population substructure. The two methods usually agree on the rankings of the most likely relationships. However, the IBS method underestimates the likelihood ratio by not accounting for the informativeness of matching rare alleles. The likelihood ratio is sensitive to estimates of population substructure, and parent-child relationships are sensitive to the specified genotyping error rate. These loci were unable to distinguish second-degree relationships and cousins from unrelated pairs. The genetic data are also useful for verifying reported relationships and identifying data quality issues. An important by-product is the first explicitly nationally representative estimates of allele frequencies at these ubiquitous forensic loci. PMID:20664713

  1. Chemometric Analysis of Multicomponent Biodegradable Plastics by Fourier Transform Infrared Spectrometry: The R-Matrix Method

    USDA-ARS?s Scientific Manuscript database

    A new chemometric method based on absorbance ratios from Fourier transform infrared spectra was devised to analyze multicomponent biodegradable plastics. The method uses the Beer-Lambert law to directly compute individual component concentrations and weight losses before and after biodegradation of c...

  2. Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation

    NASA Astrophysics Data System (ADS)

    Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.

    A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OC non-comb) is assumed to be zero, the ratio of averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC) pri. In contrast with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OC non-comb is allowed to obtain a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OC non-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
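
    The two estimators discussed above can be sketched compactly. The snippet below implements the ratio-of-averages (ROA) estimate and the standard Deming regression slope/intercept formulas with an assumed error-variance ratio delta; the OC and EC values are synthetic and only illustrate the calculation, not the paper's data.

      import numpy as np

      def ratio_of_averages(oc, ec):
          """ROA estimate of the primary OC/EC ratio (assumes OC_non-comb = 0)."""
          return np.mean(oc) / np.mean(ec)

      def deming_slope_intercept(x, y, delta=1.0):
          """
          Deming regression of y on x when both variables carry measurement error.
          delta is the assumed ratio of the error variances (var_y_err / var_x_err).
          """
          x, y = np.asarray(x, float), np.asarray(y, float)
          sxx = np.var(x, ddof=1)
          syy = np.var(y, ddof=1)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          slope = (syy - delta * sxx +
                   np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
          intercept = y.mean() - slope * x.mean()
          return slope, intercept  # slope ~ (OC/EC)_pri, intercept ~ OC_non-comb

      # synthetic EC (x) and total OC (y): true slope 2.0, true intercept 0.3
      ec = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
      oc = 2.0 * ec + 0.3 + np.random.default_rng(1).normal(0, 0.05, ec.size)
      print(ratio_of_averages(oc, ec))
      print(deming_slope_intercept(ec, oc, delta=1.0))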

  3. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    PubMed Central

    Islam, Md. Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method over a Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing N Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676

  4. Feature and score fusion based multiple classifier selection for iris recognition.

    PubMed

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method over a Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing N Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.

  5. Hot melt extrusion of ion-exchange resin for taste masking.

    PubMed

    Tan, David Cheng Thiam; Ong, Jeremy Jianming; Gokhale, Rajeev; Heng, Paul Wan Sia

    2018-05-30

    Taste masking is important for some unpleasant-tasting bioactives in oral dosage forms. Among the many methods available for taste masking, the use of ion-exchange resin (IER) holds promise. IER combined with hot melt extrusion (HME) may offer additional advantages over solvent methods. IER provides taste masking by complexing with the drug ions and preventing drug dissolution in the mouth. Drug-IER complexation approaches described in the literature are mainly based on either batch processing or column elution. These methods of drug-IER complexation have obvious limitations such as high solvent volume requirements, multiple processing steps, and extended processing time. Thus, the objective of this study was to develop a single-step, solvent-free, continuous HME process for drug-IER complexation. The screening study evaluated the drug to IER ratio, the type of IER, and the drug complexation method. In the screening study, a potassium salt of a weakly acidic carboxylate-based cationic IER was found suitable for the HME method. Thereafter, an optimization study was conducted by varying HME process parameters such as screw speed, extrusion temperature, and drug to IER ratio. It was observed that extrusion temperature and drug to IER ratio are critical to drug-IER complexation through HME. In summary, this study has established the feasibility of a continuous drug-IER complexation method using HME for taste masking. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.

  7. Comparative quantification of human intestinal bacteria based on cPCR and LDR/LCR

    PubMed Central

    Tang, Zhou-Rui; Li, Kai; Zhou, Yu-Xun; Xiao, Zhen-Xian; Xiao, Jun-Hua; Huang, Rui; Gu, Guo-Hao

    2012-01-01

    AIM: To establish a multiple detection method based on comparative polymerase chain reaction (cPCR) and ligase detection reaction (LDR)/ligase chain reaction (LCR) to quantify intestinal bacterial components. METHODS: Comparative quantification of 16S rDNAs from different intestinal bacterial components was used to quantify multiple intestinal bacteria. The 16S rDNAs of different bacteria were amplified simultaneously by cPCR. LDR/LCR was then examined to perform the genotyping and quantification. Two beneficial bacteria (Bifidobacterium, Lactobacillus) and three conditionally pathogenic bacteria (Enterococcus, Enterobacterium and Eubacterium) were used in this detection. With cloned standard bacterial 16S rDNAs, standard curves were prepared to validate the quantitative relations between the ratio of the original concentrations of two templates and the ratio of the fluorescence signals of their final ligation products. Internal controls were added to monitor the whole detection flow. The quantity ratio between two bacteria was tested. RESULTS: cPCR and LDR revealed clear linear correlations with standard DNAs, but cPCR and LCR did not. In the sample test, the distributions of the quantity ratio between each pair of bacterial species were obtained. There were significant differences among these distributions in the total samples, but the distributions of the quantity ratio for each pair of bacteria remained stable across groups divided by age or sex. CONCLUSION: The detection method in this study can be used to conduct multiple intestinal bacteria genotyping and quantification, and to monitor human intestinal health status as well. PMID:22294830

  8. Effects of rumen-degradable protein:rumen-undegradable protein ratio and corn processing on production performance, nitrogen efficiency, and feeding behavior of Holstein dairy cows.

    PubMed

    Savari, M; Khorvash, M; Amanlou, H; Ghorbani, G R; Ghasemi, E; Mirzaei, M

    2018-02-01

    This study was conducted to investigate the effects of the ratio of rumen-degradable protein (RDP) to rumen-undegradable protein (RUP) and corn processing method on production performance, nitrogen (N) efficiency, and feeding behavior of high-producing Holstein dairy cows. Twelve multiparous Holstein cows (second parity; milk yield = 48 ± 3 kg/d) were assigned to a replicated 4 × 4 Latin square design with a 2 × 2 factorial arrangement of treatments. Factor 1 was corn processing method [ground corn (GC) or steam flaked corn (SFC) with a flake density of about 390 g/L], and factor 2 was RDP:RUP ratio [low ratio (LR) = 60:40; high ratio (HR) = 65:35] based on crude protein (%). The crude protein concentrations were kept constant across the treatments (16.7% of DM). No significant interactions of main treatment effects occurred for lactation performance data. Cows fed the 2 different RDP:RUP ratios exhibited similar dry matter intake (DMI), but those fed SFC showed decreased feed intake compared with those receiving GC (25.1 ± 0.48 vs. 26.2 ± 0.47 kg/d, respectively). Cows fed HR diets produced more milk than did those fed LR diets (44.4 ± 1.05 vs. 43.2 ± 1.05 kg/d, respectively). Milk fat content decreased but milk protein content increased in cows fed SFC compared with those fed GC. Feed efficiency (i.e., milk yield/DMI) was enhanced with increasing ratio of RDP:RUP (1.68 ± 0.04 vs. 1.74 ± 0.04 for LR and HR, respectively). Apparent N efficiency was higher in cows fed HR than in those fed LR (30.4 ± 0.61 vs. 29.2 ± 0.62, respectively). Compared with cows fed the GC-based diet, those receiving SFC exhibited lower values of N intake, N-NH3 concentration, and fecal N excretion. Cows receiving SFC-based diets spent more time ruminating (min/kg of DMI) than did those fed GC. Although these results showed no interaction effects of RDP:RUP ratio and corn processing method on performance, higher RDP:RUP ratios and ground corn can be effective feeding strategies for lactating cows receiving high-concentrate diets. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  9. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of disease rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model have been introduced, which may solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR problem when there is no observed bladder cancer in an area.
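
    For concreteness, the SMR calculation described above reduces to an observed-to-expected ratio, with the expected count obtained by indirect standardisation over strata; the strata, rates, and case counts below are hypothetical.

      def expected_cases(population_by_stratum, reference_rate_by_stratum):
          """Expected count from indirect standardisation over age/sex strata."""
          return sum(n * r for n, r in zip(population_by_stratum, reference_rate_by_stratum))

      def smr(observed, expected):
          """Standardized Morbidity Ratio: observed divided by expected cases."""
          return observed / expected

      # hypothetical district with three age strata
      pop = [50_000, 30_000, 10_000]
      ref_rates = [0.00005, 0.00015, 0.00040]   # national incidence per person
      e = expected_cases(pop, ref_rates)        # 2.5 + 4.5 + 4.0 = 11.0 expected cases
      print(smr(observed=15, expected=e))       # ~1.36, i.e. ~36% excess relative risk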

  10. Technical Note: Improving proton stopping power ratio determination for a deformable silicone-based 3D dosimeter using dual energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taasti, Vicki Trier, E-mail: victaa@rm.dk; Høye, Ellen Marie; Hansen, David Christoffer

    Purpose: The aim of this study was to investigate whether the stopping power ratio (SPR) of a deformable, silicone-based 3D dosimeter could be determined more accurately using dual energy (DE) CT compared to conventional methods based on single energy (SE) CT. The use of SECT combined with the stoichiometric calibration method was therefore compared to DECT-based determination. Methods: The SPR of the dosimeter was estimated based on its Hounsfield units (HUs) in both a SECT image and a DECT image set. The stoichiometric calibration method was used for converting the HU in the SECT image to an SPR value for the dosimeter, while two published SPR calibration methods for dual energy were applied to the DECT images. Finally, the SPR of the dosimeter was measured in a 60 MeV proton beam by quantifying the range difference with and without the dosimeter in the beam path. Results: The SPR determined from SECT and the stoichiometric method was 1.10, compared to 1.01 with both DECT calibration methods. The measured SPR for the dosimeter material was 0.97. Conclusions: The SPR of the dosimeter was overestimated by 13% using the stoichiometric method and by 3% when using DECT. If the stoichiometric method is to be applied for the dosimeter, the HU of the dosimeter must be manually changed in the treatment planning system in order to give a correct SPR estimate. Using a wrong SPR value will cause differences between the calculated and the delivered treatment plans.

  11. Determination of 13C/12C Isotope Ratio in Alcohols of Different Origin by 1н Nuclei NMR-Spectroscopy

    NASA Astrophysics Data System (ADS)

    Dzhimak, S. S.; Basov, A. A.; Buzko, V. Yu.; Kopytov, G. F.; Kashaev, D. V.; Shashkov, D. I.; Shlapakov, M. S.; Baryshev, M. G.

    2017-02-01

    A new express method for indirect assessment of the 13C/12C isotope ratio on 1H nuclei is developed to verify the authenticity of ethanol origin in alcohol-water-based fluids and to detect falsification of various alcoholic beverages. It is established that in water-based alcohol-containing systems, the side satellites of the ethanol methyl and methylene proton signals in 1H NMR spectra correspond to protons bound to 13C nuclei. There is a direct correlation between the intensities of the 1H NMR signals of the ethanol methyl and methylene protons and their side satellites; therefore, the data obtained can be used to assess the 13C/12C isotope ratio in water-based alcohol-containing systems. The analysis of the integrated intensities of the main and satellite signals of the methyl and methylene protons of ethanol obtained by 1H NMR makes it possible to differentiate between ethanol of synthetic and natural origin. Furthermore, the proposed method made it possible to differentiate between wheat and corn ethanol.

  12. A pulse-shape discrimination method for improving Gamma-ray spectrometry based on a new digital shaping filter

    NASA Astrophysics Data System (ADS)

    Qin, Zhang-jian; Chen, Chuan; Luo, Jun-song; Xie, Xing-hong; Ge, Liang-quan; Wu, Qi-fan

    2018-04-01

    A common practice for improving spectrum quality in the development of nuclear spectroscopy is to design a good shaping filter that improves the signal-to-noise ratio. Another method is proposed in this paper, based on discriminating the pulse shape and discarding bad pulses whose shapes are distorted as a result of abnormal noise, unusual ballistic deficit, or severe pulse pile-up. An Exponentially Decaying Pulse (EDP) generated in nuclear particle detectors can be transformed into a Mexican Hat Wavelet Pulse (MHWP), and the derivation of the transform is given. After the transform is performed, the baseline drift is removed in the new MHWP. Moreover, the MHWP shape can be discriminated with three parameters: the time difference between the two minima of the MHWP, and the two ratios formed by dividing the amplitudes of the two minima by the amplitude of the maximum in the MHWP. A new type of nuclear spectroscopy system was implemented based on the new digital shaping filter, and Gamma-ray spectra were acquired with a variety of pulse-shape discrimination levels. The results showed that the energy resolution and the peak-to-Compton ratio were both improved after the pulse-shape discrimination method was used.

  13. Attribute Weighting Based K-Nearest Neighbor Using Gain Ratio

    NASA Astrophysics Data System (ADS)

    Nababan, A. A.; Sitompul, O. S.; Tulus

    2018-04-01

    K-Nearest Neighbor (KNN) is a good classifier, but several studies have shown that its accuracy is still lower than that of other methods. One cause of the low accuracy is that each attribute has the same effect on the classification process, while less relevant attributes lead to misclassification of new data. In this research, we propose Attribute Weighting Based K-Nearest Neighbor Using Gain Ratio, in which the gain ratio is used as a measure of the correlation between each attribute and the class and serves as the basis for weighting each attribute of the dataset. The accuracy of the results is compared to the accuracy of the original KNN method using 10-fold cross-validation on several datasets from the UCI Machine Learning Repository and the KEEL-Dataset Repository, namely abalone, glass identification, haberman, hayes-roth, and water quality status. Based on the test results, the proposed method was able to increase the classification accuracy of KNN: the largest accuracy improvement, 12.73%, was obtained on the hayes-roth dataset, and the smallest, 0.07%, on the abalone dataset. On average, accuracy across all datasets increased by 5.33%.
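
    A minimal sketch of the weighting idea, assuming the per-attribute gain ratios have already been computed from the training set (the values below are made up): each attribute's contribution to the Euclidean distance is scaled by its gain ratio before the usual majority vote.

      import numpy as np
      from collections import Counter

      def weighted_knn_predict(X_train, y_train, x_query, weights, k=3):
          """
          k-NN in which each attribute's contribution to the Euclidean distance is
          scaled by its gain ratio, so less relevant attributes influence the
          neighbourhood less.
          """
          diffs = (X_train - x_query) * np.sqrt(weights)   # per-attribute scaling
          dists = np.sqrt((diffs ** 2).sum(axis=1))
          nearest = np.argsort(dists)[:k]
          return Counter(y_train[nearest]).most_common(1)[0][0]

      # hypothetical data: gain ratios say attribute 0 is far more informative than attribute 1
      X = np.array([[1.0, 10.0], [1.2, 50.0], [5.0, 11.0], [5.1, 49.0]])
      y = np.array(["a", "a", "b", "b"])
      gain_ratios = np.array([0.8, 0.05])   # assumed; normally computed from the training set
      print(weighted_knn_predict(X, y, np.array([1.1, 48.0]), gain_ratios, k=3))  # -> "a"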

  14. Effects of aspect ratio on the phase diagram of spheroidal particles

    NASA Astrophysics Data System (ADS)

    Kutlu, Songul; Haaga, Jason; Rickman, Jeffrey; Gunton, James

    Ellipsoidal particles occur in both colloidal and protein science. Models of protein phase transitions based on interacting spheroidal particles can often be more realistic than those based on spherical molecules. One of the interesting questions is how the aspect ratio of spheroidal particles affects the phase diagram. Some results have been obtained in an earlier study by Odriozola (J. Chem. Phys. 136:134505 (2012)). In this poster we present results for the phase diagram of hard spheroids interacting via a quasi-square-well potential, for different aspect ratios. These results are obtained from Monte Carlo simulations using the replica exchange method. We find that the phase diagram, including the crystal phase transition, is sensitive to the choice of aspect ratio. G. Harold and Leila Y. Mathers Foundation.

  15. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

    Poisson ratio is considered one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time, cost, and labor intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio that produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and Poisson ratio. Although satisfactory results were obtained from an individual SVR model, it had flaws of overestimation at low Poisson ratios and underestimation at high Poisson ratios. These errors were eliminated through implementation of a fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved the accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. Results indicated that the SVR-predicted Poisson ratio values are in good agreement with measured values.
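
    As a generic illustration of the SVR step only (without the fuzzy classifier refinement), the sketch below fits scikit-learn's SVR to a few hypothetical well-log samples and predicts Poisson ratio for an uncored interval; the log values, kernel, and hyperparameters are assumptions for the example, not the paper's settings.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # hypothetical training data: rows are depth samples of conventional well logs
      # (e.g. sonic, bulk density, neutron porosity); target is core-measured Poisson ratio
      X_train = np.array([[80.0, 2.45, 0.12],
                          [95.0, 2.30, 0.20],
                          [70.0, 2.60, 0.08],
                          [88.0, 2.40, 0.15]])
      y_train = np.array([0.26, 0.31, 0.22, 0.28])

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
      model.fit(X_train, y_train)

      X_new = np.array([[85.0, 2.42, 0.14]])   # uncored interval
      print(model.predict(X_new))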

  16. Novel Modulation Method for Multidirectional Matrix Converter

    PubMed Central

    Misron, Norhisam; Aris, Ishak Bin; Yamada, Hiroaki

    2014-01-01

    This study presents a new modulation method for a multidirectional matrix converter (MDMC), based on direct duty ratio pulse width modulation (DDPWM). In this study, a new structure of MDMC has been proposed to control the power flow direction through a stand-alone battery based system and hybrid vehicle. The modulation method is based on the concept of the average voltage over one switching period. Therefore, in order to determine the duty ratio for each switch, the instantaneous input voltages are captured and compared with a triangular waveform continuously. By selecting the proper switching pattern and changing the slope of the carriers, a sinusoidal input current can be synthesized with high power factor and the desired output voltage. The proposed system increases the discharging time of the battery by injecting power into the system from the generator and battery at the same time, which extends battery life and saves more energy. This paper also derives the necessary equations for the proposed modulation method, as well as details of the analysis and the modulation algorithm. The theoretical and modulation concepts presented have been verified by MATLAB simulation. PMID:25298969

  17. An Interactive Software for Conceptual Wing Flutter Analysis and Parametric Study

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1996-01-01

    An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well-defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed for Macintosh or IBM compatible personal computers, on MathCad application software with integrated documentation, graphics, data base and symbolic mathematics. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The parametric plots were compiled in a Vought Corporation report from a vast data base of past experiments and wind-tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended-Wing-Body concept, proposed by McDonnell Douglas Corp. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.

  18. MEDIAN-BASED INCREMENTAL COST-EFFECTIVENESS RATIOS WITH CENSORED DATA

    PubMed Central

    Bang, Heejung; Zhao, Hongwei

    2016-01-01

    Cost-effectiveness is an essential part of treatment evaluation, in addition to effectiveness. In the cost-effectiveness analysis, a measure called the incremental cost-effectiveness ratio (ICER) is widely utilized, and the mean cost and the mean (quality-adjusted) life years have served as norms to summarize cost and effectiveness for a study population. Recently, the median-based ICER was proposed for complementary or sensitivity analysis purposes. In this paper, we extend this method when some data are censored. PMID:26010599
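
    Ignoring censoring, the ICER and its median-based variant differ only in the summary statistic applied to the arm-level cost and effectiveness data, as the hedged sketch below illustrates with hypothetical per-patient values.

      import numpy as np

      def icer(cost_new, eff_new, cost_std, eff_std, summary=np.mean):
          """
          Incremental cost-effectiveness ratio: incremental cost per incremental
          unit of effectiveness.  Passing summary=np.median gives the median-based
          variant discussed above (censoring adjustments are omitted here).
          """
          return (summary(cost_new) - summary(cost_std)) / (summary(eff_new) - summary(eff_std))

      # hypothetical per-patient costs (dollars) and quality-adjusted life years
      cost_new = np.array([52000, 61000, 48000, 75000])
      eff_new = np.array([4.1, 5.0, 3.8, 4.6])
      cost_std = np.array([30000, 35000, 28000, 41000])
      eff_std = np.array([3.2, 3.9, 3.0, 3.5])

      print(icer(cost_new, eff_new, cost_std, eff_std))                      # mean-based ICER
      print(icer(cost_new, eff_new, cost_std, eff_std, summary=np.median))   # median-based ICER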

  19. Short communication: Semiquantitative assessment of 99mTc-EDDA/HYNIC-TOC scintigraphy in differentiation of solitary pulmonary nodules--a complementary role to visual analysis.

    PubMed

    Płachcińska, Anna; Mikołajczak, Renata; Kozak, Józef; Rzeszutek, Katarzyna; Kuśmierek, Jacek

    2006-02-01

    The aim of this study was to assess the value of a semiquantitative analysis of scintigrams obtained with (99m)Tc-EDDA/HYNIC-TOC as a radiopharmaceutical (RPH) in the differential diagnosis of solitary pulmonary nodules (SPNs), as a method complementary to visual evaluation of scintigrams. Scintigraphic images of 59 patients (33 males and 26 females between 34 and 78 years of age, mean 57) with an SPN on chest radiographs (39 malignant and 20 benign) were retrospectively assessed semiquantitatively. Visual scintigram analysis had been performed earlier, prospectively. Nodule diameters ranged from 1 to 4 (mean 2.2) cm. A single photon emission computed tomography (SPECT) acquisition was performed 2-4 hours after administration of 740 to 925 MBq of the RPH. Verification of scintigraphic results was based on a pathological examination of tumor samples (histopathology or cytology) and, in some cases, on bacteriological studies. As an additional criterion for tumor benignity, a stable size over a time interval not shorter than 3 years was accepted. A simple, semiquantitative method for assessing radiopharmaceutical uptake in SPNs was used, based on a "count sample" taken from the tumor center (T) relative to the radiopharmaceutical concentration in the background (B) measured in the contralateral lung. A criterion for optimal differentiation between malignant and benign nodules (the T/B ratio threshold value) was derived from a receiver operating characteristic (ROC) curve. Additionally, a T/B ratio value was sought that excludes tumor benignity with high probability. Visual analysis of scintigrams revealed enhanced uptake of the RPH at 36 of 39 (92%) sites corresponding to locations of malignant nodules (including 34 of 35 (97%) cases of lung cancer). In 13 of 20 (65%) benign nodules, true negative results were obtained. Accuracy of the method equalled 83%. Optimal differentiation between malignant and benign nodules was found at a T/B ratio value of 2. The semiquantitative method gave true positive results in 35 of 39 (90%) malignant nodules (as did the visual method in 34 of 35 (97%) cases of lung cancer). True negative results were obtained in 17 of 20 (85%) benign cases. Accuracy of the method reached 88%. A T/B ratio exceeding 4 excluded tumor benignity with high probability. A simple method of quantitatively assessing 99mTc-EDDA/HYNIC-TOC uptake in solitary pulmonary nodules by means of a T/B ratio can play a role complementary to the visual evaluation of scintigrams. It improves the low specificity of the visual method in the detection of malignant nodules, without significant reduction of its sensitivity, and provides a T/B ratio value that excludes tumor benignity with high probability.

  20. UV-Visible Spectroscopy-Based Quantification of Unlabeled DNA Bound to Gold Nanoparticles.

    PubMed

    Baldock, Brandi L; Hutchison, James E

    2016-12-20

    DNA-functionalized gold nanoparticles have been increasingly applied as sensitive and selective analytical probes and biosensors. The DNA ligands bound to a nanoparticle dictate its reactivity, making it essential to know the type and number of DNA strands bound to the nanoparticle surface. Existing methods used to determine the number of DNA strands per gold nanoparticle (AuNP) require that the sequences be fluorophore-labeled, which may affect the DNA surface coverage and reactivity of the nanoparticle and/or require specialized equipment and other fluorophore-containing reagents. We report a UV-visible-based method to conveniently and inexpensively determine the number of DNA strands attached to AuNPs of different core sizes. When this method is used in tandem with a fluorescence dye assay, it is possible to determine the ratio of two unlabeled sequences of different lengths bound to AuNPs. Two sizes of citrate-stabilized AuNPs (5 and 12 nm) were functionalized with mixtures of short (5 base) and long (32 base) disulfide-terminated DNA sequences, and the ratios of sequences bound to the AuNPs were determined using the new method. The long DNA sequence was present as a lower proportion of the ligand shell than in the ligand exchange mixture, suggesting it had a lower propensity to bind the AuNPs than the short DNA sequence. The ratio of DNA sequences bound to the AuNPs was not the same for the large and small AuNPs, which suggests that the radius of curvature had a significant influence on the assembly of DNA strands onto the AuNPs.

  1. Application and validation of superior spectrophotometric methods for simultaneous determination of ternary mixture used for hypertension management.

    PubMed

    Mohamed, Heba M; Lamie, Nesrine T

    2016-02-15

    Telmisartan (TL), Hydrochlorothiazide (HZ) and Amlodipine besylate (AM) are co-formulated together for hypertension management. Three smart, specific and precise spectrophotometric methods were applied and validated for simultaneous determination of the three cited drugs. Method A is the ratio isoabsorptive point and ratio difference in subtracted spectra (RIDSS) method, which is based on dividing the ternary mixture spectrum of the studied drugs by the spectrum of AM to obtain the division spectrum, from which the concentration of AM can be obtained by measuring the amplitude values in the plateau region at 360 nm. The amplitude value of the plateau region is then subtracted from the division spectrum, and the HZ concentration is obtained by measuring the difference in amplitude values at 278.5 and 306 nm (corresponding to zero difference for TL), while the total concentration of HZ and TL in the mixture is measured at their isoabsorptive point in the division spectrum at 278.5 nm (Aiso). The TL concentration is then obtained by subtraction. Method B is double divisor ratio spectra derivative spectrophotometry (RS-DS), and method C is mean centering of ratio spectra (MCR). The proposed methods did not require any initial separation steps prior to the analysis of the three drugs. A comparative study was done between the three methods regarding their simplicity, sensitivity and limitations. Specificity was investigated by analyzing synthetic mixtures containing different ratios of the three studied drugs and their tablet dosage form. Statistical comparison of the obtained results with those found by the official methods was done; differences were non-significant in regard to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for TL, HZ and AM. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Application and validation of superior spectrophotometric methods for simultaneous determination of ternary mixture used for hypertension management

    NASA Astrophysics Data System (ADS)

    Mohamed, Heba M.; Lamie, Nesrine T.

    2016-02-01

    Telmisartan (TL), Hydrochlorothiazide (HZ) and Amlodipine besylate (AM) are co-formulated together for hypertension management. Three smart, specific and precise spectrophotometric methods were applied and validated for simultaneous determination of the three cited drugs. Method A is the ratio isoabsorptive point and ratio difference in subtracted spectra (RIDSS) method, which is based on dividing the ternary mixture spectrum of the studied drugs by the spectrum of AM to obtain the division spectrum, from which the concentration of AM can be obtained by measuring the amplitude values in the plateau region at 360 nm. The amplitude value of the plateau region is then subtracted from the division spectrum, and the HZ concentration is obtained by measuring the difference in amplitude values at 278.5 and 306 nm (corresponding to zero difference for TL), while the total concentration of HZ and TL in the mixture is measured at their isoabsorptive point in the division spectrum at 278.5 nm (Aiso). The TL concentration is then obtained by subtraction. Method B is double divisor ratio spectra derivative spectrophotometry (RS-DS), and method C is mean centering of ratio spectra (MCR). The proposed methods did not require any initial separation steps prior to the analysis of the three drugs. A comparative study was done between the three methods regarding their simplicity, sensitivity and limitations. Specificity was investigated by analyzing synthetic mixtures containing different ratios of the three studied drugs and their tablet dosage form. Statistical comparison of the obtained results with those found by the official methods was done; differences were non-significant in regard to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for TL, HZ and AM.

  3. Tile-Based Fisher-Ratio Software for Improved Feature Selection Analysis of Comprehensive Two-Dimensional Gas Chromatography Time-of-Flight Mass Spectrometry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marney, Luke C.; Siegler, William C.; Parsons, Brendon A.

    Two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC - TOFMS) is a highly capable instrumental platform that produces complex and information-rich multi-dimensional chemical data. The complex data can be overwhelming, especially when many samples (of various sample classes) are analyzed with multiple injections for each sample. Thus, the data must be analyzed in such a way as to extract the most meaningful information. The pixel-based and peak table-based algorithmic use of Fisher ratios has been used successfully in the past to reduce the multi-dimensional data down to those chemical compounds that are changing between classes relative to those that are not (i.e., chemical feature selection). We report on the initial development of a computationally fast novel tile-based Fisher-ratio software that addresses challenges due to 2D retention time misalignment without explicitly aligning the data, which is a problem for both pixel-based and peak table-based methods. Concurrently, the tile-based Fisher-ratio software maximizes the sensitivity contrast of true positives against a background of potential false positives and noise. To study this software, eight compounds, plus one internal standard, were spiked into diesel at various concentrations. The tile-based F-ratio software was able to discover all spiked analytes, within the complex diesel sample matrix with thousands of potential false positives, in each possible concentration comparison, even at the lowest absolute spiked analyte concentration ratio of 1.06.
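
    The core feature-selection statistic can be sketched as follows: a 2D chromatogram is summed into coarse tiles (so small retention-time shifts stay within a tile), and a per-tile Fisher ratio of between-class to pooled within-class variance is computed across sample classes. This is only a schematic of the general idea with invented signal values, not the published software.

      import numpy as np

      def fisher_ratio(class_a, class_b):
          """
          Univariate Fisher ratio for one tile: between-class variance of the tile
          signal divided by the pooled within-class variance (two classes).
          """
          grand = np.concatenate([class_a, class_b]).mean()
          n_a, n_b = len(class_a), len(class_b)
          between = n_a * (class_a.mean() - grand) ** 2 + n_b * (class_b.mean() - grand) ** 2  # (k - 1) = 1
          within = ((n_a - 1) * class_a.var(ddof=1) + (n_b - 1) * class_b.var(ddof=1)) / (n_a + n_b - 2)
          return between / within

      def tile_signals(chromatogram, tile_rows, tile_cols):
          """Sum a 2D chromatogram into coarse tiles so small retention-time shifts stay inside one tile."""
          r, c = chromatogram.shape
          return chromatogram[:r - r % tile_rows, :c - c % tile_cols] \
              .reshape(r // tile_rows, tile_rows, c // tile_cols, tile_cols).sum(axis=(1, 3))

      # tiling example on a hypothetical 100 x 100 chromatogram
      chrom = np.random.default_rng(2).random((100, 100))
      tiles = tile_signals(chrom, tile_rows=10, tile_cols=10)   # 10 x 10 grid of tile sums

      # hypothetical per-sample total signal for one tile in spiked vs. unspiked diesel
      spiked = np.array([1050.0, 1100.0, 980.0])
      unspiked = np.array([510.0, 470.0, 530.0])
      print(fisher_ratio(spiked, unspiked))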

  4. Measurement of formaldehyde in clean air

    NASA Astrophysics Data System (ADS)

    Neitzert, Volker; Seiler, Wolfgang

    1981-01-01

    A method for the measurement of small amounts of formaldehyde in air has been developed. The method is based on the derivatization of HCHO with 2,4-dinitrophenylhydrazine, forming the 2,4-dinitrophenylhydrazone, which is measured by a GC-ECD technique. HCHO is preconcentrated using a cryogenic sampling technique. The detection limit is 0.05 ppbv for a sampling volume of 200 liters. The method has been applied to measurements in continental and marine air masses, showing HCHO mixing ratios of 0.4 - 5.0 ppbv and 0.2 - 1.0 ppbv, respectively. HCHO mixing ratios show diurnal variations with maximum values during the early afternoon and minimum values during the early morning. In continental air, HCHO mixing ratios are positively correlated with CO and SO2, indicating anthropogenic HCHO sources which are estimated to be 6-11 × 10^12 g/year on a global scale.

  5. Change-in-ratio

    USGS Publications Warehouse

    Udevitz, Mark S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.

    2002-01-01

    Change-in-ratio (CIR) methods are used to estimate parameters for ecological populations subject to differential removals from population subclasses. Subclasses can be defined according to criteria such as sex, age, or size of individuals. Removals are generally in the form of closely monitored sport or commercial harvests. Estimation is based on observed changes in subclass proportions caused by the removals.
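
    A minimal sketch of the classical two-subclass change-in-ratio estimator may make the idea concrete; it is not taken from these USGS reports, and the example numbers are invented.

      def change_in_ratio(p1, p2, removals_x, removals_total):
          """Two-subclass change-in-ratio (CIR) estimate of pre-removal population size.

          p1, p2         : observed proportion of subclass x before and after removals
          removals_x     : number of subclass-x individuals removed
          removals_total : total number of individuals removed
          """
          if p1 == p2:
              raise ValueError("CIR requires a change in the subclass proportion")
          n_before = (removals_x - p2 * removals_total) / (p1 - p2)
          x_before = p1 * n_before
          return n_before, x_before

      # Example: 60% males observed before a harvest of 300 animals (250 of them
      # male), 45% males observed afterwards -> roughly 767 animals pre-harvest.
      print(change_in_ratio(0.60, 0.45, 250, 300))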

  6. Change-in-ratio

    USGS Publications Warehouse

    Udevitz, Mark S.

    2014-01-01

    Change-in-ratio (CIR) methods are used to estimate parameters for ecological populations subject to differential removals from population subclasses. Subclasses can be defined according to criteria such as sex, age, or size of individuals. Removals are generally in the form of closely monitored sport or commercial harvests. Estimation is based on observed changes in subclass proportions caused by the removals.

  7. Ionospheric Delay Compensation Using a Scale Factor Based on an Altitude of a Receiver

    NASA Technical Reports Server (NTRS)

    Zhao, Hui (Inventor); Savoy, John (Inventor)

    2014-01-01

    In one embodiment, a method for ionospheric delay compensation is provided. The method includes determining an ionospheric delay based on a signal having propagated from the navigation satellite to a location below the ionosphere. A scale factor can be applied to the ionospheric delay, wherein the scale factor corresponds to a ratio of an ionospheric delay in the vertical direction based on an altitude of the satellite navigation system receiver. Compensation can be applied based on the ionospheric delay.

  8. Evapotranspiration from areas of native vegetation in west-central Florida

    USGS Publications Warehouse

    Bidlake, W.R.; Woodham, W.M.; Lopez, M.A.

    1993-01-01

    A study was made to examine the suitability of three different micrometeorological methods for estimating evapotranspiration from selected areas of native vegetation in west-central Florida and to estimate annual evapotranspiration from those areas. Evapotranspiration was estimated using the energy-balance Bowen ratio and eddy correlation methods. Potential evapotranspiration was computed using the Penman equation. The energy-balance Bowen ratio method was used to estimate diurnal evapotranspiration at unforested sites and yielded reasonable results; however, measurements indicated that the magnitudes of air temperature and vapor-pressure gradients above the forested sites were too small to obtain reliable evapotranspiration measurements with the energy-balance Bowen ratio system. Analysis of the surface energy balance indicated that sensible and latent heat fluxes computed using standard eddy correlation computation methods did not adequately account for available energy. Eddy correlation data were combined with the equation for the surface energy balance to yield two additional estimates of evapotranspiration. Daily potential evapotranspiration and evapotranspiration estimated using the energy-balance Bowen ratio method were not correlated at an unforested, dry prairie site, but they were correlated at a marsh site. Estimates of annual evapotranspiration for sites within the four vegetation types, which were based on energy-balance Bowen ratio and eddy correlation measurements, were 1,010 millimeters for dry prairie sites, 990 millimeters for marsh sites, 1,060 millimeters for pine flatwood sites, and 970 millimeters for a cypress swamp site.
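
    For readers unfamiliar with the energy-balance Bowen ratio calculation, a hedged sketch of the basic bookkeeping is given below; the psychrometric constant, gradients, and example numbers are illustrative and the function is not taken from the study.

      def bowen_ratio_fluxes(rn, g, d_temp, d_vap, gamma=0.066):
          """Energy-balance Bowen ratio partitioning of available energy.

          rn     : net radiation (W m-2)
          g      : soil heat flux (W m-2)
          d_temp : air temperature difference between two heights (K)
          d_vap  : vapor-pressure difference between the same heights (kPa)
          gamma  : psychrometric constant (kPa K-1), roughly 0.066 near sea level
          """
          beta = gamma * d_temp / d_vap      # Bowen ratio = sensible / latent heat
          latent = (rn - g) / (1.0 + beta)   # latent heat flux closes the balance
          sensible = beta * latent
          return beta, latent, sensible

      # Example: Rn = 450, G = 40 W m-2, dT = 0.6 K, de = 0.25 kPa
      beta, latent, sensible = bowen_ratio_fluxes(450.0, 40.0, 0.6, 0.25)
      evap_mm_per_hour = latent * 3600.0 / 2.45e6   # latent heat of vaporization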

  9. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    PubMed

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus has been known to be an important structure as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, its use requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First, atlas-based segmentation was applied to define the initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds was further elaborated by incorporating estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the result produced by graph-cuts. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index=0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). Also, in terms of segmentation accuracy, measured by false positive and false negative ratios, the proposed method (precision=0.76±0.04, recall=0.86±0.05) produced better values than the conventional method (0.73±0.05, 0.72±0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Incremental Predictive Value of Serum AST-to-ALT Ratio for Incident Metabolic Syndrome: The ARIRANG Study

    PubMed Central

    Ahn, Song Vogue; Baik, Soon Koo; Cho, Youn zoo; Koh, Sang Baek; Huh, Ji Hye; Chang, Yoosoo; Sung, Ki-Chul; Kim, Jang Young

    2016-01-01

    Aims: The ratio of aspartate aminotransferase (AST) to alanine aminotransferase (ALT) is of great interest as a possible novel marker of metabolic syndrome. However, longitudinal studies emphasizing the incremental predictive value of the AST-to-ALT ratio in diagnosing individuals at higher risk of developing metabolic syndrome are very scarce. Therefore, our study aimed to evaluate the AST-to-ALT ratio as an incremental predictor of new-onset metabolic syndrome in a population-based cohort study. Materials and Methods: The population-based cohort study included 2276 adults (903 men and 1373 women) aged 40-70 years, who participated from 2005-2008 (baseline) without metabolic syndrome and were followed up from 2008-2011. Metabolic syndrome was defined according to the harmonized definition of metabolic syndrome. Serum concentrations of AST and ALT were determined by enzymatic methods. Results: During an average follow-up period of 2.6 years, 395 individuals (17.4%) developed metabolic syndrome. In a multivariable adjusted model, the odds ratio (95% confidence interval) for new onset of metabolic syndrome, comparing the fourth quartile to the first quartile of the AST-to-ALT ratio, was 0.598 (0.422-0.853). The AST-to-ALT ratio also improved the area under the receiver operating characteristic curve (AUC) for predicting new cases of metabolic syndrome (0.715 vs. 0.732, P = 0.004). The net reclassification improvement of prediction models including the AST-to-ALT ratio was 0.23 (95% CI: 0.124-0.337, P<0.001), and the integrated discrimination improvement was 0.0094 (95% CI: 0.0046-0.0143, P<0.001). Conclusions: The AST-to-ALT ratio independently predicted the future development of metabolic syndrome and had incremental predictive value for incident metabolic syndrome. PMID:27560931

  11. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficients' subband. Then an optimal quadtree method was employed to partition each wavelet coefficients' subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  12. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficients' subband. Then an optimal quadtree method was employed to partition each wavelet coefficients' subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  13. Compact point-detection fluorescence spectroscopy system for quantifying intrinsic fluorescence redox ratio in brain cancer diagnostics

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Grant, Gerald; Li, Jianjun; Zhang, Yan; Hu, Fangyao; Li, Shuqin; Wilson, Christy; Chen, Kui; Bigner, Darell; Vo-Dinh, Tuan

    2011-03-01

    We report the development of a compact point-detection fluorescence spectroscopy system and two data analysis methods to quantify the intrinsic fluorescence redox ratio and diagnose brain cancer in an orthotopic brain tumor rat model. Our system employs one compact cw diode laser (407 nm) to excite two primary endogenous fluorophores, reduced nicotinamide adenine dinucleotide, and flavin adenine dinucleotide. The spectra were first analyzed using a spectral filtering modulation method developed previously to derive the intrinsic fluorescence redox ratio, which has the advantages of insensitivity to optical coupling and rapid data acquisition and analysis. This method represents a convenient and rapid alternative for achieving intrinsic fluorescence-based redox measurements as compared to those complicated model-based methods. It is worth noting that the method can also extract total hemoglobin concentration at the same time, but only if the emission path length of fluorescence light, which depends on the illumination and collection geometry of the optical probe, is long enough so that the effect of absorption on fluorescence intensity due to hemoglobin is significant. Then a multivariate method was used to statistically classify normal tissues and tumors. Although the first method offers quantitative tissue metabolism information, the second method provides high overall classification accuracy. The two methods provide complementary capabilities for understanding cancer development and noninvasively diagnosing brain cancer. The results of our study suggest that this portable system can be potentially used to demarcate the elusive boundary between a brain tumor and the surrounding normal tissue during surgical resection.

  14. Simultaneous determination of binary mixture of amlodipine besylate and atenolol based on dual wavelengths

    NASA Astrophysics Data System (ADS)

    Lamie, Nesrine T.

    2015-10-01

    Four accurate, precise, and sensitive spectrophotometric methods are developed for simultaneous determination of a binary mixture of amlodipine besylate (AM) and atenolol (AT). AM is determined at its λmax 360 nm (0D), while atenolol can be determined by four different methods. Method (A) is absorption factor (AF). Method (B) is the new ratio difference method (RD) which measures the difference in amplitudes between 210 and 226 nm. Method (C) is the novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The methods are tested by analyzing synthetic mixtures of the cited drugs and they are applied to their commercial pharmaceutical preparation. The validity of the results is assessed by applying the standard addition technique. The results obtained are found to agree statistically with those obtained by official methods, showing no significant difference with respect to accuracy and precision.

  15. Assessment of chloroethene degradation rates based on ratios of daughter/parent compounds in groundwater plumes

    NASA Astrophysics Data System (ADS)

    Höhener, Patrick

    2014-05-01

    Chlorinated solvent spills at industrial and urban sites create groundwater plumes in which tetrachloro- and trichloroethene may degrade to their daughter compounds dichloroethenes, vinyl chloride and ethene. The assessment of degradation and natural attenuation at such sites may be based on the analysis and inverse modelling of concentration data, on the calculation of mass fluxes in transects, and/or on the analysis of stable isotope ratios in the ethenes. Relatively little work has investigated the possibility of using ratios of concentrations to gain information on degradation rates. The use of ratios has the advantage that dilution of a single sample with contaminant-free water does not matter. It is shown that molar ratios of daughter to parent compounds measured along a plume streamline are a rapid and robust means of determining whether degradation rates increase or decrease along the degradation chain, and furthermore allow quantitation of the relative magnitude of degradation rates compared to the rate of the parent compound. Furthermore, ratios of concentrations become constant in zones where degradation is absent, and this allows the extent of actively degrading zones to be sketched. The assessment is possible for pure sources and also for mixed sources. A quantification method is proposed to estimate first-order degradation rates in zones of constant degradation activity. This quantification method includes corrections that are needed due to longitudinal and transversal dispersivity. The method was tested on a number of real field sites from the literature. At the majority of these sites, the first-order degradation rates decreased along the degradation chain from tetrachloroethene to vinyl chloride, meaning that the latter often reached substantial concentrations. This is bad news for site owners owing to the increased toxicity of vinyl chloride compared to its parent compounds.
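
    To illustrate why daughter/parent molar ratios carry rate information, the toy sketch below integrates a sequential first-order chain (PCE -> TCE -> cis-DCE -> VC -> ethene) along travel time and forms the TCE/PCE ratio. It deliberately ignores dispersion and sorption, so it is only a caricature of the quantification method described above, and the rate constants are invented.

      import numpy as np

      def decay_chain(k, c0=1.0, t_end=10.0, dt=0.001):
          """Forward-Euler integration of PCE -> TCE -> cis-DCE -> VC -> ethene.

          k  : four first-order rate constants (1/yr) for PCE, TCE, cis-DCE, VC
          c0 : molar concentration of the parent (PCE) at the source
          """
          n = int(t_end / dt)
          c = np.zeros((n + 1, 5))
          c[0, 0] = c0
          for step in range(n):
              cur = c[step]
              dc = np.zeros(5)
              dc[0] = -k[0] * cur[0]
              for i in range(1, 4):
                  dc[i] = k[i - 1] * cur[i - 1] - k[i] * cur[i]
              dc[4] = k[3] * cur[3]
              c[step + 1] = cur + dt * dc
          return np.arange(n + 1) * dt, c

      # Illustrative rates (1/yr) decreasing along the chain; the daughter/parent
      # ratio grows with travel time while degradation is active.
      t, conc = decay_chain([1.0, 0.5, 0.3, 0.1])
      tce_over_pce = conc[:, 1] / np.maximum(conc[:, 0], 1e-30)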

  16. Synthesis of High-Frequency Ground Motion Using Information Extracted from Low-Frequency Ground Motion

    NASA Astrophysics Data System (ADS)

    Iwaki, A.; Fujiwara, H.

    2012-12-01

    Broadband ground motion computations for scenario earthquakes are often based on hybrid methods that combine a deterministic approach in the lower frequency band with a stochastic approach in the higher frequency band. Typical computation methods for low-frequency and high-frequency (LF and HF, respectively) ground motions are, respectively, numerical simulations, such as finite-difference and finite-element methods based on a three-dimensional velocity structure model, and the stochastic Green's function method. In such hybrid methods, the LF and HF wave fields are generated through two different methods that are completely independent of each other and are combined at the matching frequency. However, LF and HF wave fields are essentially not independent as long as they are from the same event. In this study, we focus on the relation among acceleration envelopes at different frequency bands and attempt to synthesize HF ground motion using information extracted from LF ground motion, aiming to propose a new method for broadband strong motion prediction. Our study area is the Kanto area, Japan. We use the K-NET and KiK-net surface acceleration data and compute RMS envelopes in five frequency bands: 0.5-1.0 Hz, 1.0-2.0 Hz, 2.0-4.0 Hz, 4.0-8.0 Hz, and 8.0-16.0 Hz. Taking the ratio of the envelopes of adjacent bands, we find that the envelope ratios have stable shapes at each site. The empirical envelope-ratio characteristics are combined with the low-frequency envelope of the target earthquake to synthesize HF ground motion. We have applied the method to M5-class earthquakes and an M7 target earthquake that occurred in the vicinity of the Kanto area, and successfully reproduced the observed HF ground motion of the target earthquake. The method can be applied to broadband ground motion simulation for a scenario earthquake by combining numerically computed low-frequency (~1 Hz) ground motion with the empirical envelope-ratio characteristics to generate broadband ground motion. The strengths of the proposed method are that: 1) it is based on observed ground motion characteristics, 2) it takes full advantage of a precise velocity structure model, and 3) it is simple and easy to apply.
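
    A hedged sketch of the envelope-ratio computation follows: band-pass the acceleration record into the octave bands listed above, form a smoothed RMS envelope for each, and take ratios of adjacent bands. The band edges match the abstract, but the filter design, smoothing window, and sampling rate are assumptions.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      BANDS = [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 8.0), (8.0, 16.0)]  # Hz

      def rms_envelope(acc, fs, band, win_sec=1.0):
          """RMS envelope of an acceleration record in one frequency band."""
          sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
          x = sosfiltfilt(sos, acc)
          win = int(win_sec * fs)
          power = np.convolve(x ** 2, np.ones(win) / win, mode="same")
          return np.sqrt(power)

      def adjacent_band_envelope_ratios(acc, fs):
          """Envelope ratios between each pair of adjacent frequency bands."""
          envs = [rms_envelope(acc, fs, b) for b in BANDS]
          return [hi / (lo + 1e-12) for lo, hi in zip(envs[:-1], envs[1:])]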

  17. Method for measuring changes in the atmospheric O2/N2 ratio by a gas chromatograph equipped with a thermal conductivity detector

    NASA Astrophysics Data System (ADS)

    Tohjima, Yasunori

    2000-06-01

    We present a method for measuring changes in the atmospheric O2/N2 ratio based on data from a gas chromatograph (GC) equipped with a thermal conductivity detector (TCD). In this method, O2 and N2 in an air sample are separated on a column filled with molecular sieve 5A with H2 carrier gas. Since the separated O2 includes Ar, which has a retention time similar to that of O2, the (O2+Ar)/N2 ratio is actually measured. The change in the measured (O2+Ar)/N2 ratio can be easily converted to that in the O2/N2 ratio with a very small error based on the fact that the atmospheric Ar/N2 ratio is almost constant. The improvements to achieve the high-precision measurement include stabilization of the pressure at the GC column head and at the outlets of the TCD and the sample loop. Additionally, the precision is improved statistically by repeating alternate analyses of sample and a reference gas. The standard deviation of the replicate cycles of reference and sample analyses is about 18 per meg (corresponding to 3.8 parts per million (ppm) O2 in air). This means that the standard error is about 7 per meg (1.5 ppm O2 in air) for seven cycles of alternate analyses, which takes about 70 min. The response of this method is likely to have a 2% nonlinearity. Ambient air samples are collected under pressure in glass flasks equipped with two stopcocks sealed by Viton O-rings at both ends. Pressure depletion in the flask during the O2/N2 measurement does not cause any detectable change in the O2/N2 ratio, but the O2/N2 ratio in the flask was found to gradually decrease during the storage period. We also present preliminary results from air samples collected at Hateruma Island (latitude 24°03'N, longitude 123°49') from July 1997 through March 1999. The observed O2/N2 ratios clearly show a seasonal variation, increasing in spring and summer and decreasing in autumn and winter.
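
    As a small worked example of the units quoted above (not code from the paper), the per meg deviation of a measured (O2+Ar)/N2 ratio from a reference, and its approximate conversion to ppm O2 in air, can be written as:

      O2_MOLE_FRACTION = 0.2095   # approximate atmospheric O2 mole fraction

      def per_meg(ratio_sample, ratio_reference):
          """Deviation of a sample (O2+Ar)/N2 ratio from a reference, in per meg."""
          return (ratio_sample / ratio_reference - 1.0) * 1.0e6

      def per_meg_to_ppm_o2(delta_per_meg):
          """Approximate conversion of a per meg change to ppm O2 in air."""
          return delta_per_meg * O2_MOLE_FRACTION

      # 18 per meg corresponds to roughly 3.8 ppm O2, as quoted in the abstract.
      print(per_meg_to_ppm_o2(18.0))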

  18. Detection of testosterone administration based on the carbon isotope ratio profiling of endogenous steroids: international reference populations of professional soccer players.

    PubMed

    Strahm, E; Emery, C; Saugy, M; Dvorak, J; Saudan, C

    2009-12-01

    The determination of the carbon isotope ratio in androgen metabolites has been previously shown to be a reliable, direct method to detect testosterone misuse in the context of antidoping testing. Here, the variability in the 13C/12C ratios in urinary steroids in a widely heterogeneous cohort of professional soccer players residing in different countries (Argentina, Italy, Japan, South Africa, Switzerland and Uganda) is examined. Carbon isotope ratios of selected androgens in urine specimens were determined using gas chromatography/combustion/isotope ratio mass spectrometry (GC-C-IRMS). Urinary steroids in Italian and Swiss populations were found to be enriched in 13C relative to other groups, reflecting higher consumption of C3 plants in these two countries. Importantly, detection criteria based on the difference in the carbon isotope ratio of androsterone and pregnanediol for each population were found to be well below the established threshold value for positive cases. The results obtained with the tested diet groups highlight the importance of adapting the criteria if one wishes to increase the sensitivity of exogenous testosterone detection. In addition, confirmatory tests might be rendered more efficient by combining isotope ratio mass spectrometry with refined interpretation criteria for positivity and subject-based profiling of steroids.

  19. Determination of Flux rope axis for GS reconstruction

    NASA Astrophysics Data System (ADS)

    Tian, A.; Shi, Q.; Bai, S.; Zhang, S.

    2016-12-01

    It is important to determine the axis direction and velocity of a magnetic flux rope before employing Grad-Shafranov (GS) reconstruction. The abilities of single-satellite based MVA (MVAB and CMVA) and multi-satellite based MDD methods to find the invariant axis are tested with a model. The choice of principal axis given by MVA along the aimed direction depends on the distance of the spacecraft path from the flux-rope axis. The MDD results are influenced by the ratio of the noise level/separation to the gradient of the structure; an accurate axial direction is obtained when this ratio is less than 1. Using the model, an example in which the HT method fails is displayed, indicating the importance of the STD method for obtaining the velocity of such a structure. The applicability of the trial-and-error method of Hu and Sonnerup (2012) was also examined and discussed. Finally, all of the above methods were applied to a flux rope observed by Cluster. The results show that the GS method can be easily carried out when the dimensionality and velocity are clearly known.

  20. Detection of Adulterated Vegetable Oils Containing Waste Cooking Oils Based on the Contents and Ratios of Cholesterol, β-Sitosterol, and Campesterol by Gas Chromatography/Mass Spectrometry.

    PubMed

    Zhao, Haixiang; Wang, Yongli; Xu, Xiuli; Ren, Heling; Li, Li; Xiang, Li; Zhong, Weike

    2015-01-01

    A simple and accurate authentication method for the detection of adulterated vegetable oils that contain waste cooking oil (WCO) was developed. This method is based on the determination of cholesterol, β-sitosterol, and campesterol in vegetable oils and WCO by GC/MS without any derivatization. A total of 148 samples involving 12 types of vegetable oil and WCO were analyzed. According to the results, the contents and ratios of cholesterol, β-sitosterol, and campesterol were found to be criteria for detecting vegetable oils adulterated with WCO. This method could accurately detect adulterated vegetable oils containing 5% refined WCO. The developed method has been successfully applied to multilaboratory analysis of 81 oil samples. Seventy-five samples were analyzed correctly, and only six adulterated samples could not be detected. This method could not yet be used for detection of vegetable oils adulterated with WCO that are used for frying non-animal foods. It provides a quick method for detecting adulterated edible vegetable oils containing WCO.

  1. Improving Stiffness-to-weight Ratio of Spot-welded Structures based upon Nonlinear Finite Element Modelling

    NASA Astrophysics Data System (ADS)

    Zhang, Shengyong

    2017-07-01

    Spot welding has been widely used for vehicle body construction due to its advantages of high speed and adaptability for automation. An effort to increase the stiffness-to-weight ratio of spot-welded structures is investigated based upon nonlinear finite element analysis. Topology optimization is conducted for reducing weight in the overlapping regions by choosing an appropriate topology. Three spot-welded models (lap, double-hat and T-shape) that approximate “typical” vehicle body components are studied for validating and illustrating the proposed method. It is concluded that removing underutilized material from overlapping regions can result in a significant increase in structural stiffness-to-weight ratio.

  2. Design of a 50/50 splitting ratio non-polarizing beam splitter based on the modal method with fused-silica transmission gratings

    NASA Astrophysics Data System (ADS)

    Zhao, Huajun; Yuan, Dairong; Ming, Hai

    2011-04-01

    The optical design of a beam splitter that has a 50/50 splitting ratio regardless of the polarization is presented. The non-polarizing beam splitter (NPBS) is based on fused-silica rectangular transmission gratings with high intensity tolerance. The modal method has been used to estimate the effective indices of the modes excited in the grating region for TE and TM polarizations. If the phase difference between the first two modes (i.e. modes 0 and 1) equals an odd multiple of π/2, the incident light is diffracted into the 0 and -1 orders with about 50% diffraction efficiency each, for both TM and TE polarizations.
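
    In the simplest two-mode picture of the modal method, the 0th- and -1st-order transmission efficiencies are often approximated by cos² and sin² of half the accumulated phase difference between grating modes 0 and 1. The sketch below uses that textbook approximation, not the authors' rigorous design procedure, and the effective indices and groove depth are invented.

      import numpy as np

      def two_mode_efficiencies(n_eff0, n_eff1, depth_um, wavelength_um):
          """Two-mode estimate of the 0th / -1st order efficiencies of a grating.

          A phase difference equal to an odd multiple of pi/2 gives a ~50/50 split.
          """
          dphi = 2.0 * np.pi * (n_eff0 - n_eff1) * depth_um / wavelength_um
          return np.cos(dphi / 2.0) ** 2, np.sin(dphi / 2.0) ** 2

      # Hypothetical effective indices chosen so that dphi = pi/2 -> 50/50 split
      print(two_mode_efficiencies(1.30, 1.05, 1.064, 1.064))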

  3. Stopping-power and mass energy-absorption coefficient ratios for Solid Water.

    PubMed

    Ho, A K; Paliwal, B R

    1986-01-01

    The AAPM Task Group 21 protocol provides tables of ratios of average restricted stopping powers and ratios of mean energy-absorption coefficients for different materials. These values were based on the work of Cunningham and Schulz. We have calculated these quantities for Solid Water (manufactured by RMI), using the same x-ray spectra and method as that used by Cunningham and Schulz. These values should be useful to people who are using Solid Water for high-energy photon calibration.

  4. Atrial corrected Fourier amplitude ratios for the scintigraphic quantitation of valvar regurgitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dae, M.W.; Botvinick, E.H.; O'Connell, J.W.

    1984-01-01

    Current scintigraphic methods commonly overestimate the degree of valvar regurgitation (VR) and displace normal ratios from unity, owing largely to RA contamination of the RV region of interest in the ''best septal'' LAO projection. The authors developed a method to correct for this overlap using the Fourier amplitude (AMP) ratio. Amplitude is first ''weighted'' for phase angle using a vectorial sum, to improve assessment in patients (PTS) with contraction abnormalities. RV AMP is then corrected for underestimation by adding the product of the mean LAO RA AMP and the difference between RA areas in the anterior and LAO projections to the calculated RV AMP. In 15 PTS with aortic or mitral VR, corrected AMP ratios (CAR) were compared to ratios assessed angiographically; in 12 PTS without VR, CAR were compared to uncorrected AMP ratios (UAR) and to stroke volume ratios (SVR) from SV images (SVI) and ED and ES count data (CT). CAR interobserver agreement was high (R=.97). When VR PTS ranked by CAR as mild (1.3-1.8), moderate (1.9-2.5), or severe (>2.5) were compared to similar catheterization-based ranks, there were no significant differences using the Mann-Whitney test for ordinal data. CAR is a simple, objective and reproducible method of quantitating VR. It reduces the error in those without VR, allows sensitive identification of mild VR, and maintains accurate assessment of severe VR.

  5. Structural analysis for preliminary design of High Speed Civil Transport (HSCT)

    NASA Technical Reports Server (NTRS)

    Bhatia, Kumar G.

    1992-01-01

    In the preliminary design environment, there is a need for quick evaluation of configuration and material concepts. The simplified beam representations used for subsonic, high aspect ratio wing planforms are not applicable to the low aspect ratio configurations typical of supersonic transports. There is a requirement to develop methods for efficient generation of structural arrangements and finite element representations to support multidisciplinary analysis and optimization. In addition, the empirical databases required to validate prediction methods need to be improved for high speed civil transport (HSCT) type configurations.

  6. The fatigue behavior of composite laminates under various mean stresses

    NASA Technical Reports Server (NTRS)

    Rotem, A.

    1991-01-01

    A method is developed for predicting the S-N curve of a composite laminate which is subjected to an arbitrary stress ratio, R (minimum stress/maximum stress). The method is based on the measuring of the S-N behavior of two distinct cases, tension-tension and compression-compression fatigue loadings. Using these parameters, expressions are formulated that estimate the fatigue behavior under any stress ratio loading. Experimental results from the testing of graphite/epoxy laminates, with various structures, are compared with the predictions and show good agreement.

  7. Comparisons of survival predictions using survival risk ratios based on International Classification of Diseases, Ninth Revision and Abbreviated Injury Scale trauma diagnosis codes.

    PubMed

    Clarke, John R; Ragone, Andrew V; Greenwald, Lloyd

    2005-09-01

    We conducted a comparison of methods for predicting survival using survival risk ratios (SRRs), including new comparisons based on International Classification of Diseases, Ninth Revision (ICD-9) versus Abbreviated Injury Scale (AIS) six-digit codes. From the Pennsylvania trauma center's registry, all direct trauma admissions were collected through June 22, 1999. Patients with no comorbid medical diagnoses and both ICD-9 and AIS injury codes were used for comparisons based on a single set of data. SRRs for ICD-9 and then for AIS diagnostic codes were each calculated two ways: from the survival rate of patients with each diagnosis and when each diagnosis was an isolated diagnosis. Probabilities of survival for the cohort were calculated using each set of SRRs by the multiplicative ICISS method and, where appropriate, the minimum SRR method. These prediction sets were then internally validated against actual survival by the Hosmer-Lemeshow goodness-of-fit statistic. The 41,364 patients had 1,224 different ICD-9 injury diagnoses in 32,261 combinations and 1,263 corresponding AIS injury diagnoses in 31,755 combinations, ranging from 1 to 27 injuries per patient. All conventional ICD-9-based combinations of SRRs and methods had better Hosmer-Lemeshow goodness-of-fit statistic fits than their AIS-based counterparts. The minimum SRR method produced better calibration than the multiplicative methods, presumably because it did not magnify inaccuracies in the SRRs that might occur with multiplication. Predictions of survival based on anatomic injury alone can be performed using ICD-9 codes, with no advantage from extra coding of AIS diagnoses. Predictions based on the single worst SRR were closer to actual outcomes than those based on multiplying SRRs.
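
    The two prediction rules compared in this study reduce to very small computations once the SRRs are tabulated; a short sketch (the example SRR values are invented) is:

      from functools import reduce

      def iciss_multiplicative(srrs):
          """ICISS probability of survival: the product of the survival risk ratios."""
          return reduce(lambda a, b: a * b, srrs, 1.0)

      def minimum_srr(srrs):
          """Single-worst-injury prediction: the minimum survival risk ratio."""
          return min(srrs)

      srrs = [0.98, 0.95, 0.80]          # SRRs for one patient's injury diagnoses
      print(iciss_multiplicative(srrs))  # 0.7448
      print(minimum_srr(srrs))           # 0.80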

  8. A method of inferring collision ratio based on maneuverability of own ship under critical collision conditions

    NASA Astrophysics Data System (ADS)

    You, Youngjun; Rhee, Key-Pyo; Ahn, Kyoungsoo

    2013-06-01

    In constructing a collision avoidance system, it is important to determine the time for starting collision avoidance maneuver. Many researchers have attempted to formulate various indices by applying a range of techniques. Among these indices, collision risk obtained by combining Distance to the Closest Point of Approach (DCPA) and Time to the Closest Point of Approach (TCPA) information with fuzzy theory is mostly used. However, the collision risk has a limit, in that membership functions of DCPA and TCPA are empirically determined. In addition, the collision risk is not able to consider several critical collision conditions where the target ship fails to take appropriate actions. It is therefore necessary to design a new concept based on logical approaches. In this paper, a collision ratio is proposed, which is the expected ratio of unavoidable paths to total paths under suitably characterized operation conditions. Total paths are determined by considering categories such as action space and methodology of avoidance. The International Regulations for Preventing Collisions at Sea (1972) and collision avoidance rules (2001) are considered to solve the slower ship's dilemma. Different methods which are based on a constant speed model and simulated speed model are used to calculate the relative positions between own ship and target ship. In the simulated speed model, fuzzy control is applied to determination of command rudder angle. At various encounter situations, the time histories of the collision ratio based on the simulated speed model are compared with those based on the constant speed model.

  9. Design, formulation and evaluation of green tea chewing gum

    PubMed Central

    Aslani, Abolfazl; Ghannadi, Alireza; Khalafi, Zeinab

    2014-01-01

    Background: The main purpose of this study was to design, formulate and evaluate green tea gums with a suitable taste and quality in order to produce an antioxidant chewing gum. Materials and Methods: Fresh green tea leaves were obtained from northern Iran for extraction; maceration was the extraction method used in this study. The contents of caffeine, catechin and flavonoids of the hydroalcoholic extract were measured. Various formulations of chewing gums containing 120 mg of green tea extract with different sweeteners, flavoring agents and various gum bases were prepared; afterward, the release pattern, content uniformity, organoleptic properties and other characteristics were evaluated. Results: The contents of caffeine, catechin and flavonoids of the hydroalcoholic extract were 207.32 mg/g, 130.00 mg/g and 200.82 mg/g, respectively. Release patterns of the green tea chewing gum with different gum base ratios and various sweeteners in phosphate buffer were obtained. A total of 60 persons, 20-30 years of age, participated in our panel test for organoleptic properties such as taste, stiffness and stickiness. The acceptable gum was the one with equal ratios of the rubber bases used, cinnamon was selected as the preferred flavor by the volunteers, and the combination of aspartame, sugar and maltitol had an appropriate taste. The effect of the various sweeteners on the release pattern was negligible; on the other hand, varying the rubber base ratio changed the release pattern markedly. Conclusion: The green tea chewing gum with sugar, maltitol and aspartame sweeteners and cinnamon flavor, using equal rubber base ratios, may be a desirable antioxidant product. PMID:25161989

  10. Influence of signal intensity non-uniformity on brain volumetry using an atlas-based method.

    PubMed

    Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2012-01-01

    Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.

  11. Deep Eutectic Solvent-Based Microwave-Assisted Method for Extraction of Hydrophilic and Hydrophobic Components from Radix Salviae miltiorrhizae.

    PubMed

    Chen, Jue; Liu, Mengjun; Wang, Qi; Du, Huizhi; Zhang, Liwei

    2016-10-17

    Deep eutectic solvents (DESs) have attracted significant attention as promising green media. In this work, twenty-five kinds of benign choline chloride-based DESs combined with microwave-assisted methods were applied to quickly extract active components from Radix Salviae miltiorrhizae. The extraction factors, including temperature, time, microwave power, and solid/liquid ratio, were investigated systematically by response surface methodology. The hydrophilic and hydrophobic ingredients were extracted simultaneously under the optimized conditions: 20 vol% of water in choline chloride/1,2-propanediol (1:1, molar ratio) as solvent, microwave power of 800 W, temperature of 70 °C, time of 11.11 min, and solid/liquid ratio of 0.007 g·mL⁻¹. The extraction yield was comparable to, or even better than, that of conventional methods with organic solvents. The microstructure alteration of samples before and after extraction was also investigated. The method validation covered linearity of the analytes (r² > 0.9997 over two orders of magnitude), precision (intra-day relative standard deviation (RSD) < 2.49 and inter-day RSD < 2.96), and accuracy (recoveries ranging from 95.04% to 99.93%). The proposed DESs combined with the microwave-assisted method provided a prominent advantage for fast and efficient extraction of active components, and DESs could be extended as solvents to extract and analyze complex environmental and pharmaceutical samples.

  12. Compression of next-generation sequencing quality scores using memetic algorithm

    PubMed Central

    2014-01-01

    Background: The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results: In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files and designs the compression codebook using MA-based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains a higher compression ratio than the other state-of-the-art methods. In particular, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions: The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747

  13. Utilizing gamma band to improve mental task based brain-computer interface design.

    PubMed

    Palaniappan, Ramaswamy

    2006-09-01

    A common method for designing a brain-computer interface (BCI) is to use electroencephalogram (EEG) signals extracted during mental tasks. In these BCI designs, features from the EEG such as power and asymmetry ratios from the delta, theta, alpha, and beta bands have been used in classifying different mental tasks. In this paper, the performance of the mental task based BCI design is improved by using spectral power and asymmetry ratios from the gamma (24-37 Hz) band in addition to the lower frequency bands. In the experimental study, EEG signals extracted during five mental tasks from four subjects were used. An Elman neural network (ENN) trained by the resilient backpropagation algorithm was used to classify the power and asymmetry ratios from the EEG into different combinations of two mental tasks. The results indicated that (1) the classification performance and training time of the BCI design were improved through the use of the additional gamma band features, and (2) classification performances were nearly invariant to the number of ENN hidden units or the feature extraction method.
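
    A hedged sketch of the kind of features described above (band power and an asymmetry ratio for a pair of channels, with the gamma band included) is shown below; the channel pairing, the exact edges of the lower bands, and the Welch parameters are assumptions.

      import numpy as np
      from scipy.signal import welch

      BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
               "beta": (13, 24), "gamma": (24, 37)}   # Hz

      def band_power(x, fs, band):
          """Average power of one EEG channel in a frequency band (Welch PSD)."""
          f, psd = welch(x, fs=fs, nperseg=int(2 * fs))
          sel = (f >= band[0]) & (f <= band[1])
          return np.trapz(psd[sel], f[sel])

      def power_and_asymmetry_features(left, right, fs):
          """Band powers and inter-hemispheric asymmetry ratios for all bands."""
          feats = {}
          for name, band in BANDS.items():
              p_l, p_r = band_power(left, fs, band), band_power(right, fs, band)
              feats[name + "_power"] = p_l + p_r
              feats[name + "_asymmetry"] = (p_l - p_r) / (p_l + p_r)
          return feats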

  14. Visual attention distracter insertion for improved EEG rapid serial visual presentation (RSVP) target stimuli detection

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Martin, Kevin

    2017-05-01

    This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP stimulus sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
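
    The first step, deciding how many distracter images to insert so that rare targets stay at a chosen fraction of the sequence, is simple arithmetic; the sketch below is an illustration under assumed inputs, not the published procedure.

      import math

      def distracters_needed(n_targets, n_nontargets, target_fraction=0.10):
          """Distracter images required so targets are at most `target_fraction`
          of the full RSVP sequence (keeping rare events rare)."""
          if n_targets == 0:
              return 0
          total_needed = math.ceil(n_targets / target_fraction)
          return max(0, total_needed - n_targets - n_nontargets)

      # Example: 12 expected targets among 60 candidate images, 10% desired ratio
      print(distracters_needed(12, 48))   # -> 60 distracters to insert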

  15. Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1994-01-01

    The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For an actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a 0.9 mean actuation material volume ratio, the minimum cost was obtained.

  16. [Quantitative Analysis of Heavy Metals in Water with LIBS Based on Signal-to-Background Ratio].

    PubMed

    Hu, Li; Zhao, Nan-jing; Liu, Wen-qing; Fang, Li; Zhang, Da-hai; Wang, Yin; Meng, De Shuo; Yu, Yang; Ma, Ming-jun

    2015-07-01

    Many factors influence the precision and accuracy of quantitative analysis with LIBS technology. In-depth analysis shows that the background spectrum and the characteristic spectral lines follow approximately the same trend as temperature changes, so signal-to-background ratio (S/B) measurement combined with regression analysis can compensate for spectral line intensity changes caused by system parameters such as laser power and the spectral efficiency of the receiving optics. Because the measurement data were limited and nonlinear, support vector machine (SVM) regression was used. The experimental results showed that the method can improve the stability and accuracy of quantitative LIBS analysis; the relative standard deviation and average relative error of the test set were 4.7% and 9.5%, respectively. The data fitting method based on the signal-to-background ratio (S/B) is less susceptible to matrix elements, background spectrum variations, and similar effects, and provides a data processing reference for real-time online quantitative LIBS analysis.

  17. Optimum wall impedance for spinning modes: A correlation with mode cut-off ratio

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1978-01-01

    A correlating equation relating the optimum acoustic impedance for the wall lining of a circular duct to the acoustic mode cut-off ratio is presented. The optimum impedance was correlated with cut-off ratio because the cut-off ratio appears to be the fundamental parameter governing the propagation of sound in the duct. Modes with similar cut-off ratios respond in a similar way to the acoustic liner. The correlation is a semi-empirical expression developed from an empirical modification of an equation originally derived from sound propagation theory in a thin boundary layer. This correlating equation forms part of a simplified liner design method, based upon modal cut-off ratio, for multimodal noise propagation.

  18. Simplified power control method for cellular mobile communication

    NASA Astrophysics Data System (ADS)

    Leung, Y. W.

    1994-04-01

    The centralized power control (CPC) method measures the gain of the communication links between every mobile and every base station in the cochannel cells and determines optimal transmitter power to maximize the minimum carrier-to-interference ratio. The authors propose a simplified power control method which has nearly the same performance as the CPC method but which involves much smaller measurement overhead.
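
    For context, the carrier-to-interference ratio that both schemes try to balance can be written directly from the link gains and transmit powers; the sketch below evaluates the minimum C/I over cochannel users for a given power vector (a generic textbook formulation, not the authors' algorithm, with invented gains).

      import numpy as np

      def min_cir(gains, powers, noise=0.0):
          """Minimum carrier-to-interference(-plus-noise) ratio over cochannel users.

          gains  : matrix G where G[i, j] is the gain from transmitter j to the
                   receiver of user i
          powers : transmit power of each user
          """
          gains = np.asarray(gains, dtype=float)
          powers = np.asarray(powers, dtype=float)
          carrier = np.diag(gains) * powers
          interference = gains @ powers - carrier
          return np.min(carrier / (interference + noise))

      G = [[1.0, 0.05, 0.02],
           [0.04, 0.9, 0.03],
           [0.01, 0.06, 1.1]]
      print(min_cir(G, [1.0, 1.0, 1.0]))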

  19. K-edge ratio method for identification of multiple nanoparticulate contrast agents by spectral CT imaging

    PubMed Central

    Ghadiri, H; Ay, M R; Shiran, M B; Soltanian-Zadeh, H

    2013-01-01

    Objective: Recently introduced energy-sensitive X-ray CT makes it feasible to discriminate different nanoparticulate contrast materials. The purpose of this work is to present a K-edge ratio method for differentiating multiple simultaneous contrast agents using spectral CT. Methods: The ratio of two images relevant to energy bins straddling the K-edge of the materials is calculated using an analytic CT simulator. In the resulting parametric map, the selected contrast agent regions can be identified using a thresholding algorithm. The K-edge ratio algorithm is applied to spectral images of simulated phantoms to identify and differentiate up to four simultaneous and targeted CT contrast agents. Results: We show that different combinations of simultaneous CT contrast agents can be identified by the proposed K-edge ratio method when energy-sensitive CT is used. In the K-edge parametric maps, the pixel values for biological tissues and contrast agents reach a maximum of 0.95, whereas for the selected contrast agents, the pixel values are larger than 1.10. The number of contrast agents that can be discriminated is limited owing to photon starvation. For reliable material discrimination, minimum photon counts corresponding to 140 kVp, 100 mAs and 5-mm slice thickness must be used. Conclusion: The proposed K-edge ratio method is a straightforward and fast method for identification and discrimination of multiple simultaneous CT contrast agents. Advances in knowledge: A new spectral CT-based algorithm is proposed which provides a new concept of molecular CT imaging by non-iteratively identifying multiple contrast agents when they are simultaneously targeting different organs. PMID:23934964
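
    A hedged sketch of the parametric-map step follows: divide the image reconstructed from the energy bin just above the agent's K-edge by the image from the bin just below it, then threshold the ratio at the 1.10 cut-off quoted above. The availability of two already-reconstructed bin images and the array names are assumptions.

      import numpy as np

      def k_edge_ratio_map(image_above, image_below, threshold=1.10):
          """K-edge ratio parametric map and a binary mask of the contrast agent.

          image_above : CT image from the energy bin above the agent's K-edge
          image_below : CT image from the energy bin below the agent's K-edge
          """
          eps = 1e-6                                   # guard against division by zero
          ratio = image_above / (image_below + eps)    # parametric K-edge ratio map
          mask = ratio > threshold                     # pixels assigned to the agent
          return ratio, mask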

  20. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.
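
    The plasma RIR itself is a one-line calculation once the two labeled retinol pools in the >=14-d sample have been quantified; the sketch below shows the ratio with an explicit dose normalization, which is an assumption about how the quantities would be scaled rather than a statement of the published protocol.

      def retinol_isotope_ratio(carotene_derived_retinol, reference_derived_retinol,
                                carotene_dose, reference_dose):
          """Plasma retinol isotope ratio, normalized for the two ingested doses.

          carotene_derived_retinol  : labeled retinol derived from the labeled
                                      beta-carotene dose in the plasma sample
          reference_derived_retinol : labeled retinol derived from the retinyl
                                      acetate reference dose in the same sample
          Doses are expressed in the same retinol-equivalent units.
          """
          rir = carotene_derived_retinol / reference_derived_retinol
          return rir * (reference_dose / carotene_dose)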

  1. Signal-to-noise ratio comparison of encoding methods for hyperpolarized noble gas MRI

    NASA Technical Reports Server (NTRS)

    Zhao, L.; Venkatesh, A. K.; Albert, M. S.; Panych, L. P.

    2001-01-01

    Some non-Fourier encoding methods such as wavelet and direct encoding use spatially localized bases. The spatial localization feature of these methods enables optimized encoding for improved spatial and temporal resolution during dynamically adaptive MR imaging. These spatially localized bases, however, have inherently reduced image signal-to-noise ratio compared with Fourier or Hadamard encoding for proton imaging. Hyperpolarized noble gases, on the other hand, have quite different MR properties compared to protons, primarily the nonrenewability of the signal. It could be expected, therefore, that the characteristics of image SNR with respect to encoding method will also be very different for hyperpolarized noble gas MRI compared to proton MRI. In this article, hyperpolarized noble gas image SNRs of different encoding methods are compared theoretically using a matrix description of the encoding process. It is shown that image SNR for hyperpolarized noble gas imaging is maximized for any orthonormal encoding method. Methods are then proposed for designing RF pulses to achieve normalized encoding profiles using Fourier, Hadamard, wavelet, and direct encoding methods for hyperpolarized noble gases. Theoretical results are confirmed with hyperpolarized noble gas MRI experiments. Copyright 2001 Academic Press.

  2. Curvature correction of retinal OCTs using graph-based geometry detection

    NASA Astrophysics Data System (ADS)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, first of which is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model, and the second part can be described in terms of graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm showed an improvement of contrast-to-noise ratio from 0.89 to 1.49 and an increase of signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve using a full automatic method, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. The error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were detected for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. The important aspect of this algorithm is its ability in detection of curvature in strongly pathological images that surpasses previously introduced methods; the method is also fast, compared to the relatively low speed of similar methods.

  3. A Ratio-Analysis Method to the Dynamics of Excited State Proton Transfer: Pyranine in Water and Micelles.

    PubMed

    Sahu, Kalyanasis; Nandi, Nilanjana; Dolai, Suman; Bera, Avisek

    2018-06-05

    The emission spectrum of a fluorophore undergoing excited state proton transfer (ESPT) often exhibits two distinct bands, representing emission from the protonated and deprotonated forms. The relative contribution of the two bands, best represented by an emission intensity ratio R (intensity maximum of the protonated band / intensity maximum of the deprotonated band), is an important parameter which usually denotes the feasibility or promptness of the ESPT process. However, use of the ratio has been limited to the interpretation of steady-state fluorescence spectra. Here, for the first time, we exploit the time dependence of the ratio, R(t), calculated from time-resolved emission spectra (TRES) at different times, to analyze ESPT dynamics. TRES at different times were fitted with a sum of two log-normal functions representing the two peaks, and the peak intensity ratio R(t) was then calculated and further fitted with an analytical function. Recently, a time-resolved area-normalized emission spectra (TRANES)-based analysis was presented in which the decay of the protonated emission or the rise of the deprotonated emission intensity conveniently accounts for the ESPT dynamics. We show that the two methods are equivalent, but the new method provides more insight into the nature of the ESPT process.

  4. [Identification of some Piper crude drugs based on Fourier transform infrared spectrometry].

    PubMed

    Zhou, Ye; Zhang, Qing-Wei; Luo, Xue-Jun; Li, Pei-Fu; Song, Heng; Zhang, Bo-Li

    2014-09-01

    The common peak ratio and variant peak ratio were calculated from the FTIR spectra of seven medicinal plants of Piper. The dual index sequence of common peak ratio and variant peak ratio was established, which reflected the sibship of the medicinal plants. The common peak ratio of Piper kadsura (Choisy) Ohwi, Piper wallichii (Miq.) Hand.-Mazz. and Piper laetispicum (C. DC.) was greater than 77%, and the variant peak ratio was less than 30%. These results indicated a close sibship among the three drugs. The common peak ratio of Piper kadsura (Choisy) Ohwi, Piper nigrum L. and Piper boehmeriae folium Wall (Miq.) C. DC. Var. tonkinense (C. DC.) was about 61%, which indicated a more distant sibship. The common peak ratio of Piper kadsura (Choisy) Ohwi and Piper betle (Linn.) was only 44%, which indicated the most distant sibship. Piper kadsura (Choisy) Ohwi and its adulterants, such as Piper wallichii (Miq.) Hand.-Mazz., Piper boehmeriaefolium Wall (Miq.) C. DC. Var. tonkinense C. DC., Piper laetispicum C. DC., and Piper nigrum L., could be identified by comparing the second-order derivative IR spectra of the samples. FTIR is a non-destructive analysis technique which provides information on functional groups, bond types, and hydrogen bonding without complex pretreatment procedures such as extraction and separation. The FTIR method offers a rapid and simple analysis procedure, good reproducibility, non-destructive testing, a small required sample amount, low cost, and environmental friendliness. The method addresses the limited resources of Piper kadsura (Choisy) Ohwi, the prevalence of adulterants, and the difficulty of identification, and thereby improves the safety of clinical medication. FTIR provides a new method for identification of Piper kadsura (Choisy) Ohwi and its adulterants and meets the requirement for comprehensive and global analysis of traditional Chinese medicine.
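
    To make the dual-index idea concrete, the sketch below matches peak positions from two FTIR spectra within a tolerance and computes a common peak ratio and a variant peak ratio; the matching tolerance and the exact ratio definitions are illustrative assumptions and may differ from the dual-index formulas used in the study.

        import numpy as np

        def shared_peaks(peaks_a, peaks_b, tol=5.0):
            # Peaks (in cm^-1) of spectrum A that lie within `tol` of some peak of spectrum B
            b = np.asarray(peaks_b, dtype=float)
            return [pa for pa in peaks_a if np.any(np.abs(b - pa) <= tol)]

        def dual_index(peaks_a, peaks_b, tol=5.0):
            # Hypothetical definitions: common ratio = shared peaks / all distinct peaks,
            # variant ratio = unshared peaks / shared peaks
            n_shared = len(shared_peaks(peaks_a, peaks_b, tol))
            n_total = len(peaks_a) + len(peaks_b) - n_shared
            common_ratio = n_shared / n_total
            variant_ratio = (n_total - n_shared) / max(n_shared, 1)
            return common_ratio, variant_ratio

        # Example with made-up peak lists (cm^-1)
        print(dual_index([3400, 2920, 1630, 1050], [3405, 2918, 1600, 1045, 815]))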

  5. The efficacy of semi-quantitative urine protein-to-creatinine (P/C) ratio for the detection of significant proteinuria in urine specimens in health screening settings.

    PubMed

    Chang, Chih-Chun; Su, Ming-Jang; Ho, Jung-Li; Tsai, Yu-Hui; Tsai, Wei-Ting; Lee, Shu-Jene; Yen, Tzung-Hai; Chu, Fang-Yeh

    2016-01-01

    Urine protein detection can be underestimated by the conventional dipstick method because of variations in urine aliquots. This study aimed to assess the efficacy of the semi-quantitative urine protein-to-creatinine (P/C) ratio compared with other laboratory methods. Random urine samples were requested from patients undergoing chronic kidney disease (CKD) screening. Significant proteinuria was defined as a quantitative P/C ratio of at least 150 mg protein/g creatinine. The semi-quantitative P/C ratio, dipstick protein, and quantitative protein concentrations were compared and analyzed. Of the 2932 urine aliquots, 156 (5.3 %) urine samples were considered diluted and 60 (39.2 %) were found to have significant proteinuria. The semi-quantitative P/C ratio test had the best sensitivity (70.0 %) and specificity (95.9 %) as well as the lowest underestimation rate (0.37 %) compared with the other laboratory methods in the study. In the semi-quantitative P/C ratio test, 19 (12.2 %) had positive, 52 (33.3 %) had diluted, and 85 (54.5 %) had negative results. Of those with positive results, 7 (36.8 %) were detected as positive by the traditional dipstick urine protein test, and 9 (47.4 %) were detected as positive by the quantitative urine protein test. Additionally, of those with diluted results, 25 (48.1 %) had significant proteinuria, all of which were classified as having no significant proteinuria by both tests. The semi-quantitative urine P/C ratio is clinically applicable based on its better sensitivity and screening ability for significant proteinuria than other laboratory methods, particularly in diluted urine samples. To establish an effective strategy for CKD prevention, urine protein screening with the semi-quantitative P/C ratio could be considered.

  6. Quantitative skeletal maturation estimation using cone-beam computed tomography-generated cervical vertebral images: a pilot study in 5- to 18-year-old Japanese children.

    PubMed

    Byun, Bo-Ram; Kim, Yong-Il; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Hwang, Dea-Seok; Park, Soo-Byung; Son, Woo-Sung

    2015-11-01

    The purpose of this study was to establish multivariable regression models for the estimation of skeletal maturation status in Japanese boys and girls using the cone-beam computed tomography (CBCT)-based cervical vertebral maturation (CVM) assessment method and hand-wrist radiography. The analyzed sample consisted of hand-wrist radiographs and CBCT images from 47 boys and 57 girls. To quantitatively evaluate the correlation between skeletal maturation status and the measurement ratios, a CBCT-based CVM assessment method was applied to the second, third, and fourth cervical vertebrae. Pearson's correlation coefficient analysis and multivariable regression analysis were used to determine the ratios for each of the cervical vertebrae (p < 0.05). Four characteristic parameters ((OH2 + PH2)/W2, (OH2 + AH2)/W2, D2, AH3/W3), as independent variables, were used to build the multivariable regression models: for the Japanese boys, the skeletal maturation status according to the CBCT-based quantitative cervical vertebral maturation (QCVM) assessment was 5.90 + 99.11 × AH3/W3 - 14.88 × (OH2 + AH2)/W2 + 13.24 × D2; for the Japanese girls, it was 41.39 + 59.52 × AH3/W3 - 15.88 × (OH2 + PH2)/W2 + 10.93 × D2. The CBCT-generated CVM images proved very useful for delineating the cervical vertebral body and the odontoid process. The newly developed CBCT-based QCVM assessment method showed a high correlation with the ratios derived from the second cervical vertebral body and odontoid process. There are high correlations between the skeletal maturation status and the ratios of the second cervical vertebra based on the remnant of the dentocentral synchondrosis.
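
    The two regression equations quoted above can be applied directly; the small Python helper below simply transcribes them, with the argument names and function interface being illustrative (the measurement protocol for AH3/W3, (OH2+AH2)/W2, (OH2+PH2)/W2, and D2 follows the CBCT-based CVM method of the study).

        def qcvm_score(ah3_w3, oh2_ah2_w2, oh2_ph2_w2, d2, sex):
            # Skeletal maturation estimate from the reported multivariable regression models
            if sex == "male":
                return 5.90 + 99.11 * ah3_w3 - 14.88 * oh2_ah2_w2 + 13.24 * d2
            if sex == "female":
                return 41.39 + 59.52 * ah3_w3 - 15.88 * oh2_ph2_w2 + 10.93 * d2
            raise ValueError("sex must be 'male' or 'female'")

        # Hypothetical measurements for one subject
        print(qcvm_score(ah3_w3=0.72, oh2_ah2_w2=0.95, oh2_ph2_w2=0.90, d2=1.1, sex="male"))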

  7. A Lossless Multichannel Bio-Signal Compression Based on Low-Complexity Joint Coding Scheme for Portable Medical Devices

    PubMed Central

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention in recent years, and the need for high-quality multichannel medical signal compression in personal medical products is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel bio-signal lossless data compressor. PMID:25237900

  8. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method

    PubMed Central

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu

    2017-01-01

    In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare two methods of assessing the Ki-67 LI, the average method and the hot spot method, and thus to determine which method is more appropriate for predicting the prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, a high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot methods of evaluating the Ki-67 LI have good predictive performance for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility. PMID:28187177

  10. Ab initio solution of macromolecular crystal structures without direct methods.

    PubMed

    McCoy, Airlie J; Oeffner, Robert D; Wrobel, Antoni G; Ojala, Juha R M; Tryggvason, Karl; Lohkamp, Bernhard; Read, Randy J

    2017-04-04

    The majority of macromolecular crystal structures are determined using the method of molecular replacement, in which known related structures are rotated and translated to provide an initial atomic model for the new structure. A theoretical understanding of the signal-to-noise ratio in likelihood-based molecular replacement searches has been developed to account for the influence of model quality and completeness, as well as the resolution of the diffraction data. Here we show that, contrary to current belief, molecular replacement need not be restricted to the use of models comprising a substantial fraction of the unknown structure. Instead, likelihood-based methods allow a continuum of applications depending predictably on the quality of the model and the resolution of the data. Unexpectedly, our understanding of the signal-to-noise ratio in molecular replacement leads to the finding that, with data to sufficiently high resolution, fragments as small as single atoms of elements usually found in proteins can yield ab initio solutions of macromolecular structures, including some that elude traditional direct methods.

  11. The energy ratio mapping algorithm: a tool to improve the energy-based detection of odontocete echolocation clicks.

    PubMed

    Klinck, Holger; Mellinger, David K

    2011-04-01

    The energy ratio mapping algorithm (ERMA) was developed to improve the performance of energy-based detection of odontocete echolocation clicks, especially for application in environments with limited computational power and energy such as acoustic gliders. ERMA systematically evaluates many frequency bands for energy ratio-based detection of echolocation clicks produced by a target species in the presence of the species mix in a given geographic area. To evaluate the performance of ERMA, a Teager-Kaiser energy operator was applied to the series of energy ratios as derived by ERMA. A noise-adaptive threshold was then applied to the Teager-Kaiser function to identify clicks in data sets. The method was tested for detecting clicks of Blainville's beaked whales while rejecting echolocation clicks of Risso's dolphins and pilot whales. Results showed that the ERMA-based detector correctly identified 81.6% of the beaked whale clicks in an extended evaluation data set. Average false-positive detection rate was 6.3% (3.4% for Risso's dolphins and 2.9% for pilot whales).
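
    The following Python sketch illustrates an energy-ratio detector in the spirit of ERMA: per-frame spectral energies in a target band and a comparison band form a ratio series, a Teager-Kaiser energy operator is applied to that series, and a noise-adaptive threshold flags candidate clicks. The band limits, frame length, and median + k·MAD threshold rule are illustrative assumptions, not the published ERMA parameters.

        import numpy as np

        def band_energy(frame, fs, f_lo, f_hi):
            # Spectral energy of one frame in the band [f_lo, f_hi] Hz
            spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
            freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
            return spec[(freqs >= f_lo) & (freqs < f_hi)].sum()

        def energy_ratio_series(x, fs, frame_len, band_sig, band_cmp):
            # Per-frame ratio of energy in the target band to energy in a comparison band
            n = len(x) // frame_len
            frames = x[: n * frame_len].reshape(n, frame_len)
            return np.array([band_energy(f, fs, *band_sig) /
                             (band_energy(f, fs, *band_cmp) + 1e-12) for f in frames])

        def teager_kaiser(r):
            # Discrete Teager-Kaiser energy operator applied to the ratio series
            tk = np.zeros_like(r)
            tk[1:-1] = r[1:-1] ** 2 - r[:-2] * r[2:]
            return tk

        def detect_clicks(tk, k=5.0):
            # Noise-adaptive threshold: frames exceeding median + k * MAD
            med = np.median(tk)
            mad = np.median(np.abs(tk - med)) + 1e-12
            return np.nonzero(tk > med + k * mad)[0]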

  12. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for large genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bit codes (unique BIT CODE) to exact-repeat and reverse-repeat fragments of a DNA sequence is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base.

  13. Emergy-based ecological account for the Chinese economy in 2004

    NASA Astrophysics Data System (ADS)

    Jiang, M. M.; Zhou, J. B.; Chen, B.; Chen, G. Q.

    2008-12-01

    This paper provides an integrated study of the ecological account for the Chinese economy in 2004 based on emergy synthesis theory. The detailed flows of the Chinese economy are diagrammed, accounted for, and analyzed by category using biophysically based ecological accounting. By calculating the environmental and economic inputs within and outside the Chinese economy, this paper discusses China's international exchange, describes the resource structure, and assesses its sustainability as a whole. Systemic indicators, such as the emergy/dollar ratio, environmental load ratio, and emergy self-support ratio, are also tabulated and compared with those of other countries to illustrate the general status of the Chinese economy in the world. For example, the environmental load ratio of 9.29 for China in 2004 reveals that the Chinese economy placed high pressure on the local environment compared with environmentally benign countries such as Brazil (0.75), Australia (0.86), and New Zealand (0.81). In addition, the accounting method for tourism is adjusted based on previous research.

  14. High-contrast controllable switching based on polystyrene nonlinear cavities in 2D hole-type photonic crystals

    NASA Astrophysics Data System (ADS)

    Paghousi, Roohollah; Fasihi, Kiazand

    2018-05-01

    We present a new high-contrast controllable switch, which is based on a polystyrene nonlinear cavity and is implemented in a two-dimensional (2D) hole-type photonic crystal (PC). We show that by applying a control signal, the input power can be transmitted to the output waveguide with a high contrast ratio. The operation of the proposed device is investigated using coupled-mode theory (CMT) and the finite-difference time-domain (FDTD) method. The contrast ratio of the proposed device varies between 18 and 23, which is higher than the corresponding values in previous investigations. Based on the simulation results, increasing the control power widens the range of operating power while decreasing the contrast ratio. It is also shown that in a modified structure the control power can be decreased considerably, at the expense of the range of operating power and the contrast ratio.

  15. Report on Non-invasive acoustic monitoring of D2O concentration Oct 31 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pantea, Cristian; Sinha, Dipen N.; Lakis, Rollin Evan

    There is an urgent need for real-time monitoring of the hydrogen/deuterium (H/D) ratio for heavy water production monitoring. Based upon the published literature, sound speed is sensitive to the deuterium content of heavy water and can be measured using existing acoustic methods to determine the deuterium concentration in heavy water solutions. We plan to adapt existing non-invasive acoustic techniques (Swept-Frequency Acoustic Interferometry and the Gaussian-pulse acoustic technique) for the purpose of quantifying H/D ratios in solution. A successful demonstration will provide an easily implemented, low-cost, and non-invasive method for remote and unattended H/D ratio measurements with a resolution of less than 0.2% vol.

  16. Fatigue crack propagation behavior of stainless steel welds

    NASA Astrophysics Data System (ADS)

    Kusko, Chad S.

    The fatigue crack propagation behavior of austenitic and duplex stainless steel base and weld metals has been investigated using various fatigue crack growth test procedures, ferrite measurement techniques, light optical microscopy, stereomicroscopy, scanning electron microscopy, and optical profilometry. The compliance offset method was incorporated to measure crack closure during testing in order to determine a stress ratio at which such closure is overcome. Based on this method, an empirically determined stress ratio of 0.60 was shown to be very successful in overcoming crack closure over all da/dN for gas metal arc and laser welds. This empirically determined stress ratio of 0.60 was applied to testing of stainless steel base metal and weld metal to understand the influence of microstructure. Regarding the base metal investigation, for 316L and AL6XN base metals, grain size and grain-plus-twin size were shown to influence the resulting crack growth behavior. The cyclic plastic zone size model accurately describes crack growth behavior for austenitic stainless steels when the average grain-plus-twin size is considered. Additionally, the effect of the tortuous crack paths observed for the larger grain size base metals can be explained by a literature model for crack deflection. Constant-ΔK testing was used to characterize the crack growth behavior across the various regions of the gas metal arc and laser welds at the empirically determined stress ratio of 0.60. Despite an extensive range of stainless steel weld metal FN and delta-ferrite morphologies, neither significantly influenced the room-temperature crack growth behavior. However, variations in weld metal da/dN can be explained by local surface roughness resulting from large columnar grains and tortuous crack paths in the weld metal.

  17. Theoretical study of depth profiling with gamma- and X-ray spectrometry based on measurements of intensity ratios

    NASA Astrophysics Data System (ADS)

    Bártová, H.; Trojek, T.; Johnová, K.

    2017-11-01

    This article describes a method for estimating the depth distribution of radionuclides in a material with gamma-ray spectrometry, and for identifying the layered structure of a material with X-ray fluorescence analysis. The method is based on measuring the ratio of two gamma or X-ray lines of a radionuclide or a chemical element, respectively. Its principle relies on the different attenuation coefficients of these two lines in the measured material. The main aim of this investigation was to show how the detected ratio of these two lines depends on the depth distribution of the analyte and, in particular, how this ratio depends on the density and chemical composition of the measured materials. Several different calculation arrangements were set up, and numerous Monte Carlo simulations with the MCNP (Monte Carlo N-Particle) code (Briesmeister, 2000) were performed to answer these questions. For X-ray spectrometry, the calculated Kα/Kβ diagrams were found to be almost independent of matrix density and composition. Thanks to this property, it would be possible to draw only one Kα/Kβ diagram for an element whose depth distribution is examined.
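
    To illustrate the underlying idea, the sketch below applies Beer-Lambert attenuation to the two lines emitted from a given depth: because the Kα and Kβ (or two gamma) lines have different attenuation coefficients in the matrix, their measured ratio shifts with emission depth and can be inverted to estimate that depth. The single-depth emitter, the neglect of excitation attenuation, and the parameter names are simplifying assumptions, not the article's Monte Carlo treatment.

        import numpy as np

        def line_ratio_at_depth(depth_cm, mu_a, mu_b, ratio_surface, takeoff_deg=90.0):
            # Predicted line-intensity ratio (line a / line b) for emission at a given depth,
            # with mu_a, mu_b the linear attenuation coefficients (1/cm) of the matrix
            path = depth_cm / np.sin(np.radians(takeoff_deg))
            return ratio_surface * np.exp(-(mu_a - mu_b) * path)

        def depth_from_ratio(measured_ratio, mu_a, mu_b, ratio_surface, takeoff_deg=90.0):
            # Invert the relation above to estimate the emission depth (cm)
            return -np.log(measured_ratio / ratio_surface) * np.sin(np.radians(takeoff_deg)) / (mu_a - mu_b)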

  18. White matter fiber-based analysis of T1w/T2w ratio map

    NASA Astrophysics Data System (ADS)

    Chen, Haiwei; Budin, Francois; Noel, Jean; Prieto, Juan Carlos; Gilmore, John; Rasmussen, Jerod; Wadhwa, Pathik D.; Entringer, Sonja; Buss, Claudia; Styner, Martin

    2017-02-01

    Purpose: To develop, test, evaluate and apply a novel tool for the white matter fiber-based analysis of T1w/T2w ratio maps quantifying myelin content. Background: The cerebral white matter in the human brain develops from a mostly non-myelinated state to a nearly fully mature white matter myelination within the first few years of life. High resolution T1w/T2w ratio maps are believed to be effective in quantitatively estimating myelin content on a voxel-wise basis. We propose the use of a fiber-tract-based analysis of such T1w/T2w ratio data, as it allows us to separate fiber bundles that a common regional analysis imprecisely groups together, and to associate effects to specific tracts rather than large, broad regions. Methods: We developed an intuitive, open source tool to facilitate such fiber-based studies of T1w/T2w ratio maps. Via its Graphical User Interface (GUI) the tool is accessible to non-technical users. The framework uses calibrated T1w/T2w ratio maps and a prior fiber atlas as an input to generate profiles of T1w/T2w values. The resulting fiber profiles are used in a statistical analysis that performs along-tract functional statistical analysis. We applied this approach to a preliminary study of early brain development in neonates. Results: We developed an open-source tool for the fiber based analysis of T1w/T2w ratio maps and tested it in a study of brain development.

  19. Water isotopologues in the circumstellar envelopes of M-type AGB stars

    NASA Astrophysics Data System (ADS)

    Danilovich, T.; Lombaert, R.; Decin, L.; Karakas, A.; Maercker, M.; Olofsson, H.

    2017-06-01

    Aims: In this study we intend to examine rotational emission lines of two isotopologues of water: H217O and H218O. By determining the abundances of these molecules, we aim to use the derived isotopologue - and hence oxygen isotope - ratios to put constraints on the masses of a sample of M-type AGB stars that have not been classified as OH/IR stars. Methods: We have used detailed radiative transfer analysis based on the accelerated lambda iteration method to model the circumstellar molecular line emission of H217O and H218O for IK Tau, R Dor, W Hya, and R Cas. The emission lines used to constrain our models came from Herschel/HIFI and Herschel/PACS observations and are all optically thick, meaning that full radiative transfer analysis is the only viable method of estimating molecular abundance ratios. Results: We find generally low values of the 17O/18O ratio for our sample, ranging from 0.15 to 0.69. This correlates with relatively low initial masses, in the range 1.0 to 1.5 M⊙ for each source, based on stellar evolutionary models. We also find ortho-to-para ratios close to 3, which are expected from warm formation predictions. Conclusions: The 17O/18O ratios found for this sample are at the lower end of the range predicted by stellar evolutionary models, indicating that the sample chosen had relatively low initial masses. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  20. A New Method for Blood NT-proBNP Determination Based on a Near-infrared Point of Care Testing Device with High Sensitivity and Wide Scope.

    PubMed

    Zhang, Xiao Guang; Shu, Yao Gen; Gao, Ju; Wang, Xuan; Liu, Li Peng; Wang, Meng; Cao, Yu Xi; Zeng, Yi

    2017-06-01

    To develop a rapid, highly sensitive, and quantitative method for the detection of NT-proBNP levels based on a near-infrared point-of-care testing (POCT) device with wide scope. A lateral flow assay (LFA) strip for NT-proBNP was first prepared to achieve rapid detection. Antibody pairs for NT-proBNP were then screened and labeled with the near-infrared fluorescent dye Dylight-800. The capture antibody was fixed on a nitrocellulose membrane using a scribing device. Serial dilutions of serum samples were prepared using NT-proBNP-free serum. The prepared test strips, combined with the near-infrared POCT device, were validated with known concentrations of clinical samples. The POCT device output the ratio of the fluorescence signal intensity of the detection line to that of the quality control line. The relationship between this ratio and the concentration of the specimen was plotted as a working curve. The results of 62 clinical specimens obtained with our method were compared in parallel with those obtained from the Roche E411 kit. Based on the log-log plot, the new method demonstrated a good linear relationship between the ratio value and NT-proBNP concentrations ranging from 20 pg/mL to 10 ng/mL. The results of the 62 clinical specimens measured by our method showed a good linear correlation with those measured by the Roche E411 kit. The new LFA detection method for NT-proBNP based on the near-infrared POCT device was rapid and highly sensitive with wide scope and is thus suitable for rapid and early clinical diagnosis of cardiac impairment. Copyright © 2017 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
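
    A working curve of the kind described can be realized as a straight-line fit in log-log space and then inverted for unknowns, as in the brief sketch below; the calibrator concentrations and ratio values are made-up placeholders, not data from the study.

        import numpy as np

        # Hypothetical calibrators: NT-proBNP concentration (pg/mL) and the measured
        # detection-line / control-line fluorescence ratio
        conc = np.array([20.0, 100.0, 500.0, 2000.0, 10000.0])
        ratio = np.array([0.05, 0.21, 0.95, 3.6, 16.0])

        # Working curve: log10(ratio) = a * log10(conc) + b
        a, b = np.polyfit(np.log10(conc), np.log10(ratio), 1)

        def concentration_from_ratio(r):
            # Invert the log-log working curve to estimate concentration (pg/mL)
            return 10 ** ((np.log10(r) - b) / a)

        print(concentration_from_ratio(1.0))  # estimated concentration for a ratio of 1.0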

  1. Simulation Analysis of Computer-Controlled pressurization for Mixture Ratio Control

    NASA Technical Reports Server (NTRS)

    Alexander, Leslie A.; Bishop-Behel, Karen; Benfield, Michael P. J.; Kelley, Anthony; Woodcock, Gordon R.

    2005-01-01

    A procedural code (C++) simulation was developed to investigate the potential for mixture ratio control of pressure-fed spacecraft rocket propulsion systems by measuring propellant flows, tank liquid quantities, or both, and using feedback from these measurements to adjust propellant tank pressures to set the correct operating mixture ratio for minimum propellant residuals. The pressurization system eliminated mechanical regulators in favor of a computer-controlled, servo-driven throttling valve. We found that a quasi-steady-state simulation (in which pressure and flow transients in the pressurization system resulting from changes in flow control valve position are ignored) is adequate for this purpose. Monte Carlo methods are used to obtain simulated statistics on propellant depletion. Mixture ratio control algorithms based on proportional-integral-derivative (PID) controller methods were developed. These algorithms actually set target tank pressures; the tank pressures are then controlled by another PID controller. Simulation indicates this approach can provide reductions in residual propellants.
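
    The cascaded arrangement described above (an outer loop converting mixture-ratio error into a target tank pressure, and an inner loop driving the throttling valve toward that target) can be sketched as follows; the gains, time step, and interface names are illustrative assumptions rather than the simulation's actual implementation.

        class PID:
            # Textbook discrete PID controller
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_err = 0.0

            def step(self, error):
                self.integral += error * self.dt
                deriv = (error - self.prev_err) / self.dt
                self.prev_err = error
                return self.kp * error + self.ki * self.integral + self.kd * deriv

        # Outer loop: mixture-ratio error -> adjustment of the oxidizer-tank target pressure
        ratio_ctrl = PID(kp=50.0, ki=5.0, kd=0.0, dt=0.1)
        # Inner loop: tank-pressure error -> throttling-valve command
        press_ctrl = PID(kp=0.02, ki=0.005, kd=0.0, dt=0.1)

        def control_step(target_ratio, measured_ratio, target_pressure, measured_pressure):
            # One quasi-steady update: returns the new target pressure and the valve command
            target_pressure += ratio_ctrl.step(target_ratio - measured_ratio)
            valve_cmd = press_ctrl.step(target_pressure - measured_pressure)
            return target_pressure, valve_cmd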

  2. Rectification of depth measurement using pulsed thermography with logarithmic peak second derivative method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Zeng, Zhi; Shen, Jingling; Zhang, Cunlin; Zhao, Yuejin

    2018-03-01

    The logarithmic peak second derivative (LPSD) method is the most popular method for depth prediction in pulsed thermography, and it is widely accepted that the method is independent of defect size. The theoretical model for the LPSD method is based on the one-dimensional solution of heat conduction, which does not consider the effect of defect size. When a decay term accounting for the defect aspect ratio is introduced into the solution to correct for the three-dimensional thermal diffusion effect, the analytical model shows that the LPSD method is in fact affected by defect size. Furthermore, we constructed the relation between the characteristic time of the LPSD method and the defect aspect ratio, which was verified with experimental results for stainless steel and glass fiber reinforced plate (GFRP) samples. We also propose an improved LPSD method for depth prediction that accounts for the effect of defect size, and the rectification results for the stainless steel and GFRP samples are presented and discussed.
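
    The basic LPSD step can be sketched as below: take the second derivative of ln(T) with respect to ln(t) along a pixel's cooling curve, locate the peak time, and map that characteristic time to depth through a one-dimensional relation of the form z ∝ sqrt(α·t). The calibration constant and the function names are placeholders, and the aspect-ratio correction proposed in the paper is not reproduced here.

        import numpy as np

        def lpsd_peak_time(t, temperature):
            # Characteristic time at which d^2(ln T) / d(ln t)^2 peaks
            ln_t = np.log(t)
            ln_T = np.log(temperature)
            d1 = np.gradient(ln_T, ln_t)
            d2 = np.gradient(d1, ln_t)
            return t[np.argmax(d2)]

        def depth_from_peak_time(t_peak, diffusivity, c=1.0):
            # One-dimensional mapping z = c * sqrt(alpha * t_peak); c is a placeholder constant
            return c * np.sqrt(diffusivity * t_peak)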

  3. Method for nanomachining high aspect ratio structures

    DOEpatents

    Yun, Wenbing; Spence, John; Padmore, Howard A.; MacDowell, Alastair A.; Howells, Malcolm R.

    2004-11-09

    A nanomachining method for producing high-aspect-ratio precise nanostructures. The method begins by irradiating a wafer with an energetic charged-particle beam. Next, a layer of patterning material is deposited on one side of the wafer and a layer of etch stop or metal plating base is coated on the other side of the wafer. A desired pattern is generated in the patterning material on the top surface of the irradiated wafer using conventional electron-beam lithography techniques. Lastly, the wafer is placed in an appropriate chemical solution that produces a directional etch of the wafer only in the areas from which the resist has been removed by the patterning process. The high mechanical strength of the wafer materials compared to the organic resists used in conventional lithography techniques allows the transfer of the precise patterns into structures with aspect ratios much larger than those previously achievable.

  4. Forecast and analysis of the ratio of electric energy to terminal energy consumption for global energy internet

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Zhong, Ming; Cheng, Ling; Jin, Lu; Shen, Si

    2018-02-01

    Against the background of building a global energy internet, forecasting and analysing the ratio of electric energy to terminal energy consumption has both theoretical and practical significance. This paper first analyses the factors influencing the ratio of electric energy to terminal energy and then uses a combination method to forecast and analyse the global proportion of electric energy. A cointegration model for the proportion of electric energy is constructed using influencing factors such as the electricity price index, GDP, economic structure, energy use efficiency, and total population. Finally, a prediction map of the proportion of electric energy is obtained using a combination-forecasting model based on the multiple linear regression, trend analysis, and variance-covariance methods. This map describes the development trend of the proportion of electric energy over 2017-2050, and the proportion of electric energy in 2050 is analysed in detail using scenario analysis.
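
    One common way to realize the variance-covariance combination step is to weight the individual forecasts with minimum-variance weights derived from their error covariance matrix, as sketched below; the covariance values and the individual forecasts are made-up placeholders, and the paper's exact weighting scheme may differ.

        import numpy as np

        def combination_weights(error_cov):
            # Minimum-variance combination weights: w = S^-1 1 / (1' S^-1 1)
            inv = np.linalg.inv(error_cov)
            ones = np.ones(error_cov.shape[0])
            w = inv @ ones
            return w / (ones @ w)

        # Hypothetical error covariance for three forecasting models and their forecasts
        error_cov = np.array([[0.8, 0.2, 0.1],
                              [0.2, 1.1, 0.3],
                              [0.1, 0.3, 0.6]])
        forecasts = np.array([0.27, 0.30, 0.285])  # forecast electric-energy shares

        w = combination_weights(error_cov)
        print(w, w @ forecasts)  # weights and the combined forecast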

  5. Optimization of Gear Ratio in the Tidal Current Generation System based on Generated Energy

    NASA Astrophysics Data System (ADS)

    Naoi, Kazuhisa; Shiono, Mitsuhiro; Suzuki, Katsuyuki

    It is possible to predict the generating power of tidal current generation because of the tidal current's periodicity. This makes tidal current generation more advantageous than other renewable energy sources when the generation system is connected to the power grid and operated. In this paper, we propose a method to optimize the gear ratio and generator capacity, which are fundamental design items in a tidal current generation system composed of a Darrieus-type water turbine and a squirrel-cage induction generator coupled through a gear. The proposed method is applied to a tidal current generation system that includes the largest turbine we have developed and studied. This paper presents the optimum gear ratio and generator capacity that maximize the generated energy and verifies the effectiveness of the proposed method. The paper also proposes a method of selecting the maximum generating current velocity in order to reduce the generator capacity from the viewpoint of economics.

  6. Life Cycle analysis data and results for geothermal and other electricity generation technologies

    DOE Data Explorer

    Sullivan, John

    2013-06-04

    Life cycle analysis (LCA) is an environmental assessment method that quantifies the environmental performance of a product system over its entire lifetime, from cradle to grave. Based on a set of relevant metrics, the method is aptly suited for comparing the environmental performance of competing product systems. This file contains LCA data and results for electric power production, including geothermal power. The LCA for electric power has been broken down into two life cycle stages, namely the plant and fuel cycles. Relevant metrics include the energy ratio and the greenhouse gas (GHG) ratio, where the former is the ratio of system input energy to total lifetime electrical energy output and the latter is the sum of all incurred greenhouse gases (in CO2 equivalents) divided by the same energy output. Specific information included herein comprises material-to-power (MPR) ratios for a range of power technologies covering conventional thermoelectric, renewables (including three geothermal power technologies), and coproduced natural gas/geothermal power. For the geothermal power scenarios, the MPRs include the casing, cement, diesel, and water requirements for drilling wells and topside piping. Also included herein are energy and GHG ratios for the plant and fuel cycle stages for the range of considered electricity generating technologies. Some of this information consists of MPR data extracted directly from the literature or from models (e.g., ICARUS, a subset of ASPEN models), and the rest (energy and GHG ratios) consists of results calculated using GREET models and MPR data. MPR data for wells included herein were based on the Argonne well materials model and GETEM well count results.

  7. Prognostic value of inflammation-based markers in patients with pancreatic cancer administered gemcitabine and erlotinib.

    PubMed

    Lee, Jae Min; Lee, Hong Sik; Hyun, Jong Jin; Choi, Hyuk Soon; Kim, Eun Sun; Keum, Bora; Seo, Yeon Seok; Jeen, Yoon Tae; Chun, Hoon Jai; Um, Soon Ho; Kim, Chang Duck

    2016-07-15

    To evaluate the value of systemic inflammation-based markers as prognostic factors for advanced pancreatic cancer (PC). Data from 82 patients who underwent combination chemotherapy with gemcitabine and erlotinib for PC from 2011 to 2014 were collected retrospectively. Data that included the neutrophil-to-lymphocyte ratio (NLR), the platelet-to-lymphocyte ratio (PLR), and the C-reactive protein (CRP)-to-albumin (CRP/Alb) ratio were analyzed. Kaplan-Meier curves, and univariate and multivariate Cox proportional hazards regression analyses were used to identify the prognostic factors associated with progression-free survival (PFS) and overall survival (OS). The univariate analysis demonstrated the prognostic value of the NLR (P = 0.049) and the CRP/Alb ratio (P = 0.047) in relation to PFS, and a positive relationship between an increase in inflammation-based markers and a poor prognosis in relation to OS. The multivariate analysis determined that an increased NLR (hazard ratio = 2.76, 95%CI: 1.33-5.75, P = 0.007) is an independent prognostic factor for poor OS. There was no association between the PLR and the prognoses of patients who had received combination chemotherapy with gemcitabine and erlotinib. The Kaplan-Meier method and the log-rank test showed significantly worse outcomes in relation to PFS and OS in patients with an NLR > 5 or a CRP/Alb ratio > 5. Systemic inflammation-based markers, including increases in the NLR and the CRP/Alb ratio, may be useful for predicting PC prognoses.

  8. [The research and application of pretreatment method for matrix-assisted laser desorption ionization-time of flight mass spectrometry identification of filamentous fungi].

    PubMed

    Huang, Y F; Chang, Z; Bai, J; Zhu, M; Zhang, M X; Wang, M; Zhang, G; Li, X Y; Tong, Y G; Wang, J L; Lu, X X

    2017-08-08

    Objective: To establish and evaluate the feasibility of a pretreatment method, developed by our laboratory, for matrix-assisted laser desorption ionization-time of flight mass spectrometry identification of filamentous fungi. Methods: Three hundred and eighty strains of filamentous fungi collected from January 2014 to December 2016 were recovered and cultured on Sabouraud dextrose agar (SDA) plates at 28 ℃ to a mature state. In parallel, the fungi were cultured in liquid Sabouraud medium with the vertical rotation method recommended by Bruker and with a horizontal vibration method developed by our laboratory until an adequate quantity of colonies was observed. For the strains cultured with the three methods, protein was extracted with a modified magnetic bead-based extraction method for mass spectrometric identification. Results: For the 380 fungal strains, culture took 3-10 d with the SDA method, and the identification rates at the species and genus levels were 47% and 81%, respectively; culture took 5-7 d with the vertical rotation method, with species- and genus-level identification rates of 76% and 94%, respectively; and culture took 1-2 d with the horizontal vibration method, with species- and genus-level identification rates of 96% and 99%, respectively. The difference between the horizontal vibration method and the SDA culture method was statistically significant (χ²=39.026, P<0.01), as was the difference between the horizontal vibration method and the vertical rotation method recommended by Bruker (χ²=11.310, P<0.01). Conclusion: The horizontal vibration method and the modified magnetic bead-based extraction method developed by our laboratory are superior to the method recommended by Bruker and the SDA culture method in terms of identification capacity for filamentous fungi, and can be applied in the clinic.

  9. DISENTANGLING AGN AND STAR FORMATION ACTIVITY AT HIGH REDSHIFT USING HUBBLE SPACE TELESCOPE GRISM SPECTROSCOPY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridge, Joanna S.; Zeimann, Gregory R.; Trump, Jonathan R.

    2016-08-01

    Differentiating between active galactic nucleus (AGN) activity and star formation in z ∼ 2 galaxies is difficult because traditional methods, such as line-ratio diagnostics, change with redshift, while multi-wavelength methods (X-ray, radio, IR) are sensitive to only the brightest AGNs. We have developed a new method for spatially resolving emission lines using the Hubble Space Telescope/Wide Field Camera 3 G141 grism spectra and quantifying AGN activity through the spatial gradient of the [O iii]/H β line ratio. Through detailed simulations, we show that our novel line-ratio gradient approach identifies ∼40% more low-mass and obscured AGNs than obtained by classical methods. Based on our simulations, we developed a relationship that maps the stellar mass, star formation rate, and measured [O iii]/H β gradient to the AGN Eddington ratio. We apply our technique to previously studied stacked samples of galaxies at z ∼ 2 and find that our results are consistent with these studies. This gradient method will also be able to inform other areas of galaxy evolution science, such as inside-out quenching and metallicity gradients, and will be widely applicable to future spatially resolved James Webb Space Telescope data.

  10. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  11. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
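
    To illustrate the ratio-form objective at the heart of such spatial filter designs, the sketch below solves a generalized eigenvalue problem that maximizes w'C1w / w'C2w over the two class covariances and then extracts log-variance features. This is a generic CSP-like construction used only to show the ratio form; it is not the paper's unified Fisher's-ratio-in-feature-space formulation, and all names are illustrative.

        import numpy as np
        from scipy.linalg import eigh

        def ratio_maximizing_filters(class1_trials, class2_trials, n_filters=2):
            # Spatial filters maximizing the ratio-form objective w'C1w / w'C2w
            # trials arrays are shaped (n_trials, n_channels, n_samples)
            def avg_cov(trials):
                return np.mean([np.cov(t) for t in trials], axis=0)

            c1, c2 = avg_cov(class1_trials), avg_cov(class2_trials)
            vals, vecs = eigh(c1, c2)          # generalized eigenproblem C1 w = lambda C2 w
            order = np.argsort(vals)[::-1]
            return vecs[:, order[:n_filters]]  # columns are spatial filters

        def log_variance_features(trials, filters):
            # Project each trial through the filters and take log-variance as the feature
            return np.array([np.log(np.var(filters.T @ t, axis=1)) for t in trials])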

  12. Electrochemical Deposition of Conformal and Functional Layers on High Aspect Ratio Silicon Micro/Nanowires.

    PubMed

    Ozel, Tuncay; Zhang, Benjamin A; Gao, Ruixuan; Day, Robert W; Lieber, Charles M; Nocera, Daniel G

    2017-07-12

    Development of new synthetic methods for the modification of nanostructures has accelerated materials design advances to furnish complex architectures. Structures based on one-dimensional (1D) silicon (Si) structures synthesized using top-down and bottom-up methods are especially prominent for diverse applications in chemistry, physics, and medicine. Yet further elaboration of these structures with distinct metal-based and polymeric materials, which could open up new opportunities, has been difficult. We present a general electrochemical method for the deposition of conformal layers of various materials onto high-aspect-ratio Si micro- and nanowire arrays. The electrochemical deposition of a library of coaxial layers comprising metals, metal oxides, and organic/inorganic semiconductors demonstrates the materials generality of the synthesis technique. Depositions may be performed on wire arrays with varying diameter (70 nm to 4 μm), pitch (5 μm to 15 μm), aspect ratio (4:1 to 75:1), shape (cylindrical, conical, hourglass), resistivity (0.001-0.01 to 1-10 ohm·cm), and substrate orientation. Anisotropic physical etching of wires with one or more coaxial shells yields 1D structures with exposed tips that can be further site-specifically modified by an electrochemical deposition approach. The electrochemical deposition methodology described herein features a wafer-scale synthesis platform for the preparation of multifunctional nanoscale devices based on a 1D Si substrate.

  13. Establishing Ion Ratio Thresholds Based on Absolute Peak Area for Absolute Protein Quantification using Protein Cleavage Isotope Dilution Mass Spectrometry

    PubMed Central

    Loziuk, Philip L.; Sederoff, Ronald R.; Chiang, Vincent L.; Muddiman, David C.

    2014-01-01

    Quantitative mass spectrometry has become central to the fields of proteomics and metabolomics. Selected reaction monitoring is a widely used method for the absolute quantification of proteins and metabolites. This method renders high specificity using several product ions measured simultaneously. With growing interest in the quantification of molecular species in complex biological samples, confident identification and quantitation have been of particular concern. A method to confirm the purity or contamination of product ion spectra has become necessary for achieving accurate and precise quantification. Ion abundance ratio assessments were introduced to alleviate some of these issues. Ion abundance ratios are based on the consistent relative abundance (RA) of specific product ions with respect to the total abundance of all product ions. To date, no standardized method of implementing ion abundance ratios has been established, and the thresholds by which product ion contamination is confirmed vary widely and are often arbitrary. This study sought to establish criteria by which the relative abundance of product ions can be evaluated in an absolute quantification experiment. The findings suggest that evaluation of the absolute ion abundance for any given transition is necessary in order to effectively implement RA thresholds. Overall, the variation of the RA value was observed to be relatively constant beyond an absolute threshold ion abundance. Finally, these RA values were observed to fluctuate significantly over a 3-year period, suggesting that they should be assessed as close as possible to the time at which data are collected for quantification. PMID:25154770
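
    A simple way to act on these recommendations is sketched below: compute each transition's relative abundance, skip the RA check when the absolute peak area is below a minimum threshold, and otherwise flag transitions whose RA deviates from the reference by more than a tolerance. The tolerance, threshold, and naming are illustrative assumptions, not values from the study.

        def check_ion_ratios(peak_areas, reference_ra, rel_tol=0.20, min_abs_area=1e4):
            # peak_areas:   dict of transition name -> absolute peak area
            # reference_ra: dict of transition name -> expected relative abundance
            total = sum(peak_areas.values())
            flags = {}
            for name, area in peak_areas.items():
                ra = area / total
                expected = reference_ra[name]
                if area < min_abs_area:
                    flags[name] = "below absolute-abundance threshold (RA not evaluated)"
                elif abs(ra - expected) > rel_tol * expected:
                    flags[name] = f"RA {ra:.3f} outside tolerance of expected {expected:.3f}"
                else:
                    flags[name] = "ok"
            return flags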

  14. Product Quality Research Institute evaluation of cascade impactor profiles of pharmaceutical aerosols: part 2--evaluation of a method for determining equivalence.

    PubMed

    Christopher, David; Adams, Wallace P; Lee, Douglas S; Morgan, Beth; Pan, Ziqing; Singh, Gur Jai Pal; Tsong, Yi; Lyapustina, Svetlana

    2007-01-19

    The purpose of this article is to present the thought process, methods, and interim results of a PQRI Working Group, which was charged with evaluating the chi-square ratio test as a potential method for determining in vitro equivalence of aerodynamic particle size distribution (APSD) profiles obtained from cascade impactor measurements. Because this test was designed with the intention of being used as a tool in regulatory review of drug applications, the capability of the test to detect differences in APSD profiles correctly and consistently was evaluated in a systematic way across a designed space of possible profiles. To establish a "base line," properties of the test in the simplest case of pairs of identical profiles were studied. Next, the test's performance was studied with pairs of profiles, where some difference was simulated in a systematic way on a single deposition site using realistic product profiles. The results obtained in these studies, which are presented in detail here, suggest that the chi-square ratio test in itself is not sufficient to determine equivalence of particle size distributions. This article, therefore, introduces the proposal to combine the chi-square ratio test with a test for impactor-sized mass based on Population Bioequivalence and describes methods for evaluating discrimination capabilities of the combined test. The approaches and results described in this article elucidate some of the capabilities and limitations of the original chi-square ratio test and provide rationale for development of additional tests capable of comparing APSD profiles of pharmaceutical aerosols.

  15. Antiretroviral Regimens and CD4/CD8 Ratio Normalization in HIV-Infected Patients during the Initial Year of Treatment: A Cohort Study

    PubMed Central

    De Salvador-Guillouët, F.; Sakarovitch, C.; Durant, J.; Risso, K.; Demonchy, E.; Roger, P. M.; Fontas, E.

    2015-01-01

    Background As CD4/CD8 ratio inversion has been associated with non-AIDS morbidity and mortality, predictors of ratio normalization after cART need to be studied. Here, we aimed to investigate the association of antiretroviral regimens with CD4/CD8 ratio normalization within an observational cohort. Methods We selected, from a French cohort at the Nice University Hospital, HIV-1 positive treatment-naive patients who initiated cART between 2000 and 2011 with a CD4/CD8 ratio <1. Association between cART and ratio normalization (>1) in the first year was assessed using multivariate logistic regression models. Specific association with INSTI-containing regimens was examined. Results 567 patients were included in the analyses; the median CD4/CD8 ratio was 0.36. Respectively, 52.9%, 29.6% and 10.4% initiated a PI-based, NNRTI-based or NRTI-based cART regimens. About 8% of the population started an INSTI-containing regimen. 62 (10.9%) patients achieved a CD4/CD8 ratio ≥1 (N group). cART regimen was not associated with normalization when coded as PI-, NNRTI- or NRTI-based regimen. However, when considering INSTI-containing regimens alone, there was a strong association with normalization [OR, 7.67 (2.54–23.2)]. Conclusions Our findings suggest an association between initiation of an INSTI-containing regimen and CD4/CD8 ratio normalization at one year in naïve patients. Should it be confirmed in a larger population, it would be another argument for their use as first-line regimen as it is recommended in the recent update of the “Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents”. PMID:26485149

  16. Evaluation method for the potential functionome harbored in the genome and metagenome

    PubMed Central

    2012-01-01

    Background One of the main goals of genomic analysis is to elucidate the comprehensive functions (functionome) of individual organisms or of a whole community in various environments. However, a standard evaluation method for discerning the functional potentials harbored within the genome or metagenome has not yet been established. We have developed a new evaluation method for the potential functionome, based on the completion ratio of Kyoto Encyclopedia of Genes and Genomes (KEGG) functional modules. Results The distribution of the completion ratio of the KEGG functional modules in 768 prokaryotic species varied greatly with the kind of module, and all modules fell primarily into four patterns (universal, restricted, diversified and non-prokaryotic modules), indicating the universal and unique nature of each module and the versatility of the KEGG Orthology (KO) identifiers mapped to each one. The module completion ratio in 8 phenotypically different bacilli revealed that some modules were shared only in phenotypically similar species. Metagenomes of human gut microbiomes from 13 healthy individuals, previously determined by the Sanger method, were analyzed based on the module completion ratio. The results led to new findings about the nutritional preferences of gut microbes, believed to be one of the mutualistic adaptations of gut microbiomes that avoid nutritional competition with the host. Conclusions The method developed in this study can characterize the functionome harbored in genomes and metagenomes. As this method also provides taxonomic information from KEGG modules as well as the gene hosts constructing the modules, interpretation of completion profiles is simplified, and we could identify the complementarity between biochemical functions in human hosts and the nutritional preferences of human gut microbiomes. Thus, our method has the potential to be a powerful tool for comparative functional analysis in genomics and metagenomics, able to target unknown environments containing various uncultivable microbes within unidentified phyla. PMID:23234305
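
    The core metric can be sketched as follows: represent a KEGG module as an ordered set of reaction steps, each satisfiable by one or more KO identifiers, and compute the fraction of steps covered by the KOs annotated in a genome or metagenome. Real KEGG module definitions (with complexes and alternative paths) are more elaborate than this simplification, and the KO identifiers in the example are placeholders.

        def module_completion_ratio(module_steps, genome_kos):
            # module_steps: list of sets, each set holding KO identifiers that can fulfil that step
            # genome_kos:   set of KO identifiers annotated in the genome or metagenome
            completed = sum(1 for step in module_steps if step & genome_kos)
            return completed / len(module_steps)

        # Illustrative example (placeholder KO identifiers, not a real KEGG module)
        module = [{"K00001"}, {"K00002", "K00003"}, {"K00004"}, {"K00005"}]
        kos_found = {"K00001", "K00003", "K00005"}
        print(module_completion_ratio(module, kos_found))  # 0.75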

  17. A Biomechanical Model for Lung Fibrosis in Proton Beam Therapy

    NASA Astrophysics Data System (ADS)

    King, David J. S.

    The physics of protons makes them well suited to conformal radiotherapy due to the well-known Bragg peak effect. Uncertainties in a proton's stopping power can, however, cause a small amount of dose to overflow into an organ at risk (OAR). Previous models for calculating normal tissue complication probabilities (NTCPs) relied on the equivalent uniform dose (EUD) model, in which the organ was treated as 1/3, 2/3, or whole-organ irradiation. However, the problem of dealing with volumes <1/3 of the total volume renders this EUD-based approach no longer applicable. In this work the case for an experimental data-based replacement at low volumes is investigated. Lung fibrosis is investigated as an NTCP effect typically arising from dose overflow during tumour irradiation at the spinal base. Considering a 3D geometrical model of the lungs, irradiations are modelled with variable parameters of dose overflow. To calculate NTCPs without the EUD model, experimental data from the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) are used. Additional side projects are also investigated, introduced, and explained at various points. A typical radiotherapy course of 30 fractions of 2 Gy is simulated. A range of target volume geometries and irradiation types is investigated. Investigations with X-rays found the majority of the data-point ratios (ratios of EUD values found from the calculation-based and data-based methods) within 20% of unity, showing relatively close agreement. The ratios did not systematically favour one particular type of predictive method. No Vx metric was found to consistently outperform another. In certain cases there is good agreement and in other cases not, a pattern that can be anticipated from the literature. The overall results lead to the conclusion that there is no reason to discount the use of the data-based predictive method, particularly as a low-volume replacement predictive method.

  18. Frequency analysis of electroencephalogram recorded from a bottlenose dolphin (Tursiops truncatus) with a novel method during transportation by truck

    PubMed Central

    Tamura, Shinichi; Okada, Yasunori; Morimoto, Shigeru; Ohta, Mitsuaki; Uchida, Naoyuki

    2010-01-01

    To obtain information on the correlation between an electroencephalogram (EEG) and the state of a dolphin, we developed a noninvasive method for recording the EEG of a bottlenose dolphin (Tursiops truncatus), together with a method for extracting the true EEG from the recorded EEG (R-EEG) based on a human EEG recording method, and then carried out frequency analysis during transportation by truck. The frequencies detected in the dolphin's EEG during apparent wakefulness were conveniently divided into three bands (5–15, 15–25, and 25–40 Hz) based on the spectrum profiles. Analysis of the relationship between power ratio and movement revealed that, when the dolphin was quiet, power was evenly distributed among the three bands. These results suggest that the EEG of a dolphin can be detected accurately with this method and that frequency analysis of the detected EEG provides useful information for understanding the central nervous activity of these animals. PMID:20429047
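
    A band power ratio of the kind analyzed above can be estimated from a Welch power spectral density; the snippet below is a minimal sketch using the three bands quoted in the abstract, with a synthetic signal standing in for the dolphin recording.

```python
import numpy as np
from scipy.signal import welch

def band_power_ratios(eeg, fs, bands=((5, 15), (15, 25), (25, 40))):
    """Relative power in each band, computed from a Welch PSD estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    in_range = (freqs >= bands[0][0]) & (freqs <= bands[-1][1])
    total = np.trapz(psd[in_range], freqs[in_range])
    ratios = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs <= hi)
        ratios.append(np.trapz(psd[mask], freqs[mask]) / total)
    return ratios

# Synthetic ten-second example at 200 Hz sampling.
fs = 200
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(band_power_ratios(eeg, fs))
```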

  19. Image-Guided Rendering with an Evolutionary Algorithm Based on Cloud Model

    PubMed Central

    2018-01-01

    The process of creating nonphotorealistic rendering images and animations can be enjoyable if a useful method is involved. We use an evolutionary algorithm to generate painterly styles of images. Given an input image as the reference target, a cloud model-based evolutionary algorithm is evolved to rerender the target image with nonphotorealistic effects. The resulting animations have an interesting characteristic in which the target slowly emerges from a set of strokes. A number of experiments are performed, along with visual comparisons, quantitative comparisons, and user studies. The average normalized feature-similarity scores for the standard pixel-wise peak signal-to-noise ratio, mean structural similarity, feature similarity, and gradient-similarity-based metrics are 0.486, 0.628, 0.579, and 0.640, respectively. The average normalized aesthetic-measure scores for Benford's law, fractal dimension, global contrast factor, and Shannon's entropy are 0.630, 0.397, 0.418, and 0.708, respectively. Compared with a similar method, the average score of the proposed method is higher by approximately 10% for every metric except peak signal-to-noise ratio. The results suggest that the proposed method can generate appealing images and animations with different styles by choosing different strokes, and it may inspire graphic designers interested in computer-based evolutionary art. PMID:29805440

  20. Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine

    NASA Astrophysics Data System (ADS)

    Gao, Feng; Dong, Junyu; Li, Bo; Xu, Qizhi; Xie, Cui

    2016-10-01

    Change detection is of high practical value to hazard assessment, crop growth monitoring, and urban sprawl detection. A synthetic aperture radar (SAR) image is an ideal information source for change detection since it is independent of atmospheric and sunlight conditions. Existing SAR image change detection methods usually generate a difference image (DI) first and use clustering methods to classify the pixels of the DI into changed and unchanged classes; some useful information may be lost in the DI generation process. This paper proposes an SAR image change detection method based on the neighborhood-based ratio (NR) and the extreme learning machine (ELM). The NR operator is used to obtain pixels of interest that have a high probability of being changed or unchanged. Image patches centered at these pixels are then generated, and an ELM is trained on these patches. Finally, pixels in both original SAR images are classified by the pretrained ELM model, and the preclassification result and the ELM classification result are combined to form the final change map. Experimental results on three real SAR image datasets and one simulated dataset show that the proposed method is robust to speckle noise and effective at detecting change information in multitemporal SAR images.
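
    The sketch below illustrates a simplified neighborhood ratio computed from local means of the two images; it is an illustration of the general idea rather than the exact NR operator defined in the paper, and the images and threshold are made up.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_ratio(img1, img2, size=3, eps=1e-6):
    """Simplified neighborhood ratio: ratio of local means, mapped to [0, 1],
    where values near 0 suggest change and values near 1 suggest no change."""
    m1 = uniform_filter(img1.astype(float), size=size)
    m2 = uniform_filter(img2.astype(float), size=size)
    return np.minimum(m1, m2) / (np.maximum(m1, m2) + eps)

# Two toy "SAR" intensity images with a changed block in the corner.
a = np.ones((16, 16))
b = np.ones((16, 16))
b[:4, :4] = 5.0
di = neighborhood_ratio(a, b)
candidates = di < 0.5          # pixels with a high probability of change
print(candidates.sum())
```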

  1. Multilayer material characterization using thermographic signal reconstruction

    NASA Astrophysics Data System (ADS)

    Shepard, Steven M.; Beemer, Maria Frendberg

    2016-02-01

    Active-thermography has become a well-established Nondestructive Testing (NDT) method for detection of subsurface flaws. In its simplest form, flaw detection is based on visual identification of contrast between a flaw and local intact regions in an IR image sequence of the surface temperature as the sample responds to thermal stimulation. However, additional information and insight can be obtained from the sequence, even in the absence of a flaw, through analysis of the logarithmic derivatives of individual pixel time histories using the Thermographic Signal Reconstruction (TSR) method. For example, the response of a flaw-free multilayer sample to thermal stimulation can be viewed as a simple transition between the responses of infinitely thick samples of the individual constituent layers over the lifetime of the thermal diffusion process. The transition is represented compactly and uniquely by the logarithmic derivatives, based on the ratio of thermal effusivities of the layers. A spectrum of derivative responses relative to thermal effusivity ratios allows prediction of the time scale and detectability of the interface, and measurement of the thermophysical properties of one layer if the properties of the other are known. A similar transition between steady diffusion states occurs for flat bottom holes, based on the hole aspect ratio.
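
    The logarithmic derivative used by TSR can be approximated numerically from a pixel's cooling curve; the sketch below assumes an ideal one-over-square-root-of-time surface response, for which the first logarithmic derivative is close to -0.5.

```python
import numpy as np

def log_derivative(time_s, temperature_rise):
    """First logarithmic derivative d(ln T)/d(ln t) of a surface cooling curve.
    For an ideal semi-infinite solid after flash heating this is close to -0.5."""
    ln_t = np.log(time_s)
    ln_T = np.log(temperature_rise)
    return np.gradient(ln_T, ln_t)

t = np.linspace(0.01, 2.0, 200)
T = 1.0 / np.sqrt(t)             # ideal 1/sqrt(t) surface response
print(log_derivative(t, T)[:3])  # ~ -0.5 everywhere
```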

  2. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.

  3. Survival analysis using inverse probability of treatment weighted methods based on the generalized propensity score.

    PubMed

    Sugihara, Masahiro

    2010-01-01

    In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one of the approaches utilized to adjust for confounding factors between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare the group differences of IPTW Kaplan-Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. As causal treatment effects, the hazard ratio can be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can be easily extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results of the proposed method suggested that pravastatin treatment reduces the risk of CHD and that compliance to pravastatin treatment is important for the prevention of CHD. (c) 2009 John Wiley & Sons, Ltd.
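
    A minimal sketch of the weighting step is given below: a multinomial logistic model stands in for the generalized propensity score and stabilized inverse-probability weights are formed from it. This is not the authors' exact estimator, the subsequent weighted log-rank or Kaplan-Meier step is omitted, and all data and model choices are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_iptw_weights(X, treatment):
    """Stabilized weights w_i = P(T = t_i) / P(T = t_i | X_i) for a
    multi-valued treatment, using a multinomial logistic generalized
    propensity score model."""
    gps_model = LogisticRegression(max_iter=1000).fit(X, treatment)
    p_cond = gps_model.predict_proba(X)                  # P(T = t | X)
    p_marg = np.bincount(treatment) / len(treatment)     # P(T = t)
    idx = np.arange(len(treatment))
    return p_marg[treatment] / p_cond[idx, treatment]

# Hypothetical data: 200 subjects, 2 covariates, 3 ordered treatment levels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
treatment = rng.integers(0, 3, size=200)
w = stabilized_iptw_weights(X, treatment)
print(w.mean())   # stabilized weights average close to 1
```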

  4. Can T1w/T2w ratio be used as a myelin-specific measure in subcortical structures? Comparisons between FSE-based T1w/T2w ratios, GRASE-based T1w/T2w ratios and multi-echo GRASE-based myelin water fractions.

    PubMed

    Uddin, Md Nasir; Figley, Teresa D; Marrie, Ruth Ann; Figley, Chase R

    2018-03-01

    Given the growing popularity of T1-weighted/T2-weighted (T1w/T2w) ratio measurements, the objective of the current study was to evaluate the concordance between T1w/T2w ratios obtained using conventional fast spin echo (FSE) versus combined gradient and spin echo (GRASE) sequences for T2w image acquisition, and to compare the resulting T1w/T2w ratios with histologically validated myelin water fraction (MWF) measurements in several subcortical brain structures. In order to compare these measurements across a relatively wide range of myelin concentrations, whole-brain T1w magnetization prepared rapid acquisition gradient echo (MPRAGE), T2w FSE and three-dimensional multi-echo GRASE data were acquired from 10 participants with multiple sclerosis at 3 T. Then, after high-dimensional, non-linear warping, region of interest (ROI) analyses were performed to compare T1w/T2w ratios and MWF estimates (across participants and brain regions) in 11 bilateral white matter (WM) and four bilateral subcortical grey matter (SGM) structures extracted from the JHU_MNI_SS 'Eve' atlas. Although the GRASE sequence systematically underestimated T1w/T2w values compared to the FSE sequence (revealed by Bland-Altman and mountain plots), linear regressions across participants and ROIs revealed consistently high correlations between the two methods (r² = 0.62 for all ROIs, r² = 0.62 for WM structures and r² = 0.73 for SGM structures). However, correlations between either FSE-based or GRASE-based T1w/T2w ratios and MWFs were extremely low in WM structures (FSE-based, r² = 0.000020; GRASE-based, r² = 0.0014), low across all ROIs (FSE-based, r² = 0.053; GRASE-based, r² = 0.029) and moderate in SGM structures (FSE-based, r² = 0.20; GRASE-based, r² = 0.17). Overall, our findings indicated a high degree of correlation (but not equivalence) between FSE-based and GRASE-based T1w/T2w ratios, and low correlations between T1w/T2w ratios and MWFs. This suggests that the two T1w/T2w ratio approaches measure similar facets of subcortical tissue microstructure, whereas T1w/T2w ratios and MWFs appear to be sensitized to different microstructural properties. On this basis, we conclude that multi-echo GRASE sequences can be used in future studies to efficiently elucidate both general (T1w/T2w ratio) and myelin-specific (MWF) tissue characteristics. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
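
    One way to realize the side-spectrum curve fit described above is to fit a single-degree-of-freedom PSD shape to the frequencies on one side of a modal peak; the sketch below does this with a generic least-squares fit and synthetic data, not the authors' pattern-search and clustering scheme.

```python
import numpy as np
from scipy.optimize import curve_fit

def sdof_psd(f, fn, zeta, scale):
    """Power spectral density shape of a single-degree-of-freedom oscillator."""
    return scale / ((fn**2 - f**2) ** 2 + (2.0 * zeta * fn * f) ** 2)

# Synthetic PSD around a 2.0 Hz mode with 2% damping, plus mild noise.
f = np.linspace(1.0, 3.0, 400)
psd = sdof_psd(f, 2.0, 0.02, 1.0) * (1 + 0.05 * np.random.randn(f.size))

# Fit only the right-side spectrum (frequencies above the peak), one of the two
# regions described above; the initial guesses are illustrative.
right = f >= f[np.argmax(psd)]
popt, _ = curve_fit(sdof_psd, f[right], psd[right], p0=[2.0, 0.05, 1.0])
print("estimated damping ratio:", popt[1])
```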

  6. Estimating Transmissivity from the Water Level Fluctuations of a Sinusoidally Forced Well

    USGS Publications Warehouse

    Mehnert, E.; Valocchi, A.J.; Heidari, M.; Kapoor, S.G.; Kumar, P.

    1999-01-01

    The water levels in wells are known to fluctuate in response to earth tides and changes in atmospheric pressure. These water level fluctuations can be analyzed to estimate transmissivity (T). A new method to estimate transmissivity, which assumes that the atmospheric pressure varies in a sinusoidal fashion, is presented. Data analysis for this simplified method involves using a set of type curves and estimating the ratio of the amplitudes of the well response over the atmospheric pressure. Type curves for this new method were generated based on a model for ground water flow between the well and aquifer developed by Cooper et al. (1965). Data analysis with this method confirmed these published results: (1) the amplitude ratio is a function of transmissivity, the well radius, and the frequency of the sinusoidal oscillation; and (2) the amplitude ratio is a weak function of storativity. Compared to other methods, the developed method involves simpler, more intuitive data analysis and allows shorter data sets to be analyzed. The effect of noise on estimating the amplitude ratio was evaluated and found to be more significant at lower T. For aquifers with low T, noise was shown to mask the water level fluctuations induced by atmospheric pressure changes. In addition, reducing the length of the data series did not affect the estimate of T, but the variance of the estimate was higher for the shorter series of noisy data.
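
    The amplitude ratio central to this method can be estimated by least-squares fitting sinusoids of the known forcing frequency to both series; the sketch below uses made-up hourly records and a semidiurnal forcing purely as an example.

```python
import numpy as np

def sinusoid_amplitude(signal, t, freq):
    """Least-squares amplitude of a sinusoid of known frequency in a series."""
    design = np.column_stack([np.sin(2 * np.pi * freq * t),
                              np.cos(2 * np.pi * freq * t),
                              np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(design, signal, rcond=None)
    return np.hypot(coef[0], coef[1])

# Hypothetical hourly records over 10 days; forcing at 2 cycles/day.
t = np.arange(0, 10, 1 / 24.0)                  # days
baro = 5.0 * np.sin(2 * np.pi * 2 * t)          # atmospheric pressure forcing
well = 1.5 * np.sin(2 * np.pi * 2 * t + 0.3)    # damped, phase-shifted well response
amplitude_ratio = sinusoid_amplitude(well, t, 2) / sinusoid_amplitude(baro, t, 2)
print(amplitude_ratio)   # ~0.3, compared against type curves to estimate T
```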

  7. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
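
    The trace ratio subproblem behind MKL-TR, maximizing trace(V'AV)/trace(V'BV) over orthonormal V, is commonly solved with an iterative eigen-decomposition scheme; the sketch below shows that generic iteration on toy scatter matrices, not the full MKL-TR algorithm.

```python
import numpy as np

def trace_ratio(A, B, dim, iters=50, tol=1e-10):
    """Find orthonormal V (d x dim) maximizing trace(V'AV) / trace(V'BV)
    by the standard iterative scheme: eigen-decompose A - lambda*B and
    update lambda with the ratio achieved by the top eigenvectors."""
    lam = 0.0
    for _ in range(iters):
        w, U = np.linalg.eigh(A - lam * B)
        V = U[:, np.argsort(w)[::-1][:dim]]          # top-dim eigenvectors
        new_lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return V, lam

# Toy symmetric matrices standing in for target/regularized scatter matrices.
rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6)); A = M @ M.T
N = rng.normal(size=(6, 6)); B = N @ N.T + 1e-3 * np.eye(6)
V, ratio = trace_ratio(A, B, dim=2)
print(ratio)
```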

  8. Measuring the load-bearing ratio between mucosa and abutments beneath implant- and tooth-supported overdentures: an in vivo preliminary study.

    PubMed

    Ando, Takanori; Maeda, Yoshinobu; Wada, Masahiro; Gonda, Tomoya

    2011-01-01

    The aim of this study was to establish a method for in vivo examination of the load-bearing ratio between mucosa and abutments beneath an overdenture. Two patients, each wearing either a four-tooth-supported or a four-implant-supported overdenture, were enrolled in this study. Recordings were performed with the metal framework only or with the metal framework and a denture base. The force value with the framework only was designated as 100%, and the tissue-supporting ratio (TSR) with the denture base was calculated. The TSR was approximately 30% to 40% in both subjects, regardless of the load. These data suggest that measurement of the TSR beneath an overdenture is feasible.

  9. Vertical and horizontal proportions of the face and their correlation to phi among Indians in Moradabad population: A survey.

    PubMed

    Anand, Shruti; Tripathi, Siddhi; Chopra, Anubhav; Khaneja, Karan; Agarwal, Swatantra

    2015-01-01

    The purpose was to examine the existence of divine proportions among Indian faces in the Moradabad population. In total, 100 patients (50 males, 50 females) aged 25-45 years were selected for the study. All facial photographs were analyzed using the method of Ricketts for assessing divine proportions in the vertical and transverse facial planes. Six horizontal and seven vertical ratios were determined and compared with the phi ratio. The horizontal ratio results showed that three male and female ratios were not significantly different from each other (P > 0.05), while the interchilion/nose width ratio differed highly significantly (P < 0.001). The horizontal mean ratios for both females and males differed highly significantly from the phi ratio (P < 0.001), except for the interchilion/interdacryon ratio, which was significant (P < 0.05) for females and not significant (P > 0.05) for males. The vertical ratio results showed a highly significant difference (P < 0.001) between the mean ratios of males and females for the forehead height/stomion-soft menton ratio and no significant difference for two ratios. All the vertical mean ratios for both groups differed highly significantly from the phi ratio (P < 0.001), except for the intereye-soft menton/intereye-stomion ratio, which was significant (P < 0.05) for the female group and not significant (P > 0.05) for the male group. Although the golden proportion is a prominent and recurring theme in esthetics, it should not be embraced as the only measure of human beauty to the exclusion of other factors.

  10. Dynamic magnetic resonance imaging method based on golden-ratio cartesian sampling and compressed sensing.

    PubMed

    Li, Shuo; Zhu, Yanchun; Xie, Yaoqin; Gao, Song

    2018-01-01

    Dynamic magnetic resonance imaging (DMRI) is used to noninvasively trace the movements of organs and the process of drug delivery. The results can provide quantitative or semiquantitative pathology-related parameters, thus giving DMRI great potential for clinical applications. However, conventional DMRI techniques suffer from low temporal resolution and long scan time owing to the limitations of the k-space sampling scheme and image reconstruction algorithm. In this paper, we propose a novel DMRI sampling scheme based on a golden-ratio Cartesian trajectory in combination with a compressed sensing reconstruction algorithm. The results of two simulation experiments, designed according to the two major DMRI techniques, showed that the proposed method can improve the temporal resolution and shorten the scan time and provide high-quality reconstructed images.
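
    A golden-ratio Cartesian ordering can be realized by stepping through the phase-encode lines by the golden-ratio fraction of k-space; the sketch below is one plausible implementation of that ordering, not necessarily the exact scheme proposed in the paper.

```python
import numpy as np

def golden_ratio_cartesian_order(n_phase_encodes, n_samples):
    """Phase-encode line indices visited in a golden-ratio Cartesian ordering:
    each step advances by the golden-ratio fraction of k-space, giving roughly
    uniform, incoherent coverage for any contiguous time window."""
    golden_fraction = (np.sqrt(5.0) - 1.0) / 2.0      # ~0.618
    positions = np.mod(np.arange(n_samples) * golden_fraction, 1.0)
    return np.floor(positions * n_phase_encodes).astype(int)

order = golden_ratio_cartesian_order(n_phase_encodes=256, n_samples=32)
print(order[:10], "unique lines in first 32 samples:", len(set(order)))
```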

  11. Improved modified energy ratio method using a multi-window approach for accurate arrival picking

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun

    2017-04-01

    To accurately identify the locations of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method computes an additional MER value using a third window (in addition to the original MER window) and applies a moving-average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that, for the synthetic datasets, the IMER method yields an accuracy of around 80% within an error of five samples. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
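
    The baseline MER picker referred to above is usually described as an energy ratio between trailing and leading windows, scaled by the instantaneous amplitude and cubed; the sketch below implements that common formulation on a synthetic trace, without the third window or smoothing added by IMER.

```python
import numpy as np

def mer_pick(trace, window):
    """Modified energy ratio pick: er[i] = (post-window energy)/(pre-window energy),
    mer[i] = (er[i] * |x[i]|)**3; the first-arrival estimate is argmax(mer)."""
    x = np.asarray(trace, dtype=float)
    e = x**2
    mer = np.zeros_like(x)
    for i in range(window, len(x) - window):
        pre = e[i - window:i].sum() + 1e-12
        post = e[i:i + window].sum()
        mer[i] = (post / pre * abs(x[i])) ** 3
    return int(np.argmax(mer)), mer

# Synthetic noisy trace with an arrival near sample 500.
rng = np.random.default_rng(2)
trace = 0.1 * rng.normal(size=1000)
n = np.arange(500)
trace[500:] += np.sin(2 * np.pi * 0.05 * n) * np.exp(-0.005 * n)
pick, _ = mer_pick(trace, window=50)
print(pick)
```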

  12. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight to the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID:25750470

  13. Dynamic disulfide/thiol homeostasis in lead exposure denoted by a novel method.

    PubMed

    Bal, Ceylan; Ağış, Erol Rauf; Gündüzöz, Meşide; Büyükşekerci, Murat; Alışık, Murat; Şen, Orhan; Tutkun, Engin; Yılmaz, Ömer Hınç

    2017-05-01

    Lead is a toxic heavy metal, and prevention of human exposure to lead has not been accomplished yet. The toxicity of lead is continually being investigated, and the molecular mechanisms of its toxicity are still being revealed. In this study, we used a novel method to examine thiol (SH)/disulfide homeostasis in workers who were occupationally exposed to lead. A total of 80 such workers and 70 control subjects were evaluated, and their native and total SH values were measured in serum using a novel method; their blood lead levels were also assessed. The novel method used for SH measurements was based on the principle of measuring native SH, after which disulfide bonds were reduced and total SHs were measured. These measurements allowed us to calculate disulfide amounts, disulfide/total SH percent ratios, disulfide/native SH percent ratios, and native SH/total SH percent ratios. We found that disulfide levels were significantly higher in workers who were exposed to lead (21.08(11.1-53.6) vs. 17.9(1.7-25), p < 0.001). Additionally, the disulfide/native SH and disulfide/total SH percent ratios were higher in exposed workers, while the native SH/total SH percent ratios were higher in the control subjects. Furthermore, the lead and disulfide levels showed a positive correlation, with p < 0.001 and a correlation coefficient of 0.378. Finally, the novel method used in this study successfully showed a switch from SH to disulfide after lead exposure, and the method is fully automated, easy, cheap, reliable, and reproducible. Use of this method in future cases may provide valuable insights into the management of lead exposure.

  14. Research on the dynamic response of high-contact-ratio spur gears influenced by surface roughness under EHL condition

    NASA Astrophysics Data System (ADS)

    Huang, Kang; Xiong, Yangshou; Wang, Tao; Chen, Qi

    2017-01-01

    Employing high-contact-ratio (HCR) gears is an effective way of decreasing the load on a single tooth and of reducing vibration and noise. While the more slender tooth leads to greater relative sliding, having more teeth in contact at the same time makes the HCR gear more sensitive to surface quality. The available literature on HCR gears primarily addresses geometrical optimization, load distribution, or efficiency calculation; limited work has been conducted on the effect of rough surfaces on their dynamic performance. For this reason, a multi-degree-of-freedom (MDOF) model is presented that characterizes the static transmission error based on fractal theory, investigates the relative sliding friction using an EHL-based friction coefficient formula, and details a time-varying friction coefficient suitable for HCR gears. Based on the numerical results, surface roughness has little influence on the system response in terms of the dynamic transmission error but has a large effect on the motion in the off-line-of-action (OLOA) direction and on the friction force. The impact of shaft-bearing stiffness and damping ratio is also explored, with the results revealing that greater shaft-bearing stiffness is beneficial for obtaining a more stable motion in the OLOA direction and that a larger damping ratio results in a smaller effective friction force. The approach presented here offers a new way of analyzing the dynamics of HCR gears by introducing surface roughness directly into the MDOF model and by establishing an indirect relationship between dynamic response and surface roughness; it is expected to guide surface-roughness design and manufacturing in the future.

  15. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    PubMed Central

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with the nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
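
    Wald's SPRT accumulates the log-likelihood ratio sample by sample and stops once it crosses thresholds set by the target error rates; the sketch below assumes Gaussian class-conditional scores, with all distribution parameters and data being hypothetical rather than taken from the study.

```python
import numpy as np

def sprt(scores, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT on a stream of scores assumed Gaussian under both classes.
    Returns (decision, n_samples_used); decision is 1, 0, or None (undecided)."""
    upper = np.log((1 - beta) / alpha)       # accept H1 when llr exceeds this
    lower = np.log(beta / (1 - alpha))       # accept H0 when llr falls below this
    llr = 0.0
    for n, s in enumerate(scores, start=1):
        # Gaussian log-likelihood ratio log N(s; mu1) - log N(s; mu0)
        llr += ((s - mu0) ** 2 - (s - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return 1, n
        if llr <= lower:
            return 0, n
    return None, len(scores)

# Hypothetical per-sample classifier scores drawn from the "class 1" distribution.
rng = np.random.default_rng(3)
print(sprt(rng.normal(1.0, 1.0, size=50), mu0=0.0, mu1=1.0, sigma=1.0))
```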

  16. Systems and Methods for Implementing Bulk Metallic Glass-Based Macroscale Compliant Mechanisms

    NASA Technical Reports Server (NTRS)

    Hofmann, Douglas C. (Inventor); Agnes, Gregory (Inventor)

    2017-01-01

    Systems and methods in accordance with embodiments of the invention implement bulk metallic glass-based macroscale compliant mechanisms. In one embodiment, a bulk metallic glass-based macroscale compliant mechanism includes: a flexible member that is strained during the normal operation of the compliant mechanism; where the flexible member has a thickness of 0.5 mm; where the flexible member comprises a bulk metallic glass-based material; and where the bulk metallic glass-based material can survive a fatigue test that includes 1000 cycles under a bending loading mode at an applied stress to ultimate strength ratio of 0.25.

  17. High Rayleigh number convection in rectangular enclosures with differentially heated vertical walls and aspect ratios between zero and unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassemi, S.A.

    1988-04-01

    High Rayleigh number convection in a rectangular cavity with insulated horizontal surfaces and differentially heated vertical walls was analyzed for an arbitrary aspect ratio smaller than or equal to unity. Unlike previous analytical studies, a systematic method of solution based on linearization technique and analytical iteration procedure was developed to obtain approximate closed-form solutions for a wide range of aspect ratios. The predicted velocity and temperature fields are shown to be in excellent agreement with available experimental and numerical data.

  18. High Rayleigh number convection in rectangular enclosures with differentially heated vertical walls and aspect ratios between zero and unity

    NASA Technical Reports Server (NTRS)

    Kassemi, Siavash A.

    1988-01-01

    High Rayleigh number convection in a rectangular cavity with insulated horizontal surfaces and differentially heated vertical walls was analyzed for an arbitrary aspect ratio smaller than or equal to unity. Unlike previous analytical studies, a systematic method of solution based on linearization technique and analytical iteration procedure was developed to obtain approximate closed-form solutions for a wide range of aspect ratios. The predicted velocity and temperature fields are shown to be in excellent agreement with available experimental and numerical data.

  19. X-ray dual energy spectral parameter optimization for bone Calcium/Phosphorus mass ratio estimation

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, P. I.; Fountos, G. P.; Martini, N. D.; Koukou, V. N.; Michail, C. M.; Valais, I. G.; Kandarakis, I. S.; Nikiforidis, G. C.

    2015-09-01

    The calcium (Ca) to phosphorus (P) bone mass ratio has been identified as an important, yet underutilized, risk factor in osteoporosis diagnosis. The purpose of this simulation study is to investigate the use of an effective or mean mass attenuation coefficient in Ca/P mass ratio estimation with a dual-energy method. The investigation was based on optimizing the accuracy of the Ca/P ratio, assessed through the coefficient of variation of the ratio. Different set-ups were examined, based on the K-edge filtering technique and a single X-ray exposure. The modified X-ray output was attenuated by various Ca/P mass ratios, resulting in nine calibration points, while keeping the total bone thickness constant. The simulated data were obtained considering a photon counting energy discriminating detector. The standard deviation of the residuals was used to compare and evaluate the accuracy of the different dual-energy set-ups. The optimum mass attenuation coefficient for the Ca/P mass ratio estimation was the effective coefficient in all the examined set-ups. The variation of the residuals between the different set-ups was not significant.

  20. Toward a simple, repeatable, non-destructive approach to measuring stable-isotope ratios of water within tree stems

    NASA Astrophysics Data System (ADS)

    Raulerson, S.; Volkmann, T.; Pangle, L. A.

    2017-12-01

    Traditional methodologies for measuring ratios of stable isotopes within the xylem water of trees involve destructive coring of the stem. A recent approach involves permanently installed probes within the stem, and an on-site assembly of pumps, switching valves, gas lines, and a climate-controlled structure for field deployment of a laser spectrometer. The former method limits the possible temporal resolution of sampling and the sample size, while the latter may not be feasible for many research groups. We present results from initial laboratory efforts towards developing a non-destructive, temporally-resolved technique for measuring stable isotope ratios within the xylem flow of trees. Researchers have used direct liquid-vapor equilibration as a method to measure isotope ratios of the water in soil pores. Typically, this is done by placing soil samples in a fixed container and allowing the liquid water within the soil to come into isotopic equilibrium with the headspace of the container. Water can also be removed via cryogenic distillation or azeotropic distillation, with the resulting liquid tested for isotope ratios. Alternatively, the isotope ratios of the water vapor can be directly measured using a laser-based water vapor isotope analyzer. Well-established fractionation factors and the isotope ratios in the vapor phase are then used to calculate the isotope ratios in the liquid phase. We propose a setup in which a single, removable chamber is installed on a tree, from which vapor samples can be taken non-destructively and repeatedly. These vapor samples will be injected into a laser-based isotope analyzer by a recirculating gas conveyance system. A major part of what is presented here is the procedure for taking vapor samples at 100% relative humidity, appropriately diluting them with completely dry N2 calibration gas, and injecting them into the gas conveyance system without inducing fractionation in the process. This methodology will be helpful in making temporally resolved measurements of the stable isotopes in xylem water, using a setup that can be easily repeated by other research groups. The method is anticipated to find broad application in ecohydrological analyses, and in tracer studies aimed at quantifying age distributions of soil water extracted by plant roots.
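
    Once the vapor has equilibrated with the xylem water, the liquid-phase delta value follows from the measured vapor value and the temperature-dependent equilibrium fractionation factor; the sketch below shows that back-calculation, with the fractionation factor value given only as a placeholder.

```python
def liquid_delta_from_vapor(delta_vapor_permil, alpha):
    """Back-calculate the liquid-phase delta value (per mil) from the measured
    vapor-phase delta, given the liquid-vapor equilibrium fractionation factor
    alpha = R_liquid / R_vapor at the chamber temperature."""
    return alpha * (delta_vapor_permil + 1000.0) - 1000.0

# Placeholder alpha for 18O near room temperature (value shown for illustration
# only; in practice it is computed from an empirical temperature relation).
print(liquid_delta_from_vapor(delta_vapor_permil=-18.0, alpha=1.0094))
```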

  1. Design and analysis of all-dielectric broadband nonpolarizing parallel-plate beam splitters.

    PubMed

    Wang, Wenliang; Xiong, Shengming; Zhang, Yundong

    2007-06-01

    Past research on all-dielectric nonpolarizing beam splitters is reviewed. With the aid of the needle thin-film synthesis method and a conjugate gradient refinement method, three nonpolarizing parallel-plate beam splitters with different split ratios are designed over a 200 nm spectral range centered at 550 nm at an incidence angle of 45 degrees. The choice of material components and the initial stack are based on the theories of Costich and Thelen. The results of the design and analysis show that the designs maintain a very low polarization ratio over the working spectral range and have a reasonable angular field.

  2. All-dielectric broadband non-polarizing parallel plate beam splitter operating between 450-650nm

    NASA Astrophysics Data System (ADS)

    Wang, Wenliang; Xiong, Shenming; Zhang, Yundong

    2007-12-01

    Past research on all-dielectric non-polarizing beam splitters is reviewed. With the aid of the needle thin-film synthesis method and a conjugate gradient refinement method, three non-polarizing parallel-plate beam splitters with different split ratios are designed over a 200 nm spectral range centered at 550 nm at an incidence angle of 45°. The selection of material components and the initial stack are based on Costich and Thelen's theory. The results of the design and analysis show that the designs maintain a very low polarization ratio over the working spectral range and have a reasonable angular field.

  3. Study of CdTe quantum dots grown using a two-step annealing method

    NASA Astrophysics Data System (ADS)

    Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2006-02-01

    High size dispersion, large average quantum dot radius, and low volume ratio have been major hurdles in the development of quantum dot-based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra show that quantum dots grown by two-step annealing have a smaller average radius, lower size dispersion, higher volume ratio, and a larger decrease in bulk free energy than quantum dots grown conventionally.

  4. Computer program for preliminary design analysis of axial-flow turbines

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1972-01-01

    The program method is based on a mean-diameter flow analysis. Input design requirements include power or pressure ratio, flow, temperature, pressure, and speed. Turbine designs are generated for any specified number of stages and for any of three types of velocity diagrams (symmetrical, zero exit swirl, or impulse). Exit turning vanes can be included in the design. Program output includes inlet and exit annulus dimensions, exit temperature and pressure, total and static efficiencies, blading angles, and last-stage critical velocity ratios. The report presents the analysis method, a description of input and output with sample cases, and the program listing.

  5. Improved methods of AAV-mediated gene targeting for human cell lines using ribosome-skipping 2A peptide

    PubMed Central

    Karnan, Sivasundaram; Ota, Akinobu; Konishi, Yuko; Wahiduzzaman, Md; Hosokawa, Yoshitaka; Konishi, Hiroyuki

    2016-01-01

    The adeno-associated virus (AAV)-based targeting vector has been one of the tools commonly used for genome modification in human cell lines. It allows for relatively efficient gene targeting associated with 1–4-log higher ratios of homologous-to-random integration of targeting vectors (H/R ratios) than plasmid-based targeting vectors, without actively introducing DNA double-strand breaks. In this study, we sought to improve the efficiency of AAV-mediated gene targeting by introducing a 2A-based promoter-trap system into targeting constructs. We generated three distinct AAV-based targeting vectors carrying 2A for promoter trapping, each targeting a GFP-based reporter module incorporated into the genome, PIGA exon 6 or PIGA intron 5. The absolute gene targeting efficiencies and H/R ratios attained using these vectors were assessed in multiple human cell lines and compared with those attained using targeting vectors carrying internal ribosome entry site (IRES) for promoter trapping. We found that the use of 2A for promoter trapping increased absolute gene targeting efficiencies by 3.4–28-fold and H/R ratios by 2–5-fold compared to values obtained with IRES. In CRISPR-Cas9-assisted gene targeting using plasmid-based targeting vectors, the use of 2A did not enhance the H/R ratios but did upregulate the absolute gene targeting efficiencies compared to the use of IRES. PMID:26657635

  6. Pluto-Charon: a test of the astrometric approach for finding asteroid satellites

    NASA Astrophysics Data System (ADS)

    Kikwaya, J.-B.; Thuillot, W.; Berthier, J.

    2003-05-01

    The astrometric method to find asteroid satellites is based on the search for the reflex effect on the primary object due to the orbital motion of a possible satellite (Monet & Monet 1998, Kikwaya et al. 2002). As reported by Kikwaya et al. (2003), the astrometric signature of a satellite of 146 Lucina may reach several mas. Spectral analysis might then detect the signal under good signal-to-noise conditions, with high-quality astrometric measurements and broad coverage from different observing sites. However, the astrometric method cannot be applied to every binary asteroid system; it depends strongly on the mass ratio of the two bodies and the distance between them (Kikwaya et al. 2002). Pluto-Charon provides a good test of this method. Previous works based on direct imaging of Charon show that its period is 6.357 days and the mass ratio is 0.122 (Wasserman et al. 2000), putting this system into the range that can be observed by our method. Using archived photographic observations (1914-1995) and CCD observations from the US Naval Observatory, Flagstaff station (1995-1998), Bordeaux observatory (1996-1997) and McDonald Observatory (1997), we are analyzing the position of Pluto to see whether its wobble due to Charon (amplitude around 95 mas) can be detected and whether the orbital period of Charon can be recovered through spectral analysis. If successful, this will strengthen the case for our astrometric method of finding asteroid satellites.

  7. Adapting Surface Ground Motion Relations to Underground conditions: A case study for the Sudbury Neutrino Observatory in Sudbury, Ontario, Canada

    NASA Astrophysics Data System (ADS)

    Babaie Mahani, A.; Eaton, D. W.

    2013-12-01

    Ground Motion Prediction Equations (GMPEs) are widely used in Probabilistic Seismic Hazard Assessment (PSHA) to estimate ground-motion amplitudes at Earth's surface as a function of magnitude and distance. Certain applications, such as hazard assessment for caprock integrity in the case of underground storage of CO2, waste disposal sites, and underground pipelines, require subsurface estimates of ground motion; at present, such estimates depend upon theoretical modeling and simulations. The objective of this study is to derive correction factors for GMPEs to enable estimation of amplitudes in the subsurface. We use a semi-analytic approach along with finite-difference simulations of ground-motion amplitudes for surface and underground motions. Spectral ratios of underground to surface motions are used to calculate the correction factors. Two predictive methods are used. The first is a semi-analytic approach based on a quarter-wavelength method that is widely used for earthquake site-response investigations; the second is a numerical approach based on elastic finite-difference simulations of wave propagation. Both methods are evaluated using recordings of regional earthquakes by broadband seismometers installed at the surface and at depths of 1400 m and 2100 m in the Sudbury Neutrino Observatory, Canada. Overall, both methods provide a reasonable fit to the peaks and troughs observed in the ratios of real data. The finite-difference method, however, has the capability to simulate ground motion ratios more accurately than the semi-analytic approach.
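
    The spectral ratios used to derive the correction factors can be formed from smoothed Fourier amplitude spectra of paired recordings; the sketch below shows a generic version of that calculation on toy records, not the quarter-wavelength or finite-difference predictions themselves.

```python
import numpy as np

def spectral_ratio(underground, surface, fs, smooth=9):
    """Ratio of smoothed Fourier amplitude spectra (underground / surface)."""
    freqs = np.fft.rfftfreq(len(surface), d=1.0 / fs)
    amp_u = np.abs(np.fft.rfft(underground))
    amp_s = np.abs(np.fft.rfft(surface))
    kernel = np.ones(smooth) / smooth            # simple moving-average smoothing
    amp_u = np.convolve(amp_u, kernel, mode="same")
    amp_s = np.convolve(amp_s, kernel, mode="same")
    return freqs, amp_u / (amp_s + 1e-12)

# Toy records: the underground motion modeled as a low-pass filtered surface motion.
fs = 100.0
rng = np.random.default_rng(4)
surface = rng.normal(size=4096)
underground = np.convolve(surface, np.ones(5) / 5, mode="same")
f, ratio = spectral_ratio(underground, surface, fs)
print(ratio[:5])
```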

  8. [Preparation and evaluation of taste masked orally disintegrating tablets with granules made by the wet granulation method].

    PubMed

    Kawano, Yayoi; Ito, Akihiko; Sasatsu, Masanaho; Machida, Yoshiharu; Onishi, Hiraku

    2010-12-01

    Using furosemide (FU) as a model drug, we examined the wet granulation method as a way to improve the taste masking and physical characteristics of orally disintegrating tablets (ODTs). In the wet granulation method, yogurt powder (YO) was used as a corrective and maltitol (MA) was used as a binding agent. The taste masked FU tablets were prepared using the direct compression method. Microcrystalline cellulose (Avicel® PH-302) and mannitol were added as excipients at a mixing ratio of 1/1 by weight. Based on the results of a sensory taste test, the prepared granules markedly improved the taste of FU, and a sufficient masking effect was obtained at a YO/FU ratio of 1 or more. Furthermore, it was found that the masking effect achieved by YO granules made with the wet granulation method was similar to or better than that produced by granules made with the dry granulation method. All types of tablets displayed sufficient hardness (over 3.5×10⁻² kN), and rapidly disintegrating tablets were obtained with YO granules produced at a mixing ratio of FU/YO=1/1, which disintegrated within 20 s. Disintegration time lengthened as the mixing ratio of YO to FU increased. At a mixing ratio of FU/YO=1/1, the hardness of tablets with granules made by the wet granulation method exceeded that of tablets with granules made by the dry granulation method, with minimal differences in disintegration time. The hardness and disintegration time of the tablets with granules made by the wet granulation method could be controlled by varying the compression force. In conclusion, YO was found to be a useful additive for masking unpleasant tastes. FU ODTs with improved taste, rapid disintegration and greater hardness could be prepared with YO-containing granules made by the wet granulation method using MA as a binding agent.

  9. On/off ratio enhancement in single-walled carbon nanotube field-effect transistor by controlling network density via sonication

    NASA Astrophysics Data System (ADS)

    Jang, Ho-Kyun; Choi, Jun Hee; Kim, Do-Hyun; Kim, Gyu Tae

    2018-06-01

    Single-walled carbon nanotubes (SWCNTs) are generally used as a networked structure in the fabrication of a field-effect transistor (FET), since roughly one-third of SWCNTs are electrically metallic and the remainder are semiconducting. In this case, the presence of metallic paths formed by metallic SWCNTs (m-SWCNTs) becomes a significant technical barrier that prevents the network from behaving as a semiconductor, resulting in a low on/off ratio. Here, we report an easy method of controlling the on/off ratio of a FET in which semiconducting SWCNTs (s-SWCNTs) and m-SWCNTs form networks between the source and drain electrodes. A FET with SWCNT networks was simply sonicated under water to control the on/off ratio and network density. As a result, a FET that initially showed almost metallic behavior due to m-SWCNT paths exhibited p-type semiconducting behavior. The on/off ratio ranged from 1 to 9.0 × 10⁴ with sonication time. In addition, theoretical calculations based on the Monte Carlo method and circuit simulation were performed to understand and explain the change in on/off ratio and network density caused by sonication. On the basis of the experimental and theoretical results, we found that metallic paths contributed to a high off-state current, which leads to a low on/off ratio, and that sonication produced sparse SWCNT networks from which m-SWCNT metallic paths were removed, resulting in a high on/off ratio. This method offers a chance to salvage devices previously considered failures because of the metallic behavior caused by a high network density and the resulting low on/off ratio.

  10. Dynamic response analysis of a 24-story damped steel structure

    NASA Astrophysics Data System (ADS)

    Feng, Demin; Miyama, Takafumi

    2017-10-01

    In the Japanese and Chinese building codes, a two-stage design philosophy is adopted: damage limitation (small earthquake, Level 1) and life safety (extremely large earthquake, Level 2). It is therefore of interest to compare the design methods for a damped structure under the two building codes. In the Chinese code, in order to be consistent with the conventional seismic design method, the damped structure is also designed at the small earthquake level. The effect of the damper system is considered through the additional damping ratio concept. The design force is obtained from the damped design spectrum, accounting for the reduction due to the additional damping ratio. The additional damping ratio provided by the damper system is usually calculated by a time-history analysis at the small earthquake level. Velocity-dependent dampers such as viscous dampers can function well even at the small earthquake level, but if a steel damper is used, which usually remains elastic during a small earthquake, no additional damping ratio is achieved. In Japan, on the other hand, time-history analysis is used for both the small and the extremely large earthquake levels, so the characteristics of the damper system and the ductility of the structure can be modelled well. An existing 24-story steel frame is modified to demonstrate the design process for the damped structure under the two building codes. Viscous wall dampers and low-yield-steel panel dampers are studied as the damper system.

  11. Learning-based 3T brain MRI segmentation with guidance from 7T MRI labeling.

    PubMed

    Deng, Minghui; Yu, Renping; Wang, Li; Shi, Feng; Yap, Pew-Thian; Shen, Dinggang

    2016-12-01

    Segmentation of brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is crucial for brain structural measurement and disease diagnosis. Learning-based segmentation methods depend largely on the availability of good training ground truth. However, the commonly used 3T MR images are of insufficient image quality and often exhibit poor intensity contrast between WM, GM, and CSF. Therefore, they are not ideal for providing good ground truth label data for training learning-based methods. Recent advances in ultrahigh field 7T imaging make it possible to acquire images with excellent intensity contrast and signal-to-noise ratio. In this paper, the authors propose an algorithm based on random forest for segmenting 3T MR images by training a series of classifiers based on reliable labels obtained semiautomatically from 7T MR images. The proposed algorithm iteratively refines the probability maps of WM, GM, and CSF via a cascade of random forest classifiers for improved tissue segmentation. The proposed method was validated on two datasets, i.e., 10 subjects collected at their institution and 797 3T MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Specifically, for the mean Dice ratio of all 10 subjects, the proposed method achieved 94.52% ± 0.9%, 89.49% ± 1.83%, and 79.97% ± 4.32% for WM, GM, and CSF, respectively, which are significantly better than the state-of-the-art methods (p-values < 0.021). For the ADNI dataset, the group difference comparisons indicate that the proposed algorithm outperforms state-of-the-art segmentation methods. The authors have developed and validated a novel fully automated method for 3T brain MR image segmentation. © 2016 American Association of Physicists in Medicine.

  12. Learning-based 3T brain MRI segmentation with guidance from 7T MRI labeling.

    PubMed

    Deng, Minghui; Yu, Renping; Wang, Li; Shi, Feng; Yap, Pew-Thian; Shen, Dinggang

    2016-12-01

    Segmentation of brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is crucial for brain structural measurement and disease diagnosis. Learning-based segmentation methods depend largely on the availability of good training ground truth. However, the commonly used 3T MR images are of insufficient image quality and often exhibit poor intensity contrast between WM, GM, and CSF. Therefore, they are not ideal for providing good ground truth label data for training learning-based methods. Recent advances in ultrahigh field 7T imaging make it possible to acquire images with excellent intensity contrast and signal-to-noise ratio. In this paper, the authors propose an algorithm based on random forest for segmenting 3T MR images by training a series of classifiers based on reliable labels obtained semiautomatically from 7T MR images. The proposed algorithm iteratively refines the probability maps of WM, GM, and CSF via a cascade of random forest classifiers for improved tissue segmentation. The proposed method was validated on two datasets, i.e., 10 subjects collected at their institution and 797 3T MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Specifically, for the mean Dice ratio of all 10 subjects, the proposed method achieved 94.52% ± 0.9%, 89.49% ± 1.83%, and 79.97% ± 4.32% for WM, GM, and CSF, respectively, which are significantly better than the state-of-the-art methods (p-values < 0.021). For the ADNI dataset, the group difference comparisons indicate that the proposed algorithm outperforms state-of-the-art segmentation methods. The authors have developed and validated a novel fully automated method for 3T brain MR image segmentation.

  13. A new background distribution-based active contour model for three-dimensional lesion segmentation in breast DCE-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Liu, Yiping; Qiu, Tianshuang

    2014-08-15

    Purpose: To develop and evaluate a computerized semiautomatic segmentation method for accurate extraction of three-dimensional lesions from dynamic contrast-enhanced magnetic resonance images (DCE-MRIs) of the breast. Methods: The authors propose a new background distribution-based active contour model using level set (BDACMLS) to segment lesions in breast DCE-MRIs. The method starts with manual selection of a region of interest (ROI) that contains the entire lesion in a single slice where the lesion is enhanced. Then the lesion volume from the volume data of interest, which is captured automatically, is separated. The core idea of BDACMLS is a new signed pressure function which is based solely on the intensity distribution combined with pathophysiological basis. To compare the algorithm results, two experienced radiologists delineated all lesions jointly to obtain the ground truth. In addition, results generated by other different methods based on level set (LS) are also compared with the authors’ method. Finally, the performance of the proposed method is evaluated by several region-based metrics such as the overlap ratio. Results: Forty-two studies with 46 lesions that contain 29 benign and 17 malignant lesions are evaluated. The dataset includes various typical pathologies of the breast such as invasive ductal carcinoma, ductal carcinoma in situ, scar carcinoma, phyllodes tumor, breast cysts, fibroadenoma, etc. The overlap ratio for BDACMLS with respect to manual segmentation is 79.55% ± 12.60% (mean ± s.d.). Conclusions: A new active contour model method has been developed and shown to successfully segment breast DCE-MRI three-dimensional lesions. The results from this model correspond more closely to manual segmentation, solve the weak-edge-passed problem, and improve the robustness in segmenting different lesions.

  14. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

    PubMed Central

    Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether a newly developed method is better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences between the estimated sensitivities/specificities are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be reported with confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between two dependent sensitivity/specificity values in a clinical application. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are taken as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are presented in a table. Conclusion. When choosing among confidence interval methods, researchers should consider whether the comparison involves a single ratio or the difference between two dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes. PMID:27478491

  15. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies.

    PubMed

    Erdoğan, Semra; Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether a newly developed method is better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences between the estimated sensitivities/specificities are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be reported with confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between two dependent sensitivity/specificity values in a clinical application. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are taken as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are presented in a table. Conclusion. When choosing among confidence interval methods, researchers should consider whether the comparison involves a single ratio or the difference between two dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes.
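
    For orientation, the simplest of the intervals discussed (an asymptotic/Wald interval for the difference between two dependent sensitivities estimated on the same diseased subjects) can be sketched as below; the score, conditional, unconditional, and nonparametric intervals from the study are not reproduced, and the variable names are illustrative.

        # Hedged sketch: Wald interval for the difference of two paired sensitivities.
        # b = diseased subjects positive on test 1 only, c = positive on test 2 only,
        # n = total diseased subjects examined with both tests.
        import math

        def paired_sensitivity_diff_ci(b, c, n, z=1.96):
            diff = (b - c) / n
            se = math.sqrt(max(b + c - (b - c) ** 2 / n, 0.0)) / n
            return diff - z * se, diff + z * se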

  16. Decentralized PID controller for TITO systems using characteristic ratio assignment with an experimental application.

    PubMed

    Hajare, V D; Patre, B M

    2015-11-01

    This paper presents a decentralized PID controller design method for two-input two-output (TITO) systems with time delay using the characteristic ratio assignment (CRA) method. The ability of the CRA method to design controllers for a desired transient response has been explored for TITO systems. The design methodology uses an ideal decoupler to reduce interaction. Each decoupled subsystem is reduced to a first order plus dead time (FOPDT) model to design independent diagonal controllers. Based on the specified overshoot and settling time, the controller parameters are computed using the CRA method. To verify the performance of the proposed controller, two benchmark simulation examples are presented. To demonstrate the applicability of the proposed controller, experimentation is performed on a real-life interacting coupled-tank level system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  17. ICan: An Optimized Ion-Current-Based Quantification Procedure with Enhanced Quantitative Accuracy and Sensitivity in Biomarker Discovery

    PubMed Central

    2015-01-01

    The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, than current popular pipelines. A spiked-in experiment was used to evaluate the performance of ICan in detecting small changes. In this study E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by the IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined to be significantly altered, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applied to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707
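
    The significance call used in the spike-in evaluation (≥1.3-fold change and p ≤ 0.05) can be sketched as follows; this is only the thresholding step, not ICan's normalization or ratio-estimation procedure, and the two-sample t-test is an assumed choice.

        # Hedged sketch: flag a protein as altered when its symmetric fold change is
        # at least 1.3 and a two-sample t-test gives p <= 0.05.
        import numpy as np
        from scipy import stats

        def flag_altered(group_a, group_b, fold_cut=1.3, p_cut=0.05):
            """group_a, group_b: 1-D arrays of a protein's ion-current intensities."""
            ratio = np.mean(group_b) / np.mean(group_a)
            fold = max(ratio, 1.0 / ratio)             # direction-independent fold change
            p = stats.ttest_ind(group_a, group_b).pvalue
            return fold >= fold_cut and p <= p_cut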

  18. EBprot: Statistical analysis of labeling-based quantitative proteomics data.

    PubMed

    Koh, Hiromi W L; Swa, Hannah L F; Fermin, Damian; Ler, Siok Ghee; Gunaratne, Jayantha; Choi, Hyungwon

    2015-08-01

    Labeling-based proteomics is a powerful method for detection of differentially expressed proteins (DEPs). The current data analysis platform typically relies on protein-level ratios, which are obtained by summarizing peptide-level ratios for each protein. In shotgun proteomics, however, some proteins are quantified with more peptides than others, and this reproducibility information is not incorporated into the differential expression (DE) analysis. Here, we propose a novel probabilistic framework, EBprot, that directly models the peptide-protein hierarchy and rewards proteins with reproducible evidence of DE over multiple peptides. To evaluate its performance with known DE states, we conducted a simulation study to show that the peptide-level analysis of EBprot provides better receiver-operating characteristics and more accurate estimation of the false discovery rates than methods based on protein-level ratios. We also demonstrate superior classification performance of peptide-level EBprot analysis in a spike-in dataset. To illustrate the wide applicability of EBprot in different experimental designs, we applied EBprot to a dataset for lung cancer subtype analysis with biological replicates and another dataset for time course phosphoproteome analysis of EGF-stimulated HeLa cells with multiplexed labeling. Through these examples, we show that the peptide-level analysis of EBprot is a robust alternative to the existing statistical methods for the DE analysis of labeling-based quantitative datasets. The software suite is freely available on the Sourceforge website http://ebprot.sourceforge.net/. All MS data have been deposited in ProteomeXchange with identifier PXD001426 (http://proteomecentral.proteomexchange.org/dataset/PXD001426/). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Influence of stapling the intersegmental planes on lung volume and function after segmentectomy.

    PubMed

    Tao, Hiroyuki; Tanaka, Toshiki; Hayashi, Tatsuro; Yoshida, Kumiko; Furukawa, Masashi; Yoshiyama, Koichi; Okabe, Kazunori

    2016-10-01

    Dividing the intersegmental planes with a stapler during pulmonary segmentectomy leads to volume loss in the remnant segment. The aim of this study was to assess the influence of segment division methods on preserved lung volume and pulmonary function after segmentectomy. Using image analysis software on computed tomography (CT) images of 41 patients, the ratios of the remnant segment and ipsilateral lung volumes to their preoperative values (R-seg and R-ips) were calculated. The ratios of postoperative actual forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) to their predicted values based on three-dimensional volumetry (R-FEV1 and R-FVC) were also calculated. Differences in the actual/predicted ratios of lung volume and pulmonary function for each of the division methods were analysed. We also investigated the correlations of the actual/predicted ratio of remnant lung volume with that of postoperative pulmonary function. The intersegmental planes were divided by either electrocautery or a stapler in 22 patients and with a stapler alone in 19 patients. Mean values of R-seg and R-ips were 82.7 (37.9-140.2) and 104.9 (77.5-129.2)%, respectively. The mean values of R-FEV1 and R-FVC were 103.9 (83.7-135.1) and 103.4 (82.2-125.1)%, respectively. There were no differences in the actual/predicted ratios of remnant lung volume and pulmonary function based on the division method. Both R-FEV1 and R-FVC were correlated not with R-seg, but with R-ips. Stapling does not lead to less preserved volume or function than electrocautery in the division of the intersegmental planes. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  20. Enhanced Sensitivity to Detection Nanomolar Level of Cu2+ Compared to Spectrophotometry Method by Functionalized Gold Nanoparticles: Design of Sensor Assisted by Exploiting First-order Data with Chemometrics

    NASA Astrophysics Data System (ADS)

    Rasouli, Zolaikha; Ghavami, Raouf

    2018-02-01

    A simple, sensitive and efficient colorimetric assay platform for the determination of Cu2+ was proposed, with the aim of developing sensitive detection based on the aggregation of AuNPs in the presence of a histamine H2-receptor antagonist (famotidine, FAM) as the recognition site. This study is the first to demonstrate that the molar extinction coefficients of the complexes formed by FAM and Cu2+ are very low (shown by applying chemometric methods to first-order data from different metal-to-ligand ratios), leading to the undesirably low sensitivity of FAM-based assays. To resolve the problem of low sensitivity, a colorimetric method based on the Cu2+-induced aggregation of AuNPs functionalized with FAM was introduced. This procedure is accompanied by a color change from bright red to blue which can be observed with the naked eye. The detection sensitivity obtained by the developed method increased about 100-fold compared with the spectrophotometric method. This sensor exhibited a good linear relation between the absorbance ratio at 670 to 520 nm (A670/520) and the concentration in the range 2-110 nM, with LOD = 0.76 nM. The satisfactory analytical performance of the proposed sensor facilitates the development of simple and affordable UV-Vis chemosensors for environmental applications.

  1. An endoscopic diffuse optical tomographic method with high resolution based on the improved FOCUSS method

    NASA Astrophysics Data System (ADS)

    Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei

    2017-02-01

    Endoscopic DOT has the potential to be applied to cancer-related imaging in tubular organs. Although DOT has a relatively large tissue penetration depth, endoscopic DOT is limited by the narrow space of the internal tubular tissue, which results in a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under this limited measurement condition. To improve the resolution, a new FOCUSS algorithm along with an image reconstruction algorithm based on the effective detection range (EDR) is developed. This algorithm uses the region of interest (ROI) to reduce the dimensions of the matrix. The shrinking method cuts down the computational burden. To reduce the computational complexity further, a double conjugate gradient method is used in the matrix inversion. For a typical inner size and optical properties of cervix-like tubular tissue, reconstructed images from simulation data demonstrate that the proposed method achieves image quality equivalent to that obtained from the method based on EDR when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary of the model. The quantitative ratio of the reconstructed absorption and reduced scattering coefficients can be up to 70% and 80%, respectively, at depths of up to 5 mm. Furthermore, two close targets at different depths can be separated from each other. The proposed method will be useful for the development of endoscopic DOT technologies in tubular organs.
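
    For reference, a generic FOCUSS iteration (reweighted minimum-norm estimation) is sketched below; the paper's improvements, such as the ROI-based matrix reduction and the double conjugate gradient inversion, are not reproduced, and the parameter choices are assumptions.

        # Hedged sketch: basic FOCUSS iteration for y ≈ A x, promoting focal solutions
        # by reweighting with the magnitude of the previous estimate.
        import numpy as np

        def focuss(A, y, n_iter=10, p=1.0, eps=1e-8):
            x = np.linalg.pinv(A) @ y                  # minimum-norm initial estimate
            for _ in range(n_iter):
                W = np.diag(np.abs(x) ** p + eps)      # weights from the previous estimate
                x = W @ (np.linalg.pinv(A @ W) @ y)    # reweighted minimum-norm update
            return x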

  2. Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI

    PubMed Central

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward

    2016-01-01

    Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints. Results: Comparisons based on a blur metric and visual inspection showed that SMART images had lower blur and better texture than the GD implementation. On average, the GD-based images had an ∼18% higher blur metric than SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to rapidly reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data. PMID:27036592

  3. Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.

    PubMed

    Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping

    2017-06-27

    The implicit shape-based reconstruction method in fluorescence molecular tomography (FMT) is capable of achieving higher image clarity than the image-based reconstruction method. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the utilization of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and then the reconstruction can be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast to noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required in the proposed method is much less than in the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.

  4. High-efficient and high-content cytotoxic recording via dynamic and continuous cell-based impedance biosensor technology.

    PubMed

    Hu, Ning; Fang, Jiaru; Zou, Ling; Wan, Hao; Pan, Yuxiang; Su, Kaiqi; Zhang, Xi; Wang, Ping

    2016-10-01

    Cell-based bioassays are an effective method for assessing compound toxicity through cell viability, but traditional label-based methods miss much information about cell growth because of their endpoint detection, and higher throughput is demanded to obtain dynamic information. Cell-based biosensor methods can dynamically and continuously monitor cell viability; however, the dynamic information is often ignored or seldom utilized in toxin and drug assessment. Here, we report a high-efficiency and high-content cytotoxicity recording method based on dynamic and continuous cell-based impedance biosensor technology. The dynamic cell viability, inhibition ratio, and growth rate were derived from the dynamic response curves of the cell-based impedance biosensor. The results showed that the biosensor has a dose-dependent response to the diarrhetic shellfish toxin okadaic acid, based on analysis of the dynamic cell viability and cell growth status. Moreover, the throughput of dynamic cytotoxicity assessment was compared between cell-based biosensor methods and label-based endpoint methods. This cell-based impedance biosensor can provide a flexible, cost- and label-efficient platform for cell viability assessment in shellfish toxin screening.

  5. Determination of Rayleigh wave ellipticity using single-station and array-based processing of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Workman, Eli Joseph

    We present a single-station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave-type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 sec the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 sec, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (~2%) and significantly higher (>20%), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e., Love waves, body waves, and tilt noise) in the single station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves.

  6. Determination of Rayleigh wave ellipticity across the Earthscope Transportable Array using single-station and array-based processing of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Workman, Eli; Lin, Fan-Chi; Koper, Keith D.

    2017-01-01

    We present a single station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave-type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 s the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 s, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (~2 per cent) and significantly higher (>20 per cent), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e. Love waves, body waves, and tilt noise) in the single station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves and tilt noise.
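
    The per-bin decision described in both records above can be sketched as follows; the dominance and phase-tolerance thresholds are illustrative assumptions, not values taken from the papers.

        # Hedged sketch of the FDPA test for one hourly 3x3 spectral covariance matrix
        # (components ordered Z, N, E); returns an H/V ratio or None if Rayleigh waves
        # do not appear dominant in this frequency bin.
        import numpy as np

        def rayleigh_hv(C, dominance=0.7, phase_tol_deg=20.0):
            U, s, _ = np.linalg.svd(C)
            if s[0] / s.sum() < dominance:             # no single dominant polarization
                return None
            vz, vn, ve = U[0, 0], U[1, 0], U[2, 0]     # principal singular vector
            h = np.hypot(np.abs(vn), np.abs(ve))       # major horizontal amplitude
            vh = vn if np.abs(vn) >= np.abs(ve) else ve
            dphi = np.degrees(np.angle(vh * np.conj(vz)))
            if abs(abs(dphi) - 90.0) > phase_tol_deg:  # not elliptical Rayleigh-type motion
                return None
            return h / np.abs(vz)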

  7. Calibrating recruitment estimates for mourning doves from harvest age ratios

    USGS Publications Warehouse

    Miller, David A.; Otis, David L.

    2010-01-01

    We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in formulating harvest management strategies.
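
    The final correction step, dividing the corrected harvest age ratio by the differential vulnerability of juveniles to harvest, reduces to simple arithmetic; the sketch below uses the reported averages only to show the relation and is not the authors' estimation code.

        # Hedged sketch: population age ratio from a harvest age ratio and the
        # differential vulnerability of juveniles to harvest.
        def population_age_ratio(harvest_age_ratio, differential_vulnerability):
            return harvest_age_ratio / differential_vulnerability

        # The reported averages (corrected harvest ratio 1.91, population ratio 1.45)
        # imply an average differential vulnerability of about 1.91 / 1.45 ≈ 1.32.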

  8. Variations in male-female infant ratios among births to Canadian- and Indian-born mothers, 1990-2011: a population-based register study

    PubMed Central

    Urquia, Marcelo L.; Ray, Joel G.; Wanigaratne, Susitha; Moineddin, Rahim; O'Campo, Patricia J.

    2016-01-01

    Background: We assessed variations in the male-female infant ratios among births to Canadian-born and Indian-born mothers according to year of birth, province and country of birth of each parent. Methods: In this population-based register study, we analyzed birth certificates of 5 853 970 singleton live births to Canadian-born and 177 990 singleton live births to Indian-born mothers giving birth in Canada from 1990 to 2011. Male-female ratios were stratified by live birth order and plotted by year of birth. Logistic regression was used to assess whether ratios varied between Canadian provinces and according to the birthplace of each parent. The deficit in the number of girls was estimated using bootstrap methods. Results: Among Canadian-born mothers, male-female ratios were about 1.05, with negligible fluctuations by birth order, year and province. Among Indian-born mothers, the overall male-female ratio at the third birth was 1.38 (95% confidence interval [CI] 1.34-1.41) and was 1.66 (95% CI 1.56-1.76) at the fourth or higher-order births. There was little variability in the ratios between provinces. Couples involving at least 1 Indian-born parent had higher than expected male-female ratios at the second and higher-order births, particularly when the father was Indian-born. The deficit in the expected number of girls among Indian immigrants to Canada in the study period was estimated to be 4472 (95% CI 3211-5921). Interpretation: Fewer than expected girls at the third and higher-order births have been born to Indian immigrants across Canada since 1990. This trend was also seen among couples of mixed nativity, including those involving a Canadian-born mother and an Indian-born father. Fathers should be considered when investigating sex ratios at birth. PMID:27398354

  9. Quantifying Spatial and Seasonal Variability in Atmospheric Ammonia with In Situ and Space-Based Observations

    EPA Science Inventory

    Ammonia plays an important role in many biogeochemical processes, yet atmospheric mixing ratios are not well known. Recently, methods have been developed for retrieving NH3 from space-based observations, but they have not been compared to in situ measurements. We have ...

  10. Anisometric Particle Systems—from Shape Characterization to Suspension Rheology

    NASA Astrophysics Data System (ADS)

    Gregorová, Eva; Pabst, Willi; Vaněrková, Lucie

    2009-06-01

    Methods for the characterization of anisometric particle systems are discussed. For prolate particles, the aspect ratio determination via microscopic image analysis is recalled, and aspect ratio distributions as well as shape-size dependences are commented upon. For oblate particles, a simple relation is recalled which can be used to determine an average aspect ratio when size distributions are available from two methods, typically from sedimentation analysis and laser diffraction. The connection between particle shape (aspect ratio) and suspension rheology is outlined, and it is shown how a generic procedure, based on Brenner's theory, can be applied to predict the intrinsic viscosity when the aspect ratio is known. On the other hand, it is shown how information on the intrinsic viscosity and the critical solids volume fraction can be extracted from experiments, when the measured concentration dependence of the effective suspension viscosity is adequately interpreted (using the Krieger relation for fitting). The examples mentioned in this paper include systems with oblate or prolate ceramic particles (kaolins, pyrophyllite, wollastonite, silicon carbide) as well as (prolate) pharmaceuticals (mesalamine, ibuprofen, nifuroxazide, paracetamol).
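
    The extraction of the intrinsic viscosity and the critical solids volume fraction from viscosity-concentration data, as outlined above, can be sketched with a Krieger-type fit; the data points below are placeholders, not measurements from the paper.

        # Hedged sketch: fit the Krieger relation eta_rel = (1 - phi/phi_c)^(-[eta]*phi_c)
        # to measured relative viscosities to estimate [eta] and phi_c.
        import numpy as np
        from scipy.optimize import curve_fit

        def krieger(phi, intrinsic_visc, phi_c):
            return (1.0 - phi / phi_c) ** (-intrinsic_visc * phi_c)

        phi = np.array([0.05, 0.10, 0.20, 0.30, 0.40])       # solids volume fractions (placeholder)
        eta_rel = np.array([1.15, 1.35, 1.95, 3.10, 6.40])   # relative viscosities (placeholder)
        (intrinsic_visc, phi_c), _ = curve_fit(
            krieger, phi, eta_rel, p0=[2.5, 0.6], bounds=([1.0, 0.45], [10.0, 1.0]))
        print(intrinsic_visc, phi_c)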

  11. An Optimal Seed Based Compression Algorithm for DNA Sequences

    PubMed Central

    Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan

    2016-01-01

    This paper proposes a seed-based lossless compression algorithm to compress a DNA sequence, which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures that are inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms. PMID:27555868

  12. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that “DNABIT Compress” is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to exact-repeat and reverse-repeat fragments of a DNA sequence is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  13. Commonality of drug-associated adverse events detected by 4 commonly used data mining algorithms.

    PubMed

    Sakaeda, Toshiyuki; Kadoyama, Kaori; Minami, Keiko; Okuno, Yasushi

    2014-01-01

    Data mining algorithms have been developed for the quantitative detection of drug-associated adverse events (signals) from a large database on spontaneously reported adverse events. In the present study, the commonality of signals detected by 4 commonly used data mining algorithms was examined. A total of 2,231,029 reports were retrieved from the public release of the US Food and Drug Administration Adverse Event Reporting System database between 2004 and 2009. The deletion of duplicated submissions and revision of arbitrary drug names resulted in a reduction in the number of reports to 1,644,220. Associations with adverse events were analyzed for 16 unrelated drugs, using the proportional reporting ratio (PRR), reporting odds ratio (ROR), information component (IC), and empirical Bayes geometric mean (EBGM). All EBGM-based signals were included in the PRR-based signals as well as IC- or ROR-based ones, and PRR- and IC-based signals were included in ROR-based ones. The PRR scores of PRR-based signals were significantly larger for 15 of 16 drugs when adverse events were also detected as signals by the EBGM method, as were the IC scores of IC-based signals for all drugs; however, no such effect was observed in the ROR scores of ROR-based signals. The EBGM method was the most conservative among the 4 methods examined, which suggested its better suitability for pharmacoepidemiological studies. Further examinations should be performed on the reproducibility of clinical observations, especially for EBGM-based signals.
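
    Two of the four measures (PRR and ROR) are simple functions of the standard 2x2 table of spontaneous reports and are sketched below; IC and EBGM involve Bayesian shrinkage models and are not reproduced here.

        # Hedged sketch: disproportionality measures from the 2x2 report table, where
        # a = reports with the drug and the event, b = the drug without the event,
        # c = other drugs with the event, d = other drugs without the event.
        def prr(a, b, c, d):
            return (a / (a + b)) / (c / (c + d))

        def ror(a, b, c, d):
            return (a * d) / (b * c)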

  14. Pressure Ratio to Thermal Environments

    NASA Technical Reports Server (NTRS)

    Lopez, Pedro; Wang, Winston

    2012-01-01

    The pressure ratio to thermal environments (PRatTlE.pl) program is a Perl language code that estimates heating at requested body point locations by scaling the heating at a reference location by a pressure ratio factor. The pressure ratio factor is the ratio of the local pressures at the requested and reference points, taken from CFD (computational fluid dynamics) solutions. This innovation provides pressure-ratio-based thermal environments in an automated and traceable method. Previously, the pressure ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE is able to calculate heating environments for 150 body points in less than two minutes. PRatTlE is coded in the Perl programming language, is command-line-driven, and has been successfully executed on both HP and Linux platforms. It supports multiple concurrent runs. PRatTlE contains error trapping and input file format verification, which allows clear visibility into the input data structure and intermediate calculations.
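
    The scaling relation the abstract describes amounts to one line of arithmetic; the sketch below is written in Python rather than Perl and does not reproduce the tool's input handling, so the names and argument ordering are assumptions.

        # Hedged sketch: heating at a requested body point estimated by scaling the
        # reference-point heating with the local-to-reference pressure ratio from CFD.
        def body_point_heating(q_reference, p_body_point, p_reference):
            return q_reference * (p_body_point / p_reference)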

  15. Cp/Cv Ratios Measured by the Sound Velocity Method Using Calculator-Based Laboratory Technology

    ERIC Educational Resources Information Center

    Branca, Mario; Soletta, Isabella

    2007-01-01

    The velocity of sound in a gas depends on its temperature, molar mass, and the ratio λ = Cp/Cv (heat capacity at constant pressure to heat capacity at constant volume). The λ values for air, oxygen, nitrogen, argon, and carbon dioxide were determined by measuring the velocity of the sound through the gases at…
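
    The underlying relation is the ideal-gas sound speed v = sqrt(λRT/M), so λ follows from a measured sound speed; the sketch below uses illustrative values for air, not the article's measurements.

        # Hedged sketch: lambda (= Cp/Cv) from a measured sound speed in an ideal gas.
        R = 8.314  # J mol^-1 K^-1

        def gamma_from_sound_speed(v, molar_mass_kg_per_mol, temperature_k):
            return v ** 2 * molar_mass_kg_per_mol / (R * temperature_k)

        # Dry air near 293 K: v ≈ 343 m/s and M ≈ 0.0290 kg/mol give lambda ≈ 1.40.
        print(gamma_from_sound_speed(343.0, 0.0290, 293.0))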

  16. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with the estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6 hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with the method assuming unknown nuclide ratios will be given to demonstrate the usefulness of the proposed approach. This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
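
    As a much-simplified illustration of the linear inverse step y = Mx with positivity and a ratio-informed prior, the sketch below solves a Tikhonov-augmented non-negative least-squares problem; the contribution's Variational Bayes treatment of the unknown prior covariance is not reproduced, and the way the prior spread is built from nuclide ratios is an assumption.

        # Hedged sketch: non-negative, diagonally regularized estimate of the source
        # term x in y = M x; prior_std could be derived from approximately known
        # nuclide ratios (larger spread = weaker penalty on that element).
        import numpy as np
        from scipy.optimize import nnls

        def regularized_source_term(M, y, prior_std):
            L = np.diag(1.0 / np.asarray(prior_std, dtype=float))
            M_aug = np.vstack([M, L])
            y_aug = np.concatenate([y, np.zeros(M.shape[1])])
            x, _ = nnls(M_aug, y_aug)                  # enforces x >= 0
            return x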

  17. Influence of storage duration and processing on chromatic attributes and flavonoid content of moxa floss.

    PubMed

    Lim, Min Yee; Huang, Jian; Zhao, Bai-xiao; Zou, Hui-qin; Yan, Yong-hong

    2016-01-01

    Moxibustion is an important traditional Chinese medicine therapy that uses heat from ignited moxa floss for disease treatment. The purpose of the present study is to establish a reproducible method to assess the color of moxa floss, discriminate samples based on chromatic coordinates, and explore the relationship between chromatic coordinates and total flavonoid content (TFC). Moxa floss samples of different storage years and production ratios were obtained from a moxa production factory in Henan Province, China. Chromatic coordinates (L*, a* and b*) were analyzed with an ultraviolet-visible spectrophotometer and the chroma (C*) and hue angle (h°) values were calculated. TFC was determined by a colorimetric method. Data were analyzed with correlation analysis and principal component analysis (PCA). Significant differences in the chromatic values and TFC were observed among samples of different storage years and production ratios. Samples with a higher production ratio displayed higher chromatic characteristics and lower TFC. Samples with longer storage years contained higher TFC. Preliminary separation of moxa floss production ratio was obtained by means of color feature maps developed using L*-a* or L*-b* as coordinates. PCA allowed the separation of the samples by their storage years and production ratios based on their chromatic characteristics and TFC. The use of a colorimetric technique and CIELAB coordinates coupled with chemometrics can be practical and objective for discriminating moxa floss of different storage years and production ratios.
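
    The chroma and hue angle values are conventionally derived from the a* and b* coordinates by the standard CIELCh conversion, sketched below; this is assumed to match the calculation used in the study.

        # Hedged sketch: chroma C* and hue angle h° from CIELAB a* and b*.
        import math

        def chroma_hue(a_star, b_star):
            chroma = math.hypot(a_star, b_star)                     # C* = sqrt(a*^2 + b*^2)
            hue = math.degrees(math.atan2(b_star, a_star)) % 360.0  # h° in [0, 360)
            return chroma, hue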

  18. Estimating adult sex ratios in nature.

    PubMed

    Ancona, Sergio; Dénes, Francisco V; Krüger, Oliver; Székely, Tamás; Beissinger, Steven R

    2017-09-19

    Adult sex ratio (ASR, the proportion of males in the adult population) is a central concept in population and evolutionary biology, and is also emerging as a major factor influencing mate choice, pair bonding and parental cooperation in both human and non-human societies. However, estimating ASR is fraught with difficulties stemming from the effects of spatial and temporal variation in the numbers of males and females, and detection/capture probabilities that differ between the sexes. Here, we critically evaluate methods for estimating ASR in wild animal populations, reviewing how recent statistical advances can be applied to handle some of these challenges. We review methods that directly account for detection differences between the sexes using counts of unmarked individuals (observed, trapped or killed) and counts of marked individuals using mark-recapture models. We review a third class of methods that do not directly sample the number of males and females, but instead estimate the sex ratio indirectly using relationships that emerge from demographic measures, such as survival, age structure, reproduction and assumed dynamics. We recommend that detection-based methods be used for estimating ASR in most situations, and point out that studies are needed that compare different ASR estimation methods and control for sex differences in dispersal. This article is part of the themed issue 'Adult sex ratios and reproductive decisions: a critical re-examination of sex differences in human and animal societies'. © 2017 The Author(s).

  19. The Optimum Production Method for Quality Improvement of Recycled Aggregates Using Sulfuric Acid and the Abrasion Method.

    PubMed

    Kim, Haseog; Park, Sangki; Kim, Hayong

    2016-07-29

    There has been increased deconstruction and demolition of reinforced concrete structures due to the aging of the structures and redevelopment of urban areas, resulting in the generation of massive amounts of construction waste. The production volume of waste concrete is projected to increase rapidly to over 100 million tons by 2020. However, due to their high cement paste content, recycled aggregates have low density and a high absorption ratio. They are mostly used for low-added-value land reclamation purposes rather than in multiple applications. This study was performed to determine an effective method to remove cement paste from recycled aggregates by using abrasion and substituting acidic water for the process water. The aim of this study is to analyze the quality of the recycled fine aggregates produced by a complex method and investigate the optimum manufacturing conditions for recycled fine aggregates based on a design of experiments. The experimental parameters considered were water ratio, coarse aggregate ratio, and abrasion time, and, as a result of the experiment, data concerning the properties of recycled sand were obtained. It was found that high-quality recycled fine aggregates can be obtained with 8.57 min of abrasion-crusher time and a recycled coarse aggregate ratio of over 1.5.

  20. Opening of DNA chain due to force applied on different locations.

    PubMed

    Singh, Amar; Modi, Tushar; Singh, Navin

    2016-09-01

    We consider a homogeneous DNA molecule and investigate the effect of the location of an applied force on the unzipping profile of the molecule. How the critical force varies as a function of the chain length, or number of base pairs, is the objective of this study. In general, the ratio of the critical force applied at the middle of the chain to that applied at one of the ends is two. Our study shows that this ratio depends on the length of the chain. This means that a force applied at a point is experienced by a section of the chain; beyond a certain length, the base pairs have no information about the applied force. When the chain length is shorter than this length, the ratio may vary. Only when the chain length exceeds a critical length is this ratio found to be two. Based on the de Gennes formulation, we developed a method to calculate these forces at zero temperature. The exact results at zero temperature match numerical calculations.

  1. Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios

    PubMed Central

    Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang

    2014-01-01

    Objectives Rotator cuff tear is a common cause of shoulder diseases. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment results, which consisted of 16 attributes. This study employed 2 data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into “tear” and “no tear” groups. Likelihood ratio and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method. The correction rate, sensitivity, specificity and area under the ROC curve for predicting a rotator cuff tear were statistically better in the ANN and decision tree models compared to logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability of a patient who has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as determine the probability of the presence of the disease to enhance diagnostic decision making for rotator cuff tears. PMID:24733553
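
    The Bayesian step behind Fagan's nomogram converts a pretest probability to odds, multiplies by the model-derived likelihood ratio, and converts back; the sketch below shows this relation with illustrative numbers, not values from the study.

        # Hedged sketch: post-test probability from a pretest probability and a
        # likelihood ratio (LR+ for a "tear" prediction, LR- for "no tear").
        def post_test_probability(pretest_prob, likelihood_ratio):
            pretest_odds = pretest_prob / (1.0 - pretest_prob)
            post_odds = pretest_odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        # e.g. a 40% pretest probability with an assumed LR+ of 5 gives about 0.77.
        print(post_test_probability(0.40, 5.0))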

  2. Acid-base accounting to predict post-mining drainage quality on surface mines.

    PubMed

    Skousen, J; Simmons, J; McDonald, L M; Ziemkiewicz, P

    2002-01-01

    Acid-base accounting (ABA) is an analytical procedure that provides values to help assess the acid-producing and acid-neutralizing potential of overburden rocks prior to coal mining and other large-scale excavations. This procedure was developed by West Virginia University scientists during the 1960s. After the passage of laws requiring an assessment of surface mining on water quality, ABA became a preferred method to predict post-mining water quality, and permitting decisions for surface mines are largely based on the values determined by ABA. To predict the post-mining water quality, the amount of acid-producing rock is compared with the amount of acid-neutralizing rock, and a prediction of the water quality at the site (whether acid or alkaline) is obtained. We gathered geologic and geographic data for 56 mined sites in West Virginia, which allowed us to estimate total overburden amounts, and values were determined for maximum potential acidity (MPA), neutralization potential (NP), net neutralization potential (NNP), and NP to MPA ratios for each site based on ABA. These values were correlated to post-mining water quality from springs or seeps on the mined property. Overburden mass was determined by three methods, with the method used by Pennsylvania researchers showing the most accurate results for overburden mass. A poor relationship existed between MPA and post-mining water quality, NP was intermediate, and NNP and the NP to MPA ratio showed the best prediction accuracy. In this study, NNP and the NP to MPA ratio gave identical water quality prediction results. Therefore, with NP to MPA ratios, values were separated into categories: <1 should produce acid drainage, between 1 and 2 can produce either acid or alkaline water conditions, and >2 should produce alkaline water. On our 56 surface mined sites, NP to MPA ratios varied from 0.1 to 31, and six sites (11%) did not fit the expected pattern using this category approach. Two sites with ratios <1 did not produce acid drainage as predicted (the drainage was neutral), and four sites with a ratio >2 produced acid drainage when they should not have. These latter four sites were either mined very slowly, had nonrepresentative ABA data, received water from an adjacent underground mine, or had a surface mining practice that degraded the water. In general, an NP to MPA ratio of <1 produced mostly acid drainage sites, between 1 and 2 produced mostly alkaline drainage sites, while NP to MPA ratios >2 produced alkaline drainage with a few exceptions. Using these values, ABA is a good tool to assess overburden quality before surface mining and to predict post-mining drainage quality after mining. The interpretation from ABA values was correct in 50 out of 52 cases (96%), excluding the four anomalous sites, which had acid water for reasons other than overburden quality.
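
    The screening rule described above reduces to a ratio comparison; the sketch below encodes the reported NP to MPA categories and is illustrative only, since real permitting decisions weigh additional site information.

        # Hedged sketch: expected post-mining drainage class from the NP:MPA ratio.
        def drainage_prediction(np_value, mpa_value):
            ratio = np_value / mpa_value
            if ratio < 1.0:
                return "acid drainage expected"
            if ratio <= 2.0:
                return "either acid or alkaline drainage possible"
            return "alkaline drainage expected"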

  3. Spectrophotometric Methods for Simultaneous Determination of Sofosbuvir and Ledipasvir (HARVONI Tablet): Comparative Study with Two Generic Products.

    PubMed

    Abo-Talib, Nisreen F; El-Ghobashy, Mohamed R; Tammam, Marwa H

    2017-07-01

    Sofosbuvir and ledipasvir are the first drugs in a combination pill to treat chronic hepatitis C virus. Simple, sensitive, and rapid spectrophotometric methods are presented for the determination of sofosbuvir and ledipasvir in their combined dosage form. These methods were based on direct measurement of ledipasvir at 333 nm (due to the lack of interference of sofosbuvir) over a concentration range of 4.0-14.0 µg/mL, with a mean recovery of 100.78 ± 0.64%. Sofosbuvir was determined, without prior separation, by third-derivative values at 281 nm; derivative ratio values at 265.8 nm utilizing 5.0 µg/mL ledipasvir as a divisor; the ratio difference method using values at 270 and 250 nm using 5.0 µg/mL ledipasvir as a divisor; and the ratio subtraction method using values at 261 nm. These methods were found to be linear for sofosbuvir over a concentration range of 5.0-35.0 µg/mL. The suggested methods were validated according to International Conference on Harmonization guidelines. Statistical analysis of the results showed no significant difference between the proposed methods and the manufacturer's LC method of determination with respect to accuracy and precision. These methods were used to compare the equivalence of an innovator drug dosage form and two generic drug dosage forms of the same strength.

  4. Influence of ground level SO2 on the diffuse to direct irradiance ratio in the middle ultraviolet

    NASA Technical Reports Server (NTRS)

    Klenk, K. F.; Green, A. E. S.

    1977-01-01

    The dependence of the ratio of the diffuse to direct irradiance at the ground was examined for a wavelength of 315.1 nm. A passive remote sensing method based on ratio measurements for obtaining the optical thickness of SO2 in the vertical column was proposed. If, in addition to the ratio measurements, the SO2 density at the ground is determined using an appropriate point-sampling technique, then some inference on the vertical extent of SO2 can be drawn. An analytic representation of the ratio is presented for a wide range of SO2 and aerosol optical thicknesses and solar zenith angles, which can be inverted algebraically to give the SO2 optical thickness in terms of the measured ratio, the aerosol optical thickness, and the solar zenith angle.

  5. An internal reference model-based PRF temperature mapping method with Cramer-Rao lower bound noise performance analysis.

    PubMed

    Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng

    2009-11-01

    The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented with multi-echo gradient echo (GRE) sequence using a fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat which contain temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map and thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of sample water:fat signal ratio on the accuracy of the temperature estimate is evaluated in a water-fat mixed phantom experiment with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.

  6. [The diagnostic value of ultrasonic elastography and ultrasonography comprehensive score in cervical lesions].

    PubMed

    Lu, R; Xiao, Y

    2017-07-18

    Objective: To evaluate the clinical value of ultrasonic elastography and an ultrasonography comprehensive scoring method in the diagnosis of cervical lesions. Methods: A total of 116 patients were selected from the Department of Gynecology of the first hospital affiliated with Central South University from March 2014 to September 2015. All of the lesions were preoperatively examined by Doppler ultrasound and elastography. The elasticity score was determined by a 5-point scoring method. Calculation of the strain ratio was based on a comparison of the average strain measured in the lesion with that of adjacent tissue of the same depth, size, and shape. All these ultrasonic parameters were quantified and summed to give ultrasonography comprehensive scores. Using surgical pathology as the gold standard, the sensitivity, specificity, and accuracy of Doppler ultrasound, the elasticity score and strain ratio methods, and the ultrasonography comprehensive scoring method were comparatively analyzed. Results: (1) The sensitivity, specificity, and accuracy of Doppler ultrasound in diagnosing cervical lesions were 82.89% (63/76), 85.0% (34/40), and 83.62% (97/116), respectively. (2) The sensitivity, specificity, and accuracy of the elasticity score method were 77.63% (59/76), 82.5% (33/40), and 79.31% (92/116), respectively; the sensitivity, specificity, and accuracy of the strain ratio method were 84.21% (64/76), 87.5% (35/40), and 85.34% (99/116), respectively. (3) The sensitivity, specificity, and accuracy of the ultrasonography comprehensive scoring method were 90.79% (69/76), 92.5% (37/40), and 91.38% (106/116), respectively. Conclusion: (1) Ultrasonic elastography has clear diagnostic value in cervical lesions, and strain ratio measurement can be more objective than the elasticity score method. (2) The combined application of the ultrasonography comprehensive scoring method, ultrasonic elastography, and conventional sonography was more accurate than any single parameter.

  7. Validated stability-indicating spectrophotometric methods for the determination of Silodosin in the presence of its degradation products.

    PubMed

    Boltia, Shereen A; Abdelkawy, Mohammed; Mohammed, Taghreed A; Mostafa, Nahla N

    2018-09-05

    Five simple, rapid, accurate, and precise spectrophotometric methods are developed for the determination of Silodosin (SLD) in the presence of its acid-induced and oxidative-induced degradation products. Method A is based on the dual wavelength (DW) method; two wavelengths are selected at which the absorbance of the oxidative-induced degradation product is the same, so wavelengths 352 and 377 nm are used to determine SLD in the presence of its oxidative-induced degradation product. Method B depends on the induced dual wavelength (IDW) theory, which is based on selecting two wavelengths on the zero-order spectrum of SLD where the difference in absorbance between them for the spectrum of the acid-induced degradation products is not equal to zero; by multiplying by the equality factor, the absorbance difference is made zero for the acid-induced degradation product while it remains significant for SLD. Method C is first-derivative (1D) spectrophotometry of SLD and its degradation products; peak amplitudes are measured at 317 and 357 nm. Method D is ratio difference spectrophotometry (RD), where the drug is determined by the difference in amplitude between two selected wavelengths, at 350 and 277 nm for the ratio spectrum of SLD and its acid-induced degradation products, while for the ratio spectrum of SLD and its oxidative-induced degradation products the difference in amplitude is measured at 345 and 292 nm. Method E depends on measuring peak amplitudes of the first derivative of the ratio spectrum (1DD), where peak amplitudes are measured at 330 nm in the presence of the acid-induced degradation product and measured by the peak-to-peak technique at 326 and 369 nm in the presence of the oxidative-induced degradation product. The proposed methods are validated according to ICH recommendations. The calibration curves for all the proposed methods are linear over a concentration range of 5-70 μg/mL. The selectivity of the proposed methods was tested using different laboratory-prepared mixtures of SLD with either its acid-induced or oxidative-induced degradation products, showing specificity for SLD with accepted recovery values. The proposed methods have been successfully applied to the analysis of SLD in pharmaceutical dosage forms without interference from additives. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. A scaling transformation for classifier output based on likelihood ratio: Applications to a CAD workstation for diagnosis of breast cancer

    PubMed Central

    Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei

    2012-01-01

    Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database sizes on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651
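
    As an illustration of the general idea of mapping one classifier's output onto another's scale, the sketch below implements the cumulative-distribution-function matching variant mentioned among the comparison transformations (not the authors' likelihood-ratio scaling); the data and names are hypothetical.

    ```python
    import numpy as np

    def cdf_matching_transform(outputs_a, outputs_b):
        """Return a function mapping classifier A's output onto classifier B's scale
        by matching empirical cumulative distribution functions."""
        a_sorted = np.sort(np.asarray(outputs_a, float))
        b_sorted = np.sort(np.asarray(outputs_b, float))

        def transform(x):
            # Empirical CDF value of x under A, then the matching quantile of B
            p = np.searchsorted(a_sorted, x, side="right") / len(a_sorted)
            return np.quantile(b_sorted, np.clip(p, 0.0, 1.0))

        return transform

    # Example: map a computer score of 0.73 onto a radiologist's 0-100 rating scale
    computer_scores = np.random.default_rng(2).beta(2, 2, size=500)
    radiologist_ratings = np.random.default_rng(3).integers(0, 101, size=500)
    to_radiologist_scale = cdf_matching_transform(computer_scores, radiologist_ratings)
    print(to_radiologist_scale(0.73))
    ```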

  9. IRT Model Selection Methods for Dichotomous Items

    ERIC Educational Resources Information Center

    Kang, Taehoon; Cohen, Allan S.

    2007-01-01

    Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…

  10. LASER BIOLOGY AND MEDICINE: Laser analysis of the 13C/12C isotope ratio in CO2 in exhaled air

    NASA Astrophysics Data System (ADS)

    Stepanov, E. V.

    2002-11-01

    Tunable diode lasers (TDLs) are applied to the diagnostics of gastroenterological diseases using respiratory tests and preparations enriched with the stable 13C isotope. This method of analysis of the 13C/12C isotope ratio in CO2 in exhaled air is based on the selective measurement of the resonance absorption at the vibrational-rotational structure of 12CO2 and 13CO2. The CO2 transmission spectra in the region of 4.35 μm were measured with a PbEuSe double-heterostructure TDL. The accuracy of carbon isotope ratio measurements in CO2 of exhaled air performed with the TDL was ~0.5%. The data of clinical tests of the developed laser-based analyser are presented.
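
    For readers unfamiliar with how a measured 13C/12C ratio is usually reported in breath tests, the following sketch converts a raw ratio to the conventional delta notation relative to the VPDB standard; the reference constant is a commonly cited literature value and the example ratios are hypothetical.

    ```python
    R_VPDB = 0.011180  # commonly cited 13C/12C ratio of the VPDB reference standard

    def delta13C_permil(r_sample):
        """Express a measured 13C/12C ratio in conventional delta notation (per mil)."""
        return (r_sample / R_VPDB - 1.0) * 1000.0

    # Hypothetical ratios from the 13CO2 and 12CO2 absorption lines before and
    # after administering a 13C-labelled preparation
    baseline = delta13C_permil(0.010901)
    post_dose = delta13C_permil(0.010956)
    print(post_dose - baseline)  # delta over baseline, the quantity used in breath tests
    ```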

  11. Prediction on the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase based on gene expression programming.

    PubMed

    Li, Yuqin; You, Guirong; Jia, Baoxiu; Si, Hongzong; Yao, Xiaojun

    2014-01-01

    Quantitative structure-activity relationships (QSAR) were developed to predict the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase via the heuristic method (HM) and gene expression programming (GEP). The descriptors of 33 pyrrolidine derivatives were calculated with the software CODESSA, which computes quantum chemical, topological, geometrical, constitutional, and electrostatic descriptors. HM was also used to preselect 5 appropriate molecular descriptors. Linear and nonlinear QSAR models were developed based on HM and GEP separately, and the two prediction models gave good correlation coefficients (R²) of 0.93 and 0.94, respectively. The two QSAR models are useful for predicting the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase during the discovery of new anticancer drugs and provide theoretical information for studying new drugs.

  12. A novel hybrid MCDM model for performance evaluation of research and technology organizations based on BSC approach.

    PubMed

    Varmazyar, Mohsen; Dehghanbaghi, Maryam; Afkhami, Mehdi

    2016-10-01

    Balanced Scorecard (BSC) is a strategic evaluation tool using both financial and non-financial indicators to determine the business performance of organizations or companies. In this paper, a new integrated approach based on the BSC and multi-criteria decision making (MCDM) methods is proposed to evaluate the performance of research centers of a research and technology organization (RTO) in Iran. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is employed to reflect the interdependencies among BSC perspectives. Then, the Analytic Network Process (ANP) is utilized to weight the indices influencing the considered problem. In the next step, we apply four MCDM methods, Additive Ratio Assessment (ARAS), Complex Proportional Assessment (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), for ranking the alternatives. Finally, the utility interval technique is applied to combine the ranking results of the MCDM methods. Weighted utility intervals are computed by constructing a correlation matrix between the ranking methods. A real case is presented to show the efficacy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
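
    Of the four ranking methods combined in the paper, TOPSIS is the most compact to illustrate; the sketch below ranks a few hypothetical research centres from a weighted decision matrix. The weights, criteria, and scores are invented for illustration and do not come from the study.

    ```python
    import numpy as np

    def topsis(decision_matrix, weights, benefit):
        """Minimal TOPSIS sketch: rows are alternatives, columns are criteria.

        benefit[j] is True for criteria to maximize, False for cost criteria.
        Returns the closeness coefficient of each alternative (higher is better).
        """
        X = np.asarray(decision_matrix, dtype=float)
        w = np.asarray(weights, dtype=float)
        # Vector normalization, then apply criterion weights
        V = X / np.linalg.norm(X, axis=0) * w
        # Ideal and anti-ideal solutions per criterion
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        return d_neg / (d_pos + d_neg)

    # Four hypothetical research centres scored on three weighted BSC indices
    scores = topsis([[7, 0.4, 120], [9, 0.6, 90], [6, 0.5, 150], [8, 0.3, 110]],
                    weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
    print(scores.argsort()[::-1])  # ranking of alternatives, best first
    ```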

  13. Label-free and pH-sensitive colorimetric materials for the sensing of urea

    NASA Astrophysics Data System (ADS)

    Li, Lu; Long, Yue; Gao, Jin-Ming; Song, Kai; Yang, Guoqiang

    2016-02-01

    This communication demonstrates a facile method for naked-eye detection of urea based on the structure color change of pH-sensitive photonic crystals. The insertion of urease provides excellent selectivity over other molecules. The detection of urea in different concentration ranges could be realized by changing the molar ratio between the functional monomer and cross-linker. Electronic supplementary information (ESI) available: Materials and chemicals, characterization, experimental details, and SEM images. See DOI: 10.1039/c5nr07690k

  14. Compaction of rolling circle amplification products increases signal integrity and signal-to-noise ratio

    PubMed Central

    Clausson, Carl-Magnus; Arngården, Linda; Ishaq, Omer; Klaesson, Axel; Kühnemund, Malte; Grannas, Karin; Koos, Björn; Qian, Xiaoyan; Ranefall, Petter; Krzywkowski, Tomasz; Brismar, Hjalmar; Nilsson, Mats; Wählby, Carolina; Söderberg, Ola

    2015-01-01

    Rolling circle amplification (RCA) for generation of distinct fluorescent signals in situ relies upon the self-collapsing properties of single-stranded DNA in commonly used RCA-based methods. By introducing a cross-hybridizing DNA oligonucleotide during rolling circle amplification, we demonstrate that the fluorophore-labeled RCA products (RCPs) become smaller. The reduced size of RCPs increases the local concentration of fluorophores and as a result, the signal intensity increases together with the signal-to-noise ratio. Furthermore, we have found that RCPs sometimes tend to disintegrate and may be recorded as several RCPs, a trait that is prevented with our cross-hybridizing DNA oligonucleotide. These effects generated by compaction of RCPs improve accuracy of visual as well as automated in situ analysis for RCA based methods, such as proximity ligation assays (PLA) and padlock probes. PMID:26202090

  15. Fabrication of Microcapsules for Dye-Doped Polymer-Dispersed Liquid Crystal-Based Smart Windows.

    PubMed

    Kim, Mingyun; Park, Kyun Joo; Seok, Seunghwan; Ok, Jong Min; Jung, Hee-Tae; Choe, Jaehoon; Kim, Do Hyun

    2015-08-19

    A dye-doped polymer-dispersed liquid crystal (PDLC) is an attractive material for application in smart windows. Smart windows using a PDLC can be operated simply and have a high contrast ratio compared with devices that employ photochromic or thermochromic materials. However, in conventional dye-doped PDLC methods, dye contamination can cause problems and has limited the commercialization of electric smart windows. Here, we report an approach that resolves dye-related problems by encapsulating the dye in monodispersed capsules. With encapsulation, a fabricated dye-doped PDLC had a contrast ratio of >120 at 600 nm. This fabrication method of encapsulating the dye in core-shell structured microcapsules in a dye-doped PDLC device provides a practical platform for dye-doped PDLC-based smart windows.

  16. NGS-based likelihood ratio for identifying contributors in two- and three-person DNA mixtures.

    PubMed

    Chan Mun Wei, Joshua; Zhao, Zicheng; Li, Shuai Cheng; Ng, Yen Kaow

    2018-06-01

    DNA fingerprinting, also known as DNA profiling, serves as a standard procedure in forensics to identify a person by the short tandem repeat (STR) loci in their DNA. By comparing the STR loci between DNA samples, practitioners can calculate a probability of match to identify the contributors of a DNA mixture. Most existing methods are based on 13 core STR loci which were identified by the Federal Bureau of Investigation (FBI). Analyses of DNA mixtures based on these loci for forensic purposes are highly variable in their procedures, and suffer from subjectivity as well as bias in complex mixture interpretation. With the emergence of next-generation sequencing (NGS) technologies, the sequencing of billions of DNA molecules can be parallelized, thus greatly increasing throughput and reducing the associated costs. This allows the creation of new techniques that incorporate more loci to enable complex mixture interpretation. In this paper, we propose a computation of the likelihood ratio that uses NGS data for DNA testing on mixed samples. We have applied the method to 4480 simulated DNA mixtures, which consist of various mixture proportions of 8 unrelated whole-genome sequencing datasets. The results confirm the feasibility of utilizing NGS data in DNA mixture interpretation. We observed an average likelihood ratio as high as 285,978 for two-person mixtures. Using our method, all 224 identity tests for two-person and three-person mixtures were correctly resolved. Copyright © 2018 Elsevier Ltd. All rights reserved.
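
    The core statistic is the likelihood ratio P(evidence | prosecution hypothesis) / P(evidence | defence hypothesis), usually accumulated over independent loci. The toy sketch below shows only that aggregation step; the per-locus likelihoods for an NGS mixture model are far more involved and are not reproduced here, so all numbers are hypothetical.

    ```python
    import math

    def combined_likelihood_ratio(per_locus_likelihoods):
        """Combine independent per-locus likelihoods into one overall likelihood ratio.

        per_locus_likelihoods: iterable of (P(evidence | H_p), P(evidence | H_d)).
        Loci are assumed independent, so log-likelihood ratios add.
        """
        log_lr = 0.0
        for p_hp, p_hd in per_locus_likelihoods:
            log_lr += math.log(p_hp) - math.log(p_hd)
        return math.exp(log_lr)

    # Toy example with three hypothetical loci
    print(combined_likelihood_ratio([(0.9, 0.05), (0.8, 0.1), (0.95, 0.2)]))
    ```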

  17. Assessment of Listeria sp. Interference Using a Molecular Assay To Detect Listeria monocytogenes in Food.

    PubMed

    Zittermann, Sandra I; Stanghini, Brenda; See, Ryan Soo; Melano, Roberto G; Boleszczuk, Peter; Murphy, Allana; Maki, Anne; Mallo, Gustavo V

    2016-01-01

    Detection of Listeria monocytogenes in food is currently based on enrichment methods. When L. monocytogenes is present with other Listeria species in food, the species compete during the enrichment process. Overgrowth competition of the nonpathogenic Listeria species might result in false-negative results obtained with the current reference methods. This potential issue was noted when 50 food samples artificially spiked with L. monocytogenes were tested with a real-time PCR assay and Canada's current reference method, MFHPB-30. Eleven of the samples studied were from foods naturally contaminated with Listeria species other than those used for spiking. The real-time PCR assay detected L. monocytogenes in all 11 of these samples; however, only 6 of these samples were positive by the MFHPB-30 method. To determine whether L. monocytogenes detection can be affected by other species of the same genus due to competition, an L. monocytogenes strain and a Listeria innocua strain with a faster rate of growth in the enrichment broth were artificially coinoculated at different ratios into ground pork meat samples and cultured according to the MFHPB-30 method. L. monocytogenes was detected only by the MFHPB-30 method when L. monocytogenes/L. innocua ratios were 6.0 or higher. In contrast, using the same enrichments, the real-time PCR assay detected L. monocytogenes at ratios as low as 0.6. Taken together, these findings support the hypothesis that L. monocytogenes can be outcompeted by L. innocua during the MFHPB-30 enrichment phase. However, more reliable detection of L. monocytogenes in this situation can be achieved by a PCR-based method mainly because of its sensitivity.

  18. Across-plane thermal characterization of films based on amplitude-frequency profile in photothermal technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shen; Wang, Xinwei, E-mail: xwang3@iastate.edu

    2014-10-15

    This work develops an amplitude method for the photothermal (PT) technique to analyze the amplitude of the thermal radiation signal from the surface of a multilayered film sample. The thermal conductivity of any individual layer in the sample can thereby be determined. Chemical vapor deposited SiC film samples (samples 1 to 3: 2.5 to 3.5 μm thickness) with different ratios of Si to C and a thermally oxidized SiO2 film (500 nm thickness) on silicon substrates are studied using the amplitude method. The thermal conductivity determined with the amplitude method is 3.58, 3.59, and 2.59 W/m⋅K for samples 1 to 3, with ±10% uncertainty. These results are verified by the phase shift method, and sound agreement is obtained. The measured thermal conductivity (k) of SiC is much lower than the value of bulk SiC. The large k reduction is caused by the structural difference revealed by Raman spectroscopy. For the SiO2 film, the thermal conductivity is measured to be 1.68 ± 0.17 W/m⋅K, a little higher than that obtained by the phase shift method: 1.31 ± 0.06 W/m⋅K. A sensitivity analysis of thermal conductivity and interfacial resistance is conducted for the amplitude method. Its weak sensitivity to the thermal contact resistance enables the amplitude method to determine the thermal conductivity of a film sample with little effect from the interface thermal resistance between the film and substrate. The normalized ratio of the amplitude at a high frequency to that at a low frequency provides a reliable way to evaluate the ratio of the film's effusivity to that of the substrate.

  19. An improved method for predicting the evolution of the characteristic parameters of an information system

    NASA Astrophysics Data System (ADS)

    Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.

    2018-03-01

    The article proposes a forecasting method that, given the entropy and the levels of errors of the first and second kind, determines the allowable horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method is that changes in the characteristic parameters of the information system are expressed as the magnitude of the increment in its entropy ratios. When a predetermined value of the prediction error ratio, i.e. of the system entropy, is reached, the characteristic parameters of the system and the forecasting depth in time are estimated. The resulting values are optimal, since at that moment the system possesses the best entropy ratio as a measure of the organization and orderliness of its structure. For estimating the depth of prediction, it is expedient to use the maximum entropy principle.

  20. Intra-generational Redistribution under Public Pension Planning Based on Generation-based Funding Scheme

    NASA Astrophysics Data System (ADS)

    Banjo, Daisuke; Tamura, Hiroyuki; Murata, Tadahiko

    In this paper, we propose a method of determining the pension in the generation-based funding scheme. In this proposal, we include two types of pensions in the scheme. One is the payment-amount related pension and the other is the payment-frequency related pension. We set the ratio of the total amount of payment-amount related pension to the total amount of both pensions, and simulate income gaps and the relationship between contributions and benefits for each individual when the proposed method is applied.

  1. Comparative study of protoporphyrin IX fluorescence image enhancement methods to improve an optical imaging system for oral cancer detection

    NASA Astrophysics Data System (ADS)

    Jiang, Ching-Fen; Wang, Chih-Yu; Chiang, Chun-Ping

    2011-07-01

    Optoelectronics techniques to induce protoporphyrin IX fluorescence with topically applied 5-aminolevulinic acid on the oral mucosa have been developed to noninvasively detect oral cancer. Fluorescence imaging enables wide-area screening for oral premalignancy, but the lack of an adequate fluorescence enhancement method restricts the clinical imaging application of these techniques. This study aimed to develop a reliable fluorescence enhancement method to improve PpIX fluorescence imaging systems for oral cancer detection. Three contrast features, red-green-blue reflectance difference, R/B ratio, and R/G ratio, were developed first based on the optical properties of the fluorescence images. A comparative study was then carried out with one negative control and four biopsy confirmed clinical cases to validate the optimal image processing method for the detection of the distribution of malignancy. The results showed the superiority of the R/G ratio in terms of yielding a better contrast between normal and neoplastic tissue, and this method was less prone to errors in detection. Quantitative comparison with the clinical diagnoses in the four neoplastic cases showed that the regions of premalignancy obtained using the proposed method accorded with the expert's determination, suggesting the potential clinical application of this method for the detection of oral cancer.
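
    A plausible implementation of the R/G contrast feature is simply a pixel-wise ratio of the red and green channels of the fluorescence image, as sketched below; the thresholding step mentioned in the comment is an assumed workflow, not the authors' exact pipeline.

    ```python
    import numpy as np

    def red_green_ratio(rgb_image, eps=1e-6):
        """Compute the R/G ratio feature from an RGB fluorescence image.

        rgb_image: array of shape (H, W, 3); eps avoids division by zero.
        Higher values are expected where PpIX red fluorescence dominates.
        """
        r = rgb_image[..., 0].astype(float)
        g = rgb_image[..., 1].astype(float)
        return r / (g + eps)

    # A lesion map could then be obtained by thresholding the ratio image,
    # e.g. mask = red_green_ratio(img) > threshold, with the threshold chosen
    # from a negative-control image (assumed workflow).
    ```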

  2. Electrodialytic in-line preconcentration for ionic solute analysis.

    PubMed

    Ohira, Shin-Ichi; Yamasaki, Takayuki; Koda, Takumi; Kodama, Yuko; Toda, Kei

    2018-04-01

    Preconcentration is an effective way to improve analytical sensitivity. Many types of methods are used for enrichment of ionic solute analytes. However, current methods are batchwise and include procedures such as trapping and elution. In this manuscript, we propose in-line electrodialytic enrichment of ionic solutes. The method can enrich ionic solutes within seconds by quantitative transfer of analytes from the sample solution to the acceptor solution under an electric field. Because of the quantitative ion transfer, the enrichment factor (the ratio of the analyte concentration in the obtained acceptor solution to that in the sample) depends only on the flow rate ratio of the sample solution to the acceptor solution. The concentration ratios matched the flow rate ratios up to 70, 20, and 70 for the tested inorganic cations, inorganic anions, and heavy metal ions, respectively. The sensitivity of ionic solute determinations is also improved according to the enrichment factor. The method can simultaneously achieve matrix isolation and enrichment. It was successfully applied to determine trace concentrations of chloroacetic acids in tap water. The regulated concentration levels cannot be determined by conventional high-performance liquid chromatography with ultraviolet detection (HPLC-UV) without enrichment; however, enrichment with the present method is effective for assessing tap water quality by improving the limits of detection of HPLC-UV. A standard addition test with real tap water samples showed good recoveries (94.9-109.6%). Copyright © 2017 Elsevier B.V. All rights reserved.
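
    The enrichment factor follows from a simple mass balance: with quantitative ion transfer, all analyte entering with the sample stream leaves with the acceptor stream, so the concentration ratio equals the flow-rate ratio. A one-line sketch, with assumed flow rates:

    ```python
    def enrichment_factor(sample_flow, acceptor_flow):
        """Under quantitative ion transfer, mass balance gives
        C_acceptor / C_sample = Q_sample / Q_acceptor,
        so the enrichment factor is set by the flow-rate ratio alone."""
        return sample_flow / acceptor_flow

    # Example: 1.4 mL/min sample against 0.02 mL/min acceptor gives 70-fold enrichment
    print(enrichment_factor(1.4, 0.02))
    ```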

  3. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Mapping quantitative trait loci for traits defined as ratios.

    PubMed

    Yang, Runqing; Li, Jiahan; Xu, Shizhong

    2008-03-01

    Many traits are defined as ratios of two quantitative traits. Methods of QTL mapping for regular quantitative traits are not optimal when applied to ratios due to lack of normality for traits defined as ratios. We develop a new method of QTL mapping for traits defined as ratios. The new method uses a special linear combination of the two component traits, and thus takes advantage of the normal property of the new variable. Simulation study shows that the new method can substantially increase the statistical power of QTL detection relative to the method which treats ratios as regular quantitative traits. The new method also outperforms the method that uses Box-Cox transformed ratio as the phenotype. A real example of QTL mapping for relative growth rate in soybean demonstrates that the new method can detect more QTL than existing methods of QTL mapping for traits defined as ratios.

  5. [A new method of evaluating the utilization of nutrients (carbohydrates, amino acids and fatty acids) on the plastic and energy goals in the animal body].

    PubMed

    Virovets, O A; Gapparov, M M

    1998-01-01

    Using a new method based on detecting in blood serum the radioactivity of water formed from tritium-labelled precursors (glucose; the amino acids valine, serine, and histidine; and palmitic acid), the distribution of these nutrients between oxidative and anabolic metabolic pathways was determined. The work was carried out on laboratory rats. In young pubertal rats, the ratio of the two fluxes for glucose was 2.83, i.e. glucose was used to a greater degree as an energy substrate. In contrast, for palmitic acid this ratio was 0.10, i.e. it was incorporated to a greater degree into the plastic (structural) material of the organism. For serine, histidine, and valine the ratios were 0.34, 0.71, and 0.46, respectively. In growing rats, the distribution of fluxes shifted toward the anabolic pathway (ratio 0.19); in old rats it shifted toward oxidation (ratio 0.71).

  6. Influence of fuel-nitrate ratio on the structural and magnetic properties of Fe and Cr based spinels prepared by solution self combustion method

    NASA Astrophysics Data System (ADS)

    Sijo, A. K.

    2017-11-01

    In this study, we report the synthesis of nano-sized CoCrFeO4 and NiCrFeO4 using the solution self-combustion method and the variation in their magnetic and structural properties with different fuel-to-nitrate ratios: fuel-lean, fuel-rich, and stoichiometric. Citric acid is used as the fuel. XRD analysis of the samples confirms the formation of pure spinel-phase nanoparticles in the fuel-rich and stoichiometric cases. However, CoCrFeO4 and NiCrFeO4 samples prepared under the fuel-lean condition show a small amount of impurity phases: α-Ni in fuel-lean NiCrFeO4 and α-Co in fuel-lean CoCrFeO4. Fuel-lean samples possess high saturation magnetization. The stoichiometric ratio results in the finest nanoparticles, and the structural and magnetic properties depend critically on the fuel-to-nitrate ratio.

  7. Novel method for measurement of transistor gate length using energy-filtered transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Lee, Sungho; Kim, Tae-Hoon; Kang, Jonghyuk; Yang, Cheol-Woong

    2016-12-01

    As the feature size of devices continues to decrease, transmission electron microscopy (TEM) is becoming indispensable for measuring the critical dimension (CD) of structures. Semiconductors consist primarily of silicon-based materials such as silicon, silicon dioxide, and silicon nitride, and the electrons transmitted through a plan-view TEM sample provide diverse information about various overlapped silicon-based materials. This information is exceedingly complex, which makes it difficult to clarify the boundary to be measured. Therefore, we propose a simple measurement method using energy-filtered TEM (EF-TEM). A precise and effective measurement condition was obtained by determining the maximum value of the integrated area ratio of the electron energy loss spectrum at the boundary to be measured. This method employs an adjustable slit allowing only electrons with a certain energy range to pass. EF-TEM imaging showed a sharp transition at the boundary when the energy-filter’s passband centre was set at 90 eV, with a slit width of 40 eV. This was the optimum condition for the CD measurement of silicon-based materials involving silicon nitride. Electron energy loss spectroscopy (EELS) and EF-TEM images were used to verify this method, which makes it possible to measure the transistor gate length in a dynamic random access memory manufactured using 35 nm process technology. This method can be adapted to measure the CD of other non-silicon-based materials using the EELS area ratio of the boundary materials.

  8. Development of Decision Making Algorithm for Control of Sea Cargo Containers by ``TAGGED'' Neutron Method

    NASA Astrophysics Data System (ADS)

    Anan'ev, A. A.; Belichenko, S. G.; Bogolyubov, E. P.; Bochkarev, O. V.; Petrov, E. V.; Polishchuk, A. M.; Udaltsov, A. Yu.

    2009-12-01

    Several groups in Russia and abroad are currently developing systems based on the "tagged" neutron method (API method) for the detection of dangerous materials, including high explosives (HE). Particular attention is paid to the possibility of detecting dangerous objects inside a sea cargo container. The energy gamma-spectrum registered from the object under inspection is used to determine the oxygen/carbon and nitrogen/carbon chemical ratios, from which a dangerous object is distinguished from a harmless one. The material filling the container, however, gives rise to additional effects: rescattering and moderation of the 14 MeV primary neutrons from the generator, and attenuation of the secondary gamma radiation produced by inelastic neutron scattering on the objects under inspection. These effects distort the energy gamma-response of the examined object and therefore prevent correct recognition of the chemical ratios. These difficulties are taken into account in the analytical method presented in the paper. The method has been validated against experimental data obtained with the system for HE detection in sea cargo based on the API method and developed at VNIIA. The influence of shielding materials (wood and iron) on HE detection and identification is considered. Results of applying the method to experimental data on HE simulator measurements (tetryl, trotyl, hexogen) are presented.

  9. A new method of converter transformer protection without commutation failure

    NASA Astrophysics Data System (ADS)

    Zhang, Jiayu; Kong, Bo; Liu, Mingchang; Zhang, Jun; Guo, Jianhong; Jing, Xu

    2018-01-01

    With the development of AC/DC hybrid transmission technology, the converter transformer serves as the node between AC and DC in HVDC transmission, and its reliable, safe, and stable operation plays an important role in DC transmission. Commutation failure, a common problem in DC transmission, poses a serious threat to the safe and stable operation of the power grid. Based on the commutation relation between the AC bus voltage of the converter station and the output DC voltage of the converter, a generalized transformation ratio is defined, and a new method of converter transformer protection based on this generalized ratio is put forward. The method uses the generalized ratio to monitor faulty or abnormal commutation components on line and uses the current characteristics of the valve-side bushing CT to identify converter transformer faults accurately, without being influenced by the presence of commutation failure. Fault analysis and EMTDC/PSCAD simulation show that the protection operates correctly under various converter fault conditions.

  10. Micro-structural characterization of the hydration products of bauxite-calcination-method red mud-coal gangue based cementitious materials.

    PubMed

    Liu, Xiaoming; Zhang, Na; Yao, Yuan; Sun, Henghu; Feng, Huan

    2013-11-15

    In this research, the micro-structural characterization of the hydration products of red mud-coal gangue based cementitious materials has been investigated through SEM-EDS, ²⁷Al MAS NMR, and ²⁹Si MAS NMR techniques, where the red mud used was derived from the bauxite calcination method. The results show that the red mud-coal gangue based cementitious materials mainly form fibrous C-A-S-H gel and needle-shaped/rod-like AFt in the early hydration period. As hydration proceeds, densification of the pastes is promoted, resulting in the development of strength. EDS analysis shows that as the Ca/Si ratio of the red mud-coal gangue based cementitious materials increases, the average Ca/Si and Ca/(Si+Al) atomic ratios of the C-A-S-H gel increase, while the average Al/Si atomic ratio of the C-A-S-H gel decreases. MAS NMR analysis reveals that Al in the hydration products exists in the forms of Al(IV) and Al(VI), but mainly as Al(VI). Increasing the Ca/Si ratio of the raw material promotes the conversion of [AlO4] to [AlO6] and inhibits the combination of [AlO4] with [SiO4] to form C-A-S-H gel. Meanwhile, the degree of polymerization of [SiO4] in the hydration products declines. Published by Elsevier B.V.

  11. Poisson regression models outperform the geometrical model in estimating the peak-to-trough ratio of seasonal variation: a simulation study.

    PubMed

    Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C

    2011-12-01

    Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjusting for covariates. In a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with regard to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models also had lower bias and SD than the geometrical model for data simulated to deviate from the corresponding model assumptions. This simulation study encourages the use of Poisson regression models, as opposed to the geometrical model, in estimating the peak-to-trough ratio of seasonal variation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
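
    The analyses in the paper use the R package Peak2Trough; purely as an illustration of the underlying harmonic Poisson model, the Python sketch below fits one sine-cosine pair to simulated monthly counts and recovers the peak-to-trough ratio as exp(2*sqrt(b1^2 + b2^2)). All simulation settings are assumptions for this sketch.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    months = np.arange(120)                      # ten years of monthly counts
    angle = 2 * np.pi * (months % 12) / 12
    true_mu = np.exp(3.0 + 0.2 * np.sin(angle) + 0.1 * np.cos(angle))
    y = rng.poisson(true_mu)

    # Log-linear Poisson model with one harmonic
    X = sm.add_constant(np.column_stack([np.sin(angle), np.cos(angle)]))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    b1, b2 = fit.params[1], fit.params[2]

    # Peak-to-trough ratio of the fitted seasonal curve
    ptr = np.exp(2 * np.sqrt(b1**2 + b2**2))
    print(ptr)  # should be close to exp(2*sqrt(0.2**2 + 0.1**2)) ~ 1.56
    ```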

  12. A New Feature-Enhanced Speckle Reduction Method Based on Multiscale Analysis for Ultrasound B-Mode Imaging.

    PubMed

    Kang, Jinbum; Lee, Jae Young; Yoo, Yangmo

    2016-06-01

    Effective speckle reduction in ultrasound B-mode imaging is important for enhancing the image quality and improving the accuracy in image analysis and interpretation. In this paper, a new feature-enhanced speckle reduction (FESR) method based on multiscale analysis and feature enhancement filtering is proposed for ultrasound B-mode imaging. In FESR, clinical features (e.g., boundaries and borders of lesions) are selectively emphasized by edge, coherence, and contrast enhancement filtering from fine to coarse scales while simultaneously suppressing speckle development via robust diffusion filtering. In the simulation study, the proposed FESR method showed statistically significant improvements in edge preservation, mean structure similarity, speckle signal-to-noise ratio, and contrast-to-noise ratio (CNR) compared with other speckle reduction methods, e.g., oriented speckle reducing anisotropic diffusion (OSRAD), nonlinear multiscale wavelet diffusion (NMWD), the Laplacian pyramid-based nonlinear diffusion and shock filter (LPNDSF), and the Bayesian nonlocal means filter (OBNLM). Similarly, the FESR method outperformed the OSRAD, NMWD, LPNDSF, and OBNLM methods in terms of CNR, i.e., 10.70 ± 0.06 versus 9.00 ± 0.06, 9.78 ± 0.06, 8.67 ± 0.04, and 9.22 ± 0.06 in the phantom study, respectively. Reconstructed B-mode images that were developed using the five speckle reduction methods were reviewed by three radiologists for evaluation based on each radiologist's diagnostic preferences. All three radiologists showed a significant preference for the abdominal liver images obtained using the FESR methods in terms of conspicuity, margin sharpness, artificiality, and contrast, p<0.0001. For the kidney and thyroid images, the FESR method showed similar improvement over other methods. However, the FESR method did not show statistically significant improvement compared with the OBNLM method in margin sharpness for the kidney and thyroid images. These results demonstrate that the proposed FESR method can improve the image quality of ultrasound B-mode imaging by enhancing the visualization of lesion features while effectively suppressing speckle noise.

  13. [MRI-Based Ratio of Fetal Lung to Body Volume as New Prognostic Marker for Chronic Lung Disease in Patients with Congenital Diaphragmatic Hernia].

    PubMed

    Winkler, Melissa M; Weis, Meike; Henzler, Claudia; Weiß, Christel; Kehl, Sven; Schoenberg, Stefan O; Neff, Wolfgang; Schaible, Thomas

    2017-03-01

    Background Our aim was to evaluate the prognostic value of the magnetic resonance imaging (MRI)-based ratio of fetal lung volume (FLV) to fetal body volume (FBV) as a marker for development of chronic lung disease (CLD) in fetuses with congenital diaphragmatic hernia (CDH). Patients and Methods FLV and FBV were measured and the individual FLV/FBV ratio was calculated in 132 fetuses. Diagnosis of CLD was established following prespecified criteria and graded into mild/moderate/severe if present. Logistic regression analysis was used to calculate the probability of postnatal development of CLD as a function of the FLV/FBV ratio. Receiver operating characteristic curves were analysed by calculating the area under the curve to evaluate the prognostic accuracy of this marker. Results 61 of 132 fetuses developed CLD (46.21%). The FLV/FBV ratio was significantly lower in fetuses with CLD (p=0.0008; AUC 0.743). Development of CLD was significantly associated with thoracic herniation of liver parenchyma (p<0.0001), requirement of extracorporeal membrane oxygenation (ECMO) (p<0.0001) and gestational age at delivery (p=0.0052). Conclusion The MRI-based ratio of FLV to FBV is a highly valuable prenatal parameter for development of CLD. The ratio is helpful for early therapeutic decisions by estimating the probability of developing CLD. Perinatally, gestational age at delivery and ECMO requirement are useful additional parameters to further improve prediction of CLD. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Development and interlaboratory validation of quantitative polymerase chain reaction method for screening analysis of genetically modified soybeans.

    PubMed

    Takabatake, Reona; Onishi, Mari; Koiwa, Tomohiro; Futo, Satoshi; Minegishi, Yasutaka; Akiyama, Hiroshi; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi

    2013-01-01

    A novel real-time polymerase chain reaction (PCR)-based quantitative screening method was developed for three genetically modified soybeans: RRS, A2704-12, and MON89788. The 35S promoter (P35S) of cauliflower mosaic virus is introduced into RRS and A2704-12 but not MON89788. We then designed a screening method comprised of the combination of the quantification of P35S and the event-specific quantification of MON89788. The conversion factor (Cf) required to convert the amount of a genetically modified organism (GMO) from a copy number ratio to a weight ratio was determined experimentally. The trueness and precision were evaluated as the bias and reproducibility of relative standard deviation (RSDR), respectively. The determined RSDR values for the method were less than 25% for both targets. We consider that the developed method would be suitable for the simple detection and approximate quantification of GMO.

  15. Speckle reduction in optical coherence tomography by adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun

    2015-12-01

    An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.

  16. Theoretical distribution of gutta-percha within root canals filled using cold lateral compaction based on numeric calculus.

    PubMed

    Min, Yi; Song, Ying; Gao, Yuan; Dummer, Paul M H

    2016-08-01

    This study aimed to present a new method based on numeric calculus to provide data on the theoretical volume ratio of voids when using the cold lateral compaction technique in canals with various diameters and tapers. Twenty-one simulated mathematical root canal models were created with different tapers and sizes of apical diameter, and were filled with defined sizes of standardized accessory gutta-percha cones. The areas of each master and accessory gutta-percha cone as well as the depth of their insertion into the canals were determined mathematically in Microsoft Excel. When the first accessory gutta-percha cone had been positioned, the residual area of void was measured. The areas of the residual voids were then measured repeatedly upon insertion of additional accessary cones until no more could be inserted in the canal. The volume ratio of voids was calculated through measurement of the volume of the root canal and mass of gutta-percha cones. The theoretical volume ratio of voids was influenced by the taper of canal, the size of apical preparation and the size of accessory gutta-percha cones. Greater apical preparation size and larger taper together with the use of smaller accessory cones reduced the volume ratio of voids in the apical third. The mathematical model provided a precise method to determine the theoretical volume ratio of voids in root-filled canals when using cold lateral compaction.

  17. Comparative quantification of human intestinal bacteria based on cPCR and LDR/LCR.

    PubMed

    Tang, Zhou-Rui; Li, Kai; Zhou, Yu-Xun; Xiao, Zhen-Xian; Xiao, Jun-Hua; Huang, Rui; Gu, Guo-Hao

    2012-01-21

    To establish a multiple detection method based on comparative polymerase chain reaction (cPCR) and ligase detection reaction (LDR)/ligase chain reaction (LCR) to quantify the intestinal bacterial components. Comparative quantification of 16S rDNAs from different intestinal bacterial components was used to quantify multiple intestinal bacteria. The 16S rDNAs of different bacteria were amplified simultaneously by cPCR. The LDR/LCR was examined to actualize the genotyping and quantification. Two beneficial (Bifidobacterium, Lactobacillus) and three conditionally pathogenic bacteria (Enterococcus, Enterobacterium and Eubacterium) were used in this detection. With cloned standard bacterial 16S rDNAs, standard curves were prepared to validate the quantitative relations between the ratio of original concentrations of two templates and the ratio of the fluorescence signals of their final ligation products. The internal controls were added to monitor the whole detection flow. The quantity ratio between two bacteria was tested. cPCR and LDR revealed obvious linear correlations with standard DNAs, but cPCR and LCR did not. In the sample test, the distributions of the quantity ratio between each two bacterial species were obtained. There were significant differences among these distributions in the total samples. But these distributions of quantity ratio of each two bacteria remained stable among groups divided by age or sex. The detection method in this study can be used to conduct multiple intestinal bacteria genotyping and quantification, and to monitor the human intestinal health status as well.

  18. High-throughput prediction of Acacia and eucalypt lignin syringyl/guaiacyl content using FT-Raman spectroscopy and partial least squares modeling

    DOE PAGES

    Lupoi, Jason S.; Healey, Adam; Singh, Seema; ...

    2015-01-16

    High-throughput techniques are necessary to efficiently screen potential lignocellulosic feedstocks for the production of renewable fuels, chemicals, and bio-based materials, thereby reducing experimental time and expense while supplanting tedious, destructive methods. The ratio of lignin syringyl (S) to guaiacyl (G) monomers has been routinely quantified as a way to probe biomass recalcitrance. Mid-infrared and Raman spectroscopy have been demonstrated to produce robust partial least squares models for the prediction of lignin S/G ratios in a diverse group of Acacia and eucalypt trees. The most accurate Raman model has now been used to predict the S/G ratio from 269 unknown Acacia and eucalypt feedstocks. This study demonstrates the application of a partial least squares model composed of Raman spectral data and lignin S/G ratios measured using pyrolysis/molecular beam mass spectrometry (pyMBMS) for the prediction of S/G ratios in an unknown data set. The predicted S/G ratios calculated by the model were averaged according to plant species, and the means were not found to differ from the pyMBMS ratios when evaluating the mean values of each method within the 95% confidence interval. Pairwise comparisons within each data set were employed to assess statistical differences between each biomass species. While some pairwise appraisals failed to differentiate between species, Acacias, in both data sets, clearly display significant differences in their S/G composition which distinguish them from eucalypts. In conclusion, this research shows the power of using Raman spectroscopy to supplant tedious, destructive methods for the evaluation of the lignin S/G ratio of diverse plant biomass materials.
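
    The modeling workflow (spectra as predictors, pyMBMS-measured S/G ratios as the response, partial least squares as the regression) can be sketched generically as below. The data here are random stand-ins, so the numbers are meaningless; only the sequence of steps is illustrative, and the number of latent variables is an assumed tuning choice.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical data: rows are Raman spectra, y is the pyMBMS-measured S/G ratio
    rng = np.random.default_rng(1)
    spectra = rng.normal(size=(60, 500))        # stand-in for baseline-corrected spectra
    sg_ratio = rng.uniform(1.5, 4.0, size=60)   # stand-in for measured S/G ratios

    X_train, X_test, y_train, y_test = train_test_split(spectra, sg_ratio, random_state=1)

    pls = PLSRegression(n_components=8)         # latent-variable count is a tuning choice
    pls.fit(X_train, y_train)
    predicted_sg = pls.predict(X_test).ravel()
    print(predicted_sg[:5])
    ```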

  19. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-07-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalized waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalized model for extreme-mass-ratio inspirals constructed on deformed black hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.

  20. Investigation of the dye-sensitized solar cell designed by a series of mixed metal oxides based on ZnAl-layered double hydroxide

    NASA Astrophysics Data System (ADS)

    Zhu, Yatong; Wang, Dali; Yang, Xiaoyu; Liu, Sha; Liu, Dong; Liu, Jie; Xiao, Hongdi; Hao, Xiaotao; Liu, Jianqiang

    2017-10-01

    In this paper, the anode materials for dye-sensitized solar cells (DSSCs) were prepared by a facile calcination method using ZnAl-layered double hydroxide (LDH) as a precursor. ZnAl-LDHs with different molar ratios (Zn:Al = 2, 4, 6, 8) were prepared by the urea method, and the mixed metal oxides (MMO) were prepared by calcining the LDHs at 500 °C. A series of cells were assembled from the corresponding MMOs and different dyes (N3 and N719). The basic parameters were investigated by X-ray diffraction, scanning electron microscopy, thermogravimetric and differential thermal analysis, nitrogen sorption analysis, and UV-Vis absorption spectroscopy. The photovoltaic performance of the DSSCs was measured by electrochemical methods. The Zn:Al molar ratio and the choice of dye had a great influence on the efficiency of the DSSC. The efficiency improved markedly with increasing Zn:Al molar ratio, and the DSSC made with N3 showed better efficiency than that with N719. The best efficiency, 0.55%, was reached with N3 when the Zn:Al ratio of the LDH precursor was 8:1.

  1. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-04-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black-hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalised waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalised model for extreme-mass-ratio inspirals constructed on deformed black-hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.

  2. Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping

    2003-05-01

    In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction, and periinfarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor image quality with a low signal-to-noise ratio. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality and obtain better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray-to-white matter CBF ratio. The mean gray-to-white matter CBF ratio from the semi-quantitative analysis was 2.10 +/- 0.34. The ratio evaluated from perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.

  3. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
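
    The evaluation metrics named above have standard definitions; the sketch below shows plausible implementations of the relative compression ratio, percent root-mean-square difference, and peak signal-to-noise ratio (for strictly lossless reconstruction the last two are trivially perfect, so they matter only for near-lossless variants). The definitions are generic, not copied from the paper.

    ```python
    import numpy as np

    def compression_ratio_percent(original_bits, compressed_bits):
        """Relative compression ratio expressed as the percentage reduction in size."""
        return 100.0 * (1.0 - compressed_bits / original_bits)

    def prd_percent(x, x_rec):
        """Percent root-mean-square difference between original and reconstructed signals."""
        x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    def psnr_db(x, x_rec):
        """Peak signal-to-noise ratio in decibels (undefined if the error is exactly zero)."""
        x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
        mse = np.mean((x - x_rec) ** 2)
        return 10.0 * np.log10(np.max(np.abs(x)) ** 2 / mse)

    print(compression_ratio_percent(1_000_000, 294_000))  # ~70.6%, the level reported above
    ```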

  4. Construction of a primary DNA fingerprint database for cotton cultivars.

    PubMed

    Zhang, Y C; Kuang, M; Yang, W H; Xu, H X; Zhou, D Y; Wang, Y Q; Feng, X A; Su, C; Wang, F

    2013-06-13

    Forty core primers were used to construct a DNA fingerprint database of 132 cotton cultivars based on multiplex fluorescence detection technology. A high first-run success ratio of 99.04% was demonstrated with tetraplex polymerase chain reaction. The forty primer pairs amplified a total of 262 genotypes among the 132 cultivars, with an average of 6.55 per primer and polymorphism information content values varying from 0.340 to 0.882. Differing DNA homozygosity ratios were found among the cultivar groups. The highest DNA homozygosity ratio, 81.46%, was found in the landrace standard cultivars. The lowest, 63.04%, occurred in the group of 2010 leading cultivars. The genetic diversity of the 132 cultivars was briefly analyzed using the unweighted pair-group method with arithmetic mean (UPGMA).

  5. Proton magnetic resonance spectroscopy for assessment of human body composition.

    PubMed

    Kamba, M; Kimura, K; Koda, M; Ogawa, T

    2001-02-01

    The usefulness of magnetic resonance spectroscopy (MRS)-based techniques for assessment of human body composition has not been established. We compared a proton MRS-based technique with the total body water (TBW) method to determine the usefulness of the former technique for assessment of human body composition. Proton magnetic resonance spectra of the chest to abdomen, abdomen to pelvis, and pelvis to thigh regions were obtained from 16 volunteers by using single, free induction decay measurement with a clinical magnetic resonance system operating at 1.5 T. The MRS-derived metabolite ratio was determined as the ratio of fat methyl and methylene proton resonance to water proton resonance. The peak areas for the chest to abdomen and the pelvis to thigh regions were normalized to an external reference (approximately 2200 g benzene) and a weighted average of the MRS-derived metabolite ratios for the 2 positions was calculated. TBW for each subject was determined by the deuterium oxide dilution technique. The MRS-derived metabolite ratios were significantly correlated with the ratio of body fat to lean body mass estimated by TBW. The MRS-derived metabolite ratio for the abdomen to pelvis region correlated best with the ratio of body fat to lean body mass on simple regression analyses (r = 0.918). The MRS-derived metabolite ratio for the abdomen to pelvis region and that for the pelvis to thigh region were selected for a multivariate regression model (R = 0.947, adjusted R(2) = 0.881). This MRS-based technique is sufficiently accurate for assessment of human body composition.

  6. Discomfort Evaluation of Truck Ingress/Egress Motions Based on Biomechanical Analysis

    PubMed Central

    Choi, Nam-Chul; Lee, Sang Hun

    2015-01-01

    This paper presents a quantitative discomfort evaluation method based on biomechanical analysis results for human body movement, as well as its application to an assessment of the discomfort for truck ingress and egress. In this study, the motions of a human subject entering and exiting truck cabins with different types, numbers, and heights of footsteps were first measured using an optical motion capture system and load sensors. Next, the maximum voluntary contraction (MVC) ratios of the muscles were calculated through a biomechanical analysis of the musculoskeletal human model for the captured motion. Finally, the objective discomfort was evaluated using the proposed discomfort model based on the MVC ratios. To validate this new discomfort assessment method, human subject experiments were performed to investigate the subjective discomfort levels through a questionnaire for comparison with the objective discomfort levels. The validation results showed that the correlation between the objective and subjective discomforts was significant and could be described by a linear regression model. PMID:26067194

  7. Balanced detection for self-mixing interferometry to improve signal-to-noise ratio

    NASA Astrophysics Data System (ADS)

    Zhao, Changming; Norgia, Michele; Li, Kun

    2018-01-01

    We apply balanced detection to self-mixing interferometry for displacement and vibration measurement, using two photodiodes to implement a differential acquisition. The method is based on the phase opposition of the self-mixing signal measured at the two laser diode facet outputs. The balanced signal is obtained by enlarging the self-mixing signal while canceling the common-mode noise, mainly due to disturbances on the laser supply and the transimpedance amplifier. Experimental results demonstrate that the signal-to-noise ratio improves significantly, with nearly a twofold signal enhancement and more than a halving of the noise. This method allows for more robust, longer-distance measurement systems, especially those using fringe counting.

  8. Tissue Viscoelasticity Imaging Using Vibration and Ultrasound Coupler Gel

    NASA Astrophysics Data System (ADS)

    Yamakawa, Makoto; Shiina, Tsuyoshi

    2012-07-01

    In tissue diagnosis, both elasticity and viscosity are important indexes. Therefore, we propose a method for evaluating tissue viscoelasticity by applying vibration that is usually performed in elastography and using an ultrasound coupler gel with known viscoelasticity. In this method, we use three viscoelasticity parameters based on the coupler strain and tissue strain: the strain ratio as an elasticity parameter, and the phase difference and the normalized hysteresis loop area as viscosity parameters. In the agar phantom experiment, using these viscoelasticity parameters, we were able to estimate the viscoelasticity distribution of the phantom. In particular, the strain ratio and the phase difference were robust to strain estimation error.
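
    As a rough illustration of the three parameters, the sketch below computes a strain ratio, a phase difference, and a normalized hysteresis-loop area from synthetic sinusoidal coupler and tissue strain signals; the signal shapes, amplitudes, and processing choices are illustrative assumptions, not the authors' processing chain.

      import numpy as np

      # Synthetic strain signals over one vibration period (illustrative, not real data).
      t = np.linspace(0.0, 1.0, 1000, endpoint=False)
      coupler = 0.020 * np.sin(2 * np.pi * t)            # coupler strain (known viscoelasticity)
      tissue = 0.010 * np.sin(2 * np.pi * t - 0.3)       # tissue strain, lagging the coupler

      # Elasticity parameter: ratio of strain amplitudes.
      strain_ratio = np.ptp(tissue) / np.ptp(coupler)

      # Viscosity parameter 1: phase difference at the drive frequency (one cycle per record).
      phase_diff = np.angle(np.fft.rfft(tissue)[1]) - np.angle(np.fft.rfft(coupler)[1])

      # Viscosity parameter 2: area of the tissue-vs-coupler strain loop, normalized.
      loop_area = 0.5 * abs(np.sum(coupler * np.gradient(tissue) - tissue * np.gradient(coupler)))
      norm_area = loop_area / (np.ptp(coupler) * np.ptp(tissue))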

  9. The experimental and calculated characteristics of 22 tapered wings

    NASA Technical Reports Server (NTRS)

    Anderson, Raymond F

    1938-01-01

    The experimental and calculated aerodynamic characteristics of 22 tapered wings are compared, using tests made in the variable-density wind tunnel. The wings had aspect ratios from 6 to 12 and taper ratios from 1.6:1 to 5:1. The compared characteristics are the pitching moment, the aerodynamic-center position, the lift-curve slope, the maximum lift coefficient, and the drag curves. The method of obtaining the calculated values is based on the use of wing theory and experimentally determined airfoil section data. In general, the experimental and calculated characteristics are in sufficiently good agreement that the method may be applied to many problems of airplane design.

  10. Intra- and inter-tooth variation in strontium isotope ratios from prehistoric seals by laser ablation (LA)-MC-ICP-MS.

    PubMed

    Glykou, A; Eriksson, G; Storå, J; Schmitt, M; Kooijman, E; Lidén, K

    2018-05-04

    Strontium isotope ratios (87Sr/86Sr) in modern-day marine environments are considered to be homogeneous (~0.7092). However, in the Baltic Sea, the Sr ratios are controlled by mixing of seawater and continental drainage from major rivers discharging into the Baltic. This pilot study explores whether variations in Sr can be detected in marine mammals from archaeological sites in the Baltic Sea. 87Sr/86Sr ratios were measured in tooth enamel from three seal species by laser ablation (LA)-MC-ICP-MS. The method enables micro-sampling of solid materials. This is the first time that the method has been applied to marine samples from archaeological collections. The analyses showed inter-tooth 87Sr/86Sr variation suggesting that different ratios can be detected in different regions of the Baltic Sea. Furthermore, the intra-tooth variation suggests possible differences in geographic origin or seasonal movement of seals within different regions of the Baltic Sea throughout their lifetimes. The method was successfully applied to archaeological marine samples showing that: (1) the 87Sr/86Sr value in marine environments is not uniform, (2) 87Sr/86Sr differences might reflect differences in the ecology and life history of different seal species, and (3) archaeological mobility studies based on 87Sr/86Sr ratios in humans should therefore be evaluated together with diet reconstruction. This article is protected by copyright. All rights reserved.

  11. A robust and fast method of sampling and analysis of delta13C of dissolved inorganic carbon in ground waters.

    PubMed

    Spötl, Christoph

    2005-09-01

    The stable carbon isotopic composition of dissolved inorganic carbon (delta13C(DIC)) is traditionally determined using either direct precipitation or gas evolution methods in conjunction with offline gas preparation and measurement in a dual-inlet isotope ratio mass spectrometer. A gas evolution method based on continuous-flow technology is described here, which is easy to use and robust. Water samples (100-1500 microl depending on the carbonate alkalinity) are injected into He-filled autosampler vials in the field and analysed on an automated continuous-flow gas preparation system interfaced to an isotope ratio mass spectrometer. Sample analysis time including online preparation is 10 min and overall precision is 0.1 per thousand. This method is thus fast and can easily be automated for handling large sample batches.

  12. Precision depth measurement of through silicon vias (TSVs) on 3D semiconductor packaging process.

    PubMed

    Jin, Jonghan; Kim, Jae Wan; Kang, Chu-Shik; Kim, Jong-Ahn; Lee, Sunghun

    2012-02-27

    We have proposed and demonstrated a novel method to measure the depths of through silicon vias (TSVs) at high speed. TSVs are fine and deep holes fabricated in silicon wafers for 3D semiconductors; they are used for electrical connections between vertically stacked wafers. Because the high-aspect-ratio hole of the TSV makes it difficult for light to reach the bottom surface, conventional optical methods using visible light cannot determine the depth value. By adopting the optical comb of a femtosecond pulse laser in the infrared range as a light source, the depths of TSVs having an aspect ratio of about 7 were measured. The measurement was performed at high speed based on spectrally resolved interferometry. The proposed method is expected to be an alternative for the depth inspection of TSVs.

  13. Improvements in Precise and Accurate Isotope Ratio Determination via LA-MC-ICP-MS by Application of an Alternative Data Reduction Protocol

    NASA Astrophysics Data System (ADS)

    Fietzke, J.; Liebetrau, V.; Guenther, D.; Frische, M.; Zumholz, K.; Hansteen, T. H.; Eisenhauer, A.

    2008-12-01

    An alternative approach for the evaluation of isotope ratio data using LA-MC-ICP-MS will be presented. In contrast to previously applied methods it is based on the simultaneous responses of all analyte isotopes of interest and the relevant interferences, without performing a conventional background correction. Significant improvements in precision and accuracy can be achieved when applying this new method and will be discussed based on the results of its first two methodological applications: a) radiogenic and stable Sr isotopes in carbonates and b) stable chlorine isotopes of pyrohydrolytic extracts. In carbonates an external reproducibility of the 87Sr/86Sr ratios of about 19 ppm (RSD) was achieved, an improvement of about a factor of 5. For recent and sub-recent marine carbonates a mean radiogenic strontium isotope ratio 87Sr/86Sr of 0.709170±0.000007 (2SE) was determined, which agrees well with the value of 0.7091741±0.0000024 (2SE) reported for modern sea water [1,2]. Stable chlorine isotope ratios were determined by ablating pyrohydrolytic extracts with a reproducibility of about 0.05‰ (RSD). For the basaltic reference materials JB-1a and JB-2, chlorine isotope ratios relative to SMOC (standard mean ocean chlorinity) were determined as δ37ClJB-1a = (-0.99±0.06)‰ and δ37ClJB-2 = (-0.60±0.03)‰ (SD), respectively, in accordance with published data [3]. The described strategies for data reduction are considered to be generally applicable to all isotope ratio measurements using LA-MC-ICP-MS. [1] J.M. McArthur, D. Rio, F. Massari, D. Castradori, T.R. Bailey, M. Thirlwall, S. Houghton, Palaeogeo. Palaeoclim. Palaeoeco., 2006, 242 (126), doi: 10.1016/j.palaeo.2006.06.004 [2] J. Fietzke, V. Liebetrau, D. Guenther, K. Guers, K. Hametner, K. Zumholz, T.H. Hansteen and A. Eisenhauer, J. Anal. At. Spectrom., 2008, 23, 955-961, doi:10.1039/B717706B [3] J. Fietzke, M. Frische, T.H. Hansteen and A. Eisenhauer, J. Anal. At. Spectrom., 2008, 23, 769-772, doi:10.1039/B718597A

  14. A cardioid oscillator with asymmetric time ratio for establishing CPG models.

    PubMed

    Fu, Q; Wang, D H; Xu, L; Yuan, G

    2018-01-13

    Nonlinear oscillators are usually utilized by bionic scientists for establishing central pattern generator models that imitate rhythmic motions. In the natural world, many rhythmic motions possess asymmetric time ratios, which means that the forward and the backward motions of an oscillating process last for different times within one period. In order to model rhythmic motions with asymmetric time ratios, nonlinear oscillators with asymmetric forward and backward trajectories within one period should be studied. In this paper, based on the property of the invariant set, a method to design a closed curve in the phase plane of a dynamic system as its limit cycle is proposed. Utilizing the proposed method and considering that a cardioid curve is a kind of asymmetric closed curve, a cardioid oscillator with asymmetric time ratios is proposed and realized. By making the derivative of the closed curve in the phase plane of the dynamic system equal to zero, the closed curve is designed as its limit cycle. Utilizing the proposed limit-cycle design method and according to the global invariant set theory, a cardioid oscillator applying a cardioid curve as its limit cycle is achieved. On this basis, numerical simulations are conducted to analyze the behaviors of the cardioid oscillator. An example utilizing the established cardioid oscillator to simulate rhythmic motions of the hip joint of a human body in the sagittal plane is presented. The results of the numerical simulations indicate that, whatever the initial condition is and without any outside input, the proposed cardioid oscillator possesses the following properties: (1) it is able to generate a series of periodic and anti-interference self-exciting trajectories, (2) the generated trajectories possess an asymmetric time ratio, and (3) the time ratio can be regulated by adjusting the oscillator's parameters. Furthermore, the comparison between the trajectories simulated by the established cardioid oscillator and the measured angle trajectories of the hip of a human body shows that the proposed cardioid oscillator is fit for imitating the rhythmic motions of the hip of a human body with asymmetric time ratios.

  15. VizieR Online Data Catalog: Bayesian method for detecting stellar flares (Pitkin+, 2014)

    NASA Astrophysics Data System (ADS)

    Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.

    2015-05-01

    We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N. (1 data file).

  16. A Bayesian method for detecting stellar flares

    NASA Astrophysics Data System (ADS)

    Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.

    2014-12-01

    We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of `quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
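
    A minimal sketch of the flare shape described above (a half-Gaussian rise followed by an exponential decay) is given below; the function and parameter names are illustrative, and the full method additionally fits a polynomial background and computes Bayesian odds ratios.

      import numpy as np

      def flare_template(t, t_peak, amp, sigma_rise, tau_decay):
          # Half-Gaussian rise up to the peak time, exponential decay afterwards.
          rise = amp * np.exp(-0.5 * ((t - t_peak) / sigma_rise) ** 2)
          decay = amp * np.exp(-(t - t_peak) / tau_decay)
          return np.where(t <= t_peak, rise, decay)

      t = np.linspace(0.0, 10.0, 500)        # time in arbitrary units
      model = flare_template(t, t_peak=3.0, amp=1.0, sigma_rise=0.2, tau_decay=1.5)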

  17. Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue

    2015-04-01

    Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar pattern was obtained with pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.

  18. Biomass production on the Olympic and Kitsap Peninsulas, Washington: updated logging residue ratios, slash pile volume-to-weight ratios, and supply curves for selected locations

    Treesearch

    Jason C. Cross; Eric C. Turnblom; Gregory J. Ettl

    2013-01-01

    Biomass residue produced by timber harvest operations is estimated for the Olympic and Kitsap Peninsulas, Washington. Scattered residues were sampled in 53 harvest units and piled residues were completely enumerated in 55 harvest units. Production is based on 2008 and 2009 data and is stratified by forest location, ownership type, harvest intensity, and harvest method...

  19. Non-invasive acoustic-based monitoring of uranium in solution and H/D ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pantea, Cristian; Beedle, Christopher Craig; Sinha, Dipen N.

    The primary objective of this project is to adapt existing non-invasive acoustic techniques (Swept-Frequency Acoustic Interferometry and the Gaussian-pulse acoustic technique) for the purpose of demonstrating the ability to quantify U or H/D ratios in solution. Furthermore, a successful demonstration will provide an easily implemented, low-cost, and non-invasive method for remote and unattended uranium mass measurements for the International Atomic Energy Agency (IAEA).

  20. Estimation of contribution ratios of pollutant sources to a specific section based on an enhanced water quality model.

    PubMed

    Cao, Bibo; Li, Chuan; Liu, Yan; Zhao, Yue; Sha, Jian; Wang, Yuqiu

    2015-05-01

    Because water quality monitoring sections or sites could reflect the water quality status of rivers, surface water quality management based on water quality monitoring sections or sites would be effective. For the purpose of improving water quality of rivers, quantifying the contribution ratios of pollutant resources to a specific section is necessary. Because physical and chemical processes of nutrient pollutants are complex in water bodies, it is difficult to quantitatively compute the contribution ratios. However, water quality models have proved to be effective tools to estimate surface water quality. In this project, an enhanced QUAL2Kw model with an added module was applied to the Xin'anjiang Watershed, to obtain water quality information along the river and to assess the contribution ratios of each pollutant source to a certain section (the Jiekou state-controlled section). Model validation indicated that the results were reliable. Then, contribution ratios were analyzed through the added module. Results show that among the pollutant sources, the Lianjiang tributary contributes the largest part of total nitrogen (50.43%), total phosphorus (45.60%), ammonia nitrogen (32.90%), nitrate (nitrite + nitrate) nitrogen (47.73%), and organic nitrogen (37.87%). Furthermore, contribution ratios in different reaches varied along the river. Compared with pollutant loads ratios of different sources in the watershed, an analysis of contribution ratios of pollutant sources for each specific section, which takes the localized chemical and physical processes into consideration, was more suitable for local-regional water quality management. In summary, this method of analyzing the contribution ratios of pollutant sources to a specific section based on the QUAL2Kw model was found to support the improvement of the local environment.

  1. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    PubMed

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded to exclude the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
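
    A minimal sketch of the pooling idea is shown below: each study's log-likelihood ratio (LogLR) curve is evaluated on a common grid of effect sizes, the curves are summed, and the pooled estimate and a likelihood-support interval are read off the combined curve. The normal approximation per study, the illustrative estimates, and the 1/8 support cutoff are assumptions for the sketch, not the paper's exact procedure.

      import numpy as np

      # Hypothetical per-study effect estimates (e.g. log hazard ratios) and standard errors.
      estimates = np.array([0.10, 0.25, 0.05])
      ses = np.array([0.12, 0.20, 0.15])

      theta = np.linspace(-1.0, 1.0, 2001)            # grid of candidate effect sizes

      # LogLR of each theta against the null (theta = 0) under a normal approximation.
      loglr = (-0.5 * ((estimates[:, None] - theta) / ses[:, None]) ** 2
               + 0.5 * (estimates[:, None] / ses[:, None]) ** 2)

      total_loglr = loglr.sum(axis=0)                 # combined evidence function
      pooled_effect = theta[np.argmax(total_loglr)]   # point estimate from the combined curve

      # A likelihood-support ("intrinsic") interval: thetas whose combined LogLR is
      # within log(8) of the maximum (1/8 likelihood interval, an illustrative cutoff).
      support = total_loglr.max() - total_loglr
      interval = theta[support <= np.log(8.0)]
      print(pooled_effect, interval.min(), interval.max())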

  2. Quantifying spatial and temporal variability in atmospheric ammonia with in situ and space-based observations--article

    EPA Science Inventory

    Ammonia plays an important role in many biogeochemical processes, yet atmospheric mixing ratios are not well known. Recently, methods have been developed for retrieving NH3 from space-based observations, but they have not been compared to in situ measurements. We have conducted a...

  3. Peer-driven contraceptive choices and preferences for contraceptive methods among students of tertiary educational institutions in Enugu, Nigeria.

    PubMed

    Iyoke, Ca; Ezugwu, Fo; Lawani, Ol; Ugwu, Go; Ajah, Lo; Mba, Sg

    2014-01-01

    To describe the methods preferred for contraception, evaluate preferences and adherence to modern contraceptive methods, and determine the factors associated with contraceptive choices among tertiary students in South East Nigeria. A questionnaire-based cross-sectional study of sexual habits, knowledge of contraceptive methods, and patterns of contraceptive choices among a pooled sample of unmarried students from the three largest tertiary educational institutions in Enugu city, Nigeria was done. Statistical analysis involved descriptive and inferential statistics at the 95% level of confidence. A total of 313 unmarried students were studied (194 males; 119 females). Their mean age was 22.5±5.1 years. Over 98% of males and 85% of females made their contraceptive choices based on information from peers. Preferences for contraceptive methods among female students were 49.2% for traditional methods of contraception, 28% for modern methods, 10% for nonpharmacological agents, and 8% for off-label drugs. Adherence to modern contraceptives among female students was 35%. Among male students, the preference for the male condom was 45.2% and the adherence to condom use was 21.7%. Multivariate analysis showed that receiving information from health personnel/media/workshops (odds ratio 9.54, 95% confidence interval 3.5-26.3), health science-related course of study (odds ratio 3.5, 95% confidence interval 1.3-9.6), and previous sexual exposure prior to university admission (odds ratio 3.48, 95% confidence interval 1.5-8.0) all increased the likelihood of adherence to modern contraceptive methods. An overwhelming reliance on peers for contraceptive information in the context of poor knowledge of modern methods of contraception among young people could have contributed to the low preferences and adherence to modern contraceptive methods among students in tertiary educational institutions. Programs to reduce risky sexual behavior among these students may need to focus on increasing the content and adequacy of contraceptive information held by people through regular health worker-led, on-campus workshops.

  4. Diagnostic accuracy of liver fibrosis based on red cell distribution width (RDW) to platelet ratio with fibroscan in chronic hepatitis B

    NASA Astrophysics Data System (ADS)

    Sembiring, J.; Jones, F.

    2018-03-01

    The red cell distribution width (RDW) to platelet ratio (RPR) can predict liver fibrosis and cirrhosis in chronic hepatitis B with relatively high accuracy. RPR was superior to other non-invasive methods for predicting liver fibrosis, such as the AST to ALT ratio, the AST to platelet ratio index, and FIB-4. The aim of this study was to assess the diagnostic accuracy of the RPR for liver fibrosis in chronic hepatitis B patients, compared with Fibroscan. This cross-sectional study was conducted at Adam Malik Hospital from January to June 2015. We examined 34 chronic hepatitis B patients, recording RDW, platelet count, and Fibroscan results. Data were statistically analyzed. In the ROC analysis, the RPR had an accuracy of 72.3% (95% CI: 84.1%-97%). In this study, the RPR had a moderate ability to predict fibrosis degree (p = 0.029, AUC > 70%). The cutoff value of the RPR was 0.0591, sensitivity and specificity were 71.4% and 60%, the Positive Predictive Value (PPV) was 55.6%, the Negative Predictive Value (NPV) was 75%, the positive likelihood ratio was 1.79, and the negative likelihood ratio was 0.48. The RPR has the ability to predict the degree of liver fibrosis in chronic hepatitis B patients with moderate accuracy.
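
    The reported likelihood ratios follow directly from the sensitivity and specificity above; the short sketch below reproduces them and shows how PPV/NPV additionally depend on an assumed prevalence (the 0.5 value is illustrative, not taken from the study).

      # Sensitivity and specificity as reported in the abstract.
      sens, spec = 0.714, 0.60

      lr_pos = sens / (1 - spec)          # positive likelihood ratio -> about 1.79
      lr_neg = (1 - sens) / spec          # negative likelihood ratio -> about 0.48

      prevalence = 0.5                    # illustrative assumption
      ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
      npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
      print(round(lr_pos, 2), round(lr_neg, 2), round(ppv, 2), round(npv, 2))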

  5. Simultaneous spectrophotometric determination of indacaterol and glycopyrronium in a newly approved pharmaceutical formulation using different signal processing techniques of ratio spectra

    NASA Astrophysics Data System (ADS)

    Abdel Ghany, Maha F.; Hussein, Lobna A.; Magdy, Nancy; Yamani, Hend Z.

    2016-03-01

    Three spectrophotometric methods have been developed and validated for the determination of indacaterol (IND) and glycopyrronium (GLY) in their binary mixtures and a novel pharmaceutical dosage form. The proposed methods are considered to be the first to determine the investigated drugs simultaneously. The developed methods are based on different signal processing techniques applied to ratio spectra, namely Numerical Differentiation (ND), Savitzky-Golay (SG), and Fourier Transform (FT). The developed methods showed linearity over the concentration ranges 1-30 and 10-35 μg/mL for IND and GLY, respectively. The accuracy, calculated as percentage recoveries, was in the range of 99.00%-100.49%, with low RSD% values (<1.5%), demonstrating the excellent accuracy of the proposed methods. The developed methods proved to be specific, sensitive, and precise for quality control of the investigated drugs in their pharmaceutical dosage form without the need for any separation process.
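
    As a rough sketch of the ratio-spectrum idea, the code below divides a mixture spectrum by the spectrum of one component and then applies Savitzky-Golay differentiation and simple numerical differentiation; the arrays, wavelength grid, and window settings are placeholders, not the validated method parameters.

      import numpy as np
      from scipy.signal import savgol_filter

      # Placeholder absorbance arrays on a 1 nm grid (illustrative data only).
      wavelengths = np.arange(200.0, 400.0)
      mixture = np.random.rand(wavelengths.size)              # binary-mixture spectrum
      divisor = np.random.rand(wavelengths.size) + 0.1        # spectrum of one pure component

      ratio = mixture / divisor                               # ratio spectrum

      # Savitzky-Golay first derivative of the ratio spectrum (window/order are illustrative).
      sg_derivative = savgol_filter(ratio, window_length=11, polyorder=3, deriv=1)

      # Plain numerical differentiation as a simpler alternative.
      nd_derivative = np.gradient(ratio, wavelengths)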

  6. Spatial buckling analysis of current-carrying nanowires in the presence of a longitudinal magnetic field accounting for both surface and nonlocal effects

    NASA Astrophysics Data System (ADS)

    Foroutan, Shahin; Haghshenas, Amin; Hashemian, Mohammad; Eftekhari, S. Ali; Toghraie, Davood

    2018-03-01

    In this paper, the three-dimensional buckling behavior of nanowires was investigated based on Eringen's Nonlocal Elasticity Theory. The electric current-carrying nanowires were affected by a longitudinal magnetic field through the Lorentz force. The nanowires (NWs) were modeled based on Timoshenko beam theory and the Gurtin-Murdoch surface elasticity theory. The Generalized Differential Quadrature (GDQ) method was used to solve the governing equations of the NWs. Two sets of boundary conditions, namely simple-simple and clamped-clamped, were applied and the obtained results were discussed. Results demonstrated the effect of electric current, magnetic field, small-scale parameter, slenderness ratio, and nanowire diameter on the critical compressive buckling load of nanowires. As a key result, increasing the small-scale parameter decreased the critical load. By the same token, increasing the electric current, magnetic field, and slenderness ratio resulted in a decrease in the critical load. As the slenderness ratio increased, the effect of the nonlocal theory decreased. In contrast, as the NW diameter expanded, the nonlocal effect increased. Moreover, in the present article, the critical values of the magnetic field strength and slenderness ratio were identified, and the roles of the magnetic field, slenderness ratio, and NW diameter in higher buckling loads were discussed.

  7. Prevalence of tuberculous infection and incidence of tuberculosis; a re-assessment of the Styblo rule

    PubMed Central

    van der Werf, MJ; Borgdorff, MW

    2008-01-01

    Abstract Objective To evaluate the validity of the fixed mathematical relationship between the annual risk of tuberculous infection (ARTI), the prevalence of smear-positive tuberculosis (TB) and the incidence of smear-positive TB specified as the Styblo rule, which TB control programmes use to estimate the incidence of TB disease at a population level and the case detection rate. Methods Population-based tuberculin surveys and surveys on prevalence of smear-positive TB since 1975 were identified through a literature search. For these surveys, the ratio between the number of tuberculous infections (based on ARTI estimates) and the number of smear-positive TB cases was calculated and compared to the ratio of 8 to 12 tuberculous infections per prevalent smear-positive TB case as part of the Styblo rule. Findings Three countries had national population-based data on both ARTI and prevalence of smear-positive TB for more than one point in time. In China the ratio ranged from 3.4 to 5.8, in the Philippines from 2.6 to 4.4, and in the Republic of Korea, from 3.2 to 4.7. All ratios were markedly lower than the ratio that is part of the Styblo rule. Conclusion According to recent country data, there are typically fewer than 8 to 12 tuberculous infections per prevalent smear-positive TB case, and it remains unclear whether this ratio varies significantly among countries. The decrease in the ratio compared to the Styblo rule probably relates to improvements in the prompt treatment of TB disease (by national TB programmes). A change in the number of tuberculous infections per prevalent smear-positive TB case in population-based surveys makes the assumed fixed mathematical relationship between ARTI and incidence of smear-positive TB no longer valid. PMID:18235886
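
    The ratio being re-assessed can be computed directly from an ARTI estimate and a smear-positive prevalence figure; the numbers below are purely illustrative and are not taken from the surveys analysed in the paper.

      # Illustrative inputs (not from the cited surveys).
      population = 1_000_000
      arti = 0.01                       # annual risk of tuberculous infection (1%)
      prevalence_per_100k = 250         # prevalent smear-positive TB cases per 100,000

      infections = arti * population
      prevalent_cases = prevalence_per_100k / 100_000 * population

      # Tuberculous infections per prevalent smear-positive case: 4 here,
      # in line with the 2.6-5.8 range reported and below Styblo's assumed 8-12.
      infections_per_case = infections / prevalent_cases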

  8. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.

  9. Determining health-care facility catchment areas in Uganda using data on malaria-related visits

    PubMed Central

    Charland, Katia; Kigozi, Ruth; Dorsey, Grant; Kamya, Moses R; Buckeridge, David L

    2014-01-01

    Abstract Objective To illustrate the use of a new method for defining the catchment areas of health-care facilities based on their utilization. Methods The catchment areas of six health-care facilities in Uganda were determined using the cumulative case ratio: the ratio of the observed to expected utilization of a facility for a particular condition by patients from small administrative areas. The cumulative case ratio for malaria-related visits to these facilities was determined using data from the Uganda Malaria Surveillance Project. Catchment areas were also derived using various straight line and road network distances from the facility. Subsequently, the 1-year cumulative malaria case rate was calculated for each catchment area, as determined using the three methods. Findings The 1-year cumulative malaria case rate varied considerably with the method used to define the catchment areas. With the cumulative case ratio approach, the catchment area could include noncontiguous areas. With the distance approaches, the denominator increased substantially with distance, whereas the numerator increased only slightly. The largest cumulative case rate per 1000 population was for the Kamwezi facility: 234.9 (95% confidence interval, CI: 226.2–243.8) for a straight-line distance of 5 km, 193.1 (95% CI: 186.8–199.6) for the cumulative case ratio approach and 156.1 (95% CI: 150.9–161.4) for a road network distance of 5 km. Conclusion Use of the cumulative case ratio for malaria-related visits to determine health-care facility catchment areas was feasible. Moreover, this approach took into account patients’ actual addresses, whereas using distance from the facility did not. PMID:24700977
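
    A minimal sketch of an observed-to-expected utilization ratio per small administrative area is given below; the column names, counts, and the cutoff of 1 are illustrative assumptions rather than the exact cumulative case ratio procedure used in the paper.

      import pandas as pd

      # Hypothetical malaria-related visit counts to one facility, by administrative area.
      df = pd.DataFrame({
          "area": ["A", "B", "C", "D"],
          "population": [5000, 12000, 8000, 3000],
          "visits": [400, 150, 500, 10],
      })

      # Expected visits if utilization were proportional to population.
      overall_rate = df["visits"].sum() / df["population"].sum()
      df["expected"] = overall_rate * df["population"]

      # Observed / expected ratio; areas above a chosen cutoff form the catchment,
      # which need not be geographically contiguous.
      df["case_ratio"] = df["visits"] / df["expected"]
      catchment_areas = df.loc[df["case_ratio"] > 1, "area"].tolist()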

  10. Zeta Sperm Selection Improves Pregnancy Rate and Alters Sex Ratio in Male Factor Infertility Patients: A Double-Blind, Randomized Clinical Trial

    PubMed Central

    Nasr Esfahani, Mohammad Hossein; Deemeh, Mohammad Reza; Tavalaee, Marziyeh; Sekhavati, Mohammad Hadi; Gourabi, Hamid

    2016-01-01

    Background Selection of sperm for intra-cytoplasmic sperm injection (ICSI) is usually considered the ultimate technique to alleviate male-factor infertility. In routine ICSI, selection is based on morphology and viability, which does not necessarily preclude the chance injection of DNA-damaged or apoptotic sperm into the oocyte. Sperm with a high negative surface electrical charge, named “Zeta potential”, are mature and more likely to have intact chromatin. In addition, X-bearing spermatozoa carry more negative charge. Therefore, we aimed to compare the clinical outcomes of the Zeta procedure with routine sperm selection in infertile men who were candidates for ICSI. Materials and Methods From a total of 203 ICSI cycles studied, 101 cycles were allocated to the density gradient centrifugation (DGC)/Zeta group and the remaining 102 were included in the DGC group in this prospective study. Clinical outcomes were compared between the two groups. The ratios of X- and Y-bearing sperm were assessed by fluorescence in situ hybridization (FISH) and quantitative polymerase chain reaction (qPCR) methods in 17 independent semen samples. Results In the present double-blind randomized clinical trial, a significant increase in top-quality embryos and pregnancy rate was observed in the DGC/Zeta group compared to the DGC group. Moreover, the sex ratio (XY/XX) at birth was significantly lower in the DGC/Zeta group compared to the DGC group despite a similar ratio of X/Y-bearing spermatozoa following Zeta selection. Conclusion The Zeta method not only improves the percentage of top-quality embryos and pregnancy outcome but also alters the sex ratio compared to the conventional DGC method, despite no significant change in the ratio of X- and Y-bearing sperm populations (Registration number: IRCT201108047223N1). PMID:27441060

  11. Cost of space-based laser ballistic missile defense.

    PubMed

    Field, G; Spergel, D

    1986-03-21

    Orbiting platforms carrying infrared lasers have been proposed as weapons forming the first tier of a ballistic missile defense system under the President's Strategic Defense Initiative. As each laser platform can destroy a limited number of missiles, one of several methods of countering such a system is to increase the number of offensive missiles. Hence it is important to know whether the cost-exchange ratio, defined as the ratio of the cost to the defense of destroying a missile to the cost to the offense of deploying an additional missile, is greater or less than 1. Although the technology to be used in a ballistic missile defense system is still extremely uncertain, it is useful to examine methods for calculating the cost-exchange ratio. As an example, the cost of an orbiting infrared laser ballistic missile defense system employed against intercontinental ballistic missiles launched simultaneously from a small area is compared to the cost of additional offensive missiles. If one adopts lower limits to the costs for the defense and upper limits to the costs for the offense, the cost-exchange ratio comes out substantially greater than 1. If these estimates are confirmed, such a ballistic missile defense system would be unable to maintain its effectiveness at less cost than it would take to proliferate the ballistic missiles necessary to overcome it and would therefore not satisfy the President's requirements for an effective strategic defense. Although the method is illustrated by applying it to a space-based infrared laser system, it should be straightforward to apply it to other proposed systems.
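
    The cost-exchange ratio itself is a simple quotient; the two cost figures below are illustrative placeholders, not the article's estimates.

      # Illustrative cost figures only (the article derives bounds, not these numbers).
      defense_cost_per_missile_destroyed = 50e6   # dollars
      offense_cost_per_added_missile = 20e6       # dollars

      cost_exchange_ratio = defense_cost_per_missile_destroyed / offense_cost_per_added_missile
      # A ratio greater than 1 means proliferating missiles is cheaper than defending against them.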

  12. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks.

    PubMed

    Zhang, Haitao; Wu, Chenxue; Chen, Zewei; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective for hiding privacy-sensitive sequence rules in terms of hiding sensitive rules ratios to eliminate inference attacks. Our method also had fewer side effects in terms of generating new sensitive rules ratios than the traditional spatial-temporal k-anonymity method, and had basically the same side effects in terms of non-sensitive rules variation ratios with the traditional spatial-temporal k-anonymity method. Furthermore, we also found the performance variation tendency from the parameter K value, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules.

  13. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks

    PubMed Central

    Wu, Chenxue; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective for hiding privacy-sensitive sequence rules in terms of hiding sensitive rules ratios to eliminate inference attacks. Our method also had fewer side effects in terms of generating new sensitive rules ratios than the traditional spatial-temporal k-anonymity method, and had basically the same side effects in terms of non-sensitive rules variation ratios with the traditional spatial-temporal k-anonymity method. Furthermore, we also found the performance variation tendency from the parameter K value, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules. PMID:28767687

  14. Residual translation compensations in radar target narrowband imaging based on trajectory information

    NASA Astrophysics Data System (ADS)

    Yue, Wenjue; Peng, Bo; Wei, Xizhang; Li, Xiang; Liao, Dongping

    2018-05-01

    High-velocity translation results in defocused scattering centers in radar imaging. In this paper, we propose a Residual Translation Compensations (RTC) method based on target trajectory information to eliminate the translation effects in radar imaging. In reality, translation cannot simply be regarded as uniformly accelerated motion, so prior knowledge of the target trajectory is introduced to enhance compensation precision. First, the two-body orbit model is used to compute the radial distance. Then, stepwise compensations are applied to eliminate the residual propagation delay based on a conjugate multiplication method. Finally, tomography is used to confirm the validity of the method. Compared with a translation-parameter estimation method based on the spectral peak of the conjugate-multiplied signal, the RTC method in this paper yields a better tomography result. When the Signal-to-Noise Ratio (SNR) of the radar echo signal is 4 dB, the scattering centers can still be extracted clearly.

  15. Development of Certified Matrix-Based Reference Material as a Calibrator for Genetically Modified Rice G6H1 Analysis.

    PubMed

    Yang, Yu; Li, Liang; Yang, Hui; Li, Xiaying; Zhang, Xiujie; Xu, Junfeng; Zhang, Dabing; Jin, Wujun; Yang, Litao

    2018-04-11

    The accurate monitoring and quantification of genetically modified organisms (GMOs) are key points for the implementation of labeling regulations, and a certified reference material (CRM) acts as the scaleplate for quantifying the GM contents of foods/feeds and evaluating a GMO analytical method or equipment. Herein we developed a series of CRMs for transgenic rice event G6H1, which possesses insect-resistant and herbicide-tolerant traits. Three G6H1 CRMs were produced by mixing seed powders obtained from homozygous G6H1 and its recipient cultivar Xiushui 110 at mass ratios of 49.825%, 9.967%, and 4.986%. The between-bottle homogeneity and within-bottle homogeneity were thoroughly evaluated with consistent results. The potential DNA degradation in transportation and shelf life were evaluated with an expiration period of at least 12 months. The property values of the three CRMs (G6H1a, G6H1b, G6H1c) were given as (49.825 ± 0.448) g/kg, (9.967 ± 1.757) g/kg, and (4.986 ± 1.274) g/kg based on mass fraction ratio, respectively. Furthermore, the three CRMs were characterized with values of (5.01 ± 0.08)%, (1.06 ± 0.22)%, and (0.53 ± 0.11)% based on the copy number ratio using the droplet digital PCR method. All results confirmed that the produced G6H1 matrix-based CRMs are of high quality with precise characterization values and can be used as calibrators in GM rice G6H1 inspection and monitoring and in evaluating new analytical methods or devices targeting the G6H1 event.

  16. Microphysical properties and ice particle morphology of cirrus clouds inferred from combined CALIOP-IIR measurements

    NASA Astrophysics Data System (ADS)

    Saito, M.; Iwabuchi, H.; Yang, P.; Tang, G.; King, M. D.; Sekiguchi, M.

    2016-12-01

    Cirrus clouds cover about 25% of the globe. Knowledge about the optical and microphysical properties of these clouds [particularly, optical thickness (COT) and effective radius (CER)] is essential to radiative forcing assessment. Previous studies of those properties using satellite remote sensing techniques based on observations by passive and active sensors gave inconsistent retrievals. In particular, COTs from the Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) using the unconstrained method are affected by variable particle morphology, especially the fraction of horizontally oriented plate particles (HPLT), because the method assumes the lidar ratio to be constant, which should have different values for different ice particle shapes. More realistic ice particle morphology improves estimates of the optical and microphysical properties. In this study, we develop an optimal estimation-based algorithm to infer cirrus COT and CER in addition to morphological parameters (e.g., Fraction of HPLT) using the observations made by CALIOP and the Infrared Imaging Radiometer (IIR) on the CALIPSO platform. The assumed ice particle model is a mixture of a few habits with variable HPLT. Ice particle single-scattering properties are computed using state-of-the-art light-scattering computational capabilities. Rigorous estimation of uncertainties associated with surface properties, atmospheric gases and cloud heterogeneity is performed. The results based on the present method show that COTs are quite consistent with the MODIS and CALIOP counterparts, and CERs essentially agree with the IIR operational retrievals. The lidar ratio is calculated from the bulk optical properties based on the inferred parameters. The presentation will focus on latitudinal variations of particle morphology and the lidar ratio on a global scale.

  17. What does an MRI scan cost?

    PubMed

    Young, David W

    2015-11-01

    Historically, hospital departments have computed the costs of individual tests or procedures using the ratio of cost to charges (RCC) method, which can produce inaccurate results. To determine a more accurate cost of a test or procedure, the activity-based costing (ABC) method must be used. Accurate cost calculations will ensure reliable information about the profitability of a hospital's DRGs.

  18. Photoacoustic tomography from weak and noisy signals by using a pulse decomposition algorithm in the time-domain.

    PubMed

    Liu, Liangbing; Tao, Chao; Liu, XiaoJun; Deng, Mingxi; Wang, Senhua; Liu, Jun

    2015-10-19

    Photoacoustic tomography is a promising and rapidly developing methodology for biomedical imaging. It confronts an increasingly urgent problem: reconstructing the image from weak and noisy photoacoustic signals, which is highly beneficial for extending the imaging depth and decreasing the dose of laser exposure. Based on the time-domain characteristics of photoacoustic signals, a pulse decomposition algorithm is proposed to reconstruct a photoacoustic image from signals with a low signal-to-noise ratio. In this method, a photoacoustic signal is decomposed as the weighted summation of a set of pulses in the time domain. Images are reconstructed from the weight factors, which are directly related to the optical absorption coefficient. Both simulation and experiment were conducted to test the performance of the method. Numerical simulations show that when the signal-to-noise ratio is -4 dB, the proposed method decreases the reconstruction error to about 17%, in comparison with the conventional back-projection method. Moreover, it can produce acceptable images even when the signal-to-noise ratio is decreased to -10 dB. Experiments show that, when the laser fluence level is low, the proposed method achieves a relatively clean image of a hair phantom with some well-preserved pattern details. The proposed method demonstrates the imaging potential of photoacoustic tomography in expanding applications.
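
    A minimal sketch of the decomposition idea is shown below: the noisy signal is modelled as a weighted sum of time-shifted pulses and the weights are found by least squares. The Gaussian-derivative pulse shape, the grids, and the noise level are assumptions for illustration, not the algorithm's actual pulse model.

      import numpy as np

      t = np.linspace(0.0, 1e-5, 1000)                        # time axis, seconds

      def pulse(t, t0, width=2e-7):
          # Illustrative elementary waveform: derivative-of-Gaussian pulse centred at t0.
          return -(t - t0) / width * np.exp(-0.5 * ((t - t0) / width) ** 2)

      # Dictionary of pulses on a grid of candidate arrival times.
      centres = np.linspace(1e-6, 9e-6, 200)
      D = np.stack([pulse(t, c) for c in centres], axis=1)    # shape: (samples, pulses)

      # Synthetic noisy measurement: two absorbers plus Gaussian noise.
      signal = 1.0 * pulse(t, 3e-6) + 0.5 * pulse(t, 6e-6)
      signal = signal + np.random.normal(scale=0.3 * signal.std(), size=t.size)

      # Weight factors of the pulse expansion; in the method these relate to optical absorption.
      weights, *_ = np.linalg.lstsq(D, signal, rcond=None)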

  19. A comparative study of smart spectrophotometric methods for simultaneous determination of sitagliptin phosphate and metformin hydrochloride in their binary mixture.

    PubMed

    Lotfy, Hayam M; Mohamed, Dalia; Mowaka, Shereen

    2015-01-01

    Simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the oral antidiabetic drugs sitagliptin phosphate (STG) and metformin hydrochloride (MET) in combined pharmaceutical formulations. Three methods manipulate ratio spectra, namely ratio difference (RD), ratio subtraction (RS), and a novel induced amplitude modulation (IAM) approach. The first two methods were used for the determination of STG, while MET was directly determined by measuring its absorbance at λmax 232 nm. The IAM method was used for the simultaneous determination of both drugs. Moreover, another three methods were developed based on derivative spectroscopy followed by mathematical manipulation steps, namely amplitude factor (P-factor), amplitude subtraction (AS), and modified amplitude subtraction (MAS). In addition, in this work the novel sample enrichment technique named spectrum addition was adopted. The proposed spectrophotometric methods did not require any preliminary separation step. The accuracy, precision, and linearity ranges of the proposed methods were determined. The selectivity of the developed methods was investigated by analyzing laboratory-prepared mixtures of the drugs and their combined pharmaceutical formulations. Standard deviation values were less than 1.5 in the assay of raw materials and tablets. The obtained results were statistically compared to those of a reported spectrophotometric method. The statistical comparison showed that there was no significant difference between the proposed methods and the reported one regarding both accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Chemical characterization of the acid alteration of diesel fuel: Non-targeted analysis by two-dimensional gas chromatography coupled with time-of-flight mass spectrometry with tile-based Fisher ratio and combinatorial threshold determination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Brendon A.; Pinkerton, David K.; Wright, Bob W.

    The illicit chemical alteration of petroleum fuels is of scientific interest, particularly to regulatory agencies which set fuel specifications, or excises based on those specifications. One type of alteration is the reaction of diesel fuel with concentrated sulfuric acid. Such reactions are known to subtly alter the chemical composition of the fuel, particularly the aromatic species native to the fuel. Comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC–TOFMS) is ideally suited for the analysis of diesel fuel, but may provide the analyst with an overwhelming amount of data, particularly in sample-class comparison experiments comprised of many samples. The tile-based Fisher-ratio (F-ratio) method reduces the abundance of data in a GC × GC–TOFMS experiment to only the peaks which significantly distinguish the unaltered and acid altered sample classes. Three samples of diesel fuel from different filling stations were each altered to discover chemical features, i.e., analyte peaks, which were consistently changed by the acid reaction. Using different fuels prioritizes the discovery of features which are likely to be robust to the variation present between fuel samples and which will consequently be useful in determining whether an unknown sample has been acid altered. The subsequent analysis confirmed that aromatic species are removed by the acid alteration, with the degree of removal consistent with predicted reactivity toward electrophilic aromatic sulfonation. Additionally, we observed that alkenes and alkynes were also removed from the fuel, and that sulfur dioxide or compounds that degrade to sulfur dioxide are generated by the acid alteration. In addition to applying the previously reported tile-based F-ratio method, this report also expands null distribution analysis to algorithmically determine an F-ratio threshold to confidently select only the features which are sufficiently class-distinguishing. When applied to the acid alteration of diesel fuel, the suggested per-hit F-ratio threshold was 12.4, which is predicted to maintain the false discovery rate (FDR) below 0.1%. Using this F-ratio threshold, 107 of the 3362 preliminary hits were deemed significantly changing due to the acid alteration, with the number of false positives estimated to be about 3.
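
    The Fisher ratio at the heart of the method is a between-class to within-class variance ratio computed per feature; the toy sketch below illustrates the statistic and the thresholding step on made-up peak areas, not the full tile-based GC × GC–TOFMS workflow.

      import numpy as np

      def fisher_ratio(class_a, class_b):
          # Between-class variance over pooled within-class variance for one feature.
          pooled = np.concatenate([class_a, class_b])
          grand_mean = pooled.mean()
          n_a, n_b = len(class_a), len(class_b)
          between = (n_a * (class_a.mean() - grand_mean) ** 2
                     + n_b * (class_b.mean() - grand_mean) ** 2)        # 1 dof for 2 classes
          within = (np.sum((class_a - class_a.mean()) ** 2)
                    + np.sum((class_b - class_b.mean()) ** 2)) / (n_a + n_b - 2)
          return between / within

      # Made-up peak areas of one analyte in unaltered vs. acid-altered fuel samples.
      unaltered = np.array([10.2, 9.8, 10.5, 10.1])
      altered = np.array([4.1, 3.8, 4.5, 4.0])

      f = fisher_ratio(unaltered, altered)
      is_class_distinguishing = f > 12.4       # per-hit threshold suggested in the abstract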

  1. Medical Waste Disposal Method Selection Based on a Hierarchical Decision Model with Intuitionistic Fuzzy Relations

    PubMed Central

    Qian, Wuyong; Wang, Zhou-Jing; Li, Kevin W.

    2016-01-01

    Although medical waste usually accounts for a small fraction of urban municipal waste, its proper disposal has been a challenging issue as it often contains infectious, radioactive, or hazardous waste. This article proposes a two-level hierarchical multicriteria decision model to address medical waste disposal method selection (MWDMS), where disposal methods are assessed against different criteria as intuitionistic fuzzy preference relations and criteria weights are furnished as real values. This paper first introduces new operations for a special class of intuitionistic fuzzy values, whose membership and non-membership information is cross ratio based ]0, 1[-values. New score and accuracy functions are defined in order to develop a comparison approach for ]0, 1[-valued intuitionistic fuzzy numbers. A weighted geometric operator is then put forward to aggregate a collection of ]0, 1[-valued intuitionistic fuzzy values. Similar to Saaty’s 1–9 scale, this paper proposes a cross-ratio-based bipolar 0.1–0.9 scale to characterize pairwise comparison results. Subsequently, a two-level hierarchical structure is formulated to handle multicriteria decision problems with intuitionistic preference relations. Finally, the proposed decision framework is applied to MWDMS to illustrate its feasibility and effectiveness. PMID:27618082

  2. Medical Waste Disposal Method Selection Based on a Hierarchical Decision Model with Intuitionistic Fuzzy Relations.

    PubMed

    Qian, Wuyong; Wang, Zhou-Jing; Li, Kevin W

    2016-09-09

    Although medical waste usually accounts for a small fraction of urban municipal waste, its proper disposal has been a challenging issue as it often contains infectious, radioactive, or hazardous waste. This article proposes a two-level hierarchical multicriteria decision model to address medical waste disposal method selection (MWDMS), where disposal methods are assessed against different criteria as intuitionistic fuzzy preference relations and criteria weights are furnished as real values. This paper first introduces new operations for a special class of intuitionistic fuzzy values, whose membership and non-membership information is cross ratio based ]0, 1[-values. New score and accuracy functions are defined in order to develop a comparison approach for ]0, 1[-valued intuitionistic fuzzy numbers. A weighted geometric operator is then put forward to aggregate a collection of ]0, 1[-valued intuitionistic fuzzy values. Similar to Saaty's 1-9 scale, this paper proposes a cross-ratio-based bipolar 0.1-0.9 scale to characterize pairwise comparison results. Subsequently, a two-level hierarchical structure is formulated to handle multicriteria decision problems with intuitionistic preference relations. Finally, the proposed decision framework is applied to MWDMS to illustrate its feasibility and effectiveness.

  3. Automatic evaluation of skin histopathological images for melanocytic features

    NASA Astrophysics Data System (ADS)

    Koosha, Mohaddeseh; Hoseini Alinodehi, S. Pourya; Nicolescu, Mircea; Safaei Naraghi, Zahra

    2017-03-01

    Successfully detecting melanocyte cells in the skin epidermis has great significance in skin histopathology. Because of the existence of cells with similar appearance to melanocytes in hematoxylin and eosin (HE) images of the epidermis, detecting melanocytes becomes a challenging task. This paper proposes a novel technique for the detection of melanocytes in HE images of the epidermis, based on melanocyte color features in the HSI color domain. Initially, an effective soft morphological filter is applied to the HE images in the HSI color domain to remove noise. Then a novel threshold-based technique is applied to distinguish the candidate melanocytes' nuclei. Similarly, the method is applied to find the candidate surrounding halos of the melanocytes. The candidate nuclei are associated with their surrounding halos using the suggested logical and statistical inferences. Finally, a fuzzy inference system is proposed, based on the HSI color information of a typical melanocyte in the epidermis, to calculate the similarity ratio of each candidate cell to a melanocyte. As our review of the literature shows, this is the first method to evaluate epidermis cells for a melanocyte similarity ratio. Experimental results on various images with different zooming factors show that the proposed method improves the results of previous works.
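
    The color-domain preprocessing that the entry describes can be sketched compactly. The conversion below is the standard RGB-to-HSI formula, while the candidate-nucleus thresholds are illustrative assumptions rather than the values tuned in the paper.

        import numpy as np

        def rgb_to_hsi(rgb):
            """Convert an RGB image with channels in [0, 1] to hue, saturation, intensity."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            intensity = (r + g + b) / 3.0
            saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + 1e-12)
            num = 0.5 * ((r - g) + (r - b))
            den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
            theta = np.arccos(np.clip(num / den, -1.0, 1.0))
            hue = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)
            return hue, saturation, intensity

        def candidate_nuclei_mask(rgb, max_intensity=0.45, min_saturation=0.25):
            """Flag dark, saturated pixels as candidate nuclei (cut-offs are hypothetical)."""
            _, sat, inten = rgb_to_hsi(rgb)
            return (inten < max_intensity) & (sat > min_saturation)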

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumway, R.H.; McQuarrie, A.D.

    Robust statistical approaches to the problem of discriminating between regional earthquakes and explosions are developed. We compare linear discriminant analysis using descriptive features like amplitude and spectral ratios with signal discrimination techniques using the original signal waveforms and spectral approximations to the log likelihood function. Robust information theoretic techniques are proposed and all methods are applied to 8 earthquakes and 8 mining explosions in Scandinavia and to an event from Novaya Zemlya of unknown origin. It is noted that signal discrimination approaches based on discrimination information and Renyi entropy perform better in the test sample than conventional methods based on spectral ratios involving the P and S phases. Two techniques for identifying the ripple-firing pattern for typical mining explosions are proposed and shown to work well on simulated data and on several Scandinavian earthquakes and explosions. We use both cepstral analysis in the frequency domain and a time domain method based on the autocorrelation and partial autocorrelation functions. The proposed approach strips off underlying smooth spectral and seasonal spectral components corresponding to the echo pattern induced by two simple ripple-fired models. For two mining explosions, a pattern is identified whereas for two earthquakes, no pattern is evident.
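
    A common descriptive feature of the kind this record compares is a band-limited P/S spectral ratio. The sketch below shows one plausible way to compute such a feature from windowed waveforms before feeding it to a linear discriminant; the window lengths, band limits, and function names are assumptions, not the study's actual processing chain.

        import numpy as np

        def band_amplitude(window, fs, fmin, fmax):
            """RMS spectral amplitude of a waveform window inside the band [fmin, fmax] Hz."""
            spec = np.abs(np.fft.rfft(window * np.hanning(len(window))))
            freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
            band = (freqs >= fmin) & (freqs <= fmax)
            return np.sqrt(np.mean(spec[band] ** 2))

        def log_p_s_ratio(p_window, s_window, fs, band=(1.0, 10.0)):
            """Log P/S spectral ratio, a classic earthquake-vs-explosion feature."""
            return np.log10(band_amplitude(p_window, fs, *band) /
                            band_amplitude(s_window, fs, *band))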

  5. Multiplication free neural network for cancer stem cell detection in H-and-E stained liver images

    NASA Astrophysics Data System (ADS)

    Badawi, Diaa; Akhan, Ece; Mallah, Ma'en; Üner, Ayşegül; Çetin-Atalay, Rengül; Çetin, A. Enis

    2017-05-01

    Markers such as CD13 and CD133 have been used to identify Cancer Stem Cells (CSC) in various tissue images. It is highly likely that CSC nuclei appear as brown in CD13 stained liver tissue images. We observe that there is a high correlation between the ratio of brown to blue colored nuclei in CD13 images and the ratio between the dark blue to blue colored nuclei in H&E stained liver images. Therefore, we recommend that a pathologist observing many dark blue nuclei in an H&E stained tissue image may also order CD13 staining to estimate the CSC ratio. In this paper, we describe a computer vision method based on a neural network estimating the ratio of dark blue to blue colored nuclei in an H&E stained liver tissue image. The neural network structure is based on a multiplication free operator using only additions and sign operations. Experimental results are presented.
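
    The "multiplication free operator using only additions and sign operations" can be illustrated with one operator of this family that has appeared in the literature; whether it is exactly the variant used in this paper is an assumption, and the toy weights below are invented.

        import numpy as np

        def mf_op(x, y):
            """Multiplication-free surrogate for x*y: sign(x*y) * (|x| + |y|)."""
            return np.sign(x) * np.sign(y) * (np.abs(x) + np.abs(y))

        def mf_dot(weights, inputs):
            """Multiplication-free analogue of a neuron's weighted sum."""
            return np.sum(mf_op(weights, inputs))

        # Toy "neuron": bias and ReLU applied to the multiplication-free sum
        w = np.array([0.2, -0.5, 0.1])
        x = np.array([1.0, 0.3, -0.7])
        activation = max(mf_dot(w, x) + 0.05, 0.0)
        print(activation)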

  6. Development of method for experimental determination of wheel-rail contact forces and contact point position by using instrumented wheelset

    NASA Astrophysics Data System (ADS)

    Bižić, Milan B.; Petrović, Dragan Z.; Tomić, Miloš C.; Djinović, Zoran V.

    2017-07-01

    This paper presents the development of a unique method for experimental determination of wheel-rail contact forces and contact point position by using the instrumented wheelset (IWS). Solutions of key problems in the development of IWS are proposed, such as the determination of optimal locations, layout, number and way of connecting strain gauges, as well as the development of an inverse identification algorithm (IIA). The basis for the solution of these problems is the wheel model and the results of FEM calculations, while the IIA is based on the method of blind source separation using independent component analysis. In the first phase, the developed method was tested on a wheel model and a high accuracy was obtained (deviations between parameters obtained with the IIA and the parameters actually applied in the model are less than 2%). In the second phase, experimental tests on the real object, i.e. the IWS, were carried out. The signal-to-noise ratio was identified as the main influential parameter on the measurement accuracy. The obtained results have shown that the developed method enables measurement of the vertical and lateral wheel-rail contact forces Q and Y and their ratio Y/Q with estimated errors of less than 10%, while the estimated measurement error of the contact point position is less than 15%. At flange contact and higher values of the ratio Y/Q or the Y force, the measurement errors are reduced, which is extremely important for the reliability and quality of experimental tests of safety against derailment of railway vehicles according to the standards UIC 518 and EN 14363. The obtained results have shown that the proposed method can be successfully applied in solving the problem of high accuracy measurement of wheel-rail contact forces and contact point position using IWS.
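
    Once Q and Y have been reconstructed, the derailment-safety check reduces to monitoring their ratio. The helper below is a minimal sketch of that final step; the default limit of 0.8 is a commonly quoted Y/Q criterion and is only a placeholder here, since the binding limit must be taken from the applicable standard (e.g. EN 14363) and its evaluation procedure.

        import numpy as np

        def derailment_ratio(lateral_y, vertical_q, limit=0.8):
            """Return the Y/Q time series and a flag wherever it exceeds the chosen limit."""
            y = np.asarray(lateral_y, dtype=float)
            q = np.asarray(vertical_q, dtype=float)
            ratio = np.divide(y, q, out=np.full_like(y, np.nan), where=np.abs(q) > 1e-6)
            return ratio, ratio > limit

        # Example with made-up force samples in kN
        ratio, exceeded = derailment_ratio([12.0, 30.0, 45.0], [80.0, 70.0, 50.0])
        print(ratio, exceeded)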

  7. Optimizing parameter of particle damping based on Leidenfrost effect of particle flows

    NASA Astrophysics Data System (ADS)

    Lei, Xiaofei; Wu, Chengjun; Chen, Peng

    2018-05-01

    Particle damping (PD) is strongly nonlinear. Under sufficiently vigorous vibration it delivers excellent damping performance, and the particles filling the cavity are then in the Leidenfrost state described in particle flow theory. To investigate this phenomenon, the damping effect of PD in this state is examined with a numerical model developed from gas-solid flow principles. The numerical model is then extended and applied to study the relationship between the Leidenfrost velocity and characteristic parameters of the PD such as particle density, particle diameter, mass packing ratio and diameter-to-length ratio. The results indicate that particle density and mass packing ratio can drastically improve the damping performance, in contrast to particle diameter and diameter-to-length ratio, and that the mass packing ratio and diameter-to-length ratio can lower the excitation intensity required to reach the Leidenfrost state. To explore the engineering application of the phenomenon, the bound optimization by quadratic approximation (BOBYQA) method is employed to optimize the mass packing ratio of the PD for two objectives: minimizing the maximum amplitude (MMA) and minimizing the total vibration level (MTVL). For MMA, particle damping drastically reduces the vibration amplitude when the Leidenfrost velocity equals the vibration velocity at the maximum amplitude. For MTVL, a larger mass packing ratio is the better option because the particles then remain close to the Leidenfrost state over a relatively wide frequency range.
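
    The optimization step amounts to a bound-constrained, derivative-free search over the mass packing ratio. The sketch below keeps that structure but is not the paper's setup: the objective is a made-up surrogate for the simulated maximum amplitude, and a generic bound-constrained SciPy optimizer stands in for BOBYQA (which is available separately, e.g. in the Py-BOBYQA package).

        import numpy as np
        from scipy.optimize import minimize

        def max_amplitude(packing_ratio):
            """Placeholder objective: in practice this would run the particle-flow model
            and return the simulated maximum response amplitude."""
            return (packing_ratio[0] - 0.6) ** 2 + 0.05   # smooth surrogate, minimum near 0.6

        result = minimize(max_amplitude, x0=[0.3], bounds=[(0.05, 0.9)], method="L-BFGS-B")
        print("optimal mass packing ratio (toy problem):", result.x[0])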

  8. Self-homodyne free-space optical communication system based on orthogonally polarized binary phase shift keying.

    PubMed

    Cai, Guangyu; Sun, Jianfeng; Li, Guangyuan; Zhang, Guo; Xu, Mengmeng; Zhang, Bo; Yue, Chaolei; Liu, Liren

    2016-06-10

    A self-homodyne laser communication system based on orthogonally polarized binary phase shift keying is demonstrated. The working principles of this method and the structure of a transceiver are described using theoretical calculations. Moreover, the signal-to-noise ratio, sensitivity, and bit error rate are analyzed for the amplifier-noise-limited case. The reported experiment validates the feasibility of the proposed method and demonstrates its advantageous sensitivity as a self-homodyne communication system.
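
    For context on the reported sensitivity analysis, the textbook bit-error-rate curve for coherent BPSK over an additive white Gaussian noise channel is a one-liner; it is given here only as a reference point and is not the amplifier-noise-limited analysis carried out in the paper.

        from math import erfc, sqrt

        def bpsk_ber(ebn0_db):
            """Theoretical BER of coherent BPSK on an AWGN channel: 0.5 * erfc(sqrt(Eb/N0))."""
            ebn0 = 10.0 ** (ebn0_db / 10.0)
            return 0.5 * erfc(sqrt(ebn0))

        for snr_db in (4, 8, 12):
            print(snr_db, "dB ->", bpsk_ber(snr_db))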

  9. Modeling of viscoelastic properties of nonpermeable porous rocks saturated with highly viscous fluid at seismic frequencies at the core scale

    NASA Astrophysics Data System (ADS)

    Wang, Zizhen; Schmitt, Douglas R.; Wang, Ruihe

    2017-08-01

    A core scale modeling method for viscoelastic properties of rocks saturated with viscous fluid at low frequencies is developed based on the stress-strain method. The elastic moduli dispersion of viscous fluid is described by the Maxwell spring-dashpot model. Based on this modeling method, we numerically test the effects of frequency, fluid viscosity, porosity, pore size, and pore aspect ratio on the storage moduli and the stress-strain phase lag of saturated rocks. We also compared the modeling results to the Hashin-Shtrikman bounds and the coherent potential approximation (CPA). The dynamic moduli calculated from the modeling are lower than the predictions of CPA, and both of these fall between the Hashin-Shtrikman bounds. The modeling results indicate that the frequency and the fluid viscosity have similar effects on the dynamic moduli dispersion of fully saturated rocks. We observed the Debye peak in the phase lag variation with the change of frequency and viscosity. The pore structure parameters, such as porosity, pore size, and aspect ratio, affect the rock frame stiffness and result in different viscoelastic behaviors of the saturated rocks. The stress-strain phase lags are larger with smaller stiffness contrasts between the rock frame and the pore fluid. The viscoelastic properties of saturated rocks are more sensitive to aspect ratio compared to other pore structure parameters. The results suggest that significant seismic dispersion (at about 50-200 Hz) might be expected for both compressional and shear waves passing through rocks saturated with highly viscous fluids.

    Plain Language Summary: We develop a core scale modeling method to simulate the viscoelastic properties of rocks saturated with viscous fluid at low frequencies based on the stress-strain method. The elastic moduli dispersion of viscous fluid is described by the Maxwell spring-dashpot model. By using this modeling method, we numerically test the effects of frequency, fluid viscosity, porosity, pore size, and pore aspect ratio on the composite's viscoelastic properties. The modeling results indicate that the frequency and the fluid viscosity have similar effects on the dynamic moduli dispersion of fully saturated rocks. We observed the Debye peak in the phase lag variation with the change of frequency and viscosity. The pore structure parameters, such as porosity, pore size, and pore aspect ratio, affect the rock frame stiffness and result in different viscoelastic behavior of the saturated rocks. The lower the rock frame stiffness, the larger the stress-strain phase lags. The viscoelastic properties of saturated rocks are more sensitive to the pore aspect ratio. The results suggest that significant seismic dispersion might be expected for both compressional and shear waves passing through rocks saturated with highly viscous fluids. This will be important in the context of heavy hydrocarbon reservoirs and igneous rocks saturated with silicate melt.

  10. Ellipticity angle of electromagnetic signals and its use for non-energetic detection optimal by the Neumann-Pearson criterion

    NASA Astrophysics Data System (ADS)

    Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.

    2012-08-01

    An interval method of radar signal detection and selection based on a non-energetic polarization parameter - the ellipticity angle - is suggested. The examined method is optimal by the Neumann-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal/noise ratios. Recommendations for optimization of the given method are provided.

  11. WE-EF-210-06: Ultrasound 2D Strain Measurement of Radiation-Induced Toxicity: Phantom and Ex Vivo Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Torres, M; Rossi, P

    Purpose: Radiation-induced fibrosis is a common long-term complication affecting many patients following cancer radiotherapy. Standard clinical assessment of subcutaneous fibrosis is subjective and often limited to visual inspection and palpation. Ultrasound strain imaging describes the compressibility (elasticity) of biological tissues. This study's purpose is to develop a quantitative ultrasound strain imaging method that can consistently and accurately characterize radiation-induced fibrosis. Methods: In this study, we propose a 2D strain imaging method based on deformable image registration. A combined affine and B-spline transformation model is used to calculate the displacement of tissue between pre-stress and post-stress B-mode image sequences. The 2D displacement is estimated through a hybrid image similarity metric, which is a combination of the normalized mutual information (NMI) and normalized sum-of-squared-differences (NSSD). The 2D strain is obtained from the gradient of the local displacement. We conducted phantom experiments under various compressions and compared the performance of our proposed method with the standard cross-correlation (CC)-based method using the signal-to-noise (SNR) and contrast-to-noise (CNR) ratios. In addition, we conducted an ex-vivo beef muscle experiment to further validate the proposed method. Results: For the phantom study, the SNR and CNR values of the proposed method were significantly higher than those calculated from the CC-based method under different strains. The SNR and CNR increased by a factor of 1.9 and 2.7 compared to the CC-based method. For the ex-vivo experiment, the CC-based method failed to work due to large deformation (6.7%), while our proposed method could accurately detect the stiffness change. Conclusion: We have developed a 2D strain imaging technique based on deformable image registration and validated its accuracy and feasibility with phantom and ex-vivo data. This 2D ultrasound strain imaging technology may be valuable as physicians try to eliminate radiation-induced fibrosis and improve the therapeutic ratio of cancer radiotherapy. This research is supported in part by DOD PCRP Award W81XWH-13-1-0269, and National Cancer Institute (NCI) Grant CA114313.

  12. Towards a Viscous Wall Model for Immersed Boundary Methods

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian mesh immersed boundary methods is the inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. The inefficiency of resolving the thin boundary layer is associated with the use of constant aspect ratio Cartesian grid cells. Conventional CFD approaches can efficiently resolve the large wall normal gradients by utilizing large aspect ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the viscous boundary layer interaction with the flow-field away from the walls. Different wall modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually only utilize local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly more challenging flow fields including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.

  13. Study of Site Response in the Seattle and Tacoma Basins, Washington, Using Spectral Ratio Methods

    NASA Astrophysics Data System (ADS)

    Keshvardoost, R.; Wolf, L. W.

    2014-12-01

    Sedimentary basins are known to have a pronounced influence on earthquake-generated ground motions, affecting both predominant frequencies and wave amplification. These site characteristics are important elements in estimating ground shaking and seismic hazard. In this study, we use three-component broadband and strong motion seismic data from three recent earthquakes to determine site response characteristics in the Seattle and Tacoma basins, Washington. Resonant frequencies and relative amplification of ground motions were determined using Fourier spectral ratios of velocity and acceleration records from the 2012 Mw 6.1 Vancouver Island earthquake, the 2012 Mw 7.8 Queen Charlotte Island earthquake, and the 2014 Mw 6.6 Vancouver Island earthquake. Recordings from sites within and adjacent to the Seattle and Tacoma basins were selected for the study based on their signal to noise ratios. Both the Standard Spectral Ratio (SSR) and the Horizontal-to-Vertical Spectral Ratio (HVSR) methods were used in the analysis, and results from each were compared to examine their agreement and their relation to local geology. Although 57% of the sites (27 out of 48) exhibited consistent results between the two methods, other sites varied considerably. In addition, we use data from the Seattle Liquefaction Array (SLA) to evaluate the site response at 4 different depths. Results indicate that resonant frequencies remain the same at different depths but amplification decreases significantly over the top 50 m.

  14. Accurate thermometry based on the red and green fluorescence intensity ratio in NaYF4: Yb, Er nanocrystals for bioapplication.

    PubMed

    Liu, Lixin; Qin, Feng; Lv, Tianquan; Zhang, Zhiguo; Cao, Wenwu

    2016-10-15

    A biological temperature measurement method based on the fluorescence intensity ratio (FIR) was developed to reduce uncertainty. The upconversion luminescence of NaYF4:Yb, Er nanocrystals was studied as a function of temperature around the physiologically relevant range of 300-330 K. We found that the green-green FIR and the red-green FIR (I660/I540) varied linearly as temperature increased. The thermometric uncertainties using the two FIRs were discussed and were determined to be almost constant at 0.6 and 0.09 K for green-green and red-green, respectively. The lower thermometric uncertainty comes from the high signal-to-noise ratio of the measured FIRs owing to their comparable fluorescence intensities.

  15. Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F Landis

    2014-01-01

    This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.

  16. Influence of the quality of intraoperative fluoroscopic images on the spatial positioning accuracy of a CAOS system.

    PubMed

    Wang, Junqiang; Wang, Yu; Zhu, Gang; Chen, Xiangqian; Zhao, Xiangrui; Qiao, Huiting; Fan, Yubo

    2018-06-01

    Spatial positioning accuracy is a key issue in a computer-assisted orthopaedic surgery (CAOS) system. Since intraoperative fluoroscopic images are one of the most important input data to the CAOS system, the quality of these images should have a significant influence on the accuracy of the CAOS system. But the regularities and mechanism of the influence of the quality of intraoperative images on the accuracy of a CAOS system have yet to be studied. Two typical spatial positioning methods - a C-arm calibration-based method and a bi-planar positioning method - are used to study the influence of different image quality parameters, such as resolution, distortion, contrast and signal-to-noise ratio, on positioning accuracy. The error propagation rules of image error in different spatial positioning methods are analyzed by the Monte Carlo method. Correlation analysis showed that resolution and distortion had a significant influence on spatial positioning accuracy. In addition the C-arm calibration-based method was more sensitive to image distortion, while the bi-planar positioning method was more susceptible to image resolution. The image contrast and signal-to-noise ratio have no significant influence on the spatial positioning accuracy. The result of Monte Carlo analysis proved that generally the bi-planar positioning method was more sensitive to image quality than the C-arm calibration-based method. The quality of intraoperative fluoroscopic images is a key issue in the spatial positioning accuracy of a CAOS system. Although the 2 typical positioning methods have very similar mathematical principles, they showed different sensitivities to different image quality parameters. The result of this research may help to create a realistic standard for intraoperative fluoroscopic images for CAOS systems. Copyright © 2018 John Wiley & Sons, Ltd.
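
    The Monte Carlo error-propagation idea in the preceding record can be imitated with a toy model: perturb the image-plane measurements with noise whose size stands in for resolution and residual distortion, push the perturbations through an idealised bi-planar reconstruction, and look at the resulting 3-D error. The geometry and noise model below are simplified assumptions, not the paper's simulation.

        import numpy as np

        def biplanar_positioning_error(image_sigma_mm, n_trials=10000, seed=0):
            """Monte Carlo spread of 3-D position error for two ideal orthogonal views.

            View A observes (x, z) and view B observes (y, z); both are corrupted by
            zero-mean Gaussian noise of standard deviation image_sigma_mm.
            """
            rng = np.random.default_rng(seed)
            view_a = rng.normal(0.0, image_sigma_mm, size=(n_trials, 2))  # noisy (x, z)
            view_b = rng.normal(0.0, image_sigma_mm, size=(n_trials, 2))  # noisy (y, z)
            x, y = view_a[:, 0], view_b[:, 0]
            z = 0.5 * (view_a[:, 1] + view_b[:, 1])   # z is seen by both views
            err = np.sqrt(x ** 2 + y ** 2 + z ** 2)   # true point is the origin
            return err.mean(), np.percentile(err, 95)

        print(biplanar_positioning_error(0.5))   # coarser images -> larger sigma -> larger error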

  17. Supervised segmentation of microelectrode recording artifacts using power spectral density.

    PubMed

    Bakstein, Eduard; Schneider, Jakub; Sieger, Tomas; Novak, Daniel; Wild, Jiri; Jech, Robert

    2015-08-01

    Appropriate detection of clean signal segments in extracellular microelectrode recordings (MER) is vital for maintaining high signal-to-noise ratio in MER studies. Existing alternatives to manual signal inspection are based on unsupervised change-point detection. We present a method of supervised MER artifact classification, based on power spectral density (PSD), and evaluate its performance on a database of 95 labelled MER signals. The proposed method yielded a test-set accuracy of 90%, which was close to the accuracy of annotation (94%). The unsupervised methods achieved accuracy of about 77% on both training and testing data.

  18. Characterization of X Chromosome Inactivation Using Integrated Analysis of Whole-Exome and mRNA Sequencing

    PubMed Central

    Szelinger, Szabolcs; Malenica, Ivana; Corneveaux, Jason J.; Siniard, Ashley L.; Kurdoglu, Ahmet A.; Ramsey, Keri M.; Schrauwen, Isabelle; Trent, Jeffrey M.; Narayanan, Vinodh; Huentelman, Matthew J.; Craig, David W.

    2014-01-01

    In females, X chromosome inactivation (XCI) is an epigenetic, gene dosage compensatory mechanism by inactivation of one copy of X in cells. Random XCI of one of the parental chromosomes results in an approximately equal proportion of cells expressing alleles from either the maternally or paternally inherited active X, and is defined by the XCI ratio. A skewed XCI ratio is suggestive of non-random inactivation, which can play an important role in X-linked genetic conditions. Current methods rely on an indirect, semi-quantitative DNA methylation-based assay to estimate the XCI ratio. Here we report a direct approach to estimate the XCI ratio by integrated, family-trio based whole-exome and mRNA sequencing using phase-by-transmission of alleles coupled with allele-specific expression analysis. We applied this method to in silico data and to a clinical patient with mild cognitive impairment but no clear diagnosis or understanding of the molecular mechanism underlying the phenotype. Simulation showed that phased and unphased heterozygous allele expression can be used to estimate the XCI ratio. Segregation analysis of the patient's exome uncovered a de novo, interstitial, 1.7 Mb deletion on Xp22.31 that originated on the paternally inherited X and had previously been associated with a heterogeneous, neurological phenotype. Phased, allelic expression data suggested an 83:20 moderately skewed XCI that favored the expression of the maternally inherited, cytogenetically normal X and suggested that the deleterious effect of the de novo event on the paternal copy may be offset by skewed XCI that favors expression of the wild-type X. This study shows the utility of an integrated sequencing approach in XCI ratio estimation. PMID:25503791

  19. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.

  20. TH-CD-BRA-03: Direct Measurement of Magnetic Field Correction Factors, KQB, for Application in Future Codes of Practice for Reference Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolthaus, J; Asselen, B van; Woodings, S

    2016-06-15

    Purpose: With an MR-linac, radiation is delivered in the presence of a magnetic field. Modifications in the codes of practice (CoPs) for reference dosimetry are required to incorporate the effect of the magnetic field. Methods: In most CoPs the absorbed dose is determined using the well-known kQ formalism as the product of the calibration coefficient, the corrected electrometer reading and kQ, to account for the difference in beam quality. To keep a similar formalism, a single correction factor is introduced which replaces kQ and which corrects for beam quality and B-field, kQ,B. In this study we propose a method to determine kQ,B under reference conditions in the MRLinac without using a primary standard, as the product of: (i) the ratio between detector readings without and with B-field (kB); (ii) the ratio between doses in the point of measurement with and without B-field (rho); and (iii) kQ in the absence of the B-field in the MRLinac beam (kQmrl0,Q0). The ratio of the readings, which covers the change in detector reading due to the different electron trajectories in the detector, was measured with a waterproof ionization chamber (IBA-FC65g) in a water phantom in the MRLinac without and with B-field. The change in dose-to-water in the point of measurement due to the B-field was determined with a Monte Carlo based TPS. Results: For the presented approach, the measured ratio of readings is 0.956, the calculated ratio of doses in the point of measurement is 0.995. Based on TPR20,10 measurements kQ was calculated as 0.989 using NCS-18. This yields a value of 0.9408 for kQ,B. Conclusion: The presented approach to determine kQ,B agrees with a method based on primary standards within 0.4% with an uncertainty of 1% (1 std. uncert.). It differs from a similar approach using a PMMA phantom and an NE2571 chamber by 1.3%.
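
    The kQ,B value quoted in the preceding record is just the product of the three measured ratios, which is easy to verify; the numbers below are taken directly from that abstract.

        k_B  = 0.956   # ratio of detector readings without / with magnetic field
        rho  = 0.995   # ratio of doses at the measurement point with / without field
        k_Q  = 0.989   # beam-quality correction without field, from TPR20,10 and NCS-18
        print(round(k_B * rho * k_Q, 4))   # 0.9408, matching the reported kQ,B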

  21. Influence of Embedded Inhomogeneities on the Spectral Ratio of the Horizontal Components of a Random Field of Rayleigh Waves

    NASA Astrophysics Data System (ADS)

    Tsukanov, A. A.; Gorbatnikov, A. V.

    2018-01-01

    Study of the statistical parameters of the Earth's random microseismic field makes it possible to obtain estimates of the properties and structure of the Earth's crust and upper mantle. Different approaches are used to observe and process the microseismic records, which are divided into several groups of passive seismology methods. Among them are the well-known methods of surface-wave tomography, the spectral H/V ratio of the components in the surface wave, and microseismic sounding, currently under development, which uses the spectral ratio V/V0 of the vertical components between pairs of spatially separated stations. In the course of previous experiments, it became clear that these ratios are stable statistical parameters of the random field that do not depend on the properties of microseism sources. This paper proposes to expand the mentioned approach and study the possibilities for using the ratio of the horizontal components H1/H2 of the microseismic field. Numerical simulation was used to study the influence of an embedded velocity inhomogeneity on the spectral ratio of the horizontal components of the random field of fundamental Rayleigh modes, based on the concept that the Earth's microseismic field is represented by these waves in a significant part of the frequency spectrum.

  22. Cloud and Aerosol 1064nm Lidar Ratio Retrievals from the CATS Instrument

    NASA Astrophysics Data System (ADS)

    Pauly, R.; Yorks, J. E.; McGill, M. J.; Hlavka, D. L.; Midzak, N.

    2017-12-01

    The extinction to backscatter ratio or lidar ratio is an essential value in order to derive the optical properties of cloud and aerosol layers from standard elastic backscatter lidar data. For these instruments, the lidar ratio can sometimes be determined from lidar data utilizing the transmission loss or "constrained" technique. The best situations for deploying this technique involve clearly defined layers with clear sky underneath for 1-3 km. In situations where the lidar ratio cannot be calculated, look-up tables exist for various cloud and aerosol types. There is a vast data record of derived lidar ratios for various cloud and aerosol types using 532nm from an array of instruments (i.e. HSRL, CALIOP, CPL, Aeronet, MPLNET). To date, because the 1064nm molecular signal is so small, lidar ratios at 1064nm have been mostly determined from 532nm lidar ratios using angstrom exponents, color ratios and ground based non-lidar measurements, as HSRL measurements at that wavelength do not exist. Due to the better signal quality at 1064nm compared to the 532nm signal, the CATS laser was thermally tuned to increase the 1064nm output energy. Therefore, the 1064nm channel is used in nearly all CATS layer data processing, making the accurate determination of 1064nm lidar ratio imperative. The CATS 1064nm signal allows for the unique capability to determine 1064nm lidar ratios better than previous instruments. The statistical and case study results of the CATS derived smoke and dust lidar ratios will be presented. Results have shown that the previously assumed 1064nm lidar ratios for dust need to be lowered. In addition to 1064nm lidar ratio results from the traditional transmission loss technique, results for aerosol layers above opaque water clouds from a method utilizing the depolarization ratio of the opaque cloud will be discussed. Incorporating this method into the CATS algorithms should increase the number of aerosol layers with constrained lidar ratio.

  23. Pipe leak diagnostic using high frequency piezoelectric pressure sensor and automatic selection of intrinsic mode function

    NASA Astrophysics Data System (ADS)

    Yusop, Hanafi M.; Ghazali, M. F.; Yusof, M. F. M.; Remli, M. A. Pi; Kamarulzaman, M. H.

    2017-10-01

    In a recent study, the analysis of pressure transient signals could be seen as an accurate and low-cost method for leak and feature detection in water distribution systems. Transient phenomena occur due to sudden changes in the fluid's propagation in pipeline systems caused by rapid pressure and flow fluctuations due to events such as closing and opening valves rapidly or through pump failure. In this paper, the feasibility of the Hilbert-Huang transform (HHT) method for analysing pressure transient signals is presented and discussed. HHT is a way to decompose a signal into intrinsic mode functions (IMF). However, a difficulty of HHT is selecting the suitable IMF for the subsequent data post-processing step, the Hilbert transform (HT). This paper reveals that applying an integrated kurtosis-based algorithm for a z-filter technique (I-Kaz) to the kurtosis ratio (I-Kaz-Kurtosis) enables automatic selection of the IMF that should be used. This technique is demonstrated on a 57.90-meter medium-density polyethylene (MDPE) pipe installed with a single artificial leak. The analysis results using the I-Kaz-kurtosis ratio confirmed that the method can be used for automatic selection of the IMF although the noise level ratio of the signal is low. Therefore, the I-Kaz-kurtosis ratio method is recommended as a means to implement automatic selection of the IMF for HHT analysis.

  24. Monitoring of urinary calcium and phosphorus excretion in preterm infants: comparison of 2 methods.

    PubMed

    Staub, Eveline; Wiedmer, Nicolas; Staub, Lukas P; Nelle, Mathias; von Vigier, Rodo O

    2014-04-01

    Premature babies require supplementation with calcium (Ca) and phosphorus (P) to prevent metabolic bone disease of prematurity. To guide mineral supplementation, 2 methods of monitoring urinary excretion of Ca and P are used: urinary Ca or P concentration and Ca/creatinine (Crea) or P/Crea ratios. We compare these 2 methods with regard to their agreement on the need for mineral supplementation. Retrospective chart review of 230 premature babies with birth weight <1500 g, undergoing screening of urinary spot samples from day 21 of life and fortnightly thereafter. Hypothetical cutoff values for urine Ca or P concentration (1 mmol/L) and urine Ca/Crea ratio (0.5 mol/mol) or P/Crea ratio (4 mol/mol) were applied to the sample results. The agreement on whether to supplement the respective minerals based on the results with the 2 methods was compared. Multivariate general linear models sought to identify patient characteristics to predict discordant results. A total of 24.8% of cases did not agree on the indication for Ca supplementation, and 8.8% for P. Total daily Ca intake was the only patient characteristic associated with discordant results. With the intention to supplement the respective mineral, agreement between urinary mineral concentration and mineral/Crea ratio is moderate for Ca and good for P. The results do not allow identifying superiority of either method on the decision as to which babies require Ca and/or P supplements.

  25. Improved volumetric measurement of brain structure with a distortion correction procedure using an ADNI phantom.

    PubMed

    Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi

    2013-06-01

    Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the authors' method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of the authors' method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices significantly improved (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than a distortion correction method based on a description of spherical harmonics in reducing the root-mean-square errors (p < 0.001 and 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the authors' method for phantom images in terms of two geometrical values and for human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.

  26. Deep Compaction Control of Sandy Soils

    NASA Astrophysics Data System (ADS)

    Bałachowski, Lech; Kurek, Norbert

    2015-02-01

    Vibroflotation, vibratory compaction, micro-blasting or heavy tamping are typical improvement methods for the cohesionless deposits of high thickness. The complex mechanism of deep soil compaction is related to void ratio decrease with grain rearrangements, lateral stress increase, prestressing effect of certain number of load cycles, water pressure dissipation, aging and other effects. Calibration chamber based interpretation of CPTU/DMT can be used to take into account vertical and horizontal stress and void ratio effects. Some examples of interpretation of soundings in pre-treated and compacted sands are given. Some acceptance criteria for compaction control are discussed. The improvement factors are analysed including the normalised approach based on the soil behaviour type index.

  27. An information-theoretical perspective on weighted ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; van de Giesen, Nick

    2013-08-01

    This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information in an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.

  28. Comparison of two methods for estimating discharge and nutrient loads from tidally affected reaches of the Myakka and Peace Rivers, West-Central Florida

    USGS Publications Warehouse

    Levesque, V.A.; Hammett, K.M.

    1997-01-01

    The Myakka and Peace River Basins constitute more than 60 percent of the total inflow area and contribute more than half the total tributary inflow to the Charlotte Harbor estuarine system. Water discharge and nutrient enrichment have been identified as significant concerns in the estuary, and consequently, it is important to accurately estimate the magnitude of discharges and nutrient loads transported by inflows from both rivers. Two methods for estimating discharge and nutrient loads from tidally affected reaches of the Myakka and Peace Rivers were compared. The first method was a tidal-estimation method, in which discharge and nutrient loads were estimated based on stage, water-velocity, discharge, and water-quality data collected near the mouths of the rivers. The second method was a traditional basin-ratio method in which discharge and nutrient loads at the mouths were estimated from discharge and loads measured at upstream stations. Stage and water-velocity data were collected near the river mouths by submersible instruments, deployed in situ, and discharge measurements were made with an acoustic Doppler current profiler. The data collected near the mouths of the Myakka River and Peace River were filtered, using a low-pass filter, to remove daily mixed-tide effects with periods less than about 2 days. The filtered data from near the river mouths were used to calculate daily mean discharge and nutrient loads. These tidal-estimation-method values were then compared to the basin-ratio-method values. Four separate 30-day periods of differing streamflow conditions were chosen for monitoring and comparison. Discharge and nutrient load estimates computed from the tidal-estimation and basin-ratio methods were most similar during high-flow periods. However, during high flow, the values computed from the tidal-estimation method for the Myakka and Peace Rivers were consistently lower than the values computed from the basin-ratio method. There were substantial differences between discharges and nutrient loads computed from the tidal-estimation and basin-ratio methods during low-flow periods. Furthermore, the differences between the methods were not consistent. Discharges and nutrient loads computed from the tidal-estimation method for the Myakka River were higher than those computed from the basin-ratio method, whereas discharges and nutrient loads computed by the tidal-estimation method for the Peace River were not only lower than those computed from the basin-ratio method, but they actually reflected a negative, or upstream, net movement. Short-term tidal measurement results should be used with caution, because antecedent conditions can influence the discharge and nutrient loads. Continuous tidal data collected over a 1- or 2-year period would be necessary to more accurately estimate the tidally affected discharge and nutrient loads for the Myakka and Peace River Basins.

  29. Numerical black hole initial data with low eccentricity based on post-Newtonian orbital parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walther, Benny; Bruegmann, Bernd; Mueller, Doreen

    2009-06-15

    Black hole binaries on noneccentric orbits form an important subclass of gravitational wave sources, but it is a nontrivial issue to construct numerical initial data with minimal initial eccentricity for numerical simulations. We compute post-Newtonian orbital parameters for quasispherical orbits using the method of Buonanno, Chen and Damour (2006) and examine the resulting eccentricity in numerical simulations. Four different methods are studied resulting from the choice of Taylor-expanded or effective-one-body Hamiltonians, and from two choices for the energy flux. For equal-mass, nonspinning binaries the approach succeeds in obtaining low-eccentricity numerical initial data with an eccentricity of about e=0.002 for rather small initial separations of D ≳ 10M. The eccentricity increases for unequal masses and for spinning black holes, but remains smaller than that obtained from previous post-Newtonian approaches. The effective-one-body Hamiltonian offers advantages for decreasing initial separation as expected, but in the context of this study also performs significantly better than the Taylor-expanded Hamiltonian for binaries with spin. For mass ratio 4:1 and vanishing spin, the eccentricity reaches e=0.004. For mass ratio 1:1 and aligned spins of size 0.85M^2 the eccentricity is about e=0.07 for the Taylor method and e=0.014 for the effective-one-body method.
Comparative study of label and label-free techniques using shotgun proteomics for relative protein quantification.

    PubMed

    Sjödin, Marcus O D; Wetterhall, Magnus; Kultima, Kim; Artemenko, Konstantin

    2013-06-01

    The analytical performance of three different strategies for relative protein quantification using shotgun proteomics, iTRAQ (isobaric tag for relative and absolute quantification), dimethyl labeling (DML), and label-free (LF), has been evaluated. The methods were explored using samples containing (i) bovine proteins in known ratios and (ii) bovine proteins in known ratios spiked into Escherichia coli. The latter case mimics the actual conditions in a typical biological sample, with a few differentially expressed proteins and a bulk of proteins with unchanged ratios. Additionally, the evaluation was performed on both QStar and LTQ-FTICR mass spectrometers. LF LTQ-FTICR was found to have the highest proteome coverage, while the highest accuracy based on the artificially regulated proteins was found for DML LTQ-FTICR (54%). A varying linearity (k: 0.55-1.16, r²: 0.61-0.96) was shown for all methods within the selected dynamic ranges. All methods were found to consistently underestimate bovine protein ratios when matrix proteins were added; however, LF LTQ-FTICR was more tolerant toward this compression effect. A single peptide was demonstrated to be sufficient for reliable quantification using iTRAQ. A ranking system utilizing several parameters important for quantitative proteomics indicated that the overall performance of the five different methods was: DML LTQ-FTICR > iTRAQ QStar > LF LTQ-FTICR > DML QStar > LF QStar. Copyright © 2013 Elsevier B.V. All rights reserved.

Measures of model performance based on the log accuracy ratio

    DOE PAGES

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    2018-01-03

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review the existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in the recent space science literature; we highlight the benefits and drawbacks of MAPE and of proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
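Both metrics introduced above are simple functions of the log accuracy ratio ln(prediction/observation). A minimal sketch following those definitions, assuming strictly positive data (names are illustrative):

    import numpy as np

    def median_symmetric_accuracy(predicted, observed):
        """MSA = 100 * (exp(median(|ln(P/O)|)) - 1), in percent."""
        q = np.log(np.asarray(predicted, float) / np.asarray(observed, float))
        return 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)

    def symmetric_signed_percentage_bias(predicted, observed):
        """SSPB = 100 * sign(M) * (exp(|M|) - 1), with M = median(ln(P/O))."""
        q = np.log(np.asarray(predicted, float) / np.asarray(observed, float))
        m = np.median(q)
        return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

    # A model that over-predicts every point by a factor of 2 gives
    # MSA = 100% and SSPB = +100%; under-prediction by the same factor
    # gives the same MSA but SSPB = -100%, the symmetry that MAPE lacks.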
Ocean subsurface particulate backscatter estimation from CALIPSO spaceborne lidar measurements

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Pan, Delu; Wang, Tianyu; Mao, Zhihua

    2017-10-01

    A method for ocean subsurface particulate backscatter estimation from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite is demonstrated. The effects of the CALIOP receiver's transient response on the attenuated backscatter profile were first removed. The two-way transmittance of the overlying atmosphere was then estimated as the ratio of the measured ocean-surface attenuated backscatter to the theoretical value computed from the wind-driven wave slope variance. Finally, particulate backscatter was estimated from the depolarization ratio, i.e., the ratio of the column-integrated cross-polarized and co-polarized channels. Statistical results show that the particulate backscatter derived from CALIOP data agrees reasonably well with chlorophyll-a concentrations from MODIS data, indicating the potential of space-borne lidar to estimate global primary productivity and particulate carbon stock.

Tire-road friction estimation and traction control strategy for motorized electric vehicle.

    PubMed

    Jin, Li-Qiang; Ling, Mingze; Yue, Weiqiang

    2017-01-01

    In this paper, an optimal longitudinal slip ratio system for real-time identification in an electric vehicle (EV) with motored wheels is proposed, based on the adhesion between tire and road surface. First, the optimal longitudinal slip ratio for torque control is identified in real time by calculating the derivative of the adhesion coefficient and the slip rate. Second, a vehicle speed estimation method is also presented. Third, an ideal vehicle simulation model is used to verify the algorithm, and the simulations show that the identified slip ratio corresponds to the adhesion limit in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the wheel state and calculate the optimal slip ratio without a wheel speed sensor, while improving the acceleration stability of an electric vehicle with TCS.
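The slip-ratio logic described in the abstract above can be pictured as follows; reading "adhesion limit" as the point where the slope of the adhesion coefficient versus slip curve falls to zero is an illustrative interpretation, not the authors' controller, and all names are assumptions.

    import numpy as np

    def longitudinal_slip(wheel_speed_rad_s, wheel_radius_m, vehicle_speed_m_s):
        """Driving slip ratio: (omega * r - v) / (omega * r), scalar inputs."""
        v_wheel = wheel_speed_rad_s * wheel_radius_m
        return (v_wheel - vehicle_speed_m_s) / max(v_wheel, 1e-6)

    def optimal_slip(slip_samples, mu_samples):
        """Locate the adhesion limit: the first slip value at which the slope
        of the mu-slip curve (d mu / d slip) drops to zero or below."""
        slip = np.asarray(slip_samples, float)
        mu = np.asarray(mu_samples, float)
        slope = np.gradient(mu, slip)
        at_limit = np.where(slope <= 0.0)[0]
        return slip[at_limit[0]] if at_limit.size else slip[np.argmax(mu)]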
Method and system for operating an electric motor

    DOEpatents

    Gallegos-Lopez, Gabriel; Hiti, Silva; Perisic, Milun

    2013-01-22

    Methods and systems for operating an electric motor having a plurality of windings with an inverter having a plurality of switches coupled to a voltage source are provided. A first plurality of switching vectors is applied to the plurality of switches. The first plurality of switching vectors includes a first ratio of first-magnitude switching vectors to second-magnitude switching vectors. A direct current (DC) current associated with the voltage source is monitored during the applying of the first plurality of switching vectors to the plurality of switches. A second ratio of the first-magnitude switching vectors to the second-magnitude switching vectors is selected based on the monitoring of the DC current associated with the voltage source. A second plurality of switching vectors is then applied to the plurality of switches; this second plurality includes the second ratio of first-magnitude to second-magnitude switching vectors.

Methods and catalysts for making biodiesel from the transesterification and esterification of unrefined oils

    DOEpatents

    Yan, Shuli [Detroit, MI]; Salley, Steven O. [Grosse Pointe Park, MI]; Ng, K. Y. Simon [West Bloomfield, MI]

    2012-04-24

    A method of forming a biodiesel product, and a heterogeneous catalyst system used to form said product, that has a high tolerance for the presence of water and free fatty acids (FFA) in the oil feedstock is disclosed. This catalyst system may simultaneously catalyze both the esterification of FFA and the transesterification of triglycerides present in the oil feedstock. The catalyst system according to one aspect of the present disclosure represents a class of zinc and lanthanum oxide heterogeneous catalysts that include different ratios of zinc oxide to lanthanum oxide (Zn:La ratio), ranging from about 10:0 to 0:10. The Zn:La ratio in the catalyst is believed to have an effect on the number and reactivity of Lewis acid and base sites, as well as on the transesterification of glycerides, the esterification of fatty acids, and the hydrolysis of glycerides and biodiesel.
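The electric motor patent abstract above specifies that the second vector ratio is chosen from the monitored DC current, but not how the choice is made. The loop below is only a generic illustration of such a selection, here keeping the candidate ratio with the lowest monitored current; the criterion, callbacks, and names are assumptions rather than the patented logic.

    def select_switching_vector_ratio(apply_ratio, read_dc_current, candidate_ratios):
        """Apply each candidate ratio of first- to second-magnitude switching
        vectors, monitor the DC current from the voltage source, and return the
        candidate judged best by the (assumed) criterion of lowest current."""
        best_ratio, best_current = None, float("inf")
        for ratio in candidate_ratios:
            apply_ratio(ratio)            # drive the inverter with this vector mix
            i_dc = read_dc_current()      # monitored DC current for this mix
            if i_dc < best_current:
                best_ratio, best_current = ratio, i_dc
        return best_ratio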
Meta-analysis for aggregated survival data with competing risks: a parametric approach using cumulative incidence functions.

    PubMed

    Bonofiglio, Federico; Beyersmann, Jan; Schumacher, Martin; Koller, Michael; Schwarzer, Guido

    2016-09-01

    Meta-analysis of a survival endpoint is typically based on the pooling of hazard ratios (HRs). If competing risks occur, the HRs may lose their translation into changes of survival probability. The cumulative incidence functions (CIFs), the expected proportion of cause-specific events over time, re-connect the cause-specific hazards (CSHs) to the probability of each event type. We use CIF ratios to measure the treatment effect on each event type. To retrieve information on aggregated, typically poorly reported, competing risks data, we assume constant CSHs. Next, we develop methods to pool CIF ratios across studies. The procedure computes pooled HRs alongside and checks the influence of follow-up time on the analysis. We apply the method to a medical example, showing that follow-up duration is relevant both for pooled cause-specific HRs and for CIF ratios. Moreover, if the all-cause hazard and follow-up time are large enough, CIF ratios may reveal additional information about the effect of treatment on the cumulative probability of each event type. Finally, to improve the usefulness of such analyses, better reporting of competing risks data is needed. Copyright © 2015 John Wiley & Sons, Ltd.
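Under the constant cause-specific hazards assumption used above, each CIF has a closed form, so CIF ratios can be reconstructed from aggregated event counts and person-time. A minimal sketch with illustrative names (no pooling across studies):

    import numpy as np

    def cif_constant_csh(h_event, h_other, t):
        """Cumulative incidence of the event of interest at time t when both
        cause-specific hazards are constant:
        CIF(t) = h_event / h_all * (1 - exp(-h_all * t)), h_all = h_event + h_other."""
        h_all = h_event + h_other
        return (h_event / h_all) * (1.0 - np.exp(-h_all * t))

    def cif_ratio(events_trt, other_trt, persontime_trt,
                  events_ctl, other_ctl, persontime_ctl, t):
        """Treatment-vs-control CIF ratio at follow-up time t, with each
        cause-specific hazard estimated as events per unit person-time."""
        cif_trt = cif_constant_csh(events_trt / persontime_trt,
                                   other_trt / persontime_trt, t)
        cif_ctl = cif_constant_csh(events_ctl / persontime_ctl,
                                   other_ctl / persontime_ctl, t)
        return cif_trt / cif_ctl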
H¹²CN and H¹³CN excitation analysis in the circumstellar outflow of R Sculptoris

    NASA Astrophysics Data System (ADS)

    Saberi, M.; Maercker, M.; De Beck, E.; Vlemmings, W. H. T.; Olofsson, H.; Danilovich, T.

    2017-03-01

    Context. The ¹²CO/¹³CO isotopologue ratio in the circumstellar envelope (CSE) of asymptotic giant branch (AGB) stars has been extensively used as a tracer of the photospheric ¹²C/¹³C ratio. However, spatially resolved ALMA observations of R Scl, a carbon-rich AGB star, have shown that the ¹²CO/¹³CO ratio is not constant over the entire CSE and hence cannot necessarily be used as a tracer of the ¹²C/¹³C ratio. The most likely hypothesis to explain the observed discrepancy between the ¹²CO/¹³CO and ¹²C/¹³C ratios is isotopologue-selective photodissociation of CO by UV radiation. Unlike the CO isotopologue ratio, the HCN isotopologue ratio is not affected by UV radiation; therefore, HCN isotopologue ratios can be used as tracers of the atomic carbon isotope ratio in UV-irradiated regions. Aims: We present ALMA observations of H¹³CN(4-3) and APEX observations of H¹²CN(2-1) and H¹³CN(2-1, 3-2) towards R Scl. These new data, combined with previously published observations, are used to determine the abundances, the isotopologue ratio, and the sizes of the line-emitting regions of the HCN isotopologues. Methods: We performed a detailed non-LTE excitation analysis of circumstellar H¹²CN(J = 1-0, 2-1, 3-2, 4-3) and H¹³CN(J = 2-1, 3-2, 4-3) line emission around R Scl using a radiative transfer code based on the accelerated lambda iteration (ALI) method. The spatial extent of the molecular distribution of both isotopologues is constrained by the spatially resolved H¹³CN(4-3) ALMA observations. Results: We find fractional abundances of H¹²CN/H₂ = (5.0 ± 2.0) × 10⁻⁵ and H¹³CN/H₂ = (1.9 ± 0.4) × 10⁻⁶ in the inner wind (r ≤ (2.0 ± 0.25) × 10¹⁵ cm) of R Scl. The derived circumstellar isotopologue ratio of H¹²CN/H¹³CN = 26.3 ± 11.9 is consistent with the photospheric ¹²C/¹³C ratio of 19 ± 6. Conclusions: We show that the circumstellar H¹²CN/H¹³CN ratio traces the photospheric ¹²C/¹³C ratio; hence, contrary to the ¹²CO/¹³CO ratio, the H¹²CN/H¹³CN ratio is not affected by UV radiation. These results support the previously proposed explanation that isotopologue-selective shielding of CO is the main factor responsible for the observed discrepancy between the ¹²C/¹³C and ¹²CO/¹³CO ratios in the inner CSE of R Scl, and they indicate that UV radiation does affect the CO isotopologue ratio. This study also shows how important it is to have high-resolution data on the molecular line brightness distribution in order to perform proper radiative transfer modelling. This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX). APEX is a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory.
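The quoted isotopologue ratio and its uncertainty follow directly from the two fractional abundances by first-order error propagation for a quotient, as this small check shows:

    import numpy as np

    # Fractional abundances (relative to H2) and uncertainties from the abstract.
    h12cn, dh12cn = 5.0e-5, 2.0e-5
    h13cn, dh13cn = 1.9e-6, 0.4e-6

    ratio = h12cn / h13cn
    # For a quotient, relative uncertainties add in quadrature.
    dratio = ratio * np.sqrt((dh12cn / h12cn) ** 2 + (dh13cn / h13cn) ** 2)

    print(f"H12CN/H13CN = {ratio:.1f} +/- {dratio:.1f}")   # 26.3 +/- 11.9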
Real-time combustion controller

    DOEpatents

    Lindner, Jeffrey S.; Shepard, W. Steve; Etheridge, John A.; Jang, Ping-Rey; Gresham, Lawrence L.

    1997-01-01

    A method and system for regulating the air-to-fuel ratio supplied to a burner to maximize the combustion efficiency. Optical means are provided in close proximity to the burner for directing a beam of radiation from the hot gases produced by the burner to a plurality of detectors. Detectors are provided for sensing the concentrations of, inter alia, CO, CO₂, and H₂O. The differences between the ratios of CO to CO₂ and of H₂O to CO are compared with a known control curve based on those ratios for air-to-fuel ratios ranging from 0.85 to 1.30. The fuel flow is adjusted until the difference between the ratios of CO to CO₂ and H₂O to CO falls on a desired set point on the control curve.
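One cycle of the described control can be pictured as below; the proportional update, the sign convention, and the representation of the control curve as a callable are illustrative assumptions, not the patented implementation.

    def fuel_flow_correction(co, co2, h2o, control_curve, target_air_fuel, gain=0.05):
        """Compare the measured difference between the CO/CO2 and H2O/CO ratios
        with the stored control-curve value at the desired air-to-fuel ratio
        (valid over roughly 0.85-1.30) and return a proportional fuel-flow
        adjustment driving the burner toward that set point."""
        measured = (co / co2) - (h2o / co)
        setpoint = control_curve(target_air_fuel)
        return gain * (setpoint - measured)   # sign convention is illustrative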