Sample records for parameter analysis convention

  1. Novel application of parameters in waveform contour analysis for assessing arterial stiffness in aged and atherosclerotic subjects.

    PubMed

    Wu, Hsien-Tsai; Liu, Cyuan-Cin; Lin, Po-Hsun; Chung, Hui-Ming; Liu, Ming-Chien; Yip, Hon-Kan; Liu, An-Bang; Sun, Cheuk-Kwan

    2010-11-01

    Although contour analysis of pulse waves has been proposed as a non-invasive means of assessing arterial stiffness in atherosclerosis, accurate determination of the conventional parameters is usually precluded by distorted waveforms in aged and atherosclerotic subjects. We aimed to test reliable indices in these patient populations. Digital volume pulse (DVP) curves were obtained from 428 subjects recruited from a health screening program at a single medical center from January 2007 to July 2008. Demographic data, blood pressure, and conventional parameters for contour analysis including pulse wave velocity (PWV), crest time (CT), stiffness index (SI), and reflection index (RI) were recorded. Two indices, normalized crest time (NCT) and crest time ratio (CTR), were also analysed and compared with the known parameters. Though ambiguity of the dicrotic notch precluded an accurate determination of the two key conventional parameters for assessing arterial stiffness (i.e. SI and RI), NCT and CTR were unaffected because the sum of CT and T(DVP) (i.e. the duration between the systolic and diastolic peaks) tended to remain constant. NCT and CTR also correlated significantly with age, systolic and diastolic blood pressure, PWV, SI and RI (all P<0.01). NCT and CTR not only showed significant positive correlations with the conventional parameters for assessment of atherosclerosis (i.e. SI, RI, and PWV), but are also of particular value in assessing the degree of arterial stiffness in subjects with an indiscernible diastolic peak that precludes the use of conventional parameters in waveform contour analysis. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
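    The crest-time measurement described above can be sketched in code. This is an illustrative sketch only: the paper does not give the exact definitions of NCT and CTR here, so the normalization below (crest time divided by the pulse period) is an assumption, and the single-beat waveform and sampling rate are hypothetical.

```python
import math

def crest_time(signal, fs):
    """Crest time (CT): seconds from the pulse foot (start of the beat)
    to the systolic peak, for a single-beat DVP waveform sampled at fs Hz."""
    peak_idx = max(range(len(signal)), key=lambda i: signal[i])
    return peak_idx / fs

def normalized_crest_time(signal, fs, pulse_period):
    """Hypothetical NCT: crest time divided by the pulse period.
    The paper's exact normalization may differ."""
    return crest_time(signal, fs) / pulse_period

# Synthetic single beat sampled at 100 Hz, systolic peak at sample 20 (0.2 s).
fs = 100.0
beat = [math.sin(math.pi * i / 40) for i in range(41)]
ct = crest_time(beat, fs)                                   # 0.2 s
nct = normalized_crest_time(beat, fs, pulse_period=0.8)     # 0.25
```

    The appeal of such ratio indices is visible in the sketch: only the peak location and the beat period are needed, not the dicrotic notch that conventional SI/RI computation requires.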

  2. Diagnostic performance of conventional MRI parameters and apparent diffusion coefficient values in differentiating between benign and malignant soft-tissue tumours.

    PubMed

    Song, Y; Yoon, Y C; Chong, Y; Seo, S W; Choi, Y-L; Sohn, I; Kim, M-J

    2017-08-01

    To compare the abilities of conventional magnetic resonance imaging (MRI) and the apparent diffusion coefficient (ADC) in differentiating between benign and malignant soft-tissue tumours (STT). A total of 123 patients with STT who underwent 3 T MRI, including diffusion-weighted imaging (DWI), were retrospectively analysed using various conventional MRI parameters, ADCmean, and ADCmin. For the all-STT group, all conventional MRI parameters of malignant STT except deep compartment involvement differed significantly from those of benign STT on univariate analysis. Maximum tumour diameter (p=0.001; odds ratio [OR], 8.97) and ADCmean (p=0.020; OR, 4.30) were independent factors on multivariate analysis. For the non-myxoid non-haemosiderin STT group, signal heterogeneity on axial T1-weighted imaging (T1WI; p=0.017), ADCmean, and ADCmin (p=0.001, p=0.001) showed significant differences between malignancy and benignity on univariate analysis. Signal heterogeneity on axial T1WI (p=0.025; OR, 12.64) and ADCmean (p=0.004; OR, 33.15) were independent factors on multivariate analysis. ADC values as well as conventional MRI parameters were useful in differentiating between benign and malignant STT. ADCmean was the most powerful diagnostic parameter in non-myxoid non-haemosiderin STT. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  3. Deviation Value for Conventional X-ray in Hospitals in South Sulawesi Province from 2014 to 2016

    NASA Astrophysics Data System (ADS)

    Bachtiar, Ilham; Abdullah, Bualkar; Tahir, Dahlan

    2018-03-01

    This paper describes the conventional X-ray machine parameters tested in the region of South Sulawesi from 2014 to 2016. The objective of this research is to determine the deviation of each parameter of conventional X-ray machines. The test parameters were analyzed using quantitative methods with a participatory observational approach. Data collection was performed by testing the output of conventional X-ray machines using a non-invasive X-ray multimeter. The test parameters include tube voltage (kV) accuracy, radiation output linearity, reproducibility, and radiation beam quality (half-value layer, HVL). The results of the analysis show that the four conventional X-ray test parameters have varying deviation spans: the tube voltage (kV) accuracy has an average deviation of 4.12%, the average radiation output linearity is 4.47%, the average reproducibility is 0.62%, and the average radiation beam quality (HVL) is 3.00 mm.
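    The accuracy figures above are averages of per-test-point percentage deviations. A minimal sketch of that computation, with hypothetical set/measured tube-voltage pairs (the study's actual readings are not reproduced here):

```python
def percent_deviation(set_value, measured_value):
    """Percentage deviation of a measured machine parameter (e.g. tube
    voltage in kV) from its set value."""
    return abs(measured_value - set_value) / set_value * 100.0

def mean_percent_deviation(pairs):
    """Average deviation over a series of (set, measured) test points."""
    devs = [percent_deviation(s, m) for s, m in pairs]
    return sum(devs) / len(devs)

# Illustrative tube-voltage accuracy test at three set points.
readings = [(70, 72.8), (80, 83.2), (90, 93.6)]
avg = mean_percent_deviation(readings)   # 4.0% for this synthetic data
```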

  4. Reversible heart rhythm complexity impairment in patients with primary aldosteronism

    NASA Astrophysics Data System (ADS)

    Lin, Yen-Hung; Wu, Vin-Cent; Lo, Men-Tzung; Wu, Xue-Ming; Hung, Chi-Sheng; Wu, Kwan-Dun; Lin, Chen; Ho, Yi-Lwun; Stowasser, Michael; Peng, Chung-Kang

    2015-08-01

    Excess aldosterone secretion in patients with primary aldosteronism (PA) impairs their cardiovascular system. Heart rhythm complexity analysis, derived from heart rate variability (HRV), is a powerful tool to quantify the complex regulatory dynamics of human physiology. We prospectively analyzed 20 patients with aldosterone-producing adenoma (APA) who underwent adrenalectomy and 25 patients with essential hypertension (EH). The heart rate data were analyzed by conventional HRV and heart rhythm complexity analysis, including detrended fluctuation analysis (DFA) and multiscale entropy (MSE). We found that APA patients had significantly decreased DFAα2 on DFA analysis and decreased area 1-5, area 6-15, and area 6-20 on MSE analysis (all p < 0.05). Area 1-5, area 6-15, and area 6-20 in the MSE study correlated significantly with log-transformed renin activity and log-transformed aldosterone-renin ratio (all p ≤ 0.01). The conventional HRV parameters were comparable between PA and EH patients. After adrenalectomy, all the altered DFA and MSE parameters improved significantly (all p < 0.05); the conventional HRV parameters did not change. Our results suggest that heart rhythm complexity is impaired in APA patients and that this is at least partially reversed by adrenalectomy.
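    Detrended fluctuation analysis, one of the two complexity methods used above, can be sketched as follows. This is a minimal pure-Python version of the standard DFA procedure; the box sizes, the α1/α2 crossover ranges, and the RR-interval preprocessing used in the paper are not specified here and are assumptions.

```python
import math
import random

def dfa_fluctuation(rr, n):
    """Root-mean-square fluctuation F(n) of DFA for box size n:
    integrate the mean-centered series, split it into boxes of n samples,
    linearly detrend each box, and take the RMS residual."""
    mean = sum(rr) / len(rr)
    y, s = [], 0.0
    for x in rr:                      # cumulative-sum profile
        s += x - mean
        y.append(s)
    n_boxes = len(y) // n
    sq = 0.0
    for b in range(n_boxes):
        seg = y[b * n:(b + 1) * n]
        t = list(range(n))
        tm, sm = sum(t) / n, sum(seg) / n
        denom = sum((ti - tm) ** 2 for ti in t)
        slope = sum((ti - tm) * (si - sm) for ti, si in zip(t, seg)) / denom
        for ti, si in zip(t, seg):    # residual about the local linear trend
            sq += (si - (sm + slope * (ti - tm))) ** 2
    return math.sqrt(sq / (n_boxes * n))

def dfa_alpha(rr, sizes):
    """Scaling exponent: slope of log F(n) vs log n over the given box
    sizes. DFA alpha-2 corresponds to the long-range (large-n) region."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(dfa_fluctuation(rr, n)) for n in sizes]
    xm, ym = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / \
        sum((x - xm) ** 2 for x in xs)

# Uncorrelated noise has alpha near 0.5; correlated heart-rate dynamics
# typically give higher values.
random.seed(0)
rr = [random.random() for _ in range(512)]
alpha = dfa_alpha(rr, [4, 8, 16, 32])
```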

  5. Application of a trigonometric finite difference procedure to numerical analysis of compressive and shear buckling of orthotropic panels

    NASA Technical Reports Server (NTRS)

    Stein, M.; Housner, J. D.

    1978-01-01

    A numerical analysis developed for the buckling of rectangular orthotropic layered panels under combined shear and compression is described. This analysis uses a central finite difference procedure based on trigonometric functions instead of using the conventional finite differences which are based on polynomial functions. Inasmuch as the buckle mode shape is usually trigonometric in nature, the analysis using trigonometric finite differences can be made to exhibit a much faster convergence rate than that using conventional differences. Also, the trigonometric finite difference procedure leads to difference equations having the same form as conventional finite differences; thereby allowing available conventional finite difference formulations to be converted readily to trigonometric form. For two-dimensional problems, the procedure introduces two numerical parameters into the analysis. Engineering approaches for the selection of these parameters are presented and the analysis procedure is demonstrated by application to several isotropic and orthotropic panel buckling problems. Among these problems is the shear buckling of stiffened isotropic and filamentary composite panels in which the stiffener is broken. Results indicate that a break may degrade the effect of the stiffener to the extent that the panel will not carry much more load than if the stiffener were absent.
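    The key idea of the paper — replacing the polynomial-based difference weight with one that is exact for trigonometric functions — can be shown on the second derivative. In the sketch below, the trigonometric weight k²/(2(1−cos kh)) makes the three-point operator exact for sin(kx) and cos(kx); k is the numerical parameter matched to the expected buckle mode. The specific k, h, and x values are illustrative.

```python
import math

def d2_conventional(y_m, y_0, y_p, h):
    """Conventional central difference for the second derivative
    (polynomial-based, O(h^2) error)."""
    return (y_p - 2.0 * y_0 + y_m) / h ** 2

def d2_trigonometric(y_m, y_0, y_p, h, k):
    """Trigonometric central difference: same three-point form, but the
    1/h^2 weight is replaced so the operator is exact for sin(kx) and
    cos(kx) at wavenumber k."""
    return (y_p - 2.0 * y_0 + y_m) * k ** 2 / (2.0 * (1.0 - math.cos(k * h)))

# Compare both on y = sin(kx), whose exact second derivative is -k^2 sin(kx).
k, h, x = 3.0, 0.2, 0.7
y = [math.sin(k * (x + d)) for d in (-h, 0.0, h)]
exact = -k ** 2 * math.sin(k * x)
err_conv = abs(d2_conventional(*y, h) - exact)      # noticeable O(h^2) error
err_trig = abs(d2_trigonometric(*y, h, k) - exact)  # round-off only
```

    Because both operators have the same three-point form, a conventional finite-difference code can switch to the trigonometric scheme by changing only the weight, which is the convertibility property the abstract notes.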

  6. Comprehensive non-dimensional normalization of gait data.

    PubMed

    Pinzone, Ornella; Schwartz, Michael H; Baker, Richard

    2016-02-01

    Normalizing clinical gait analysis data is required to remove variability due to physical characteristics such as leg length and weight. This is particularly important for children, where both are associated with age. In most clinical centres conventional normalization (by mass only) is used, whereas there is a stronger biomechanical argument for non-dimensional normalization. This study used data from 82 typically developing children to compare how the two schemes performed over a wide range of temporal-spatial and kinetic parameters by calculating coefficients of determination with leg length, weight and height. 81% of the conventionally normalized parameters had a coefficient of determination above the threshold for a statistical association (p<0.05), compared to 23% of those normalized non-dimensionally. All the conventionally normalized parameters exceeding this threshold showed a reduced association with non-dimensional normalization. In conclusion, non-dimensional normalization is more effective than conventional normalization in reducing the effects of height, weight and age across a comprehensive range of temporal-spatial and kinetic parameters. Copyright © 2015 Elsevier B.V. All rights reserved.
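    The contrast between the two schemes can be sketched for a joint moment and walking speed. The non-dimensional formulas below follow the Hof-style scheme commonly used for gait data (moment / m·g·l, speed / √(g·l)); the paper's exact scheme may differ in detail, and the subject values are illustrative.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def normalize_conventional(moment_Nm, mass_kg):
    """Conventional normalization: kinetic parameters divided by body
    mass only (units remain, e.g. N.m/kg)."""
    return moment_Nm / mass_kg

def normalize_nondimensional(moment_Nm, mass_kg, leg_length_m):
    """Non-dimensional normalization of a joint moment: divide by
    m * g * l so the result is dimensionless."""
    return moment_Nm / (mass_kg * G * leg_length_m)

def nondimensional_speed(speed_ms, leg_length_m):
    """Non-dimensional walking speed: v / sqrt(g * l), the square root
    of the Froude number."""
    return speed_ms / math.sqrt(G * leg_length_m)

# Illustrative child: 30 kg, 0.6 m leg length, peak knee moment 18 N.m,
# walking at 1.1 m/s.
m_conv = normalize_conventional(18.0, 30.0)        # 0.6 N.m/kg
m_nd = normalize_nondimensional(18.0, 30.0, 0.6)   # dimensionless, ~0.102
v_nd = nondimensional_speed(1.1, 0.6)              # ~0.45
```

    Dividing by mass alone leaves a leg-length dependence in the moment, which is why mass-only normalization retains correlations with height and age in growing children.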

  7. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei

    2016-03-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in essentially the same way as the least-squares function is used in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
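    The extended-likelihood idea can be sketched for a straight line through the origin, y = a·s·x, where s is a multiplicative correction factor whose Type B uncertainty u_B is encoded as a Gaussian penalty about 1. This is a simplified illustration of the general approach, not the paper's actual case study; the model, noise level, and grid search are assumptions.

```python
def extended_nll(a, s, xs, ys, sigma, u_b):
    """Extended negative log-likelihood (up to a constant): the usual
    least-squares term plus a Gaussian penalty encoding the Type B
    uncertainty u_b on the nuisance correction factor s (prior mean 1)."""
    ls = sum((y - a * s * x) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)
    return ls + (s - 1.0) ** 2 / (2 * u_b ** 2)

def profile_s(a, xs, ys, sigma, u_b):
    """For fixed slope a the penalized likelihood is quadratic in s, so
    profiling out the nuisance parameter has a closed form."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (a * sxy / sigma ** 2 + 1.0 / u_b ** 2) / \
        (a * a * sxx / sigma ** 2 + 1.0 / u_b ** 2)

def profile_fit(xs, ys, sigma, u_b, a_grid):
    """Profile-likelihood estimate of a: eliminate s at each grid point
    and keep the a minimizing the profiled NLL (a coarse grid search)."""
    return min(a_grid,
               key=lambda a: extended_nll(a, profile_s(a, xs, ys, sigma, u_b),
                                          xs, ys, sigma, u_b))

# Noise-free illustrative data generated with a = 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]
a_grid = [i / 100.0 for i in range(100, 301)]       # 1.00 .. 3.00
a_hat = profile_fit(xs, ys, sigma=0.1, u_b=0.05, a_grid=a_grid)  # 2.0
```

    The profiled s never appears in the reported result, which is exactly the role the abstract assigns to the profile likelihood.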

  8. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  9. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
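    The conventional 3-2-1-1 input mentioned above is a square-wave multistep whose successive pulse widths are 3, 2, 1, and 1 base-pulse units with alternating sign. A minimal generator (sampling rate, amplitude, and base pulse width below are illustrative, not HARV values):

```python
def multistep_input(pattern, dt, amplitude, base_pulse):
    """Square-wave multistep input for parameter-estimation flight tests.
    pattern gives pulse widths in units of base_pulse with alternating
    sign; pattern (3, 2, 1, 1) is the conventional 3-2-1-1 input."""
    t, u, sign = [], [], 1.0
    time = 0.0
    for width in pattern:
        n = int(round(width * base_pulse / dt))
        for _ in range(n):
            t.append(time)
            u.append(sign * amplitude)
            time += dt
        sign = -sign
    return t, u

# A 3-2-1-1 control-surface input: 1 s base pulse, 2 deg amplitude,
# sampled at 10 Hz: +2 for 3 s, -2 for 2 s, +2 for 1 s, -2 for 1 s.
t, u = multistep_input((3, 2, 1, 1), dt=0.1, amplitude=2.0, base_pulse=1.0)
```

    Optimal input design replaces this fixed pattern with a waveform shaped to maximize expected parameter information at the flight condition, which is what the flight tests above evaluated.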

  10. [Nitrogen status diagnosis of rice by using a digital camera].

    PubMed

    Jia, Liang-Liang; Fan, Ming-Sheng; Zhang, Fu-Suo; Chen, Xin-Ping; Lü, Shi-Hua; Sun, Yan-Ming

    2009-08-01

    In the present research, a field experiment with different N application rates was conducted to study the possibility of using visible-band color analysis methods to monitor the N status of a rice canopy. Correlations between the visible-band color intensities of rice canopy images acquired with a digital camera and conventional nitrogen status diagnosis parameters (leaf SPAD chlorophyll meter readings, total N content, aboveground biomass, and N uptake) were studied. The results showed that red color intensity (R), green color intensity (G) and normalized redness intensity (NRI) have significant inverse linear correlations with the conventional N diagnosis parameters of SPAD readings, total N content, aboveground biomass and total N uptake. The correlation coefficient values (r) were from -0.561 to -0.714 for the red band (R), from -0.452 to -0.505 for the green band (G), and from -0.541 to -0.817 for normalized redness intensity (NRI). However, normalized greenness intensity (NGI) showed a significant positive correlation with the conventional N parameters, with correlation coefficient values (r) from 0.505 to 0.559. Compared with SPAD readings, normalized redness intensity (NRI), with high absolute r values of 0.541-0.780 against the conventional N parameters, could better express the N status of rice. The digital image color analysis method thus shows potential for use in rice N status diagnosis in the future.
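    The normalized intensity indices can be computed directly from per-channel values. The definitions below (NRI = R/(R+G+B), NGI = G/(R+G+B)) are the usual ones for these names; the paper's exact formulas are assumed to match, and the channel means are hypothetical.

```python
def normalized_intensities(r, g, b):
    """Normalized redness (NRI) and greenness (NGI) intensities from the
    red, green and blue channel values (e.g. per-channel image means):
    NRI = R/(R+G+B), NGI = G/(R+G+B)."""
    total = float(r + g + b)
    return r / total, g / total

# Channel means of a hypothetical rice canopy image.
nri, ngi = normalized_intensities(90, 140, 70)   # (0.3, ~0.467)
```

    Normalizing by the channel sum removes overall brightness variation between photographs, which is why the normalized indices track N status better than raw channel intensities.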

  11. Relation of thromboelastography parameters to conventional coagulation tests used to evaluate the hypercoagulable state of aged fracture patients

    PubMed Central

    Liu, Chen; Guan, Zhao; Xu, Qinzhu; Zhao, Lei; Song, Ying; Wang, Hui

    2016-01-01

    Fractures are common among aged people, and rapid assessment of the coagulation status is important. The thromboelastography (TEG) test can give a series of coagulation parameters and has been widely used in clinics. In this research, we looked at fracture patients over 60 and compared their TEG results with those of healthy controls. Since there is a paucity of studies comparing TEG assessments with conventional coagulation tests, we aim to clarify the relationship between TEG values and the values given by conventional coagulation tests. Forty fracture patients (27 femur and 13 humerus) over 60 years old were included in the study. The change in their coagulation status was evaluated by TEG before surgery within 4 hours after the fracture. Changes in TEG parameters were analyzed compared with controls. Conventional coagulation test results for the patients, including activated partial thromboplastin time (APTT), international normalized ratio (INR), fibrinogen, and platelets, were also acquired, and correlation analysis was done with TEG parameters, measuring similar aspects of the coagulation cascade. In addition, the sensitivity and specificity of TEG parameters for detecting raised fibrinogen levels were also analyzed. The K (time to 20 mm clot amplitude) and R (reaction time) values of aged fracture patients were lower than controls. The values for angle, maximal amplitude (MA), and coagulation index (CI) were raised compared with controls, indicating a hypercoagulable state. Correlation analysis showed that there were significant positive correlations between fibrinogen and MA/angle, between platelets and MA, and between APTT and R as well. There was significant negative correlation between fibrinogen and K. In addition, K values have better sensitivity and specificity for detecting elevated fibrinogen concentration than angle and MA values. 
Aged fracture patients tend to be in a hypercoagulable state, and this could be effectively reflected by a TEG test. There were correlations between TEG parameters and corresponding conventional tests. K values can better predict elevated fibrinogen levels in aged fracture patients. PMID:27311005

  12. Relation of thromboelastography parameters to conventional coagulation tests used to evaluate the hypercoagulable state of aged fracture patients.

    PubMed

    Liu, Chen; Guan, Zhao; Xu, Qinzhu; Zhao, Lei; Song, Ying; Wang, Hui

    2016-06-01

    Fractures are common among aged people, and rapid assessment of the coagulation status is important. The thromboelastography (TEG) test can give a series of coagulation parameters and has been widely used in clinics. In this research, we looked at fracture patients over 60 and compared their TEG results with those of healthy controls. Since there is a paucity of studies comparing TEG assessments with conventional coagulation tests, we aim to clarify the relationship between TEG values and the values given by conventional coagulation tests. Forty fracture patients (27 femur and 13 humerus) over 60 years old were included in the study. The change in their coagulation status was evaluated by TEG before surgery within 4 hours after the fracture. Changes in TEG parameters were analyzed compared with controls. Conventional coagulation test results for the patients, including activated partial thromboplastin time (APTT), international normalized ratio (INR), fibrinogen, and platelets, were also acquired, and correlation analysis was done with TEG parameters, measuring similar aspects of the coagulation cascade. In addition, the sensitivity and specificity of TEG parameters for detecting raised fibrinogen levels were also analyzed. The K (time to 20 mm clot amplitude) and R (reaction time) values of aged fracture patients were lower than controls. The values for angle, maximal amplitude (MA), and coagulation index (CI) were raised compared with controls, indicating a hypercoagulable state. Correlation analysis showed that there were significant positive correlations between fibrinogen and MA/angle, between platelets and MA, and between APTT and R as well. There was significant negative correlation between fibrinogen and K. In addition, K values have better sensitivity and specificity for detecting elevated fibrinogen concentration than angle and MA values. Aged fracture patients tend to be in a hypercoagulable state, and this could be effectively reflected by a TEG test. 
There were correlations between TEG parameters and corresponding conventional tests. K values can better predict elevated fibrinogen levels in aged fracture patients.

  13. Fibre Optic Sensors for Selected Wastewater Characteristics

    PubMed Central

    Chong, Su Sin; Abdul Aziz, A. R.; Harun, Sulaiman W.

    2013-01-01

    Demand for online and real-time measurement techniques to meet environmental regulation and treatment compliance is increasing. However, the conventional techniques, which involve scheduled sampling and chemical analysis, can be expensive and time consuming. Cheaper and faster alternatives to conventional methods for monitoring wastewater characteristics are therefore required. This paper reviews existing conventional techniques and optical and fibre optic sensors for determining selected wastewater characteristics, namely colour, Chemical Oxygen Demand (COD) and Biological Oxygen Demand (BOD). The review confirms that, with appropriate configuration, calibration and fibre features, these parameters can be determined with accuracy comparable to conventional methods. With more research in this area, the potential for using FOS for online and real-time measurement of more wastewater parameters for various types of industrial effluent is promising. PMID:23881131

  14. Diagnostic accuracy and functional parameters of myocardial perfusion scintigraphy using accelerated cardiac acquisition with IQ SPECT technique in comparison to conventional imaging.

    PubMed

    Pirich, Christian; Keinrath, Peter; Barth, Gabriele; Rendl, Gundula; Rettenbacher, Lukas; Rodrigues, Margarida

    2017-03-01

    IQ SPECT consists of a new pinhole-like collimator, cardio-centric acquisition, and advanced 3D iterative SPECT reconstruction. The aim of this paper was to compare the diagnostic accuracy and functional parameters obtained with IQ SPECT versus conventional SPECT in patients undergoing myocardial perfusion scintigraphy with adenosine stress and at rest. Eight patients with known or suspected coronary artery disease underwent [99mTc]tetrofosmin gated SPECT. Acquisition was performed on a Symbia T6 equipped with IQ SPECT and on a conventional gamma camera system. Gated SPECT data were used to calculate functional parameters. Score analysis was performed on a 17-segment model. Coronary angiography and clinical follow-up were considered the diagnostic reference standard. Mean acquisition time was 4 minutes with IQ SPECT and 21 minutes with conventional SPECT. The degree of agreement in diagnostic accuracy between the two systems was 0.97 for stress studies, 0.91 for rest studies and 0.96 for both studies. Perfusion abnormality scores obtained using IQ SPECT and conventional SPECT were not significantly different: SSS, 9.7±8.8 and 10.1±6.4; SRS, 7.1±6.1 and 7.5±7.3; SDS, 4.0±6.1 and 3.9±4.3, respectively. However, a significant difference was found in functional parameters derived from IQ SPECT and conventional SPECT, both after stress and at rest. Mean LVEF was 8% lower using IQ SPECT. Differences in LVEF were found both in patients with normal LVEF and in patients with reduced LVEF. Functional parameters using accelerated cardiac acquisition with IQ SPECT are significantly different from those obtained with conventional SPECT, while agreement in the clinical interpretation of myocardial perfusion scintigraphy between the two techniques is high.

  15. Botanical discrimination of Greek unifloral honeys with physico-chemical and chemometric analyses.

    PubMed

    Karabagias, Ioannis K; Badeka, Anastasia V; Kontakos, Stavros; Karabournioti, Sofia; Kontominas, Michael G

    2014-12-15

    The aim of the present study was to investigate the possibility of characterisation and classification of Greek unifloral honeys (pine, thyme, fir and orange blossom) according to botanical origin using volatile compounds, conventional physico-chemical parameters and chemometric analyses (MANOVA and Linear Discriminant Analysis). For this purpose, 119 honey samples were collected during the 2011 harvesting period from 14 different regions in Greece known to produce unifloral honey of good quality. Physico-chemical analysis included the identification and semi-quantification of fifty-five volatile compounds, performed by headspace solid-phase microextraction coupled to gas chromatography/mass spectrometry, and the determination of conventional quality parameters such as pH; free, lactonic and total acidity; electrical conductivity; moisture; ash; lactonic/free acidity ratio; and the colour parameters L, a, b. Results showed that, using 40 diverse variables (30 volatile compounds of different classes and 10 physico-chemical parameters), the honey samples were satisfactorily classified according to botanical origin using volatile compounds (84.0% correct prediction), physico-chemical parameters (97.5% correct prediction), and the combination of both (95.8% correct prediction), indicating that multi-element analysis is a powerful tool for honey discrimination purposes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Impact of Cultivation Conditions, Ethylene Treatment, and Postharvest Storage on Selected Quality and Bioactivity Parameters of Kiwifruit "Hayward" Evaluated by Analytical and Chemometric Methods.

    PubMed

    Park, Yong Seo; Polovka, Martin; Ham, Kyung-Sik; Park, Yang-Kyun; Vearasilp, Suchada; Namieśnik, Jacek; Toledo, Fernando; Arancibia-Avila, Patricia; Gorinstein, Shela

    2016-09-01

    Organic, semiorganic, and conventional "Hayward" kiwifruits, treated with ethylene for 24 h and stored for 10 days, were assessed by UV spectrometry, fluorometry, and chemometric analysis for changes in selected characteristics of quality (firmness, dry matter and soluble solids contents, pH, and acidity) and bioactivity (concentration of polyphenols via Folin-Ciocalteu and p-hydroxybenzoic acid assays). All of the monitored quality parameters and bioactivity-related characteristics were affected either by cultivation practices or by ethylene treatment and storage. The results obtained, supported by statistical evaluation (Friedman two-way ANOVA) and chemometric analysis, clearly showed that ethylene treatment had the most significant impact on the majority of the evaluated quality and bioactivity parameters of "Hayward" kiwifruit, followed by cultivation practices and postharvest storage. The total concentration of polyphenols expressed via the p-hydroxybenzoic acid assay was the most sensitive to all three evaluated factors, reaching a 16.5% increase for fresh organic fruit compared to a conventional control sample. As a result of postharvest storage coupled with ethylene treatment, the difference increased to 26.3%. Three-dimensional fluorescence showed differences in the position of the main peaks and their fluorescence intensity for conventional, semiorganic, and organic kiwifruits in comparison with ethylene-untreated samples.

  17. Can Functional Cardiac Age be Predicted from ECG in a Normal Healthy Population

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd; Starc, Vito; Leban, Manja; Sinigoj, Petra; Vrhovec, Milos

    2011-01-01

    In a normal healthy population, we desired to determine the most age-dependent conventional and advanced ECG parameters. We hypothesized that changes in several ECG parameters might correlate with age and together reliably characterize the functional age of the heart. Methods: An initial study population of 313 apparently healthy subjects was ultimately reduced to 148 subjects (74 men, 84 women, in the range from 10 to 75 years of age) after exclusion criteria. In all subjects, ECG recordings (resting 5-minute 12-lead high frequency ECG) were evaluated via custom software programs to calculate up to 85 different conventional and advanced ECG parameters including beat-to-beat QT and RR variability, waveform complexity, and signal-averaged, high-frequency and spatial/spatiotemporal ECG parameters. The prediction of functional age was evaluated by multiple linear regression analysis using the best 5 univariate predictors. Results: Ignoring what were ultimately small differences between males and females, the functional age was found to be predicted (R2= 0.69, P < 0.001) from a linear combination of 5 independent variables: QRS elevation in the frontal plane (p<0.001), a new repolarization parameter QTcorr (p<0.001), mean high frequency QRS amplitude (p=0.009), the variability parameter % VLF of RRV (p=0.021) and the P-wave width (p=0.10). Here, QTcorr represents the correlation between the calculated QT and the measured QT signal. Conclusions: In apparently healthy subjects with normal conventional ECGs, functional cardiac age can be estimated by multiple linear regression analysis of mostly advanced ECG results. Because some parameters in the regression formula, such as QTcorr, high frequency QRS amplitude and P-wave width also change with disease in the same direction as with increased age, increased functional age of the heart may reflect subtle age-related pathologies in cardiac electrical function that are usually hidden on conventional ECG.

  18. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    EPA Science Inventory

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  19. Eight weeks of a combination of high intensity interval training and conventional training reduce visceral adiposity and improve physical fitness: a group-based intervention.

    PubMed

    Giannaki, Christoforos D; Aphamis, George; Sakkis, Panikos; Hadjicharalambous, Marios

    2016-04-01

    High intensity interval training (HIIT) has recently been promoted as an effective, low-volume and time-efficient training method for improving fitness and health-related parameters. The aim of the current study was to examine the effect of a combination of group-based HIIT and conventional gym training on physical fitness and body composition parameters in healthy adults. Thirty-nine healthy adults volunteered to participate in this eight-week intervention study. Twenty-three participants performed regular gym training 4 days a week (C group), whereas the remaining 16 participants engaged twice a week in HIIT and twice in regular gym training (HIIT-C group). Total body fat and visceral adiposity levels were calculated using bioelectrical impedance analysis. Physical fitness parameters such as cardiorespiratory fitness, speed, lower limb explosiveness, flexibility and isometric arm strength were assessed through a battery of field tests. Both exercise programs were effective in reducing total body fat and visceral adiposity (P<0.05) and improving handgrip strength, sprint time, jumping ability and flexibility (P<0.05), whilst only the combination of HIIT and conventional training improved cardiorespiratory fitness (P<0.05). A between-group analysis of changes revealed that HIIT-C resulted in a significantly greater reduction in both abdominal girth and visceral adiposity compared with conventional training (P<0.05). Eight weeks of combined group-based HIIT and conventional training improved various physical fitness parameters and reduced both total and visceral fat levels. This type of training was also superior to conventional exercise training alone in reducing visceral adiposity. Group-based HIIT may be considered a good method for individuals who exercise in gyms and want to achieve significant fitness benefits in a relatively short period of time.

  20. Dosimetric quality endpoints for low-dose-rate prostate brachytherapy using biological effective dose (bed) vs. conventional dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Rachana; Al-Hallaq, Hania; Pelizzari, Charles A.

    2003-12-31

    The purpose of this study was to compare conventional low-dose-rate prostate brachytherapy dosimetric quality parameters with their biological effective dose (BED) counterparts. To validate a model for transformation from conventional dose to BED, the postimplant plans of 31 prostate brachytherapy patients were evaluated using conventional dose-volume histogram (DVH) quality endpoints and analogous BED-DVH endpoints. Based on CT scans obtained 4 weeks after implantation, DVHs were computed and the standard dosimetric endpoints V100 (volume receiving 100% of the prescribed dose), V150, V200, HI (1-[V150/V100]), and D90 (dose that 90% of the target volume received) were obtained for quality analysis. Using known and reported transformations, dose grids were transformed to BED-early (α/β = 10 Gy) and BED-late (α/β = 3 Gy) grids, and the same dosimetric endpoints were analyzed. For conventional, BED-early and BED-late DVHs, no differences in V100 were seen (0.896, 0.893, and 0.894, respectively). However, V150 and V200 were significantly higher for both BED-early (0.582 and 0.316) and BED-late (0.595 and 0.337) DVHs compared with the conventional (0.539 and 0.255) DVHs. D90 was significantly lower for the BED-early (103.1 Gy) and BED-late (106.9 Gy) transformations than for the conventional (119.5 Gy) DVHs. The conventional prescription parameter V100 is the same for the corresponding BED-early and BED-late transformed DVHs. The toxicity parameters V150 and V200 are slightly higher using the BED transformations, suggesting that the BED doses are somewhat higher than predicted using conventional DVHs. The prescription/quality parameter D90 is slightly lower, implying that target coverage is lower than predicted using conventional DVHs. This methodology can be applied to analyze BED dosimetric endpoints to improve clinical outcomes and reduce complications of prostate brachytherapy.
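    The DVH endpoints named above can be sketched directly from a list of voxel doses. The HI formula is the one given in the abstract; the equal-voxel-volume assumption, the D90 convention (dose covering 90% of the volume), and the dose values are illustrative simplifications.

```python
def dvh_endpoints(doses, prescription):
    """Dosimetric quality endpoints from equal-volume voxel doses in the
    target: Vx is the fractional volume receiving at least x% of the
    prescription, HI = 1 - V150/V100 (per the abstract), and D90 is the
    dose covering 90% of the target volume."""
    n = len(doses)
    def v(frac):
        return sum(1 for d in doses if d >= frac * prescription) / n
    v100, v150, v200 = v(1.0), v(1.5), v(2.0)
    hi = 1.0 - v150 / v100 if v100 else 0.0
    d90 = sorted(doses, reverse=True)[int(0.9 * n) - 1]
    return {"V100": v100, "V150": v150, "V200": v200, "HI": hi, "D90": d90}

# Ten equal-volume voxels with a 145 Gy prescription (illustrative values).
ep = dvh_endpoints([100, 130, 150, 160, 170, 200, 220, 250, 300, 320], 145)
```

    Applying the same endpoint computation to a BED-transformed dose grid, as the study does, requires only transforming each voxel dose before calling the function.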

  1. Comparison of linear measurements between CBCT orthogonally synthesized cephalograms and conventional cephalograms

    PubMed Central

    Yang, S; Liu, D G

    2014-01-01

Objectives: The purposes of this study were to investigate the consistency of linear measurements between CBCT orthogonally synthesized cephalograms and conventional cephalograms and to evaluate the influence of different magnifications on these comparisons based on a simulation algorithm. Methods: Conventional cephalograms and CBCT scans were taken of 12 dry skulls with spherical metal markers. Orthogonally synthesized cephalograms were created from the CBCT data. Linear parameters on both cephalograms were measured via Photoshop CS v. 5.0 (Adobe® Systems, San Jose, CA), named the measurement group (MG). Bland–Altman analysis was utilized to assess the agreement between the two imaging modalities. Reproducibility was investigated using a paired t-test. By a specific mathematical programme, “cepha”, corresponding linear parameters [mandibular corpus length (Go-Me), mandibular ramus length (Co-Go), posterior facial height (Go-S)] on these two types of cephalograms were calculated, named the simulation group (SG). Bland–Altman analysis was used to assess the agreement between MG and SG. Simulated linear measurements with varying magnifications were generated based on “cepha” as well, and Bland–Altman analysis was used to assess the agreement of the simulated measurements between the two modalities. Results: Bland–Altman analysis suggested good agreement between measurements on conventional cephalograms and orthogonally synthesized cephalograms, with a mean bias of 0.47 mm. Comparison between MG and SG showed that the difference did not reach clinical significance. The consistency between simulated measurements of both modalities at four different magnifications was demonstrated. Conclusions: Normative data from conventional cephalograms could be used for CBCT orthogonally synthesized cephalograms during this transitional period. PMID:25029593
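The Bland–Altman agreement assessment described above reduces to a mean bias and 95% limits of agreement on the paired differences. A minimal sketch with invented paired measurements (not the study's data):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Return the mean bias and 95% limits of agreement (bias ± 1.96 SD)
    between two methods measuring the same quantities."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Agreement is judged by whether the bias is clinically negligible and the limits of agreement are acceptably narrow.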

  2. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

Sensitivity analysis is popular in dealing with missing-data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism. We call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need simple, interpretable statistical quantities to assess the sensitivity models and support evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing-data mechanism model assumption by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of the method is to plug in a plausibility evaluation system for each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
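The plausibility-screening step can be pictured in one dimension: simulate data under each candidate sensitivity-parameter value, measure how far the simulated sample sits from the observed one with a nearest-neighbour distance, and keep only the candidates that come close. This is a hedged toy sketch, not the authors' implementation (which uses K-nearest-neighbour distances and supporting asymptotic theory):

```python
def mean_nn_distance(observed, simulated):
    """Average distance from each observed point to its nearest simulated point."""
    return sum(min(abs(o - s) for s in simulated) for o in observed) / len(observed)

def plausible_values(observed, simulate, candidates, threshold):
    """Keep candidate sensitivity-parameter values whose simulated datasets
    lie within `threshold` (mean nearest-neighbour distance) of the observed data."""
    return [c for c in candidates if mean_nn_distance(observed, simulate(c)) <= threshold]
```

In the paper's terms, this replaces "consider every proposed sensitivity-parameter value" with an evidence-based filter over those values.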

  3. Bayesian Propensity Score Analysis: Simulation and Case Study

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Cassie J. S.

    2011-01-01

    Propensity score analysis (PSA) has been used in a variety of settings, such as education, epidemiology, and sociology. Most typically, propensity score analysis has been implemented within the conventional frequentist perspective of statistics. This perspective, as is well known, does not account for uncertainty in either the parameters of the…

  4. Differentiation of orbital lymphoma and idiopathic orbital inflammatory pseudotumor: combined diagnostic value of conventional MRI and histogram analysis of ADC maps.

    PubMed

    Ren, Jiliang; Yuan, Ying; Wu, Yingwei; Tao, Xiaofeng

    2018-05-02

The overlap in morphological features and mean ADC values restricts the clinical application of MRI in the differential diagnosis of orbital lymphoma and idiopathic orbital inflammatory pseudotumor (IOIP). In this paper, we aimed to retrospectively evaluate the combined diagnostic value of conventional magnetic resonance imaging (MRI) and whole-tumor histogram analysis of apparent diffusion coefficient (ADC) maps in differentiating the two lesions. In total, 18 patients with orbital lymphoma and 22 patients with IOIP were included, all of whom underwent both conventional MRI and diffusion-weighted imaging before treatment. Conventional MRI features and histogram parameters derived from ADC maps, including mean ADC (ADCmean), median ADC (ADCmedian), skewness, kurtosis, and the 10th, 25th, 75th and 90th percentiles of ADC (ADC10, ADC25, ADC75, ADC90), were evaluated and compared between orbital lymphoma and IOIP. Multivariate logistic regression analysis was used to identify the most valuable discriminating variables. A differential model was built from the selected variables, and receiver operating characteristic (ROC) analysis was performed to determine the differential ability of the model. Multivariate logistic regression showed that ADC10 (P = 0.023) and involvement of the orbital preseptal space (P = 0.029) were the most promising indexes for discriminating orbital lymphoma from IOIP. The logistic model defined by ADC10 and involvement of the orbital preseptal space achieved an AUC of 0.939, with a sensitivity of 77.3% and a specificity of 94.4%. The conventional MRI feature of orbital preseptal space involvement and the ADC histogram parameter ADC10 are valuable in the differential diagnosis of orbital lymphoma and IOIP.
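The ADC histogram parameters used here (ADC10 through ADC90) are simply percentiles of the pooled voxel values inside the whole-tumor region. A minimal sketch with a hand-rolled linear-interpolated percentile; the names and key set are illustrative:

```python
def percentile(values, p):
    """Linear-interpolated p-th percentile (0-100) of a list of values."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(xs) - 1)
    return xs[f] + (xs[c] - xs[f]) * (k - f)

def adc_histogram_features(adc_values):
    """Whole-lesion histogram features analogous to those in the study."""
    return {name: percentile(adc_values, p)
            for name, p in [("ADC10", 10), ("ADC25", 25), ("ADC50", 50),
                            ("ADC75", 75), ("ADC90", 90)]}
```

Low percentiles such as ADC10 capture the most restricted-diffusion voxels, which is why they can discriminate better than the whole-lesion mean.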

  5. Geometric dimension model of virtual astronaut body for ergonomic analysis of man-machine space system

    NASA Astrophysics Data System (ADS)

    Qianxiang, Zhou

    2012-07-01

It is very important to clarify the geometric characteristics of human body segments and to construct analysis models for ergonomic design and the application of ergonomic virtual humans. Typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlations between different parameters, curves were fitted between seven trunk parameters and ten body parameters with SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, and that these two parameters correlate highly with the other body parameters. By comparison with conventional regression curves, the present regression equations with the seven trunk parameters forecast the geometric dimensions of the head, neck, height and the four limbs with higher precision. The result is therefore of great value for ergonomic design and analysis of man-machine systems, and will be very useful for astronaut body model analysis and application.
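Fitting a body dimension against a trunk parameter (e.g. hip circumference) reduces, in the simplest linear case, to ordinary least squares. A one-predictor sketch of that basic building block (the study itself fits curves over seven trunk parameters; the data here are invented):

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx
```

Given measured pairs (hip circumference, limb length), `linfit` returns the regression coefficients used to forecast the limb dimension for a new subject.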

  6. Building model analysis applications with the Joint Universal Parameter IdenTification and Evaluation of Reliability (JUPITER) API

    USGS Publications Warehouse

    Banta, E.R.; Hill, M.C.; Poeter, E.; Doherty, J.E.; Babendreier, J.

    2008-01-01

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input and output conventions allow application users to access various applications and the analysis methods they embody with a minimum of time and effort. Process models simulate, for example, physical, chemical, and (or) biological systems of interest using phenomenological, theoretical, or heuristic approaches. The types of model analyses supported by the JUPITER API include, but are not limited to, sensitivity analysis, data needs assessment, calibration, uncertainty analysis, model discrimination, and optimization. The advantages provided by the JUPITER API for users and programmers allow for rapid programming and testing of new ideas. Application-specific coding can be in languages other than the Fortran-90 of the API. This article briefly describes the capabilities and utility of the JUPITER API, lists existing applications, and uses UCODE_2005 as an example.

  7. Identification of Technological Parameters of Ni-Alloys When Machining by Monolithic Ceramic Milling Tool

    NASA Astrophysics Data System (ADS)

    Czán, Andrej; Kubala, Ondrej; Danis, Igor; Czánová, Tatiana; Holubják, Jozef; Mikloš, Matej

    2017-12-01

The ever-increasing production and use of hard-to-machine progressive materials are the main drivers of the continual search for new machining methods. One of these is the monolithic ceramic milling tool, which combines the advantages of conventional ceramic cutting materials with those of conventional coated steel-based inserts. These properties allow cutting conditions to be improved, increasing productivity while preserving the quality known from conventional tools. In this paper, the properties and possibilities of this tool are identified when machining hard-to-machine materials such as the nickel alloys used in aircraft engines. The article focuses on the analysis and evaluation of ordinary technological parameters and surface quality, mainly surface roughness, the quality of the machined surface, and tool wear.

  8. Whole-tumor histogram analysis of the cerebral blood volume map: tumor volume defined by 11C-methionine positron emission tomography image improves the diagnostic accuracy of cerebral glioma grading.

    PubMed

    Wu, Rongli; Watanabe, Yoshiyuki; Arisawa, Atsuko; Takahashi, Hiroto; Tanaka, Hisashi; Fujimoto, Yasunori; Watabe, Tadashi; Isohashi, Kayako; Hatazawa, Jun; Tomiyama, Noriyuki

    2017-10-01

This study aimed to compare tumor volume definitions using conventional magnetic resonance (MR) and 11C-methionine positron emission tomography (MET/PET) images for differentiating pre-operative glioma grade by whole-tumor histogram analysis of normalized cerebral blood volume (nCBV) maps. Thirty-four patients with histopathologically proven primary brain low-grade gliomas (n = 15) and high-grade gliomas (n = 19) underwent pre-operative or pre-biopsy MET/PET, fluid-attenuated inversion recovery imaging, dynamic susceptibility contrast perfusion-weighted magnetic resonance imaging, and contrast-enhanced T1-weighted imaging at 3.0 T. The histogram distribution derived from the nCBV maps was obtained by co-registering the whole tumor volume delineated on conventional MR or MET/PET images, and eight histogram parameters were assessed. The mean nCBV value had the highest AUC value (0.906) based on MET/PET images. Diagnostic accuracy improved significantly when the tumor volume was measured from MET/PET images rather than conventional MR images for the mean, 50th, and 75th percentile nCBV values (p = 0.0246, 0.0223, and 0.0150, respectively). Whole-tumor histogram analysis of the CBV map provides more valuable histogram parameters and increases diagnostic accuracy in the differentiation of pre-operative cerebral gliomas when the tumor volume is derived from MET/PET images.

  9. Characterization and geographical discrimination of commercial Citrus spp. honeys produced in different Mediterranean countries based on minerals, volatile compounds and physicochemical parameters, using chemometrics.

    PubMed

    Karabagias, Ioannis K; Louppis, Artemis P; Karabournioti, Sofia; Kontakos, Stavros; Papastephanou, Chara; Kontominas, Michael G

    2017-02-15

The objective of the present study was: i) to characterize Mediterranean citrus honeys based on conventional physicochemical parameter values, volatile compounds, and mineral content, and ii) to investigate the potential of the above parameters to differentiate citrus honeys according to geographical origin using chemometrics. Thus, 37 citrus honey samples were collected during the 2013 and 2014 harvesting periods from Greece, Egypt, Morocco, and Spain. Conventional physicochemical and CIELAB colour parameters were determined using official methods of analysis and the Commission Internationale de l'Eclairage recommendations, respectively. Minerals were determined using ICP-OES and volatiles using SPME-GC/MS. Results showed that the honey samples analyzed met the standard quality criteria set by the EU and were successfully classified according to geographical origin. Correct classification rates were 97.3% using 8 physicochemical parameter values, 86.5% using 15 volatile compound data, and 83.8% using 13 minerals. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. One device, one equation: the simplest way to objectively evaluate psoriasis severity.

    PubMed

    Choi, Jae Woo; Kim, Bo Ri; Choi, Chong Won; Youn, Sang Woong

    2015-02-01

    The erythema, scale and thickness of psoriasis lesions could be converted to bioengineering parameters. An objective psoriasis severity assessment is advantageous in terms of accuracy and reproducibility over conventional severity assessment. We aimed to formulate an objective psoriasis severity index with a single bioengineering device that can possibly substitute the conventional subjective Psoriasis Severity Index. A linear regression analysis was performed to derive the formula with the subjective Psoriasis Severity Index as the dependent variable and various bioengineering parameters determined from 157 psoriasis lesions as independent variables. The construct validity of the objective Psoriasis Severity Index was evaluated with an additional 30 psoriasis lesions through a Pearson correlation analysis. The formula is composed of hue and brightness, which are sufficiently obtainable with a Colorimeter alone. A very strong positive correlation was found between the objective and subjective psoriasis severity indexes. The objective Psoriasis Severity Index is a novel, practical and valid assessment method that can substitute the conventional one. Combined with subjective area assessment, it could further replace the Psoriasis Area and Severity Index which is currently most popular. © 2014 Japanese Dermatological Association.

  11. Comparative Assessment of Cutting Inserts and Optimization during Hard Turning: Taguchi-Based Grey Relational Analysis

    NASA Astrophysics Data System (ADS)

    Venkata Subbaiah, K.; Raju, Ch.; Suresh, Ch.

    2017-08-01

The present study aims to compare conventional cutting inserts with wiper cutting inserts during the hard turning of AISI 4340 steel at different workpiece hardnesses. Type of insert, hardness, cutting speed, feed, and depth of cut were taken as process parameters. Taguchi's L18 orthogonal array was used to conduct the experimental tests. A parametric analysis was carried out to determine the influence of each process parameter on three important surface roughness characteristics (Ra, Rz, and Rt) and the material removal rate. Taguchi-based Grey Relational Analysis (GRA) was used to optimize the process parameters for individual-response and multi-response outputs. Additionally, analysis of variance (ANOVA) was applied to identify the most significant factor.
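Grey relational analysis converts the multi-response optimization into a single grade per experiment: normalize each response, measure the deviation from the ideal sequence, convert to grey relational coefficients, and average. A minimal sketch assuming the usual distinguishing coefficient ζ = 0.5 and non-constant response columns; the toy data are not the study's measurements:

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    """responses: one row per experiment, one column per output.
    larger_better: per-column flag (True = larger-the-better).
    Returns one grey relational grade per experiment (higher = better)."""
    cols = list(zip(*responses))
    norm_cols = []
    for col, lb in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        # min-max normalize so the ideal value maps to 1 in every column
        norm_cols.append([(v - lo) / (hi - lo) if lb else (hi - v) / (hi - lo)
                          for v in col])
    norm_rows = list(zip(*norm_cols))
    # deviation from the ideal sequence (all ones after normalization)
    dev = [[1.0 - v for v in row] for row in norm_rows]
    dmin = min(min(r) for r in dev)
    dmax = max(max(r) for r in dev)
    coef = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row] for row in dev]
    return [sum(row) / len(row) for row in coef]
```

The experiment with the highest grade identifies the preferred parameter combination across all responses at once.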

  12. A Bayesian Nonparametric Meta-Analysis Model

    ERIC Educational Resources Information Center

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  13. Using a second‐order differential model to fit data without baselines in protein isothermal chemical denaturation

    PubMed Central

    Tang, Chuanning; Lew, Scott

    2016-01-01

In vitro protein stability studies are commonly conducted via thermal or chemical denaturation/renaturation of protein. Conventional data analyses of protein unfolding/(re)folding require well-defined pre- and post-transition baselines to evaluate the Gibbs free-energy change associated with the unfolding/(re)folding. This evaluation becomes problematic when there are insufficient data for determining the pre- or post-transition baselines. In this study, fitting of such partial data obtained in protein chemical denaturation is established by introducing second-order differential (SOD) analysis to overcome the limitations of the conventional fitting method. By reducing the number of baseline-related fitting parameters, the SOD analysis can successfully fit incomplete chemical denaturation data sets in close agreement with the conventional evaluation of the equivalent complete data, where the conventional fitting fails. This SOD fitting of abbreviated isothermal chemical denaturation thus completes the data-analysis methods for the insufficient data sets encountered in the two prevalent protein stability studies. PMID:26757366
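The core of the SOD idea is that linear pre- and post-transition baselines vanish under a second derivative, leaving only the transition curvature to fit. A central-difference sketch on equally spaced denaturant points, purely illustrative of that property:

```python
def second_derivative(y, h):
    """Central-difference second derivative of equally spaced data y (spacing h)."""
    return [(y[i - 1] - 2 * y[i] + y[i + 1]) / h ** 2
            for i in range(1, len(y) - 1)]
```

Adding any linear baseline a + b·x to the signal leaves this output unchanged, which is why the baseline parameters can be dropped from the fit.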

  14. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
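The practical point, that colored (autocorrelated) residuals make the true uncertainty larger than the white-noise formula suggests, can be seen even for the sample mean. A toy sketch comparing a white-noise variance estimate with one that includes sample autocovariances; this illustrates the effect only and is not the paper's expression for maximum likelihood estimates:

```python
def mean_variance_colored(residuals):
    """Return (colored, white) variance estimates for the sample mean:
    `colored` sums autocovariances up to lag n//4 - 1, `white` uses lag 0 only."""
    n = len(residuals)
    m = sum(residuals) / n
    r = [ri - m for ri in residuals]

    def autocov(k):
        return sum(r[i] * r[i + k] for i in range(n - k)) / n

    c0 = autocov(0)
    colored = (c0 + 2 * sum(autocov(k) for k in range(1, n // 4))) / n
    white = c0 / n
    return colored, white
```

For positively correlated residuals the colored estimate exceeds the white one, mirroring why accuracy measures that ignore residual color overstate the precision of maximum likelihood estimates.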

  15. Scatterometer capabilities in remotely sensing geophysical parameters over the ocean: The status and the possibilities

    NASA Technical Reports Server (NTRS)

    Brown, R. A.

    1984-01-01

Extensive comparisons between surface measurements and satellite scatterometer signals and predicted winds show successful wind and weather analysis comparable with conventional weather service analyses. However, in the regions often of most interest, e.g., fronts and local storms, inadequacies in the conventional fields make it impossible to establish the satellite sensor's capabilities. Thus, comparisons must be made between wind-detecting measurements and other satellite measurements of clouds, moisture, waves, or any other parameter that responds to sharp gradients in the wind. At least for the wind fields and the derived surface pressure field analysis, occasional surface measurements are required to anchor and monitor the satellite analyses. Their averaging times must be made compatible with the satellite sensor measurement. Careful attention must be paid to the complex fields, which contain many scales of turbulence and coherent structures affecting the averaging process. The satellite microwave system is capable of replacing conventional point-observation/numerical analysis for ocean weather.

  16. Operational modal analysis applied to the concert harp

    NASA Astrophysics Data System (ADS)

    Chomette, B.; Le Carrou, J.-L.

    2015-05-01

    Operational modal analysis (OMA) methods are useful to extract modal parameters of operating systems. These methods seem to be particularly interesting to investigate the modal basis of string instruments during operation to avoid certain disadvantages due to conventional methods. However, the excitation in the case of string instruments is not optimal for OMA due to the presence of damped harmonic components and low noise in the disturbance signal. Therefore, the present study investigates the least-square complex exponential (LSCE) and the modified least-square complex exponential methods in the case of a string instrument to identify modal parameters of the instrument when it is played. The efficiency of the approach is experimentally demonstrated on a concert harp excited by some of its strings and the two methods are compared to a conventional modal analysis. The results show that OMA allows us to identify modes particularly present in the instrument's response with a good estimation especially if they are close to the excitation frequency with the modified LSCE method.

  17. Evaluation of shear wave elastography for differential diagnosis of breast lesions: A new qualitative analysis versus conventional quantitative analysis.

    PubMed

    Ren, Wei-Wei; Li, Xiao-Long; Wang, Dan; Liu, Bo-Ji; Zhao, Chong-Ke; Xu, Hui-Xiong

    2018-04-13

To evaluate a special kind of ultrasound (US) shear wave elastography for the differential diagnosis of breast lesions, using a new qualitative analysis (i.e. the elasticity score in the travel time map) compared with conventional quantitative analysis. From June 2014 to July 2015, 266 pathologically proven breast lesions were enrolled in this study. The maximum, mean, median, minimum, and standard deviation of shear wave speed (SWS) values (m/s) were assessed. The elasticity score, a new qualitative feature, was evaluated in the travel time map. Areas under the receiver operating characteristic curve (AUROC) were used to evaluate the diagnostic performance of both the qualitative and quantitative analyses for differentiating breast lesions. Among the quantitative parameters, SWS-max showed the highest AUROC (0.805; 95% CI: 0.752, 0.851) compared with SWS-mean (0.786; 95% CI: 0.732, 0.834; P = 0.094), SWS-median (0.775; 95% CI: 0.720, 0.824; P = 0.046), SWS-min (0.675; 95% CI: 0.615, 0.731; P = 0.000), and SWS-SD (0.768; 95% CI: 0.712, 0.817; P = 0.074). The qualitative analysis achieved the best diagnostic performance in this study (0.871; 95% CI: 0.825, 0.909; P = 0.011 versus SWS-max, the best quantitative parameter). The new qualitative analysis of shear wave travel time showed superior diagnostic performance in the differentiation of breast lesions in comparison with conventional quantitative analysis.
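The AUROC values compared above are equivalent to the Mann–Whitney statistic: the probability that a randomly chosen positive (malignant) case scores higher than a randomly chosen negative (benign) one, with ties counted as one half. A minimal sketch with toy scores, not the study's SWS data:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability that a positive case outscores a negative one
    (Mann-Whitney formulation; ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

Comparing `auroc` values across candidate parameters (e.g. SWS-max vs. SWS-mean) reproduces the kind of ranking reported in the abstract.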

  18. Left ventricular longitudinal strain in soccer referees.

    PubMed

    Gianturco, Luigi; Bodini, Bruno; Gianturco, Vincenzo; Lippo, Giuseppina; Solbiati, Agnese; Turiel, Maurizio

    2017-06-13

Over the years, the analysis of soccer referees' performance has interested experts, and several types of studies can be found in the literature, in particular using cardiac imaging. The aim of this retrospective study was to examine the relationship between VO2max uptake and some conventional and non-conventional echocardiographic parameters. To perform this evaluation, we enrolled 20 referees belonging to the Italian Soccer Referees' Association and investigated their cardiovascular profiles. We found a strong direct relationship between VO2max and the global longitudinal strain of the left ventricle assessed by means of speckle-tracking echocardiographic analysis (R2=0.8464). The most common classic echocardiographic indexes showed only mild relations (respectively, VO2max vs EF: R2=0.4444; VO2max vs LV indexed mass: R2=0.2268). Our study therefore suggests that longitudinal strain could be proposed as a specific echocardiographic parameter for evaluating soccer referee performance.

  19. Detection of Local Tumor Recurrence After Definitive Treatment of Head and Neck Squamous Cell Carcinoma: Histogram Analysis of Dynamic Contrast-Enhanced T1-Weighted Perfusion MRI.

    PubMed

    Choi, Sang Hyun; Lee, Jeong Hyun; Choi, Young Jun; Park, Ji Eun; Sung, Yu Sub; Kim, Namkug; Baek, Jung Hwan

    2017-01-01

This study aimed to explore the added value of histogram analysis of the ratio of initial to final 90-second time-signal intensity AUC (AUCR) for differentiating local tumor recurrence from contrast-enhancing scar on follow-up dynamic contrast-enhanced T1-weighted perfusion MRI of patients treated for head and neck squamous cell carcinoma (HNSCC). AUCR histogram parameters were assessed for tumor recurrence (n = 19) and contrast-enhancing scar (n = 27) at primary sites and compared using the t test. ROC analysis was used to determine the best differentiating parameters. The added value of the AUCR histogram parameters was assessed when they were added to inconclusive conventional MRI results. Histogram analysis showed statistically significant differences in the 50th, 75th, and 90th percentiles of the AUCR values between the two groups (p < 0.05). The 90th percentile of the AUCR values (AUCR90) was the best predictor of local tumor recurrence (AUC, 0.77; 95% CI, 0.64-0.91) with an estimated cutoff of 1.02. AUCR90 increased sensitivity by 11.7% over that of conventional MRI alone when added to inconclusive results. Histogram analysis of AUCR can improve the diagnostic yield for local tumor recurrence during surveillance after treatment for HNSCC.

  20. Prediction of Welded Joint Strength in Plasma Arc Welding: A Comparative Study Using Back-Propagation and Radial Basis Neural Networks

    NASA Astrophysics Data System (ADS)

    Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.

    2016-09-01

Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the qualitative mechanical properties of a weld joint. Traditionally, the weld input parameters must be determined anew for every welded product to obtain a quality joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain response equations governing the input-output relationships, conventional regression analysis was also performed. The experimental data were constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performances of a Levenberg-Marquardt back-propagation neural network and a radial basis neural network (RBNN) were compared on various randomly generated test cases, different from the training cases. Interestingly, for these test cases RBNN analysis gave better results than the feed-forward back-propagation neural network analysis. RBNN analysis also showed a pattern of increasing performance as the data points moved away from the initial input values.

  1. The multiple complex exponential model and its application to EEG analysis

    NASA Astrophysics Data System (ADS)

    Chen, Dao-Mu; Petzold, J.

    The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.

  2. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  3. Surface roughness analysis after laser assisted machining of hard to cut materials

    NASA Astrophysics Data System (ADS)

    Przestacki, D.; Jankowiak, M.

    2014-03-01

Metal matrix composites and Si3N4 ceramics are very attractive materials for various industrial applications due to their extremely high hardness and abrasive wear resistance. However, because of these features they are problematic for the conventional turning process. Machining on a classic lathe still requires special polycrystalline diamond (PCD) or cubic boron nitride (CBN) cutting inserts, which are very expensive. In this paper, an experimental surface roughness analysis of laser assisted machining (LAM) for two types of hard-to-cut materials is presented. In LAM, the surface of the workpiece is heated directly by a laser beam in order to facilitate the decohesion of the material. The surface analysis concentrates on the influence of laser assisted machining on the surface quality of the silicon nitride ceramic Si3N4 and a metal matrix composite (MMC). The effect of laser assisted machining was compared to conventional machining, and the influence of the machining parameters on surface roughness parameters was also investigated. The 3D surface topographies were measured using an optical surface profiler, and the power spectral density (PSD) of the roughness profiles was analyzed.

  4. Correlation and agreement of a digital and conventional method to measure arch parameters.

    PubMed

    Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika

    2018-01-01

The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared to conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients was digitized using a blue-light three-dimensional (3D) scanner. Digital and manual measurements of the defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014 software, 3D Systems, Rock Hill, SC, USA) on the digital models and with a digital calliper (Tuten, Germany) on the plaster study models. Both measurements were repeated twice to validate intraexaminer reliability based on intraclass correlation coefficients (ICCs), using the independent t test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement of the measurements between the digital and plaster models. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring the arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972; p < 0.01) between all digital and manual measurements. Furthermore, a positive agreement between digital and manual measurements of the arch width (90-96%) and of the arch length and space analysis (95-99%) was also demonstrated using the Bland-Altman method. These results demonstrate that 3D blue-light scanning and measurement software can precisely produce a 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is valid for use in various clinical applications.

  5. The effect of loudness on the reverberance of music: reverberance prediction using loudness models.

    PubMed

    Lee, Doheon; Cabrera, Densil; Martens, William L

    2012-02-01

    This study examines the auditory attribute that describes the perceived amount of reverberation, known as "reverberance." Listening experiments were performed using two signals commonly heard in auditoria: excerpts of orchestral music and western classical singing. Listeners adjusted the decay rate of room impulse responses prior to convolution with these signals, so as to match the reverberance of each stimulus to that of a reference stimulus. The analysis examines the hypothesis that reverberance is related to the loudness decay rate of the underlying room impulse response. This hypothesis is tested using computational models of time varying or dynamic loudness, from which parameters analogous to conventional reverberation parameters (early decay time and reverberation time) are derived. The results show that listening level significantly affects reverberance, and that the loudness-based parameters outperform related conventional parameters. Results support the proposed relationship between reverberance and the computationally predicted loudness decay function of sound in rooms. © 2012 Acoustical Society of America
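
    The conventional reverberation parameters named above (early decay time and reverberation time) are derived from the decay of sound energy in a room impulse response. As a minimal sketch of the conventional side only (not the paper's loudness model), the early decay time can be computed from the Schroeder backward integration of an impulse response; the exponential impulse response below is synthetic.

```python
import numpy as np

def early_decay_time(ir, fs):
    """EDT (s): 6x the time for the Schroeder decay curve to fall from 0 to
    -10 dB, obtained from a linear fit to that portion of the curve."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]        # Schroeder backward integration
    edc_db = 10 * np.log10(edc / edc[0])
    sel = edc_db >= -10                         # 0 to -10 dB evaluation range
    t = np.arange(len(ir)) / fs
    slope, _ = np.polyfit(t[sel], edc_db[sel], 1)
    return -60.0 / slope

# synthetic exponential impulse response with a 0.5 s reverberation time
fs = 1000
t = np.arange(int(2 * fs)) / fs
ir = np.exp(-6.91 * t / 0.5)
edt = early_decay_time(ir, fs)
```

    The loudness-based parameters studied in the paper replace this squared-pressure decay with a computed loudness decay function before the same kind of slope fit.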

  6. Vector network analyzer ferromagnetic resonance spectrometer with field differential detection

    NASA Astrophysics Data System (ADS)

    Tamaru, S.; Tsunegi, S.; Kubota, H.; Yuasa, S.

    2018-05-01

    This work presents a vector network analyzer ferromagnetic resonance (VNA-FMR) spectrometer with field differential detection. This technique differentiates the S-parameter by applying a small binary modulation field in addition to the DC bias field to the sample. By setting the modulation frequency sufficiently high, slow sensitivity fluctuations of the VNA, i.e., low-frequency components of the trace noise, which limit the signal-to-noise ratio of the conventional VNA-FMR spectrometer, can be effectively removed, resulting in a very clean FMR signal. This paper presents the details of the hardware implementation and measurement sequence as well as the data processing and analysis algorithms tailored for the FMR spectrum obtained with this technique. Because the VNA measures a complex S-parameter, it is possible to estimate the Gilbert damping parameter from the slope of the phase variation of the S-parameter with respect to the bias field. We show that this algorithm is more robust against noise than the conventional algorithm based on the linewidth.

  7. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades

    PubMed Central

    Dai, Weiwei; Selesnick, Ivan; Rizzo, John-Ross; Rucker, Janet; Hudson, Todd

    2017-01-01

    The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter. PMID:28813566

  8. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades.

    PubMed

    Dai, Weiwei; Selesnick, Ivan; Rizzo, John-Ross; Rucker, Janet; Hudson, Todd

    2017-08-01

    The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter.
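
    The conventional SG filter that the authors generalize is a sliding least-squares polynomial fit, which smooths and differentiates in one pass. The sketch below (numpy only, hypothetical trajectory data) shows the mechanism; on a noiseless quadratic the interior estimates are exact, while on a saccade-like step the same window oversmooths, which is the limitation the generalized filter addresses.

```python
import math
import numpy as np

def savitzky_golay(y, window, poly, deriv=0, dt=1.0):
    """Conventional SG filter: least-squares polynomial fit in each sliding
    window, evaluated (or differentiated) at the window centre."""
    half = window // 2
    # local design matrix over sample offsets -half..half
    A = np.vander(np.arange(-half, half + 1), poly + 1, increasing=True)
    # the deriv-th fitted coefficient as a fixed linear function of the samples
    kernel = np.linalg.pinv(A)[deriv] * math.factorial(deriv) / dt ** deriv
    ypad = np.pad(y, half, mode='edge')
    return np.convolve(ypad, kernel[::-1], mode='valid')

t = np.arange(30.0)
pos = 0.5 * t ** 2                       # noiseless quadratic trajectory
vel = savitzky_golay(pos, window=7, poly=2, deriv=1)
```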

  9. Feasibility of free-breathing dynamic contrast-enhanced MRI of gastric cancer using a golden-angle radial stack-of-stars VIBE sequence: comparison with the conventional contrast-enhanced breath-hold 3D VIBE sequence.

    PubMed

    Li, Huan-Huan; Zhu, Hui; Yue, Lei; Fu, Yi; Grimm, Robert; Stemmer, Alto; Fu, Cai-Xia; Peng, Wei-Jun

    2018-05-01

    To investigate the feasibility and diagnostic value of a free-breathing, radial, stack-of-stars three-dimensional (3D) gradient echo (GRE) sequence ("golden angle") for dynamic contrast-enhanced (DCE) MRI of gastric cancer. Forty-three gastric cancer patients were divided into cooperative and uncooperative groups. Respiratory fluctuation was observed using an abdominal respiratory gating sensor. Those who could hold their breath for more than 15 s were placed in the cooperative group and the remainder in the uncooperative group. The 3-T MRI scanning protocol included 3D GRE and conventional breath-hold VIBE (volume-interpolated breath-hold examination) sequences, and images were compared quantitatively and qualitatively. DCE-MRI parameters from VIBE images of normal gastric wall and malignant lesions were compared. For uncooperative patients, 3D GRE scored higher qualitatively, and had higher SNRs (signal-to-noise ratios) and CNRs (contrast-to-noise ratios) than conventional VIBE quantitatively. Though 3D GRE images scored lower on qualitative parameters than conventional VIBE for cooperative patients, they contained fewer artefacts. DCE parameters differed significantly between normal gastric wall and lesions, with higher Ve (extracellular volume) and lower Kep (reflux constant) in gastric cancer. The free-breathing, golden-angle, radial stack-of-stars 3D GRE technique is feasible for DCE-MRI of gastric cancer. Dynamic enhanced images can be used for quantitative analysis of this malignancy. • Golden-angle radial stack-of-stars VIBE aids gastric cancer MRI diagnosis. • The 3D GRE technique is suitable for patients unable to suspend respiration. • The method scored higher in the qualitative evaluation for uncooperative patients. • The technique produced images with fewer artefacts than the conventional VIBE sequence. • Dynamic enhanced images can be used for quantitative analysis of gastric cancer.

  10. A generalized analysis of solar space heating

    NASA Astrophysics Data System (ADS)

    Clark, J. A.

    A life-cycle model is developed for solar space heating within the United States. The model consists of an analytical relationship among five dimensionless parameters that include all pertinent technical, climatological, solar, operating and economic factors that influence the performance of a solar space heating system. An important optimum condition presented is the break-even metered cost of conventional fuel at which the cost of the solar system is equal to that of a conventional heating system. The effect of Federal (1980) and State (1979) income tax credits on these costs is determined. A parameter that includes both solar availability and solar system utilization is derived and plotted on a map of the U.S. This parameter shows the most favorable present locations for solar space heating application to be in the Central and Mountain States. The data employed are related to the rehabilitated solar data recently made available by the National Climatic Center.

  11. Integrated risk assessment and screening analysis of drinking water safety of a conventional water supply system.

    PubMed

    Sun, F; Chen, J; Tong, Q; Zeng, S

    2007-01-01

    Management of drinking water safety is changing towards an integrated risk assessment and risk management approach that includes all processes in a water supply system from catchment to consumers. However, given the large number of water supply systems in China and the cost of implementing such a risk assessment procedure, there is a necessity to first conduct a strategic screening analysis at a national level. An integrated methodology of risk assessment and screening analysis is thus proposed to evaluate drinking water safety of a conventional water supply system. The violation probability, indicating drinking water safety, is estimated at different locations of a water supply system in terms of permanganate index, ammonia nitrogen, turbidity, residual chlorine and trihalomethanes. Critical parameters with respect to drinking water safety are then identified, based on which an index system is developed to prioritize conventional water supply systems in implementing a detailed risk assessment procedure. The evaluation results are represented as graphic check matrices for the concerned hazards in drinking water, from which the vulnerability of a conventional water supply system is characterized.
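
    A violation probability of the kind estimated here can be illustrated with a simple Monte Carlo sketch. The turbidity distribution and the limit below are assumed for illustration only, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical tap-water turbidity (NTU) with log-normal variability
turbidity = rng.lognormal(mean=-0.5, sigma=0.6, size=100_000)
limit = 1.0                                   # assumed turbidity standard, NTU
violation_probability = (turbidity > limit).mean()
```

    Repeating such an estimate for each water-quality indicator and each location in the system yields the screening matrix described in the abstract.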

  12. Partially Turboelectric Aircraft Drive Key Performance Parameters

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph H.; Duffy, Kirsten P.; Brown, Gerald V.

    2017-01-01

    The purpose of this paper is to propose electric drive specific power, electric drive efficiency, and electrical propulsion fraction as the key performance parameters for a partially turboelectric aircraft power system and to investigate their impact on the overall aircraft performance. Breguet range equations for a base conventional turbofan aircraft and a partially turboelectric aircraft are derived. The benefits and costs that may result from the partially turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency, for a given electrical propulsion fraction, that can preserve the range, fuel weight, operating empty weight, and payload weight of the conventional aircraft. Current and future power system performance is compared to the required performance to determine the potential benefit.
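
    As a hedged sketch of the starting point of such an analysis, the classical Breguet range equation for a fuel-burning aircraft is shown below. The numbers are illustrative placeholders, not NASA's inputs, and the break-even step of the paper (matching the turboelectric variant's range to this baseline) is not reproduced here.

```python
import math

def breguet_range(V, tsfc, L_over_D, W_initial, W_final, g=9.81):
    """Breguet range (m): V cruise speed (m/s), tsfc thrust-specific fuel
    consumption (kg/(N*s)), L/D lift-to-drag ratio, W_initial/W_final
    start- and end-of-cruise masses in any consistent unit."""
    return (V / (g * tsfc)) * L_over_D * math.log(W_initial / W_final)

# illustrative single-aisle-like numbers only
base = breguet_range(V=236.0, tsfc=1.6e-5, L_over_D=18.0,
                     W_initial=78000.0, W_final=68000.0)
```

    The range scales linearly with L/D and with propulsive efficiency, which is why a small electric-drive efficiency penalty must be bought back elsewhere in the design.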

  13. Life Support Filtration System Trade Study for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Agui, Juan H.; Perry, Jay L.

    2017-01-01

    The National Aeronautics and Space Administrations (NASA) technical developments for highly reliable life support systems aim to maximize the viability of long duration deep space missions. Among the life support system functions, airborne particulate matter filtration is a significant driver of launch mass because of the large geometry required to provide adequate filtration performance and because of the number of replacement filters needed to a sustain a mission. A trade analysis incorporating various launch, operational and maintenance parameters was conducted to investigate the trade-offs between the various particulate matter filtration configurations. In addition to typical launch parameters such as mass, volume and power, the amount of crew time dedicated to system maintenance becomes an increasingly crucial factor for long duration missions. The trade analysis evaluated these parameters for conventional particulate matter filtration technologies and a new multi-stage particulate matter filtration system under development by NASAs Glenn Research Center. The multi-stage filtration system features modular components that allow for physical configuration flexibility. Specifically, the filtration system components can be configured in distributed, centralized, and hybrid physical layouts that can result in considerable mass savings compared to conventional particulate matter filtration technologies. The trade analysis results are presented and implications for future transit and surface missions are discussed.

  14. Classification of arterial and venous cerebral vasculature based on wavelet postprocessing of CT perfusion data.

    PubMed

    Havla, Lukas; Schneider, Moritz J; Thierfelder, Kolja M; Beyer, Sebastian E; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H; Dietrich, Olaf

    2016-02-01

    The purpose of this study was to propose and evaluate a new wavelet-based technique for classification of arterial and venous vessels using time-resolved cerebral CT perfusion data sets. Fourteen consecutive patients (mean age 73 yr, range 17-97) with suspected stroke but no pathology in follow-up MRI were included. A CT perfusion scan with 32 dynamic phases was performed during intravenous bolus contrast-agent application. After rigid-body motion correction, a Paul wavelet (order 1) was used to calculate voxelwise the wavelet power spectrum (WPS) of each attenuation-time course. The angiographic intensity A was defined as the maximum of the WPS, located at the coordinates T (time axis) and W (scale/width axis) within the WPS. Using these three parameters (A, T, W) separately as well as combined by (1) Fisher's linear discriminant analysis (FLDA), (2) logistic regression (LogR) analysis, or (3) support vector machine (SVM) analysis, their potential to classify 18 different arterial and venous vessel segments per subject was evaluated. The best vessel classification was obtained using all three parameters A and T and W [area under the curve (AUC): 0.953 with FLDA and 0.957 with LogR or SVM]. In direct comparison, the wavelet-derived parameters provided performance at least equal to conventional attenuation-time-course parameters. The maximum AUC obtained from the proposed wavelet parameters was slightly (although not statistically significantly) higher than the maximum AUC (0.945) obtained from the conventional parameters. A new method to classify arterial and venous cerebral vessels with high statistical accuracy was introduced based on the time-domain wavelet transform of dynamic CT perfusion data in combination with linear or nonlinear multidimensional classification techniques.
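
    A hedged sketch of the feature extraction: compute a wavelet power spectrum (WPS) of an attenuation-time course and take A, T and W as the maximum power and the time and scale coordinates of that maximum. A real Morlet-style wavelet stands in here for the order-1 Paul wavelet used in the study, and the contrast-bolus curve is synthetic.

```python
import numpy as np

def wavelet_features(signal, widths, dt=1.0):
    """Wavelet power spectrum of an attenuation-time course and the features
    A (peak power), T (its time coordinate) and W (its scale/width)."""
    n = len(signal)
    tau = (np.arange(n) - n // 2) * dt
    wps = np.empty((len(widths), n))
    for i, w in enumerate(widths):
        # real Morlet-style wavelet (illustrative stand-in for the Paul wavelet)
        wavelet = np.cos(5 * tau / w) * np.exp(-0.5 * (tau / w) ** 2) / np.sqrt(w)
        wps[i] = np.convolve(signal, wavelet, mode='same') ** 2
    s, k = np.unravel_index(np.argmax(wps), wps.shape)
    return wps.max(), k * dt, widths[s]

# synthetic contrast bolus peaking at t = 60 s
tt = np.arange(128.0)
bolus = np.exp(-0.5 * ((tt - 60.0) / 6.0) ** 2)
A, T, W = wavelet_features(bolus, widths=[2.0, 4.0, 8.0, 16.0])
```

    Arteries and veins differ in bolus arrival time and dispersion, so (A, T, W) triples separate the two classes when fed to a linear or nonlinear classifier, as in the study.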

  15. Measurement and analysis of thrust force in drilling sisal-glass fiber reinforced polymer composites

    NASA Astrophysics Data System (ADS)

    Ramesh, M.; Gopinath, A.

    2017-05-01

    Drilling of composite materials is difficult compared to conventional materials because of their inhomogeneous nature. The forces developed during drilling play a major role in the surface quality of the hole and in minimizing the damage around it. This paper focuses on the effect of drilling parameters on thrust force in drilling of sisal-glass fiber reinforced polymer composite laminates. Quadratic response models are developed using response surface methodology (RSM) to predict the influence of cutting parameters on thrust force. The adequacy of the models is checked using analysis of variance (ANOVA). A scanning electron microscope (SEM) analysis is carried out to assess the quality of the drilled surface. From the results, it is found that feed rate is the most influential parameter, followed by spindle speed, and that drill diameter is the least influential parameter on the thrust force.
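
    The quadratic response model that RSM produces is an ordinary least-squares fit of a second-order polynomial in the factors. A minimal sketch with hypothetical factors and coefficients (not the paper's data), using two factors for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical drilling experiments: feed f (mm/rev) and spindle speed n (krpm)
f = rng.uniform(0.05, 0.30, 40)
n = rng.uniform(0.5, 3.0, 40)
true_coeffs = np.array([20.0, 180.0, -4.0, 250.0, 1.5, -10.0])
# quadratic response surface: 1, f, n, f^2, n^2, f*n
X = np.column_stack([np.ones_like(f), f, n, f ** 2, n ** 2, f * n])
thrust = X @ true_coeffs                     # noise-free response for clarity
beta, *_ = np.linalg.lstsq(X, thrust, rcond=None)
```

    In practice the response carries noise and an ANOVA on the fitted terms, as in the paper, tells which factors contribute significantly.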

  16. Life-cycle analysis of shale gas and natural gas.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, C.E.; Han, J.; Burnham, A.

    2012-01-27

    The technologies and practices that have enabled the recent boom in shale gas production have also brought attention to the environmental impacts of its use. Using the current state of knowledge of the recovery, processing, and distribution of shale gas and conventional natural gas, we have estimated up-to-date, life-cycle greenhouse gas emissions. In addition, we have developed distribution functions for key parameters in each pathway to examine uncertainty and identify data gaps - such as methane emissions from shale gas well completions and conventional natural gas liquid unloadings - that need to be addressed further. Our base case results show that shale gas life-cycle emissions are 6% lower than those of conventional natural gas. However, the range in values for shale and conventional gas overlap, so there is a statistical uncertainty regarding whether shale gas emissions are indeed lower than conventional gas emissions. This life-cycle analysis provides insight into the critical stages in the natural gas industry where emissions occur and where opportunities exist to reduce the greenhouse gas footprint of natural gas.

  17. Modeling of rheological characteristics of the fermented dairy products obtained by novel and traditional starter cultures.

    PubMed

    Vukić, Dajana V; Vukić, Vladimir R; Milanović, Spasenija D; Ilicić, Mirela D; Kanurić, Katarina G

    2018-06-01

    Three different fermented dairy products obtained by conventional and non-conventional starter cultures were investigated in this paper. Textural and rheological characteristics as well as chemical composition during 21 days of storage were analysed, and subsequent data processing was performed by principal component analysis. The analysis of the samples' flow behaviour focused on their time-dependent properties. The parameters of the Power law model describing the flow behaviour of the samples depended on the starter culture used and on the day of storage. The Power law model was applied successfully to describe the flow of the fermented milk, which showed shear-thinning, non-Newtonian fluid behaviour.
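
    The Power law (Ostwald-de Waele) model referred to above is tau = K * gamma_dot**n, where n < 1 indicates shear thinning. A minimal fitting sketch with made-up flow-curve data:

```python
import numpy as np

# hypothetical flow-curve data following the Power law tau = K * gamma_dot**n
gamma_dot = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # shear rate, 1/s
K_true, n_true = 8.0, 0.45                            # n < 1: shear thinning
tau = K_true * gamma_dot ** n_true                    # shear stress, Pa

# linearise: ln(tau) = ln(K) + n * ln(gamma_dot), then fit a straight line
n_fit, lnK = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
K_fit = np.exp(lnK)
```

    Tracking K and n over the 21 days of storage is what reveals the culture-dependent changes the study reports.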

  18. Parametric study of closed wet cooling tower thermal performance

    NASA Astrophysics Data System (ADS)

    Qasim, S. M.; Hayder, M. J.

    2017-08-01

    The present study involves experimental and theoretical analyses to evaluate the thermal performance of a modified Closed Wet Cooling Tower (CWCT). The experimental study includes the design, manufacture and testing of a prototype of a modified counter-flow forced-draft CWCT, the modification being the addition of packing to the conventional CWCT. A series of experiments was carried out at different operational parameters. In terms of energy analysis, the thermal performance parameters of the tower are: cooling range, tower approach, cooling capacity, thermal efficiency, and heat and mass transfer coefficients. The theoretical study develops Artificial Neural Network (ANN) models to predict the various thermal performance parameters of the tower. Utilizing experimental data for training and testing, the models were simulated by a multi-layer back-propagation algorithm while varying all operational parameters considered in the experimental tests.

  19. Added Value of Assessing Adnexal Masses with Advanced MRI Techniques

    PubMed Central

    Thomassin-Naggara, I.; Balvay, D.; Rockall, A.; Carette, M. F.; Ballester, M.; Darai, E.; Bazot, M.

    2015-01-01

    This review will present the added value of perfusion and diffusion MR sequences to characterize adnexal masses. These two functional MR techniques are readily available in routine clinical practice. We will describe the acquisition parameters and a method of analysis to optimize their added value compared with conventional images. We will then propose a model of interpretation that combines the anatomical and morphological information from conventional MRI sequences with the functional information provided by perfusion and diffusion weighted sequences. PMID:26413542

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeff; Burton, Evan

    This study evaluates the costs and benefits associated with the use of a plug-in hybrid electric bus and determines its cost effectiveness relative to a conventional bus and a hybrid electric bus. A sensitivity sweep analysis was performed over a number of different battery sizes, charging powers, and charging stations. The net present value was calculated for each vehicle design and provided the basis for the design evaluation. In all cases, given present-day economic assumptions, the conventional bus achieved the lowest net present value, while the optimal plug-in hybrid electric bus scenario reached lower lifetime costs than the hybrid electric bus. The study also performed parameter sensitivity analysis under low market potential assumptions and high market potential assumptions. The net present value of the plug-in hybrid electric bus is close to that of the conventional bus.
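
    The net-present-value comparison at the heart of this study can be sketched as follows. The purchase and operating costs below are placeholders, not the study's inputs, and real analyses would add battery replacement, charger infrastructure and salvage terms.

```python
def net_present_value(cashflows, rate):
    """NPV of yearly cash flows, year 0 first; costs are negative."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cashflows))

# hypothetical 12-year ownership costs: purchase, then annual energy/maintenance
conventional = [-450_000.0] + [-60_000.0] * 12
plug_in_hybrid = [-650_000.0] + [-35_000.0] * 12
npv_conventional = net_present_value(conventional, 0.05)
npv_phev = net_present_value(plug_in_hybrid, 0.05)
```

    Sweeping the purchase premium, energy prices and discount rate over assumed ranges produces the kind of sensitivity analysis the study reports.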

  1. Four Dimensional Analysis of Free Electron Lasers in the Amplifier Configuration

    DTIC Science & Technology

    2007-12-01

    FEL. The power capability of this device was so much greater than that of conventional klystrons and magnetrons that records for peak power ... understand the four-dimensional behavior of the high-power FEL amplifier. The simulation program required dimensionless input parameters, which make ... Optical parameters: P_in (seed laser power), T_in (seed pulse duration), S (distance to first optic), and the Rayleigh length Z_0 = πW_0²/λ.

  2. Effects of Process Parameters and Cryotreated Electrode on the Radial Overcut of Aisi 304 IN SiC Powder Mixed Edm

    NASA Astrophysics Data System (ADS)

    Bhaumik, Munmun; Maity, Kalipada

    Powder mixed electro discharge machining (PMEDM) is a further advancement of conventional electro discharge machining (EDM) in which powder particles are suspended in the dielectric medium to enhance the machining rate as well as the surface finish. Cryogenic treatment is introduced in this process to improve the tool life and cutting-tool properties. In the present investigation, the characterization of the cryotreated tempered electrode was performed. An attempt has been made to study the effect of a cryotreated double tempered electrode on the radial overcut (ROC) when SiC powder is mixed into the kerosene dielectric during electro discharge machining of AISI 304. The process performance has been evaluated by means of ROC when peak current, pulse on time, gap voltage, duty cycle and powder concentration are considered as process parameters, and machining is performed using tungsten carbide electrodes (untreated and double tempered). A regression analysis was performed to correlate the response with the process parameters. Microstructural analysis was carried out on the machined surfaces. The least radial overcut was observed for conventional EDM as compared to powder mixed EDM. The cryotreated double tempered electrode significantly reduced the radial overcut compared to the untreated electrode.

  3. Diagnostic performances of shear wave elastography: which parameter to use in differential diagnosis of solid breast masses?

    PubMed

    Lee, Eun Jung; Jung, Hae Kyoung; Ko, Kyung Hee; Lee, Jong Tae; Yoon, Jung Hyun

    2013-07-01

    To evaluate which shear wave elastography (SWE) parameter proves most accurate in the differential diagnosis of solid breast masses. One hundred and fifty-six breast lesions in 139 consecutive women (mean age: 43.54 ± 9.94 years, range 21-88 years), who had been scheduled for ultrasound-guided breast biopsy, were included. Conventional ultrasound and SWE were performed in all women before the biopsy procedures. Ultrasound BI-RADS final assessments and SWE parameters were recorded. The diagnostic performance of each SWE parameter was calculated and compared with those obtained when applying cut-off values from previously published data. The performance of conventional ultrasound and of ultrasound combined with each parameter was also compared. Of the 156 breast masses, 120 (76.9 %) were benign and 36 (23.1 %) malignant. Maximum stiffness (Emax) with a cut-off of 82.3 kPa had the highest area under the receiver operating characteristics curve (Az) compared with the other SWE parameters, 0.860 (sensitivity 88.9 %, specificity 77.5 %, accuracy 80.1 %). Az values of conventional ultrasound combined with each SWE parameter were slightly (but not significantly) lower than that of conventional ultrasound alone. Maximum stiffness (82.3 kPa) provided the best diagnostic performance. However, the overall diagnostic performance of ultrasound plus SWE was not significantly better than that of conventional ultrasound alone. • SWE offers new information over and above conventional breast ultrasound • Various SWE parameters were explored regarding the distinction between benign and malignant lesions • An elasticity of 82.3 kPa appears optimal in differentiating solid breast masses • However, ultrasound plus SWE was not significantly better than conventional ultrasound alone.
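
    Sensitivity, specificity and accuracy at a stiffness cutoff, as reported for Emax, can be computed as follows. The lesion data below are hypothetical; only the 82.3 kPa cutoff comes from the study.

```python
def diagnostic_performance(stiffness, malignant, cutoff=82.3):
    """Sensitivity, specificity and accuracy of the rule
    'maximum stiffness >= cutoff (kPa) implies malignant'."""
    tp = sum(s >= cutoff and m for s, m in zip(stiffness, malignant))
    tn = sum(s < cutoff and not m for s, m in zip(stiffness, malignant))
    fp = sum(s >= cutoff and not m for s, m in zip(stiffness, malignant))
    fn = sum(s < cutoff and m for s, m in zip(stiffness, malignant))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(stiffness)

# hypothetical Emax values (kPa) and biopsy outcomes (True = malignant)
sens, spec, acc = diagnostic_performance(
    [120.0, 95.0, 60.0, 30.0, 150.0, 70.0],
    [True, False, False, False, True, False])
```

    Sweeping the cutoff and plotting sensitivity against 1 - specificity yields the ROC curve whose area (Az) the study compares across parameters.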

  4. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate the reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is intertwined with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
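
    A minimal sketch of the averaging idea: estimate a first-order (variance-based) sensitivity index per model, then average over models weighted by assumed model probabilities; scenario averaging would add another weighted loop. The toy models, weights and binning estimator below are illustrative, not the paper's implementation.

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """Estimate the first-order index S = Var(E[Y|X]) / Var(Y) by binning X."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side='right') - 1, 0, bins - 1)
    var_cond_mean = sum(
        (idx == b).sum() * (y[idx == b].mean() - y.mean()) ** 2
        for b in range(bins)) / len(y)
    return var_cond_mean / y.var()

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=(2, 20000))
# two alternative model structures with assumed probabilities 0.6 / 0.4
models = [lambda a, b: 4.0 * a + b, lambda a, b: 4.0 * a + 0.5 * b]
weights = [0.6, 0.4]
S_x1 = sum(w * first_order_index(x1, m(x1, x2)) for w, m in zip(weights, models))
```

    Because x1 dominates both toy models, its model-averaged index stays high; a parameter important in only one model would be down-weighted by that model's probability.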

  5. Sensitivity Challenge of Steep Transistors

    NASA Astrophysics Data System (ADS)

    Ilatikhameneh, Hesameddin; Ameen, Tarek A.; Chen, ChinYi; Klimeck, Gerhard; Rahman, Rajib

    2018-04-01

    Steep transistors are crucial in lowering power consumption of the integrated circuits. However, the difficulties in achieving steepness beyond the Boltzmann limit experimentally have hindered the fundamental challenges in application of these devices in integrated circuits. From a sensitivity perspective, an ideal switch should have a high sensitivity to the gate voltage and lower sensitivity to the device design parameters like oxide and body thicknesses. In this work, conventional tunnel-FET (TFET) and negative capacitance FET are shown to suffer from high sensitivity to device design parameters using full-band atomistic quantum transport simulations and analytical analysis. Although Dielectric Engineered (DE-) TFETs based on 2D materials show smaller sensitivity compared with the conventional TFETs, they have leakage issue. To mitigate this challenge, a novel DE-TFET design has been proposed and studied.

  6. Role of MRI in osteosarcoma for evaluation and prediction of chemotherapy response: correlation with histological necrosis.

    PubMed

    Bajpai, Jyoti; Gamnagatti, Shivanand; Kumar, Rakesh; Sreenivas, Vishnubhatla; Sharma, Mehar Chand; Khan, Shah Alam; Rastogi, Shishir; Malhotra, Arun; Safaya, Rajni; Bakhshi, Sameer

    2011-04-01

    Histological necrosis, the current standard for response evaluation in osteosarcoma, is attainable only after neoadjuvant chemotherapy. To establish the role of surrogate markers for response prediction and evaluation using MRI in the early phases of the disease. Thirty-one treatment-naïve osteosarcoma patients received three cycles of neoadjuvant chemotherapy followed by surgery during 2006-2008. All patients underwent baseline and post-chemotherapy conventional, diffusion-weighted and dynamic contrast-enhanced MRI. Taking histological response (good response ≥90% necrosis) as the reference standard, various MRI parameters were compared to it. A tumor was considered ellipsoidal; its volume, average tumor plane and relative value (average tumor plane/body surface area) were calculated using the standard formula for an ellipse. Receiver operating characteristic curves were generated to assess the best threshold and predictability. After deriving thresholds for each parameter in univariable analysis, multivariable analysis was carried out. Both pre- and post-chemotherapy absolute and relative size parameters correlated well with necrosis. The apparent diffusion coefficient did not correlate with necrosis; however, on adjusting for volume, a significant correlation was found. Thus, we could derive a new parameter: diffusion per unit volume. In osteosarcoma, chemotherapy response can be predicted and evaluated by conventional and diffusion-weighted MRI early in the disease course, and it correlates well with necrosis. Further, the newly derived parameter, diffusion per unit volume, appears to be a sensitive substitute for response evaluation in osteosarcoma.
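
    The ellipsoid assumption gives the standard volume formula V = (π/6)·d1·d2·d3 for three orthogonal diameters. A trivial sketch with hypothetical diameters:

```python
import math

def ellipsoid_volume(d1, d2, d3):
    """Tumour volume from three orthogonal diameters (ellipsoid assumption):
    V = (pi/6) * d1 * d2 * d3."""
    return math.pi / 6.0 * d1 * d2 * d3

volume = ellipsoid_volume(6.2, 4.8, 5.5)     # hypothetical diameters, cm
```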

  7. Gait analysis following treadmill training with body weight support versus conventional physical therapy: a prospective randomized controlled single blind study.

    PubMed

    Lucareli, P R; Lima, M O; Lima, F P S; de Almeida, J G; Brech, G C; D'Andréa Greve, J M

    2011-09-01

    Single-blind randomized, controlled clinical study. To evaluate, using kinematic gait analysis, the results obtained from gait training on a treadmill with body weight support versus those obtained with conventional gait training and physiotherapy. Thirty patients with sequelae from traumatic incomplete spinal cord injuries at least 12 months earlier; patients were able to walk and were classified according to motor function as ASIA (American Spinal Injury Association) impairment scale C or D. Patients were divided randomly into two groups of 15 patients by the drawing of opaque envelopes: group A (weight support) and group B (conventional). After an initial assessment, both groups underwent 30 sessions of gait training. Sessions occurred twice a week, lasted for 30 min each and continued for four months. All of the patients were evaluated by a single blinded examiner using movement analysis to measure angular and linear kinematic gait parameters. Six patients (three from group A and three from group B) were excluded because they attended fewer than 85% of the training sessions. There were no statistically significant differences in intra-group comparisons among the spatial-temporal variables in group B. In group A, the following significant differences in the studied spatial-temporal variables were observed: increases in velocity, distance, cadence, step length, swing phase and gait cycle duration, in addition to a reduction in stance phase. There were also no significant differences in intra-group comparisons among the angular variables in group B. However, group A achieved significant improvements in maximum hip extension and plantar flexion during stance. Gait training with body weight support was more effective than conventional physiotherapy for improving the spatial-temporal and kinematic gait parameters among patients with incomplete spinal cord injuries.

  8. Longitudinal analysis of the strengths and difficulties questionnaire scores of the Millennium Cohort Study children in England using M-quantile random-effects regression.

    PubMed

    Tzavidis, Nikos; Salvati, Nicola; Schmid, Timo; Flouri, Eirini; Midouhas, Emily

    2016-02-01

Multilevel modelling is a popular approach for longitudinal data analysis. Statistical models conventionally target a parameter at the centre of a distribution. However, when the distribution of the data is asymmetric, modelling other location parameters, e.g. percentiles, may be more informative. We present a new approach, M-quantile random-effects regression, for modelling multilevel data. The proposed method is used for modelling location parameters of the distribution of the strengths and difficulties questionnaire scores of children in England who participate in the Millennium Cohort Study. Quantile mixed models are also considered. The analyses offer insights to child psychologists about the differential effects of risk factors on children's outcomes.
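As a toy illustration of the quantile-targeting idea behind this record (not the authors' M-quantile random-effects model), the τ-th quantile of a distribution is the minimizer of the check (pinball) loss; a minimal sketch:

```python
import numpy as np

def pinball_loss(data, q, tau):
    """Mean check (pinball) loss of candidate quantile q at level tau."""
    d = data - q
    return np.mean(np.where(d >= 0, tau * d, (tau - 1) * d))

def fit_quantile(data, tau):
    """Grid-search minimizer of the pinball loss; recovers the tau-quantile."""
    grid = np.linspace(data.min(), data.max(), 2001)
    losses = [pinball_loss(data, q, tau) for q in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
q90 = fit_quantile(x, 0.90)   # close to the sample 90th percentile
```

Regression versions replace the scalar q with a linear predictor; the M-quantile variant additionally downweights outliers via an influence function.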

  9. System parameter identification from projection of inverse analysis

    NASA Astrophysics Data System (ADS)

    Liu, K.; Law, S. S.; Zhu, X. Q.

    2017-05-01

The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on principal component analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
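A minimal numerical sketch of the projection step, assuming a linear sensitivity relation r ≈ S·Δp and using a truncated SVD as the principal-component subspace (the paper's iterative model-updating loop is omitted, and the matrices are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(50, 8))     # hypothetical sensitivity matrix (outputs x parameters)
dp_true = rng.normal(size=8)     # unknown parameter perturbation
r = S @ dp_true                  # measured minus analytical output

# Project the identification equation r = S dp onto the leading
# principal components of S (truncated SVD) and solve in that subspace.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
k = 8                            # number of retained components
dp_est = Vt[:k].T @ ((U[:, :k].T @ r) / s[:k])
```

Retaining fewer components (k below the full rank) discards directions dominated by noise, which is the regularizing effect the paper exploits.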

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeff; Burton, Evan

This study evaluates the costs and benefits associated with the use of a stationary-wireless-power-transfer-enabled plug-in hybrid electric bus and determines the cost effectiveness relative to a conventional bus and a hybrid electric bus. A sensitivity sweep was performed over many different battery sizes, charging power levels, and numbers/locations of bus stop charging stations. The net present cost was calculated for each vehicle design and provided the basis for design evaluation. In all cases, given the assumed economic conditions, the conventional bus achieved the lowest net present cost, while the optimal plug-in hybrid electric bus scenario beat out the hybrid electric comparison scenario. The study also performed parameter sensitivity analysis under favorable and unfavorable market penetration assumptions. The analysis identifies fuel saving opportunities with plug-in hybrid electric bus scenarios at cumulative net present costs not too dissimilar from those for conventional buses.
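The net present cost comparison at the heart of this evaluation can be sketched in a few lines; all capital and operating figures below are hypothetical, not the study's values:

```python
def net_present_cost(capital, annual_costs, rate):
    """Discount a stream of annual operating costs back to year 0
    and add the up-front capital cost (illustrative values only)."""
    return capital + sum(c / (1 + rate) ** t
                         for t, c in enumerate(annual_costs, start=1))

# Hypothetical comparison over a 12-year service life at a 3% discount rate:
# conventional bus is cheap to buy but costs more in fuel each year.
npc_conventional = net_present_cost(450_000, [60_000] * 12, 0.03)
npc_phev = net_present_cost(800_000, [35_000] * 12, 0.03)
```

With these made-up numbers the conventional bus wins on net present cost, mirroring the study's qualitative finding; the design sweep in the study repeats this calculation over battery sizes and charger placements.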

  11. Parameter optimization of fusion splicing of photonic crystal fibers and conventional fibers to increase strength

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxi; Zhang, Zuchen; Song, Jingming; Wu, Chunxiao; Song, Ningfang

    2015-03-01

A splicing parameter optimization method to increase the tensile strength of the splice joint between a photonic crystal fiber (PCF) and a conventional fiber is demonstrated. Based on the splicing recipes provided by splicer or fiber manufacturers, the optimal values of several major splicing parameters are obtained in sequence, and a conspicuous improvement in the mechanical strength of splice joints between PCFs and conventional fibers is validated through experiments.

  12. Preconstruction Biogeochemical Analysis of Mercury in Wetlands Bordering the Hamilton Army Airfield (HAAF) Wetlands Restoration Site. Part 3

    DTIC Science & Technology

    2009-12-01

ERDC/EL TR-09-21. Preconstruction Biogeochemical Analysis of Mercury in Wetlands Bordering the Hamilton Army Airfield (HAAF) Wetlands Restoration Site, Part 3. Elly P. H... The report covers mercury methylation and demethylation, and biogeochemical parameters related to the mercury cycle as measured by both conventional and emerging methods.

  13. Non-Conventional Applications of Computerized Tomography: Analysis of Solid Dosage Forms Produced by Pharmaceutical Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martins de Oliveira, Jose Jr.; Germano Martins, Antonio Cesar

X-ray computed tomography (CT) refers to the cross-sectional imaging of an object by measuring the transmitted radiation in different directions. In this work, we describe a non-conventional application of computerized tomography: visualization and improved understanding of some internal structural features of solid dosage forms. A micro-CT X-ray scanner, with a minimum resolution of 30 μm, was used to characterize pharmaceutical tablets, granules, a controlled-release osmotic tablet and liquid-filled soft-gelatin capsules. The analyses presented in this work are essentially qualitative, but quantitative parameters, such as porosity, density distribution and tablet dimensions, could also be obtained using the related CT techniques.

  14. Experiences with Probabilistic Analysis Applied to Controlled Systems

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Giesy, Daniel P.

    2004-01-01

    This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.

  15. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC curve = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under the ROC curve = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously, with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively.
In conclusion, 12-lead HF QRS ECG employing RAZ scoring is a simple, accurate and inexpensive screening technique for cardiomyopathy. Although HF QRS ECG is highly sensitive for cardiomyopathy, its specificity may be compromised in patients with cardiac pathologies other than cardiomyopathy, such as uncomplicated coronary artery disease or multiple coronary disease risk factors. Further studies are required to determine whether HF QRS might be useful for monitoring cardiomyopathy severity or the efficacy of therapy in a longitudinal fashion.
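Sensitivity, specificity and accuracy at a given score cut-off, as quoted throughout this record, are straightforward to compute; the scores and disease labels below are made up for illustration, not the study's data:

```python
import numpy as np

def diagnostic_stats(scores, labels, cutoff):
    """Sensitivity, specificity, accuracy of the rule score >= cutoff
    (labels: 1 = disease, 0 = healthy). Illustrative only."""
    pred = scores >= cutoff
    labels = labels.astype(bool)
    sens = np.mean(pred[labels])        # true-positive rate
    spec = np.mean(~pred[~labels])      # true-negative rate
    acc = np.mean(pred == labels)
    return sens, spec, acc

scores = np.array([155, 40, 20, 170, 90, 35, 10, 145])
labels = np.array([1, 1, 0, 1, 1, 0, 0, 1])
sens, spec, acc = diagnostic_stats(scores, labels, cutoff=50)
```

Sweeping the cutoff and plotting sensitivity against 1 − specificity traces out the ROC curve whose area the abstract reports.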

  16. A generalized analysis of solar space heating in the United States

    NASA Astrophysics Data System (ADS)

    Clark, J. A.

    A life-cycle model is developed for solar space heating within the United States that is based on the solar design data from the Los Alamos Scientific Laboratory. The model consists of an analytical relationship among five dimensionless parameters that include all pertinent technical, climatological, solar, operating and economic factors that influence the performance of a Solar Space Heating System. An important optimum condition presented is the 'Breakeven' metered cost of conventional fuel at which the cost of the solar system is equal to that of a conventional heating system. The effect of Federal (1980) and State (1979) income tax credits on these costs is determined. A parameter that includes both solar availability and solar system utilization is derived and plotted on a map of the U.S. This parameter shows the most favorable present locations for solar space heating application to be in the Central and Mountain States. The data employed are related to the rehabilitated solar data recently made available by the National Climatic Center (SOLMET).
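The "breakeven" condition can be sketched as the metered fuel price at which the annualized solar system cost equals the fuel bill it displaces; the capital recovery factor and all numbers below are hypothetical, not the study's dimensionless parameters:

```python
def breakeven_fuel_cost(solar_capital, crf, annual_heat_gj, furnace_eff):
    """Metered fuel price ($/GJ) at which the annualized cost of the
    solar system equals the displaced fuel bill. All inputs hypothetical;
    crf is the capital recovery factor (annualizes the capital cost)."""
    annualized_solar = solar_capital * crf
    fuel_needed_gj = annual_heat_gj / furnace_eff   # fuel input per year
    return annualized_solar / fuel_needed_gj

# $12,000 system, CRF 0.08, 40 GJ/yr delivered heat, 65%-efficient furnace
price = breakeven_fuel_cost(12_000, 0.08, 40.0, 0.65)
```

Below this fuel price the conventional system is cheaper; above it the solar system pays for itself, which is how the study maps favorable regions of the U.S.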

  17. Ocular Biocompatibility of Nitinol Intraocular Clips

    PubMed Central

    Velez-Montoya, Raul; Erlanger, Michael

    2012-01-01

    Purpose. To evaluate the tolerance and biocompatibility of a preformed nitinol intraocular clip in an animal model after anterior segment surgery. Methods. Yucatan mini-pigs were used. A 30-gauge prototype injector was used to attach a shape memory nitinol clip to the iris of five pigs. Another five eyes received conventional polypropylene suture with a modified Seipser slip knot. The authors compared the surgical time of each technique. All eyes underwent standard full-field electroretinogram at baseline and 8 weeks after surgery. The animals were euthanized and eyes collected for histologic analysis after 70 days (10 weeks) postsurgery. The corneal thickness, corneal endothelial cell counts, specular microscopy parameters, retina cell counts, and electroretinogram parameters were compared between the groups. A two sample t-test for means and a P value of 0.05 were use for assessing statistical differences between measurements. Results. The injection of the nitinol clip was 15 times faster than conventional suturing. There were no statistical differences between the groups for corneal thickness, endothelial cell counts, specular microscopy parameters, retina cell counts, and electroretinogram measurements. Conclusions. The nitinol clip prototype is well tolerated and showed no evidence of toxicity in the short-term. The injectable delivery system was faster and technically less challenging than conventional suture techniques. PMID:22064995

  18. Parameter analysis of a photonic crystal fiber with raised-core index profile based on effective index method

    NASA Astrophysics Data System (ADS)

    Seraji, Faramarz E.; Rashidi, Mahnaz; Khasheie, Vajieh

    2006-08-01

Photonic crystal fibers (PCFs) with a stepped raised-core profile and one layer of equally spaced holes in the cladding are analyzed. Using the effective index method and considering a raised step refractive index difference between the index of the core and the effective index of the cladding, we improve characteristic parameters such as the numerical aperture and the V-parameter, and reduce the bending loss to about one tenth of that of a conventional PCF. Implementing such a structure in PCFs may be one step toward achieving low-loss PCFs for communication applications.

  19. Study of Glycemic Variability Through Time Series Analyses (Detrended Fluctuation Analysis and Poincaré Plot) in Children and Adolescents with Type 1 Diabetes.

    PubMed

    García Maset, Leonor; González, Lidia Blasco; Furquet, Gonzalo Llop; Suay, Francisco Montes; Marco, Roberto Hernández

    2016-11-01

Time series analysis provides information on blood glucose dynamics that is unattainable with conventional glycemic variability (GV) indices. To date, no studies have been published on these parameters in pediatric patients with type 1 diabetes. Our aim is to evaluate the relationship between time series analysis and conventional GV indices, and glycosylated hemoglobin (HbA1c) levels. This is a cross-sectional study of 41 children and adolescents with type 1 diabetes. Glucose monitoring was carried out continuously for 72 h to study the following GV indices: standard deviation (SD) of glucose levels (mg/dL), coefficient of variation (%), interquartile range (IQR; mg/dL), mean amplitude of the largest glycemic excursions (MAGE), and continuous overlapping net glycemic action (CONGA). The time series analysis was conducted by means of detrended fluctuation analysis (DFA) and the Poincaré plot. Time series parameters (the DFA alpha coefficient and the elements of the ellipse of the Poincaré plot) correlated well with the more conventional GV indices. Patients were grouped according to the terciles of these indices, to the terciles of eccentricity (1: 12.56-16.98, 2: 16.99-21.91, 3: 21.92-41.03), and to the value of the DFA alpha coefficient (> or ≤1.5). No differences were observed in the HbA1c of patients grouped by GV index criteria; however, significant differences were found in patients grouped by alpha coefficient and eccentricity, not only in terms of HbA1c, but also in SD glucose, IQR, and the CONGA index. The loss of complexity in glycemic homeostasis is accompanied by an increase in variability.
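Both time-series tools used in this study can be sketched generically; this is a textbook DFA and Poincaré-descriptor implementation, not the authors' exact processing chain, and the white-noise input is synthetic:

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(n) vs log n.
    White noise gives alpha near 0.5; the study splits patients at 1.5."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        msq = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)     # linear detrend per window
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(msq)))      # fluctuation at scale n
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

def poincare_sd(x):
    """SD1/SD2 descriptors of the Poincare plot of successive values;
    their ratio determines the eccentricity of the fitted ellipse."""
    d = np.diff(x)
    sd1 = np.std(d) / np.sqrt(2)             # short-term variability
    sd2 = np.sqrt(max(2 * np.var(x) - sd1 ** 2, 0.0))
    return sd1, sd2

rng = np.random.default_rng(42)
alpha = dfa_alpha(rng.normal(size=4096))     # expected near 0.5
```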

  20. Corneal Collagen Cross-Linking in the Management of Keratoconus in Canada: A Cost-Effectiveness Analysis.

    PubMed

    Leung, Victoria C; Pechlivanoglou, Petros; Chew, Hall F; Hatch, Wendy

    2017-08-01

    To use patient-level microsimulation models to evaluate the comparative cost-effectiveness of early corneal cross-linking (CXL) and conventional management with penetrating keratoplasty (PKP) when indicated in managing keratoconus in Canada. Cost-utility analysis using individual-based, state-transition microsimulation models. Simulated cohorts of 100 000 individuals with keratoconus who entered each treatment arm at 25 years of age. Fellow eyes were modeled separately. Simulated individuals lived up to a maximum of 110 years. We developed 2 state-transition microsimulation models to reflect the natural history of keratoconus progression and the impact of conventional management with PKP versus CXL. We collected data from the published literature to inform model parameters. We used realistic parameters that maximized the potential costs and complications of CXL, while minimizing those associated with PKP. In each treatment arm, we allowed simulated individuals to move through health states in monthly cycles from diagnosis until death. For each treatment strategy, we calculated the total cost and number of quality-adjusted life years (QALYs) gained. Costs were measured in Canadian dollars. Costs and QALYs were discounted at 5%, converting future costs and QALYs into present values. We used an incremental cost-effectiveness ratio (ICER = difference in lifetime costs/difference in lifetime health outcomes) to compare the cost-effectiveness of CXL versus conventional management with PKP. Lifetime costs and QALYs for CXL were estimated to be Can$5530 (Can$4512, discounted) and 50.12 QALYs (16.42 QALYs, discounted). Lifetime costs and QALYs for conventional management with PKP were Can$2675 (Can$1508, discounted) and 48.93 QALYs (16.09 QALYs, discounted). The discounted ICER comparing CXL to conventional management was Can$9090/QALY gained. Sensitivity analyses revealed that in general, parameter variations did not influence the cost-effectiveness of CXL. 
CXL is cost-effective compared with conventional management with PKP in the treatment of keratoconus. Our ICER of Can$9090/QALY falls well below the range of Can$20 000 to Can$100 000/QALY and below US$50 000/QALY, thresholds generally used to evaluate the cost-effectiveness of health interventions in Canada and the United States. This study provides strong economic evidence for the cost-effectiveness of early CXL in keratoconus. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
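Plugging the study's discounted lifetime figures into the ICER definition reproduces the reported ratio to within rounding:

```python
def discounted_total(values, rate):
    """Present value of a yearly stream (year 1 onward)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values, start=1))

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Discounted lifetime values from the abstract:
# CXL: Can$4512, 16.42 QALYs; PKP-based management: Can$1508, 16.09 QALYs
ratio = icer(4512, 16.42, 1508, 16.09)   # about Can$9100/QALY gained
```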

  1. Development of ocular viscosity characterization method.

    PubMed

    Shu-Hao Lu; Guo-Zhen Chen; Leung, Stanley Y Y; Lam, David C C

    2016-08-01

Glaucoma is the second leading cause of blindness. Irreversible and progressive optic nerve damage results when the intraocular pressure (IOP) exceeds 21 mmHg. The elevated IOP is attributed to blocked fluid drainage from the eye. Methods to measure the IOP are widely available, but a method to measure the viscous response to blocked drainage has yet to be developed. An indentation method to characterize ocular flow is developed in this study. Analysis of the load-relaxation data from indentation tests on drainage-controlled porcine eyes showed that blocked drainage is correlated with increases in ocular viscosity. The successful correlation of ocular viscosity with drainage suggests that ocular viscosity may be further developed as a new diagnostic parameter for the assessment of normal tension glaucoma, where nerve damage occurs without noticeable IOP elevation, and as a diagnostic parameter complementary to IOP in conventional diagnosis.

  2. Fitting Higgs data with nonlinear effective theory.

    PubMed

    Buchalla, G; Catà, O; Celis, A; Krause, C

    2016-01-01

    In a recent paper we showed that the electroweak chiral Lagrangian at leading order is equivalent to the conventional [Formula: see text] formalism used by ATLAS and CMS to test Higgs anomalous couplings. Here we apply this fact to fit the latest Higgs data. The new aspect of our analysis is a systematic interpretation of the fit parameters within an EFT. Concentrating on the processes of Higgs production and decay that have been measured so far, six parameters turn out to be relevant: [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text]. A global Bayesian fit is then performed with the result [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text]. Additionally, we show how this leading-order parametrization can be generalized to next-to-leading order, thus improving the [Formula: see text] formalism systematically. The differences with a linear EFT analysis including operators of dimension six are also discussed. One of the main conclusions of our analysis is that since the conventional [Formula: see text] formalism can be properly justified within a QFT framework, it should continue to play a central role in analyzing and interpreting Higgs data.

  3. Extinction-ratio-independent electrical method for measuring chirp parameters of Mach-Zehnder modulators using frequency-shifted heterodyne.

    PubMed

    Zhang, Shangjian; Wang, Heng; Zou, Xinhai; Zhang, Yali; Lu, Rongguo; Liu, Yong

    2015-06-15

    An extinction-ratio-independent electrical method is proposed for measuring chirp parameters of Mach-Zehnder electric-optic intensity modulators based on frequency-shifted optical heterodyne. The method utilizes the electrical spectrum analysis of the heterodyne products between the intensity modulated optical signal and the frequency-shifted optical carrier, and achieves the intrinsic chirp parameters measurement at microwave region with high-frequency resolution and wide-frequency range for the Mach-Zehnder modulator with a finite extinction ratio. Moreover, the proposed method avoids calibrating the responsivity fluctuation of the photodiode in spite of the involved photodetection. Chirp parameters as a function of modulation frequency are experimentally measured and compared to those with the conventional optical spectrum analysis method. Our method enables an extinction-ratio-independent and calibration-free electrical measurement of Mach-Zehnder intensity modulators by using the high-resolution frequency-shifted heterodyne technique.

  4. Performance review using sequential sampling and a practice computer.

    PubMed

    Difford, F

    1988-06-01

    The use of sequential sample analysis for repeated performance review is described with examples from several areas of practice. The value of a practice computer in providing a random sample from a complete population, evaluating the parameters of a sequential procedure, and producing a structured worksheet is discussed. It is suggested that sequential analysis has advantages over conventional sampling in the area of performance review in general practice.
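Wald's sequential probability ratio test is one standard form of the sequential sampling this record describes; a sketch for auditing a binary performance indicator, where the hypothesized rates p0 and p1 and the error levels are illustrative:

```python
import math

def sprt_binomial(outcomes, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a proportion.
    Returns 'accept H0', 'accept H1', or 'continue' after the sample.
    outcomes is a sequence of 0/1 results reviewed one at a time."""
    upper = math.log((1 - beta) / alpha)   # decide for H1 above this
    lower = math.log(beta / (1 - alpha))   # decide for H0 below this
    llr = 0.0
    for x in outcomes:
        llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1"
        if llr <= lower:
            return "accept H0"
    return "continue"

decision = sprt_binomial([1] * 20, p0=0.1, p1=0.4)
```

The appeal for performance review is exactly what the abstract notes: the test often stops after far fewer records than a fixed-size sample would require.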

  5. Process development and exergy cost sensitivity analysis of a hybrid molten carbonate fuel cell power plant and carbon dioxide capturing process

    NASA Astrophysics Data System (ADS)

    Mehrpooya, Mehdi; Ansarinasab, Hojat; Moftakhari Sharifzadeh, Mohammad Mehdi; Rosen, Marc A.

    2017-10-01

An integrated power plant with a net electrical power output of 3.71 × 10⁵ kW is developed and investigated. The electrical efficiency of the process is found to be 60.1%. The process includes three main sub-systems: a molten carbonate fuel cell system, a heat recovery section and a cryogenic carbon dioxide capturing process. Conventional and advanced exergoeconomic methods are used for analyzing the process. Advanced exergoeconomic analysis is a comprehensive evaluation tool which combines an exergetic approach with economic analysis procedures. With this method, investment and exergy destruction costs of the process components are divided into endogenous/exogenous and avoidable/unavoidable parts. Results of the conventional exergoeconomic analyses demonstrate that the combustion chamber has the largest exergy destruction rate (182 MW) and cost rate (13,100 $/h). Also, the total process cost rate can be decreased by reducing the cost rate of the fuel cell and improving the efficiency of the combustion chamber and heat recovery steam generator. Based on the total avoidable endogenous cost rate, the priority for modification is the heat recovery steam generator, a compressor and a turbine of the power plant, in rank order. A sensitivity analysis is performed to investigate how the exergoeconomic factors respond to variations of the effective parameters.

  6. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  7. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  8. Automated quantification of pancreatic β-cell mass

    PubMed Central

    Golson, Maria L.; Bush, William S.

    2014-01-01

    β-Cell mass is a parameter commonly measured in studies of islet biology and diabetes. However, the rigorous quantification of pancreatic β-cell mass using conventional histological methods is a time-consuming process. Rapidly evolving virtual slide technology with high-resolution slide scanners and newly developed image analysis tools has the potential to transform β-cell mass measurement. To test the effectiveness and accuracy of this new approach, we assessed pancreata from normal C57Bl/6J mice and from mouse models of β-cell ablation (streptozotocin-treated mice) and β-cell hyperplasia (leptin-deficient mice), using a standardized systematic sampling of pancreatic specimens. Our data indicate that automated analysis of virtual pancreatic slides is highly reliable and yields results consistent with those obtained by conventional morphometric analysis. This new methodology will allow investigators to dramatically reduce the time required for β-cell mass measurement by automating high-resolution image capture and analysis of entire pancreatic sections. PMID:24760991

  9. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    PubMed

    Lash, Timothy L

    2007-11-26

The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6, with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations.
The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretation. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, raising the level of discussion regarding study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of sources of uncertainty.
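The draw-and-adjust loop can be sketched generically; the lognormal bias-factor distribution below is hypothetical, not the one assigned in the paper, and the observed ratio is the abstract's glyphosate estimate used purely as a worked number:

```python
import numpy as np

rng = np.random.default_rng(7)
rr_obs = 2.6                 # conventional (systematic-error-prone) estimate
n_iter = 100_000

# Assign a probability distribution to the bias parameter (here a
# hypothetical multiplicative bias factor, median 1.5), draw from it
# on each iteration, and adjust the observed estimate:
bias = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=n_iter)
rr_adj = rr_obs / bias

# Point estimate and 95% simulation interval from the frequency
# distribution of adjusted results:
lo, med, hi = np.percentile(rr_adj, [2.5, 50, 97.5])
```

The simulation interval reflects both the sampling variability modeled conventionally and the analyst's uncertainty about the bias parameter, which is why it is typically wider than the conventional confidence interval.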

  10. Factorial design studies of antiretroviral drug-loaded stealth liposomal injectable: PEGylation, lyophilization and pharmacokinetic studies

    NASA Astrophysics Data System (ADS)

    Sudhakar, Beeravelli; Krishna, Mylangam Chaitanya; Murthy, Kolapalli Venkata Ramana

    2016-01-01

The aim of the present study was to formulate and evaluate ritonavir-loaded stealth liposomes using a 3² factorial design, intended for parenteral delivery. Liposomes were prepared by the ethanol injection method using the 3² factorial design and characterized for various physicochemical parameters such as drug content, size, zeta potential, entrapment efficiency and in vitro drug release. The optimization process was carried out using desirability and overlay plots. The selected formulation was subjected to PEGylation using 10 % PEG-10000 solution. Stealth liposomes were characterized for the above-mentioned parameters along with surface morphology, Fourier transform infrared spectroscopy, differential scanning calorimetry, stability and in vivo pharmacokinetic studies in rats. Stealth liposomes showed better results than conventional liposomes owing to the effect of PEG-10000. The in vivo studies revealed that stealth liposomes showed a longer residence time than conventional liposomes and pure drug solution. The conventional liposomes and pure drug showed dose-dependent pharmacokinetics, whereas stealth liposomes showed a long circulation half-life compared with conventional liposomes and pure ritonavir solution. Statistical analysis by one-way ANOVA showed a significant difference (p < 0.05). The results of the present study reveal that stealth liposomes are a promising tool in antiretroviral therapy.
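A 3² full factorial design simply enumerates every combination of two factors at three coded levels, giving nine formulation runs; the factor names below are hypothetical, not the study's variables:

```python
from itertools import product

# Two factors at three coded levels (-1, 0, +1) -> 3^2 = 9 runs.
levels = (-1, 0, 1)
factors = ("lipid_ratio", "drug_amount")   # hypothetical factor names
design = [dict(zip(factors, combo)) for combo in product(levels, repeat=2)]
```

Each run is then prepared and its responses (entrapment efficiency, size, etc.) fitted against the coded levels, which is what the desirability and overlay plots summarize.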

  11. Conceptual Design and Performance Analysis for a Large Civil Compound Helicopter

    NASA Technical Reports Server (NTRS)

    Russell, Carl; Johnson, Wayne

    2012-01-01

    A conceptual design study of a large civil compound helicopter is presented. The objective is to determine how a compound helicopter performs when compared to both a conventional helicopter and a tiltrotor using a design mission that is shorter than optimal for a tiltrotor and longer than optimal for a helicopter. The designs are generated and analyzed using conceptual design software and are further evaluated with a comprehensive rotorcraft analysis code. Multiple metrics are used to determine the suitability of each design for the given mission. Plots of various trade studies and parameter sweeps as well as comprehensive analysis results are presented. The results suggest that the compound helicopter examined for this study would not be competitive with a tiltrotor or conventional helicopter, but multiple possibilities are identified for improving the performance of the compound helicopter in future research.

  12. Financial Indicators of Reduced Impact Logging Performance in Brazil: Case Study Comparisons

    Treesearch

    Thomas P. Holmes; Frederick Boltz; Douglas R. Carter

    2001-01-01

    Indicators of financial performance are compared for three case studies in the Brazilian Amazon. Each case study presents parameters obtained from monitoring initial harvest entries into primary forests for reduced impact logging (RIL) and conventional logging (CL) operations. Differences in cost definitions and data collection protocols complicate the analysis, and...

  13. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These tests are conducted in the laboratory and take many hours to yield a final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in this specific context. In the latter part, additional procedures such as z-stacking and image stitching, not previously used in the context of activated sludge, are introduced for wastewater image preprocessing. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters, and their correlation with monitoring and prediction of activated sludge, are discussed. It is thus observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  14. Thermal effusivity measurement of conventional and organic coffee oils via photopyroelectric technique.

    PubMed

    Bedoya, A; Gordillo-Delgado, F; Cruz-Santillana, Y E; Plazas, J; Marin, E

    2017-12-01

In this work, oil samples extracted from organic and conventional coffee beans were studied. A fatty acid profile analysis was done using gas chromatography, together with physicochemical analysis of density and acidity index, to verify the oil purity. Additionally, Mid-Infrared Fourier Transform Photoacoustic Spectroscopy (FTIR-PAS) aided by Principal Component Analysis (PCA) was used to identify differences between the intensities of the absorption bands related to functional groups. Thermal effusivity values between 592±3 and 610±4 W s^1/2 m^-2 K^-1 were measured using the photopyroelectric technique in a front-detection configuration. The acidity index was between 1.11 and 1.27% and the density varied between 0.921 and 0.940 g/mL. These variables, as well as the extraction yield between 12.6 and 14.4%, showed behavior similar to that observed for the thermal effusivity, demonstrating that this parameter can be used as a criterion for discriminating between oil samples extracted from organic and conventional coffee beans. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates, where the estimates are produced from a forward solution (numerical or analytical) of the differential equation for an assumed set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up initial and boundary conditions, and forming difference equations when the forward solution is obtained numerically. Gaussian-process-based approaches such as Gaussian Process Ordinary Differential Equations (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require explicitly setting up initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. The proposed methodology estimated the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
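For contrast, the conventional forward-solution approach the authors compare against can be sketched by fitting a van Genuchten retention curve to synthetic observations via brute-force least squares. This is not the AGM method itself; the parameter values, the grid, and the choice to hold theta_r and theta_s fixed are all illustrative simplifications:

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    # van Genuchten soil-water retention curve; h is suction head (> 0).
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Synthetic "observations" generated with known parameters.
true = dict(theta_r=0.05, theta_s=0.45, alpha=0.05, n=2.0)
heads = [1, 5, 10, 50, 100, 500, 1000]
obs = [van_genuchten_theta(h, **true) for h in heads]

# Conventional estimation: minimize the sum of squared errors between the
# observations and the forward model over a grid of (alpha, n);
# theta_r and theta_s are held at their known values for brevity.
best = None
for ai in range(1, 100):
    for ni in range(11, 40):
        alpha, n = ai / 1000.0, ni / 10.0
        sse = sum((van_genuchten_theta(h, 0.05, 0.45, alpha, n) - o) ** 2
                  for h, o in zip(heads, obs))
        if best is None or sse < best[0]:
            best = (sse, alpha, n)
```

Every candidate parameter set requires a full forward evaluation, which is the computational cost the gradient-matching approach is designed to avoid.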

  16. Improving Heart rate variability in sleep apnea patients: differences in treatment with auto-titrating positive airway pressure (APAP) versus conventional CPAP.

    PubMed

    Karasulu, Levent; Epöztürk, Pinar Ozkan; Sökücü, Sinem Nedime; Dalar, Levent; Altin, Sedat

    2010-08-01

The effect of positive airway pressure treatment in different modalities on the cardiovascular consequences of sleep apnea is still unclear. We aimed to compare auto-titrating positive airway pressure (APAP) and conventional continuous positive airway pressure (CPAP) in terms of improving heart rate variability (HRV) in obstructive sleep apnea patients. This was a prospective study done in a tertiary research hospital. All patients underwent a manual CPAP titration procedure to determine the optimal pressure that abolishes abnormal respiratory events. Patients then underwent two treatment nights, one under APAP mode and one under conventional CPAP mode, with a 1-week interval. Forty newly diagnosed obstructive sleep apnea patients were enrolled in the study. We compared heart rate variability analysis parameters between the APAP night and the CPAP night. The final analysis included the data of 28 patients (M/F: 22/6; mean age = 46 +/- 10 years). Sleep characteristics were comparable between the two treatment nights, whereas all-night frequency-domain HRV parameters such as HF, nuLF, and LF/HF differed between APAP and CPAP nights (2.93 +/- 0.31 vs. 3.01 +/- 0.31, P = 0.041; 0.75 +/- 0.13 vs. 0.71 +/- 0.14, P = 0.027; and 4.37 +/- 3.24 vs. 3.56 +/- 2.07, P = 0.023, respectively). HRV analysis for individual sleep stages showed that Stage 2 LF, nuLF, nuHF, and LF/HF parameters all improved under CPAP treatment, whereas APAP treatment resulted in nonsignificant changes. These results suggest that despite comparable improvement in abnormal respiratory events with APAP or CPAP treatment, CPAP may be superior to APAP in terms of correcting cardiovascular alterations in sleep apnea patients.
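For readers unfamiliar with HRV indices, here is a minimal sketch of the standard time-domain measures computed from an RR-interval series. The spectral indices reported above (HF, nuLF, LF/HF) require an additional power-spectral step not shown here, and the RR series below is synthetic:

```python
import math
import statistics

def hrv_time_domain(rr_ms):
    # Classic time-domain HRV indices from a list of RR intervals (ms).
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    sdnn = statistics.stdev(rr_ms)                             # overall variability
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # beat-to-beat variability
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

rr = [812, 797, 825, 840, 808, 795, 830, 815]   # synthetic RR series (ms)
indices = hrv_time_domain(rr)
```

In practice these indices are computed per sleep stage or per night, exactly as the stage-wise comparison in the study requires.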

  17. Deep ECGNet: An Optimal Deep Learning Framework for Monitoring Mental Stress Using Ultra Short-Term ECG Signals.

    PubMed

    Hwang, Bosun; You, Jiwoo; Vaessen, Thomas; Myin-Germeys, Inez; Park, Cheolsoo; Zhang, Byoung-Tak

    2018-02-08

Stress recognition using electrocardiogram (ECG) signals conventionally requires an intractable long-term heart rate variability (HRV) parameter extraction process. This study proposes a novel deep learning framework, the Deep ECGNet, to recognize stressful states using ultra short-term raw ECG signals without any feature engineering. The Deep ECGNet was developed through various experiments and analysis of ECG waveforms. We propose an optimal recurrent and convolutional neural network architecture, along with an optimal convolution filter length (related to the P, Q, R, S, and T wave durations of ECG) and pooling length (related to the heart beat period), based on optimization experiments and analysis of the waveform characteristics of ECG signals. Experiments were also conducted with conventional methods using HRV parameters and frequency features as a benchmark. The data used in this study were obtained from Kwangwoon University in Korea (13 subjects, Case 1) and KU Leuven University in Belgium (9 subjects, Case 2). Experiments were designed according to various protocols to elicit stressful conditions. The proposed framework, the Deep ECGNet, outperformed the conventional approaches with the highest accuracy of 87.39% for Case 1 and 73.96% for Case 2, that is, 16.22% and 10.98% improvements over the conventional HRV method, respectively. We propose an optimal deep learning architecture and its parameters for stress recognition, together with theoretical considerations on how to design the deep learning structure based on the periodic patterns of raw ECG data. The experimental results show that the proposed deep learning model, the Deep ECGNet, is an optimal structure for recognizing stress conditions using ultra short-term ECG data.

  18. Dynamic Contrast-enhanced MR Imaging in Renal Cell Carcinoma: Reproducibility of Histogram Analysis on Pharmacokinetic Parameters

    PubMed Central

    Wang, Hai-yi; Su, Zi-hua; Xu, Xiao; Sun, Zhi-peng; Duan, Fei-xue; Song, Yuan-yuan; Li, Lu; Wang, Ying-wei; Ma, Xin; Guo, Ai-tao; Ma, Lin; Ye, Hui-yi

    2016-01-01

Pharmacokinetic parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) have been increasingly used to evaluate the permeability of tumor vessels. Histogram metrics are a promising quantitative MR imaging method recently introduced into the analysis of DCE-MRI pharmacokinetic parameters in oncology to account for tumor heterogeneity. In this study, 21 patients with renal cell carcinoma (RCC) underwent paired DCE-MRI studies on a 3.0 T MR system. The extended Tofts model and a population-based arterial input function were used to calculate kinetic parameters of RCC tumors. The mean value and histogram metrics (mode, skewness and kurtosis) of each pharmacokinetic parameter were generated automatically using ImageJ software. Intra- and inter-observer reproducibility and scan-rescan reproducibility were evaluated using intra-class correlation coefficients (ICCs) and the coefficient of variation (CoV). Our results demonstrated that the histogram method (mode, skewness and kurtosis) was not superior to the conventional mean value method for reproducibility evaluation of DCE-MRI pharmacokinetic parameters (Ktrans and Ve) in renal cell carcinoma; skewness and kurtosis in particular showed lower intra-observer, inter-observer and scan-rescan reproducibility than the mean value. Our findings suggest that additional studies are necessary before wide incorporation of histogram metrics into quantitative analysis of DCE-MRI pharmacokinetic parameters. PMID:27380733
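A minimal sketch of two of the quantities above: moment-based skewness and excess kurtosis of a parameter map (mode omitted, since for continuous maps it requires a binning choice), plus a within-subject coefficient of variation for a scan-rescan pair. The moment definitions and the paired-CoV formula are standard conventions assumed here, and all data are synthetic:

```python
import math
import statistics

def histogram_metrics(values):
    # Mean plus moment-based skewness and excess kurtosis of a parameter map.
    n = len(values)
    mean = sum(values) / n
    sd = statistics.pstdev(values)
    z = [(v - mean) / sd for v in values]
    skew = sum(t ** 3 for t in z) / n
    kurt = sum(t ** 4 for t in z) / n - 3.0   # excess kurtosis (normal -> 0)
    return {"Mean": mean, "Skewness": skew, "Kurtosis": kurt}

def scan_rescan_cov(x1, x2):
    # Within-subject coefficient of variation (%) for paired test-retest values:
    # per-subject SD of a pair is |a - b| / sqrt(2); divide by the pair mean.
    covs = [abs(a - b) / math.sqrt(2) / ((a + b) / 2) for a, b in zip(x1, x2)]
    return 100.0 * sum(covs) / len(covs)

m = histogram_metrics([1, 2, 3, 4, 5])          # symmetric toy "map"
cov = scan_rescan_cov([1.0, 2.0, 3.0], [1.1, 2.1, 2.9])
```

Higher-order moments amplify noise in the tails, which is one intuition for why skewness and kurtosis reproduced less well than the mean in this study.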

19. Performance analysis of cutting graphite-epoxy composite using a 90,000 psi abrasive waterjet

    NASA Astrophysics Data System (ADS)

    Choppali, Aiswarya

Graphite-epoxy composites are widely used in aerospace and structural applications because of their properties, which include lighter weight, a higher strength-to-weight ratio and greater flexibility in design. However, the inherent anisotropy of these composites makes it difficult to machine them using conventional methods. To overcome the major issues that arise with conventional machining, such as fiber pull-out, delamination, heat generation and high tooling costs, an effort is herein made to study abrasive waterjet machining of composites. An abrasive waterjet is used to cut 1" thick graphite-epoxy composites based on baseline data obtained from cutting ¼" thick material. The objective of this project is to study the surface roughness of the cut surface, with a focus on demonstrating the benefits of using higher pressures for cutting composites. The effects of the major cutting parameters (jet pressure, traverse speed, abrasive feed rate and cutting head size) are studied at different levels. Statistical analysis of the experimental data provides an understanding of the effect of the process parameters on surface roughness. Additionally, the effect of these parameters on the taper angle of the cut is studied. The data is analyzed to obtain a set of process parameters that optimizes the cutting of 1" thick graphite-epoxy composite, and the statistical analysis is used to validate the experimental data. Costs involved in the cutting process are investigated in terms of abrasive consumed, to better understand and illustrate the practical benefits of using higher pressures. It is demonstrated that, as pressure increases, ultra-high-pressure waterjets produce a better surface quality at a faster traverse rate with lower costs.

  20. Spatial analysis improves the detection of early corneal nerve fiber loss in patients with recently diagnosed type 2 diabetes

    PubMed Central

    Winter, Karsten; Strom, Alexander; Zhivov, Andrey; Allgeier, Stephan; Papanas, Nikolaos; Ziegler, Iris; Brüggemann, Jutta; Ringel, Bernd; Peschel, Sabine; Köhler, Bernd; Stachs, Oliver; Guthoff, Rudolf F.; Roden, Michael

    2017-01-01

    Corneal confocal microscopy (CCM) has revealed reduced corneal nerve fiber (CNF) length and density (CNFL, CNFD) in patients with diabetes, but the spatial pattern of CNF loss has not been studied. We aimed to determine whether spatial analysis of the distribution of corneal nerve branching points (CNBPs) may contribute to improving the detection of early CNF loss. We hypothesized that early CNF decline follows a clustered rather than random distribution pattern of CNBPs. CCM, nerve conduction studies (NCS), and quantitative sensory testing (QST) were performed in a cross-sectional study including 86 patients recently diagnosed with type 2 diabetes and 47 control subjects. In addition to CNFL, CNFD, and branch density (CNBD), CNBPs were analyzed using spatial point pattern analysis (SPPA) including 10 indices and functional statistics. Compared to controls, patients with diabetes showed lower CNBP density and higher nearest neighbor distances, and all SPPA parameters indicated increased clustering of CNBPs (all P<0.05). SPPA parameters were abnormally increased >97.5th percentile of controls in up to 23.5% of patients. When combining an individual SPPA parameter with CNFL, ≥1 of 2 indices were >99th or <1st percentile of controls in 28.6% of patients compared to 2.1% of controls, while for the conventional CNFL/CNFD/CNBD combination the corresponding rates were 16.3% vs 2.1%. SPPA parameters correlated with CNFL and several NCS and QST indices in the controls (all P<0.001), whereas in patients with diabetes these correlations were markedly weaker or lost. In conclusion, SPPA reveals increased clustering of early CNF loss and substantially improves its detection when combined with a conventional CCM measure in patients with recently diagnosed type 2 diabetes. PMID:28296936
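The clustering idea behind the spatial point pattern analysis can be sketched with the classic nearest-neighbour (Clark-Evans) index, one of the simplest SPPA statistics: R < 1 indicates clustering relative to complete spatial randomness. The coordinates and frame size below are hypothetical, and the study's actual SPPA used ten indices plus functional statistics:

```python
import math

def nearest_neighbour_distances(points):
    # Distance from each point to its nearest neighbour.
    out = []
    for i, (x1, y1) in enumerate(points):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(points) if j != i)
        out.append(d)
    return out

def clark_evans_index(points, area):
    # R = observed mean NN distance / expected mean distance under complete
    # spatial randomness (0.5 / sqrt(density)); R < 1 indicates clustering.
    d = nearest_neighbour_distances(points)
    mean_obs = sum(d) / len(d)
    density = len(points) / area
    return mean_obs / (0.5 / math.sqrt(density))

# Hypothetical branching-point coordinates on a 400x400 um CCM frame,
# deliberately arranged into two tight clusters:
clustered = [(10, 10), (12, 11), (11, 13), (300, 300), (302, 299), (301, 302)]
R = clark_evans_index(clustered, area=400 * 400)
```

A declining branching-point density with unchanged randomness would leave R near 1; the study's finding is that early fiber loss pushes such indices toward clustering.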

  1. The Effect of Improved Sub-Daily Earth Rotation Models on Global GPS Data Processing

    NASA Astrophysics Data System (ADS)

    Yoon, S.; Choi, K. K.

    2017-12-01

Throughout the various International GNSS Service (IGS) products, strong periodic signals have been observed around the 14-day period. This signal is clearly visible in all IGS time series, such as those related to orbit ephemerides, Earth rotation parameters (ERP) and ground station coordinates. Recent studies show that errors in the sub-daily Earth rotation models are the main factors inducing such noise. Current IGS orbit processing standards adopted the IERS 2010 Conventions and their sub-daily Earth rotation model. Since the IERS Conventions were published, advances in VLBI analysis have contributed updates to the sub-daily Earth rotation models. We have compared several proposed sub-daily Earth rotation models and show the effect of using those models on the orbit ephemerides, Earth rotation parameters and ground station coordinates generated by the NGS global GPS data processing strategy.

  2. Comparison of automated satellite systems with conventional systems for hydrologic data collection in west-central Florida

    USGS Publications Warehouse

    Woodham, W.M.

    1982-01-01

This report provides results of reliability and cost-effectiveness studies of the GOES satellite data-collection system used to operate a small hydrologic data network in west-central Florida. The GOES system, in its present state of development, was found to be about as reliable as conventional methods of data collection. Benefits of using the GOES system include some cost and manpower reduction, improved data accuracy, near real-time data availability, and direct computer storage and analysis of data. The GOES system could allow annual manpower reductions of 19 to 23 percent, with a reduction in cost for some and an increase in cost for other single-parameter sites, such as streamflow, rainfall, and ground-water monitoring stations. Manpower reductions of 46 percent or more appear possible for multiple-parameter sites. Implementation of expected improvements in instrumentation and data-handling procedures should further reduce costs. (USGS)

  3. Thermodynamic and economic analysis of heat pumps for energy recovery in industrial processes

    NASA Astrophysics Data System (ADS)

    Urdaneta-B, A. H.; Schmidt, P. S.

    1980-09-01

    A computer code has been developed for analyzing the thermodynamic performance, cost and economic return for heat pump applications in industrial heat recovery. Starting with basic defining characteristics of the waste heat stream and the desired heat sink, the algorithm first evaluates the potential for conventional heat recovery with heat exchangers, and if applicable, sizes the exchanger. A heat pump system is then designed to process the residual heating and cooling requirements of the streams. In configuring the heat pump, the program searches a number of parameters, including condenser temperature, evaporator temperature, and condenser and evaporator approaches. All system components are sized for each set of parameters, and economic return is estimated and compared with system economics for conventional processing of the heated and cooled streams (i.e., with process heaters and coolers). Two case studies are evaluated, one in a food processing application and the other in an oil refinery unit.

  4. Vastly accelerated linear least-squares fitting with numerical optimization for dual-input delay-compensated quantitative liver perfusion mapping.

    PubMed

    Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal

    2018-04-01

To propose an efficient algorithm to perform dual input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
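The speed advantage of LLS comes from linearizing the compartment model so the kinetic parameters appear linearly and are recovered in closed form instead of by iteration. A single-voxel sketch of this idea for a single-input one-compartment model follows; the paper's dual-input, delay-compensated, whole-field formulation is more elaborate, and all signals below are synthetic:

```python
import math

def cumtrapz(y, dt):
    # Cumulative trapezoidal integral of a uniformly sampled signal.
    out = [0.0]
    for a, b in zip(y, y[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

# Synthetic single-input one-compartment model: C'(t) = k1*AIF(t) - k2*C(t).
dt, n = 0.005, 1000
t = [i * dt for i in range(n)]
aif = [ti * math.exp(-ti / 0.5) for ti in t]    # gamma-variate-like input
k1_true, k2_true = 0.8, 0.3
c = [0.0]
for i in range(1, n):                           # forward-Euler simulation
    c.append(c[-1] + dt * (k1_true * aif[i - 1] - k2_true * c[-1]))

# Integrating the ODE gives C(t) = k1*int(AIF) - k2*int(C): linear in (k1, k2),
# so a single 2x2 normal-equation solve replaces iterative nonlinear fitting.
x1, x2 = cumtrapz(aif, dt), cumtrapz(c, dt)
s11 = sum(a * a for a in x1)
s12 = sum(a * -b for a, b in zip(x1, x2))
s22 = sum(b * b for b in x2)
b1 = sum(a * y for a, y in zip(x1, c))
b2 = sum(-b * y for b, y in zip(x2, c))
det = s11 * s22 - s12 * s12
k1_hat = (b1 * s22 - s12 * b2) / det
k2_hat = (s11 * b2 - s12 * b1) / det
```

Because the design quantities are simple sums, the whole-field variant can assemble them for every voxel at once, which is where the reported ~160-fold speedup comes from.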

  5. SU-F-R-53: CT-Based Radiomics Analysis of Non-Small Cell Lung Cancer Patients Treated with Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, E; Coroller, T; Narayan, V

Purpose: Stereotactic body radiation therapy (SBRT) is the standard of care for medically inoperable non-small cell lung cancer (NSCLC) patients and has demonstrated excellent local control and survival. However, some patients still develop distant metastases and local recurrence, and therefore, there is a clinical need to identify patients at high risk of disease recurrence. The aim of the current study is to use a radiomics approach to identify imaging biomarkers, based on tumor phenotype, for clinical outcomes in SBRT patients. Methods: Radiomic features were extracted from free-breathing computed tomography (CT) images of 113 Stage I-II NSCLC patients treated with SBRT. Their association with, and prognostic performance for, distant metastasis (DM), locoregional recurrence (LRR) and survival was assessed and compared with conventional features (tumor volume and diameter) and clinical parameters (e.g. performance status, overall stage). The prognostic performance was evaluated using the concordance index (CI). Multivariate model performance was evaluated using cross validation. All p-values were corrected for multiple testing using the false discovery rate. Results: Radiomic features were associated with DM (one feature), LRR (one feature) and survival (four features). Conventional features were only associated with survival, and one clinical parameter was associated with LRR and survival. One radiomic feature was significantly prognostic for DM (CI=0.670, p<0.1 from random), while none of the conventional and clinical parameters were significant for DM. The multivariate radiomic model had a higher median CI (0.671) for DM than the conventional (0.618) and clinical models (0.617). Conclusion: Radiomic features have potential to be imaging biomarkers for clinical outcomes that conventional imaging metrics and clinical parameters cannot predict in SBRT patients, such as distant metastasis. Development of a radiomics biomarker that can identify patients at high risk of recurrence could facilitate personalization of their treatment regimen for an optimized clinical outcome. R.M. had consulting interest with Amgen (ended in 2015).
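The concordance index used above to evaluate prognostic performance can be sketched as Harrell's C with right-censoring: among usable pairs, count how often the subject with the higher predicted risk actually has the shorter time to event. The risk scores, times and event indicators below are made up:

```python
from itertools import combinations

def concordance_index(risk, time, event):
    # Harrell's C: a pair is usable only if the earlier time is an observed
    # event; it is concordant if the higher risk goes with the shorter time.
    concordant = ties = usable = 0
    for i, j in combinations(range(len(risk)), 2):
        a, b = (i, j) if time[i] < time[j] else (j, i)   # a has earlier time
        if time[a] == time[b] or not event[a]:
            continue                                      # not usable (censoring)
        usable += 1
        if risk[a] > risk[b]:
            concordant += 1
        elif risk[a] == risk[b]:
            ties += 1
    return (concordant + 0.5 * ties) / usable

risk  = [0.9, 0.7, 0.4, 0.2]        # hypothetical radiomic risk scores
time  = [5, 8, 14, 20]              # months to event or censoring
event = [1, 1, 0, 1]                # 1 = event observed, 0 = censored
ci = concordance_index(risk, time, event)
```

A CI of 0.5 is chance-level ranking, which is why the study reports significance as distance "from random" around 0.5.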

  6. Flow Cytometry Data Preparation Guidelines for Improved Automated Phenotypic Analysis.

    PubMed

    Jimenez-Carretero, Daniel; Ligos, José M; Martínez-López, María; Sancho, David; Montoya, María C

    2018-05-15

    Advances in flow cytometry (FCM) increasingly demand adoption of computational analysis tools to tackle the ever-growing data dimensionality. In this study, we tested different data input modes to evaluate how cytometry acquisition configuration and data compensation procedures affect the performance of unsupervised phenotyping tools. An analysis workflow was set up and tested for the detection of changes in reference bead subsets and in a rare subpopulation of murine lymph node CD103 + dendritic cells acquired by conventional or spectral cytometry. Raw spectral data or pseudospectral data acquired with the full set of available detectors by conventional cytometry consistently outperformed datasets acquired and compensated according to FCM standards. Our results thus challenge the paradigm of one-fluorochrome/one-parameter acquisition in FCM for unsupervised cluster-based analysis. Instead, we propose to configure instrument acquisition to use all available fluorescence detectors and to avoid integration and compensation procedures, thereby using raw spectral or pseudospectral data for improved automated phenotypic analysis. Copyright © 2018 by The American Association of Immunologists, Inc.
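For context, the compensation step the authors propose to skip can be sketched for two fluorochromes: observed detector signals are modeled as true per-fluorochrome signals mixed through a spillover matrix, and compensation inverts that mixing. The matrix values and signals below are illustrative, not from the study:

```python
def invert2(m):
    # Inverse of a 2x2 matrix.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# spill[i][j]: fraction of fluorochrome i's signal seen in detector j
# (hypothetical single-stain-derived values).
spill = [[1.00, 0.15],
         [0.08, 1.00]]

def observe(true):
    # Forward mixing: obs_j = sum_i true_i * spill[i][j].
    return [sum(true[i] * spill[i][j] for i in range(2)) for j in range(2)]

def compensate(obs):
    # Undo the mixing by multiplying with the inverse spillover matrix.
    inv = invert2(spill)
    return [sum(obs[j] * inv[j][i] for j in range(2)) for i in range(2)]

true = [1000.0, 500.0]
recovered = compensate(observe(true))
```

The study's point is that for unsupervised clustering, feeding the uncompensated (raw spectral or pseudospectral) detector values directly can work better than applying this inversion first.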

  7. Effects of cutting parameters and machining environments on surface roughness in hard turning using design of experiment

    NASA Astrophysics Data System (ADS)

    Mia, Mozammel; Bashir, Mahmood Al; Dhar, Nikhil Ranjan

    2016-07-01

Hard turning is gradually replacing the time-consuming conventional turning process, which is typically followed by grinding, by producing surface quality comparable to that of grinding. Hard-turned surface roughness depends on the cutting parameters, machining environment and tool insert configuration. In this article, the variation of the surface roughness of the produced surfaces with changes in tool insert configuration, use of coolant and different cutting parameters (cutting speed, feed rate) has been investigated. The investigation was performed by machining AISI 1060 steel, hardened to 56 HRC by heat treatment, using coated carbide inserts under two different machining environments. The depth of cut, fluid pressure and material hardness were kept constant. Design of Experiment (DOE) was performed to determine the number and combinations of the different cutting parameters. A full factorial analysis was performed to examine the effects of the main factors, as well as their interaction effects, on surface roughness. A statistical analysis of variance (ANOVA) was employed to determine the combined effect of cutting parameters, environment and tool configuration. The results of this analysis reveal that environment has the most significant impact on surface roughness, followed by feed rate and tool configuration, respectively.
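The main-effect calculation behind a full factorial analysis can be sketched as below. The coded 2^3 design is generic, and the roughness responses are invented so that the effect magnitudes mirror the reported ranking (environment > feed rate > tool configuration); they are not the study's data:

```python
from itertools import product

# 2^3 full factorial: environment, feed rate, tool configuration (coded -1/+1).
factors = ["environment", "feed", "tool"]
runs = list(product([-1, 1], repeat=3))

# Hypothetical surface-roughness responses (um), one per run, in the same order.
Ra = [2.3, 2.1, 2.7, 2.5, 1.5, 1.3, 1.9, 1.7]

def main_effect(col):
    # Main effect = mean response at the high level minus mean at the low level.
    hi = [y for run, y in zip(runs, Ra) if run[col] == 1]
    lo = [y for run, y in zip(runs, Ra) if run[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(i) for i, name in enumerate(factors)}
```

ANOVA then tests whether each effect is large relative to the residual noise; with replicated runs, the effect estimates above become the numerators of those F-tests.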

  8. Effectiveness of Vocal Therapy for the Elderly When Applying Conventional and Intensive Approaches: A Randomized Clinical Trial.

    PubMed

    Godoy, Juliana; Silverio, Kelly; Brasolotto, Alcione

    2018-05-21

    The aim of this study was to verify the effects of the method Vocal Therapy for the Elderly and the differences in treatment efficacy when it was administered intensively or in the conventional way. Twenty-seven elderly individuals were randomized into two groups and referred for 16 sessions of vocal therapy. The Intensive Group (IG) had therapy four times a week, whereas the Conventional Group had it twice a week. The effects of the therapy were assessed by auditory-perceptual analysis, the Voice-Related Quality of Life protocol, and visual-perceptive analysis of laryngoscopy examinations. The first stage consisted of evaluating the vocal quality and self-assessment of 15 subjects before and after a time period equal to that which they would undergo in vocal therapy. The second stage consisted of comparing the assessments of all participants in the week preceding the beginning of treatment, in the week following the end of treatment, and 1 month after that. There was no difference between perceptual voice parameters and self-assessment when the subjects were not undergoing therapy. When comparing the periods immediately before and after therapy, there was improvement in vocal quality and Voice-Related Quality of Life. One month later, the benefits that had been revealed through the self-assessment protocol, and some of the improvements in vocal parameters were maintained. There was no difference between the IG and Conventional Group with the exception of vocal fold bowing, which decreased in the IG group. The Vocal Therapy for the Elderly program is effective for treating voice presbyphonia. An intensive approach may be superior with regard to vocal fold bowing. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  9. Effects of hemolysis and lipemia interference on kaolin-activated thromboelastography, and comparison with conventional coagulation tests.

    PubMed

    Tang, Ning; Jin, Xi; Sun, Ziyong; Jian, Cui

    2017-04-01

The effects of hemolysis and lipemia on thromboelastography (TEG) analysis have scarcely been evaluated in human samples, and are neglected in clinical practice. We aimed to investigate the effects of in vitro mechanical hemolysis and lipemia on TEG analysis and conventional coagulation tests. Twenty-four healthy volunteers were enrolled in the study. Besides the controls, three groups with slight, moderate and severe mechanical hemolysis were constituted according to free hemoglobin (Hb) concentrations of 0.5-1.0, 2.0-6.0 and 7.0-13.0 g/L, respectively; and three groups with mild, moderate and high lipemia were established according to triglyceride concentrations of ∼6.0, ∼12.0, and ∼18.0 mmol/L, respectively. Four TEG parameters, reaction time (R), coagulation time (K), angle (α), and maximum amplitude (MA), were measured alongside conventional plasma tests, including prothrombin time (PT), activated partial thromboplastin time (APTT) and fibrinogen (FIB) by the mechanical method, and platelet count by the optical method. Results showed that the median R and MA values at moderate and severe hemolysis, and K at severe hemolysis, exceeded their respective reference intervals and were considered unacceptable. Median values of TEG parameters in lipemic samples were all within reference intervals. Bias values of the conventional plasma tests PT, APTT and FIB in hemolyzed or lipemic samples were all lower than the Clinical Laboratory Improvement Amendments (CLIA) allowable limits. Bias values of the platelet count at moderate to severe hemolysis and lipemia exceeded the CLIA allowable limits. In conclusion, TEG was in general more affected by mechanical hemolysis than the plasma coagulation tests. Pre-analytical variables should be taken into account when unexpected TEG results are obtained.
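The bias-versus-allowable-limit check used for the plasma tests can be sketched as below. The paired values and the 15% limit are illustrative assumptions for a PT comparison, not figures from the study or an authoritative CLIA table:

```python
def percent_bias(test, reference):
    # Bias of an interfered (e.g. hemolyzed) sample relative to its paired
    # control, expressed as a percentage of the control value.
    return 100.0 * (test - reference) / reference

def within_allowable(test, reference, allowable_limit_pct):
    # Acceptable if the absolute bias does not exceed the allowable limit.
    return abs(percent_bias(test, reference)) <= allowable_limit_pct

# Hypothetical paired PT results (s): control vs hemolyzed aliquot,
# checked against an assumed 15% allowable limit.
ok = within_allowable(test=12.1, reference=11.8, allowable_limit_pct=15.0)
```

TEG parameters lack such standardized allowable limits, which is why the study judged them against reference intervals instead.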

  10. Design study for a two-color beta measurement system

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Design analysis of the beam splitter combined two color beta system is presented. Conventional and dichroic beam splitters are discussed. Design analysis of the beta system employing two beams with focusing at separate points is presented. Alterations and basic parameters of the two beam system are discussed. Alterations in the focus of the initial laser and the returning beams are also discussed. Heterodyne efficiencies for the on axis and off axis reflected radiation are included.

  11. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
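The PFA idea of using an engineering failure model conjointly with parameter uncertainty to estimate a failure probability can be sketched with a toy Monte Carlo. The S-N fatigue model form, all distributions, and the service-life requirement below are invented for illustration and are not from the PFA documentation:

```python
import math
import random

def fatigue_failure_probability(n_draws=20000, seed=7):
    # Toy PFA-style calculation: propagate uncertainty in the parameters of
    # an S-N fatigue model, N = A * S**-b, and count the fraction of draws
    # whose predicted life falls short of the required service life.
    rng = random.Random(seed)
    required_cycles = 1e5
    failures = 0
    for _ in range(n_draws):
        a = rng.lognormvariate(math.log(1e12), 0.5)   # model parameter A
        b = rng.gauss(3.0, 0.1)                        # S-N exponent
        stress = rng.gauss(200.0, 10.0)                # load amplitude (MPa)
        life = a * stress ** -b
        if life < required_cycles:
            failures += 1
    return failures / n_draws

p_fail = fatigue_failure_probability()
```

In the full methodology this prior failure distribution would then be updated with test and flight experience via the statistical procedures the abstract describes.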

  12. Simulation-based analysis of performance parameters of microstrip antennas with criss-cross metamaterial-based artificial substrate

    NASA Astrophysics Data System (ADS)

    Inamdar, Kirti; Kosta, Y. P.; Patnaik, S.

    2014-10-01

    In this paper, we present the design of a metamaterial-based microstrip patch antenna optimized for bandwidth and multiple-frequency operation. A criss-cross structure, inspired by the famous Jerusalem cross, is proposed. The theory and design formulas for calculating the various parameters of the proposed antenna are presented. The design starts with an analysis of the proposed unit-cell structure, validating its response using HFSS Version 13 software to obtain the negative ε and μ response characteristic of a metamaterial. Following this, a metamaterial-based microstrip patch antenna is designed. A detailed comparative study explores the response of the metamaterial patch against that of a conventional patch. Finally, antenna parameters such as gain, bandwidth, radiation pattern, and multiple-frequency response are investigated and optimized, and the results are presented in tables and response graphs. It is also observed that the physical dimension of the metamaterial-based patch antenna is smaller than that of its conventional counterpart operating at the same fundamental frequency. The challenging part was to develop a metamaterial based on signature structures and techniques that offer advantages in terms of bandwidth and multiple-frequency operation, which is demonstrated in this paper. The unique shape proposed here improves bandwidth without reducing the gain of the antenna.

  13. Heat and mass transfer of Williamson nanofluid flow yield by an inclined Lorentz force over a nonlinear stretching sheet

    NASA Astrophysics Data System (ADS)

    Khan, Mair; Malik, M. Y.; Salahuddin, T.; Hussian, Arif.

    2018-03-01

    The present analysis explores the computational solution of a problem addressing variable viscosity and inclined Lorentz force effects on Williamson nanofluid flow over a stretching sheet. Viscosity is assumed to vary as a linear function of temperature. The governing system of PDEs is converted into nonlinear ODEs by applying suitable transformations, and computational solutions are obtained with the efficient shooting numerical technique. The effects of the controlling parameters, i.e. stretching index, inclination angle, Hartmann number, Weissenberg number, variable viscosity parameter, mixed convection parameter, Brownian motion parameter, Prandtl number, Lewis number, thermophoresis parameter and chemical reactive species parameter, on the concentration, temperature and velocity profiles are examined. Additionally, the friction factor coefficient, Nusselt number and Sherwood number are described with the help of graphs and tables versus the flow-controlling parameters.
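
    The shooting technique mentioned in the abstract can be illustrated on a far simpler boundary-value problem than the Williamson-nanofluid equations. The ODE below (y'' = y with y(0) = 0, y(1) = 1) is a hypothetical stand-in chosen only because its exact solution, y = sinh(x)/sinh(1), is known:

```python
from math import sinh

def rk4_shoot(slope, h=1e-3):
    """Integrate y'' = y from x=0 to x=1 with y(0)=0, y'(0)=slope,
    using classical 4th-order Runge-Kutta; return y(1)."""
    def f(state):
        y, yp = state
        return (yp, y)          # y' = yp, yp' = y
    y, yp = 0.0, slope
    for _ in range(round(1.0 / h)):
        k1 = f((y, yp))
        k2 = f((y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1]))
        k3 = f((y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1]))
        k4 = f((y + h * k3[0], yp + h * k3[1]))
        y  += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

def shoot(target=1.0, lo=0.0, hi=5.0, tol=1e-10):
    """Bisect on the unknown initial slope y'(0) until the far
    boundary condition y(1) = target is satisfied."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rk4_shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(shoot())   # exact answer is 1/sinh(1) ≈ 0.8509
```

    Real similarity-transformed boundary-layer problems replace the toy ODE with the coupled momentum, energy and concentration equations and shoot on several unknown initial slopes simultaneously.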

  14. dada - a web-based 2D detector analysis tool

    NASA Astrophysics Data System (ADS)

    Osterhoff, Markus

    2017-06-01

    The data daemon, dada, is a server backend for unified access to 2D pixel detector image data stored with different detectors, file formats and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines from pixel binning over azimuthal integration to raster scan processing. Common user interactions with dada are by a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI) which can also be written by hand or scripts for batch processing.
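
    The parameters-in-a-URI idea can be mimicked with the standard library. The endpoint and parameter names below are invented for illustration; the abstract does not specify dada's actual URI scheme:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def build_analysis_uri(base, **params):
    """Encode an analysis request (detector, scan, processing options)
    into a single URI so it can be bookmarked, shared, or generated
    by batch-processing scripts. Keys are sorted for reproducibility."""
    return base + "?" + urlencode(sorted(params.items()))

uri = build_analysis_uri(
    "https://example.org/dada",          # hypothetical frontend endpoint
    detector="pilatus", scan=42, binning=4, azimuthal="1")
print(uri)

# A script can parse the same URI back into its parameters:
q = parse_qs(urlsplit(uri).query)
print(q["scan"])                         # ['42']
```

    Because the full analysis state lives in the URI, re-running or tweaking an analysis reduces to editing a query string, which is what makes hand-written or scripted batch processing convenient.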

  15. Correlation analysis between pulmonary function test parameters and CT image parameters of emphysema

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Pei; Li, Chia-Chen; Yu, Chong-Jen; Chang, Yeun-Chung; Wang, Cheng-Yi; Yu, Wen-Kuang; Chen, Chung-Ming

    2016-03-01

    Conventionally, diagnosis and severity classification of Chronic Obstructive Pulmonary Disease (COPD) are based on pulmonary function tests (PFTs). To reduce the need for PFTs in the diagnosis of COPD, this paper proposes a correlation model between lung CT images and the crucial PFT index FEV1/FVC, a severity index of COPD that distinguishes a normal subject from a COPD patient. A new lung CT image index, the Mirage Index (MI), has been developed to describe the severity of COPD with predominantly emphysematous disease. Unlike the conventional Pixel Index (PI), which takes into account all voxels with HU values less than -950, the proposed approach models these voxels as bullae balls of different sizes and defines MI as a weighted sum of the percentages of the bullae balls of different size classes and locations in a lung. To evaluate the efficacy of the proposed model, 45 emphysema subjects of different severity were involved in this study. The correlation between MI and FEV1/FVC is -0.75+/-0.08, which substantially outperforms the correlation between the conventional index PI and FEV1/FVC, i.e., -0.63+/-0.11. Moreover, we have shown that the emphysematous lesion areas constituted by small bullae balls are basically irrelevant to FEV1/FVC. The statistical analysis and special case study results show that MI offers the better assessment across the different analyses.
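
    Correlations of this kind are typically plain Pearson coefficients. A minimal sketch, using invented toy numbers rather than the study's 45 subjects (a higher emphysema index should accompany a lower FEV1/FVC):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient, the quantity used to compare
    an image-derived index against the spirometric index FEV1/FVC."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Invented data: image index vs. FEV1/FVC for five hypothetical subjects
mi       = [5.0, 12.0, 20.0, 33.0, 41.0]
fev1_fvc = [0.78, 0.71, 0.62, 0.51, 0.44]
print(round(pearson_r(mi, fev1_fvc), 3))   # strongly negative
```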

  16. Semiparametric Bayesian analysis of gene-environment interactions with error in measurement of environmental covariates and missing genetic data.

    PubMed

    Lobach, Iryna; Mallick, Bani; Carroll, Raymond J

    2011-01-01

    Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of the data and leading to loss of power and spurious or masked associations. We develop a Bayesian methodology for the analysis of case-control studies in which measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov Chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.

  17. Slow Crack Growth Analysis of Brittle Materials with Finite Thickness Subjected to Constant Stress-Rate Flexural Loading

    NASA Technical Reports Server (NTRS)

    Chio, S. R.; Gyekenyesi, J. P.

    1999-01-01

    A two-dimensional numerical analysis of slow crack growth (SCG) was performed for brittle materials with finite thickness subjected to constant stress-rate ("dynamic fatigue") loading in flexure. The numerical solution showed that the conventional, simple, one-dimensional analytical solution can be used with a maximum error of about 5% in determining the SCG parameters of a brittle material under the conditions of a normalized thickness (the ratio of specimen thickness to initial crack size) T > 3.3 and an SCG parameter n > 10. The change in crack shape from semicircular to elliptical configurations was significant particularly at both low stress rate and low T, attributed to the predominant difference in stress intensity factor along the crack front. The numerical solution for the SCG parameters was supported within the experimental range by data obtained from constant stress-rate flexural testing of soda-lime glass microslides at ambient temperature.

  18. Identification of Optimum Magnetic Behavior of Nanocrystalline Co2FeAl-Type Heusler Alloy Powders Using Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Srivastava, Y.; Srivastava, S.; Boriwal, L.

    2016-09-01

    Mechanical alloying is a novel solid-state process that has received considerable attention due to its many advantages over conventional processes. In the present work, Co2FeAl Heusler alloy powder was prepared successfully from a premix of the basic powders Cobalt (Co), Iron (Fe) and Aluminum (Al) in the stoichiometry 60Co-26Fe-14Al (weight %) by a novel mechano-chemical route. Magnetic properties of the mechanically alloyed powders were characterized by vibrating sample magnetometer (VSM). A two-factor, five-level design matrix was applied to the experimental process, and the experimental results were used for response surface methodology. The interaction between the input process parameters and the response was established with the help of regression analysis, and the analysis of variance technique was applied to check the adequacy of the developed model and the significance of the process parameters. A test case study was performed with parameters that were not selected for the main experimentation but fell within the same range. With response surface methodology, the process parameters were optimized to obtain improved magnetic properties, and the optimum process parameters were identified using numerical and graphical optimization techniques.

  19. Evaluation of optimized b-value sampling schemas for diffusion kurtosis imaging with an application to stroke patient data

    PubMed Central

    Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong

    2013-01-01

    Diffusion kurtosis imaging (DKI) is a new method of magnetic resonance imaging (MRI) that provides non-Gaussian information that is not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation; this process is usually time-consuming, so fewer b-values are preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. Acquisition schemas that sampled b-values distributed toward the two ends of the range proved optimal. Compared to conventional schemas using equally spaced b-values (ESB), the optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. Following a ranked list of optimized schemas resulting from the evaluation, we recommend the 3b schema based on its estimation accuracy and time efficiency; it needs data from only 3 b-values, at 0, around 800 and around 2600 s/mm2. Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm2) DKI schema in practical clinical applications. PMID:23735303
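
    A 3-point schema admits a closed-form fit: with S(0) = S0 measured, the standard DKI signal model ln S(b) = ln S0 - bD + (1/6)b²D²K is linear in the pair (D, D²K), so two nonzero b-values determine D and K exactly. A minimal sketch with synthetic signals (not patient data; b in s/mm²):

```python
from math import exp, log

def fit_dki_3b(s0, b1, s1, b2, s2):
    """Solve the DKI model S(b) = S0*exp(-b*D + b^2*D^2*K/6) exactly
    from the signal at b=0 and at two nonzero b-values. Returns (D, K)."""
    y1, y2 = log(s0 / s1), log(s0 / s2)   # y = b*u - (b^2/6)*v
    # Linear 2x2 system in u = D and v = D^2*K, solved by Cramer's rule:
    det = b1 * (-b2**2 / 6) - b2 * (-b1**2 / 6)
    u = (y1 * (-b2**2 / 6) - y2 * (-b1**2 / 6)) / det
    v = (b1 * y2 - b2 * y1) / det
    return u, v / u**2

# Synthetic check using the recommended b-values (0, 1000, 2500 s/mm^2)
D_true, K_true, S0 = 1.0e-3, 0.9, 100.0
sig = lambda b: S0 * exp(-b * D_true + b**2 * D_true**2 * K_true / 6)
D, K = fit_dki_3b(S0, 1000, sig(1000), 2500, sig(2500))
print(D, K)   # recovers D_true and K_true
```

    With more than three b-values the same model is instead fit by least squares, which is where the choice and spacing of b-values affects the fitting error the abstract discusses.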

  20. A Comparison of the Noise Characteristics of a Conventional Slat and Krueger Flap

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Hutcheson, Florence V.; Thomas, Russell H.; Housman, Jeffery A.

    2016-01-01

    An aeroacoustic test of two types of leading-edge high-lift devices has been conducted in the NASA Langley Quiet Flow Facility. The test compares a conventional slat with a notional equivalent-mission Krueger flap. The test matrix includes points that allow for direct comparison of the conventional and Krueger devices for equivalent-mission configurations, where the two high-lift devices satisfy the same lift requirements for a free air flight path at the same cruise airfoil angle of attack. Measurements are made for multiple Mach numbers and directivity angles. Results indicate that the Krueger flap shows similar agreement to the expected power law scaling of a conventional flap, both in terms of Strouhal number and fixed frequency (as a surrogate for Helmholtz number). Directivity patterns vary depending on the specific slat and Krueger orientations. Varying the slat gap while holding overlap constant has the same influence on both the conventional slat and Krueger flap acoustic signature. Closing the gap shows dramatic reduction in levels for both devices. Varying the Krueger overlap has a different effect on the data when compared to varying the slat overlap, but analysis is limited by acoustic sources that regularly present themselves in model-scale wind tunnel testing but are not present for full-scale vehicles. The Krueger cavity is found to have some influence on level and directivity, though not as much as the other considered parameter variations. Overall, while the spectra of the two devices are different in detail, their scaling behavior for varying parameters is extremely similar.

  1. Bringing a transgenic crop to market: where compositional analysis fits.

    PubMed

    Privalle, Laura S; Gillikin, Nancy; Wandelt, Christine

    2013-09-04

    In the process of developing a biotechnology product, thousands of genes and transformation events are evaluated to select the event that will be commercialized. The ideal event is identified on the basis of multiple characteristics including trait efficacy, the molecular characteristics of the insert, and agronomic performance. Once selected, the commercial event is subjected to a rigorous safety evaluation taking a multipronged approach including examination of the safety of the gene and gene product - the protein, plant performance, impact of cultivating the crop on the environment, agronomic performance, and equivalence of the crop/food to conventional crops/food - by compositional analysis. The compositional analysis is composed of a comparison of the nutrient and antinutrient composition of the crop containing the event, its parental line (variety), and other conventional lines (varieties). Different geographies have different requirements for the compositional analysis studies. Parameters that vary include the number of years (seasons) and locations (environments) to be evaluated, the appropriate comparator(s), analytes to be evaluated, and statistical analysis. Specific examples of compositional analysis results will be presented.

  2. A 3D generic inverse dynamic method using wrench notation and quaternion algebra.

    PubMed

    Dumas, R; Aissaoui, R; de Guise, J A

    2004-06-01

    In the literature, conventional 3D inverse dynamic models are limited in three aspects related to inverse dynamic notation, body segment parameters and kinematic formalism. First, conventional notation yields separate computations of the forces and moments with successive coordinate system transformations. Secondly, the way conventional body segment parameters are defined is based on the assumption that the inertia tensor is principal and the centre of mass is located between the proximal and distal ends. Thirdly, the conventional kinematic formalism uses Euler or Cardanic angles that are sequence-dependent and suffer from singularities. In order to overcome these limitations, this paper presents a new generic method for inverse dynamics. This generic method is based on wrench notation for inverse dynamics, a general definition of body segment parameters and quaternion algebra for the kinematic formalism.

  3. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
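
    The tuner-to-health-parameter step can be sketched as a simple linear transformation from the small estimated tuning vector to the larger health-parameter vector. The matrix dimensions and entries below are placeholders for illustration, not an actual engine model, and the abstract does not give the real transformation:

```python
def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# Hypothetical sizes: 10 health parameters but only 6 sensors, so a
# tuning vector of dimension 6 is what the Kalman filter can estimate.
# The 10x6 matrix below maps estimated tuners back to approximate
# health-parameter deltas; its entries are placeholders.
transform = [[1.0 if i % 6 == j else 0.1 for j in range(6)]
             for i in range(10)]

q_hat = [0.02, -0.01, 0.005, 0.0, -0.03, 0.01]  # estimated tuners
h_hat = mat_vec(transform, q_hat)               # approximate health deltas
print(len(h_hat))   # 10
```

    The study's point is that fault isolation can work directly on the low-dimensional q_hat, sidestepping the accuracy loss incurred when the back-transformed h_hat is spread over more parameters than the sensors can support.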

  4. Comparing the Efficiency of Two Different Extraction Techniques in Removal of Maxillary Third Molars: A Randomized Controlled Trial.

    PubMed

    Edward, Joseph; Aziz, Mubarak A; Madhu Usha, Arjun; Narayanan, Jyothi K

    2017-12-01

    Extractions are routine procedures in dental surgery. Traditional extraction techniques use a combination of severing the periodontal attachment, luxation with an elevator, and removal with forceps. A new technique for extraction of the maxillary third molar, the Joedds technique, is introduced in this study and compared with the conventional technique. One hundred people were included in the study and divided into two groups by simple random sampling. In one group the conventional technique of maxillary third molar extraction was used, and in the second the Joedds technique was used. Statistical analysis was carried out with Student's t test. Analysis of the 100 patients based on the study parameters showed that the novel Joedds technique caused minimal trauma to surrounding tissues and fewer tuberosity and root fractures, and the time taken for extraction was <2 min compared with the other group of patients. This novel technique proved better than the conventional third molar extraction technique, with minimal complications, provided cases are properly selected and the right technique is used.

  5. In-vitro analysis of forces in conventional and ultrasonically assisted drilling of bone.

    PubMed

    Alam, K; Hassan, Edris; Imran, Syed Husain; Khan, Mushtaq

    2016-05-12

    Drilling of bone is widely performed in orthopaedics for repair and reconstruction of bone. The current paper focuses on efforts to minimize force generation during the drilling process. Ultrasonically Assisted Drilling (UAD) is a possible option to replace Conventional Drilling (CD) in bone surgical procedures. The purpose of this study was to investigate and analyze the effect of drilling parameters and ultrasonic parameters on the level of drilling thrust force in the presence of water irrigation. Drilling tests were performed on young bovine femoral bone using different parameters: spindle speeds, feed rates, coolant flow rates, and frequencies and amplitudes of vibration. The drilling force dropped significantly with increasing drill rotation speed in both types of drilling. An increase in feed rate was more influential in raising the drilling force in CD than in UAD. The force also dropped significantly when ultrasonic vibrations up to 10 kHz were imposed on the drill. The drill force was found to be unaffected by the range of amplitudes and the amount of water supplied to the drilling region in UAD. Low-frequency vibrations with irrigation can be successfully used for safe and efficient drilling in bone.

  6. Clinical impact of 18F-FDG positron emission tomography/CT on adenoid cystic carcinoma of the head and neck.

    PubMed

    Jung, Ji-Hoon; Lee, Sang-Woo; Son, Seung Hyun; Kim, Choon-Young; Lee, Chang-Hee; Jeong, Ju Hye; Jeong, Shin Young; Ahn, Byeong-Cheol; Lee, Jaetae

    2017-03-01

    The purpose of this retrospective study was to assess the diagnostic value of 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT and the prognostic value of metabolic PET parameters in patients with adenoid cystic carcinoma of the head and neck (ACCHN). Forty patients with newly diagnosed ACCHN were enrolled in this study. We investigated the diagnostic value of 18F-FDG PET/CT for detection and staging compared to conventional CT. Kaplan-Meier survival analysis for progression-free survival (PFS) was performed with clinicopathological factors and metabolic PET parameters. The 18F-FDG PET/CT showed comparable sensitivity (92.3%) to conventional CT for lesion detection, and changed staging and the management plan in 6 patients (15.0%). Lower PFS rates were associated with advanced T classification, advanced TNM classification, high maximum standardized uptake value (SUVmax; >5.1), and high total lesion glycolysis (>40.1) of the primary tumor. The 18F-FDG PET/CT can provide additional information for initial staging, and metabolic PET parameters may serve as prognostic factors of ACCHN. © 2016 Wiley Periodicals, Inc. Head Neck 39: 447-455, 2017.

  7. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method to actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.

  8. Nonlinear, discrete flood event models, 1. Bayesian estimation of parameters

    NASA Astrophysics Data System (ADS)

    Bates, Bryson C.; Townley, Lloyd R.

    1988-05-01

    In this paper (Part 1), a Bayesian procedure for parameter estimation is applied to discrete flood event models. The essence of the procedure is the minimisation of a sum of squares function for models in which the computed peak discharge is nonlinear in terms of the parameters. This objective function is dependent on the observed and computed peak discharges for several storms on the catchment, information on the structure of observation error, and prior information on parameter values. The posterior covariance matrix gives a measure of the precision of the estimated parameters. The procedure is demonstrated using rainfall and runoff data from seven Australian catchments. It is concluded that the procedure is a powerful alternative to conventional parameter estimation techniques in situations where a number of floods are available for parameter estimation. Parts 2 and 3 (Bates, this volume; Bates and Townley, this volume) will discuss the application of statistical nonlinearity measures and prediction uncertainty analysis to calibrated flood models.
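
    The kind of objective function described, a data-misfit sum of squares over several storms plus a quadratic penalty from prior information on the parameters, can be sketched for a one-parameter toy model. The "flood model", discharges and variances below are invented for illustration, and the crude grid scan stands in for a proper minimiser:

```python
def posterior_ssq(theta, q_obs, model, sigma2, theta_prior, var_prior):
    """Penalized sum-of-squares objective: misfit between observed and
    computed peak discharges over several storms, plus a quadratic
    term encoding prior information on the parameter (scalars only)."""
    misfit = sum((q - model(theta, i)) ** 2
                 for i, q in enumerate(q_obs)) / sigma2
    prior = (theta - theta_prior) ** 2 / var_prior
    return misfit + prior

# Toy model: computed peak discharge nonlinear in the parameter theta.
model = lambda theta, storm: theta ** 1.5 * (1 + storm)
q_obs = [2.9, 5.7, 8.6]          # invented peak discharges, 3 storms

# Crude 1-D minimisation by scanning candidate parameter values:
cands = [i / 1000 for i in range(1000, 3000)]
best = min(cands, key=lambda t: posterior_ssq(t, q_obs, model,
                                              0.1, 2.0, 1.0))
print(best)   # near 2.0
```

    In the paper's multi-parameter setting the same objective is minimised by a nonlinear least-squares algorithm, and the curvature at the minimum yields the posterior covariance matrix.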

  9. Multi-parameter optimization of monolithic high-index contrast grating reflectors

    NASA Astrophysics Data System (ADS)

    Marciniak, Magdalena; Gebski, Marcin; Dems, Maciej; Wasiak, Michał; Czyszanowski, Tomasz

    2016-03-01

    Conventional High-index Contrast Gratings (HCGs) consist of periodically distributed high refractive index stripes surrounded by low index media. Practically, such a low/high index stack can be fabricated in several ways; however, the low refractive index layers are electrical insulators with poor thermal conductivity. Monolithic High-index Contrast Gratings (MHCGs) overcome those limitations, since they can be implemented in any material with a real refractive index larger than 1.75 without the need to combine low and high refractive index materials. The freedom to use various materials provides more efficient current injection and better heat flow through the mirror, in contrast to conventional HCGs. MHCGs can simplify the construction of VCSELs, reducing their epitaxial design to a monolithic wafer with the carrier confinement and active region inside and stripes etched on both surfaces in post-processing. We present a numerical analysis of MHCGs using a three-dimensional, fully vectorial optical model, and investigate possible MHCG designs using multidimensional optimization of the grating parameters for different refractive indices.

  10. Removal Efficiency of COD, Total P and Total N Components from Municipal Wastewater using Hollow-fibre MBR.

    PubMed

    Petrinić, Irena; Curlin, Mirjana; Korenak, Jasmina; Simonič, Marjana

    2011-06-01

    The membrane bioreactor (MBR) integrates the conventional activated sludge system with advanced membrane separation for wastewater treatment. Over the last decade, a number of MBR systems have been constructed worldwide, and the system is now accepted as a technology of choice for wastewater treatment, especially for municipal wastewater. The aim of this work was to investigate a submerged MBR and compare it with a conventional activated sludge system for the treatment of municipal wastewater in Maribor, Slovenia. It can be concluded from the results that satisfactory removal efficiencies were achieved for chemical oxygen demand, total phosphorus, and total nitrogen: 97%, 75%, and 90%, respectively. The efficiencies of the ultrafiltration membrane for the same parameters were also determined and compared with the biological treatment. The results show an additional improvement in the quality of the permeate, but primary treatment is also very important: for successful application of the MBR system, a finer grid for primary treatment is needed.

  11. Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com

    We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive among all stacking interactions. Moreover, we also establish that it is the nature of a stacking interaction that has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to measuring the rate constants with the conventional unbiased way of optimization.

  12. A model of objective weighting for EIA.

    PubMed

    Ying, L G; Liu, Y C

    1995-06-01

    In spite of progress in research on environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not as yet been properly solved. This paper presents an approach to objective weighting using a procedure of P_ij principal component-factor analysis (P_ij PCFA), which specifically suits those parameters measured directly on physical scales. The P_ij PCFA weighting procedure reforms conventional weighting practice in two respects: first, expert subjective judgment is replaced by the standardized measure P_ij as the original input to weight processing and, secondly, principal component-factor analysis is introduced to evaluate the environmental parameters for their respective contributions to the totality of the regional ecosystem. Not only is P_ij PCFA weighting logical in its theoretical reasoning, it also suits practically all levels of professional routine in natural environmental assessment and impact analysis. Having been shown to be objective and accurate in the EIA case study of Chuansha County in Shanghai, China, the P_ij PCFA weighting procedure has the potential to be applied in other geographical fields that need to assign weights to parameters measured on physical scales.
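
    The core idea, deriving objective weights from the loadings of a principal component rather than from expert judgment, can be sketched with pure-Python power iteration. This is a simplified stand-in for the full P_ij PCFA procedure (one component, weights from absolute loadings), and the site measurements are invented:

```python
from math import sqrt

def covariance(data):
    """Covariance matrix of the data (rows = sites, columns =
    environmental parameters)."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j])
                 for row in data) / (n - 1)
             for j in range(p)] for i in range(p)]

def first_pc(cov, iters=200):
    """Leading eigenvector of the covariance matrix by power iteration."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v)))
             for i in range(len(v))]
        norm = sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def objective_weights(data):
    """Weights proportional to |loadings| on the first principal
    component, normalized to sum to 1."""
    load = first_pc(covariance(data))
    total = sum(abs(x) for x in load)
    return [abs(x) / total for x in load]

# Invented measurements of 3 parameters at 5 sites:
data = [[0.2, 1.1, 3.0], [0.4, 1.0, 3.2], [0.9, 1.4, 2.9],
        [1.1, 1.3, 3.1], [1.5, 1.6, 3.0]]
w = objective_weights(data)
print(w)   # the highest-variance parameter receives the largest weight
```

    The actual P_ij PCFA procedure retains several factors and interprets the loadings as each parameter's contribution to the regional ecosystem; this sketch only shows where objective, data-driven weights come from.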

  13. Differences in conventional cardiovascular risk factors in two ethnic groups in India.

    PubMed

    Garg, Priyanka Rani; Kabita, Salam; Singh, Huidrom Suraj; Saraswathy, Kallur Nava; Sinha, Ekata; Kalla, Aloke Kumar; Chongtham, Dhanaraj Singh

    2012-01-01

    Studies have been carried out at national and international levels to assess ethnic variations in the prevalence of cardiovascular diseases and their risk factors. However, ethnic variations in the contribution of various risk factors to complex diseases have been scarcely studied. Our study examined such variations in two ethnic groups in India, namely, Meiteis of Manipur (northeast India) and Aggarwals of Delhi (north India). Through random sampling, we selected 635 participants from the Meitei community and 181 Aggarwals from the Aggarwal Dharmarth Hospital, Delhi. Patients with coronary artery disease (CAD) and hypertension were identified based on their recent medical diagnostic history. Anthropometric parameters such as height, weight, waist and hip circumferences along with physiological parameters (blood pressures, both systolic and diastolic) and a biochemical parameter (lipid profile) were measured for all study participants. Patient parameters were available from the medical reports recorded when patients were first diagnosed. Among CAD individuals, the Aggarwals showed higher mean values of weight, body mass index (BMI), systolic blood pressure (SBP), diastolic blood pressure (DBP), total cholesterol (TC), triglyceride (TG), low density lipoprotein (LDL), and very low density lipoprotein (VLDL) but had lower high density lipoprotein (HDL) levels than the Meiteis. The same trend for weight, BMI and lipid parameters could be seen among hypertensive individuals. In stepwise regression analysis, SBP, LDL and TG were found to significantly contribute to the risk for CAD in the Aggarwals; whereas in the Meiteis, SBP, VLDL, HDL, TC and LDL were found to significantly contribute to the risk for CAD. In hypertensive Aggarwal participants, SBP, DBP and waist-to-hip ratio were significant contributors for hypertension; whereas SBP, DBP, and height contributed significantly to risk for hypertension among the Meiteis.
We found marked differences in conventional risk factors between the two ethnic groups. In India, as elsewhere, population substructuring, and hence genetic isolation, is pronounced. More research is needed within this context to unveil the conventional risk factors for complex diseases.
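
    The step-wise regression used above can be sketched as greedy forward selection: starting from an empty model, repeatedly add the predictor that most reduces the residual sum of squares. This is a simplified stand-in for the significance-threshold-based stepwise procedure typically used in such studies; the variable names and data below are illustrative, not the study's.

```python
def ols_rss(X, y):
    """Residual sum of squares of a least-squares fit y ~ X (with intercept)."""
    rows = [[1.0] + list(r) for r in X]          # prepend intercept column
    p = len(rows[0])
    # normal equations A beta = b, where A = X'X and b = X'y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for k in range(col + 1, p):
            f = A[k][col] / A[col][col]
            for c in range(col, p):
                A[k][c] -= f * A[col][c]
            b[k] -= f * b[col]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return sum((yi - sum(bb * xi for bb, xi in zip(beta, r))) ** 2
               for r, yi in zip(rows, y))

def forward_select(names, X_cols, y, k):
    """Greedily pick k predictors (given as columns) by RSS reduction."""
    chosen, remaining = [], list(range(len(names)))
    for _ in range(k):
        best = min(remaining, key=lambda j: ols_rss(
            [[X_cols[i], ][0][r] for i in chosen + [j]] and
            [[X_cols[i][r] for i in chosen + [j]] for r in range(len(y))], y))
        chosen.append(best)
        remaining.remove(best)
    return [names[i] for i in chosen]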

  14. A flowing atmospheric pressure afterglow as an ion source coupled to a differential mobility analyzer for volatile organic compound detection.

    PubMed

    Bouza, Marcos; Orejas, Jaime; López-Vidal, Silvia; Pisonero, Jorge; Bordel, Nerea; Pereiro, Rosario; Sanz-Medel, Alfredo

    2016-05-23

    Atmospheric pressure glow discharges have been widely used in the last decade as ion sources in ambient mass spectrometry analyses. Here, an in-house flowing atmospheric pressure afterglow (FAPA) has been developed as an alternative ion source for differential mobility analysis (DMA). The discharge source parameters (inter-electrode distance, current and helium flow rate) determining the atmospheric plasma characteristics were optimized for DMA spectral simplicity and the highest achievable sensitivity while keeping adequate plasma stability; the FAPA working conditions finally selected were 35 mA, 1 L min⁻¹ of He and an inter-electrode distance of 8 mm. Room temperature in the DMA proved to be adequate for the coupling and for chemical analysis with the FAPA source. Positive and negative ions of different volatile organic compounds were tested and analysed by FAPA-DMA using a Faraday cup as a detector, and proper operation in both modes was possible without changes in FAPA operational parameters. The FAPA ionization source showed simpler ion mobility spectra with narrower peaks and better, or similar, sensitivity compared with conventional UV-photoionization for DMA analysis in positive mode. In particular, the negative mode proved to be a promising field of further research for the FAPA ion source coupled to ion mobility, clearly competitive with other more conventional plasmas such as corona discharge.

  15. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    PubMed

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
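
    The pixel-wise Patlak step of the indirect method is just a linear regression: at late times, C_T(t)/Cp(t) plotted against the Patlak abscissa (the time-integral of the plasma input divided by Cp(t)) falls on a line whose slope is the net influx rate Ki and whose intercept is the distribution volume. A minimal sketch of that fit, on synthetic single-pixel data rather than reconstructed images:

```python
def patlak_fit(times, c_plasma, c_tissue):
    """Return (Ki, V0) from a least-squares fit of the Patlak plot."""
    # cumulative integral of the plasma input by the trapezoidal rule
    integral = [0.0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        integral.append(integral[-1] + 0.5 * (c_plasma[i] + c_plasma[i - 1]) * dt)
    # Patlak coordinates: x = int_0^t Cp / Cp(t), y = C_T(t) / Cp(t)
    xs = [integral[i] / c_plasma[i] for i in range(len(times))]
    ys = [c_tissue[i] / c_plasma[i] for i in range(len(times))]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx   # (Ki, V0)
```

Applied to every pixel's TAC, the slopes form the Ki parametric image; the direct method instead folds this linear model into the reconstruction itself.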

  16. Implementation of interconnect simulation tools in spice

    NASA Technical Reports Server (NTRS)

    Satsangi, H.; Schutt-Aine, J. E.

    1993-01-01

    Accurate computer simulation of high speed digital computer circuits and communication circuits requires a multimode approach to simulate both the devices and the interconnects between devices. Classical circuit analysis algorithms (lumped parameter) are needed for circuit devices and the network formed by the interconnected devices. The interconnects, however, have to be modeled as transmission lines which incorporate electromagnetic field analysis. An approach to writing a multimode simulator is to take an existing software package which performs either lumped parameter analysis or field analysis and add the missing type of analysis routines to the package. In this work a traditionally lumped parameter simulator, SPICE, is modified so that it will perform lossy transmission line analysis using a different model approach. Modifying SPICE3E2 or any other large software package is not a trivial task. An understanding of the programming conventions used, simulation software, and simulation algorithms is required. This thesis was written to clarify the procedure for installing a device into SPICE3E2. The installation of three devices is documented and the installations of the first two provide a foundation for installation of the lossy line which is the third device. The details of discussions are specific to SPICE, but the concepts will be helpful when performing installations into other circuit analysis packages.

  17. Reliability Analysis of Retaining Walls Subjected to Blast Loading by Finite Element Approach

    NASA Astrophysics Data System (ADS)

    GuhaRay, Anasua; Mondal, Stuti; Mohiuddin, Hisham Hasan

    2018-02-01

    Conventional design methods adopt factors of safety based on practice and experience, and are deterministic in nature. The limit state method, though not completely deterministic, does not account for design parameters that are inherently variable, such as the cohesion and angle of internal friction of soil. Reliability analysis provides a means of incorporating these variations into the analysis, and hence results in a more realistic design. Several studies have been carried out on the reliability of reinforced concrete walls and masonry walls under explosions, and reliability analysis of retaining structures against various kinds of failure has also been done. However, very few research works are available on reliability analysis of retaining walls subjected to blast loading. The present paper therefore considers the effect of variation of geotechnical parameters when a retaining wall is subjected to blast loading. It is found, however, that the variation of geotechnical random variables does not have a significant effect on the stability of retaining walls subjected to blast loading.
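
    The core idea, treating soil parameters as random variables rather than fixed design values, can be sketched with a Monte Carlo reliability estimate. The factor-of-safety model, the lognormal parameter distributions and the coefficients of variation below are illustrative assumptions, not the paper's finite element model:

```python
import math, random

def reliability_index(fs_model, mean, cov, n=20000, seed=1):
    """Monte Carlo estimate of the reliability index beta = (mu_FS - 1) / sigma_FS
    for a factor-of-safety model with lognormally distributed soil parameters."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # lognormal draws whose mean and coefficient of variation match the inputs
        params = {k: rng.lognormvariate(
                      math.log(m) - 0.5 * math.log(1.0 + cov[k] ** 2),
                      math.sqrt(math.log(1.0 + cov[k] ** 2)))
                  for k, m in mean.items()}
        samples.append(fs_model(**params))
    mu = sum(samples) / n
    sd = math.sqrt(sum((s - mu) ** 2 for s in samples) / (n - 1))
    return (mu - 1.0) / sd
```

A higher beta means the factor of safety is unlikely to drop below 1 despite parameter scatter; a small beta change under large input variation is the kind of insensitivity the study reports.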

  18. Effect of Sling Exercise Training on Balance in Patients with Stroke: A Meta-Analysis

    PubMed Central

    Peng, Qiyuan; Chen, Jingjie; Zou, Yucong; Liu, Gang

    2016-01-01

    Objective This study aims to evaluate the effect of sling exercise training (SET) on balance in patients with stroke. Methods PubMed, Cochrane Library, Ovid LWW, CBM, CNKI, WanFang, and VIP databases were searched for randomized controlled trials of the effect of SET on balance in patients with stroke. The study design and participants were subjected to methodological analysis. The Berg Balance Scale (BBS), Barthel index (BI), and Fugl-Meyer Assessment (FMA) were used as independent parameters for evaluating balance function, activities of daily living (ADL) and motor function after stroke, respectively, and were subjected to meta-analysis with RevMan 5.3 software. Results Nine studies with 460 participants were analyzed. Meta-analysis showed that SET combined with conventional rehabilitation was superior to conventional rehabilitation alone, with increased BBS (WMD = 3.81, 95% CI [0.15, 7.48], P = 0.04), BI (WMD = 12.98, 95% CI [8.39, 17.56], P < 0.00001), and FMA (SMD = 0.76, 95% CI [0.41, 1.11], P < 0.0001) scores. Conclusion Based on limited evidence from 9 trials, SET combined with conventional rehabilitation was superior to conventional rehabilitation alone, with increased BBS, BI and FMA scores; thus, SET can improve balance function after stroke. However, our findings must be interpreted with caution owing to limitations of the included trials, such as small sample sizes and risk of bias. More multi-center, large-sample randomized controlled trials are therefore needed to confirm its clinical applications. PMID:27727288
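
    Pooled effects of the kind reported above (a WMD with a 95% CI) come from inverse-variance weighting of the per-study mean differences, as implemented in RevMan. A minimal fixed-effect sketch (the review may equally have used a random-effects model; the numbers in the usage test are illustrative, not the review's data):

```python
import math

def pooled_wmd(effects, std_errs):
    """Fixed-effect inverse-variance pooled weighted mean difference with 95% CI."""
    weights = [1.0 / se ** 2 for se in std_errs]          # weight = 1 / variance
    wmd = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return wmd, (wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled)
```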

  19. Designing generalized conic concentrators for conventional optical systems

    NASA Technical Reports Server (NTRS)

    Eichhorn, W. L.

    1985-01-01

    Generalized nonimaging concentrators can be incorporated into conventional optical systems in situations where flux concentration rather than imaging is required. The parameters of the concentrator for maximum flux concentration depend on the design of the particular optical system under consideration. Rationale for determining the concentrator parameters is given for one particular optical system and the procedure used for calculation of these parameters is outlined. The calculations are done for three concentrators applicable to the optical system.

  20. Conventional versus robot-assisted laparoscopic Nissen fundoplication: a comparison of postoperative acid reflux parameters.

    PubMed

    Frazzoni, Marzio; Conigliaro, Rita; Colli, Giovanni; Melotti, Gianluigi

    2012-06-01

    Laparoscopic Nissen fundoplication (LNF) is a technically demanding surgical procedure designed to cure gastroesophageal reflux disease (GERD). It represents an alternative to life-long medical therapy and the only recommended treatment modality to overcome refractoriness to proton pump inhibitor (PPI) therapy. The recent development of robotic systems prompted evaluation of their use in antireflux surgery. Between 1997 and 2000, in a PPI-responsive series we found postoperative normalization of esophageal acid exposure time (EAET) in most but not all cases. Between 2007 and 2009, in a PPI-refractory series we found postoperative normalization of EAET in all cases. We decided to analyze retrospectively our prospectively collected data to evaluate whether differences other than the conventional or robot-assisted technique could justify postoperative differences in acid reflux parameters. Baseline demographic, endoscopic, and manometric parameters were compared between the two series of patients, as well as postoperative manometric and acid reflux parameters. There were no significant differences in the baseline demographic, endoscopic, and manometric characteristics between the two groups of patients. The median lower esophageal sphincter tone increased significantly, and the median EAET decreased significantly after conventional as well as after robot-assisted LNF. The median postoperative EAET was significantly lower in the robot-assisted (0.2%) than in the conventional LNF group (1%; P = 0.001). Abnormal EAET values were found in 6 of 44 (14%) and in 0 of 44 cases after conventional and robot-assisted LNF, respectively (P = 0.026). Robot-assisted LNF provided a significant gain in postoperative acid reflux parameters compared with the conventional technique. 
In a challenging clinical setting, such as PPI-refractoriness, in which the efficacy of endoscopic or pharmacological treatment modalities is only moderate, even a small therapeutic gain can be clinically relevant. In centers where robot-assisted LNF is available, it should be preferred to conventional LNF in PPI-refractory GERD.

  1. Observational constraint on spherical inhomogeneity with CMB and local Hubble parameter

    NASA Astrophysics Data System (ADS)

    Tokutake, Masato; Ichiki, Kiyotomo; Yoo, Chul-Moon

    2018-03-01

    We derive an observational constraint on a spherical inhomogeneity of the void centered at our position from the angular power spectrum of the cosmic microwave background (CMB) and local measurements of the Hubble parameter. The late-time behaviour of the void is assumed to be well described by the so-called Λ-Lemaître-Tolman-Bondi (ΛLTB) solution. We then restrict the models to asymptotically homogeneous models, each of which is approximated by a flat Friedmann-Lemaître-Robertson-Walker model. The late-time ΛLTB models are parametrized by four parameters, including the value of the cosmological constant and the local Hubble parameter; the other two parameters are used to parametrize the observed distance-redshift relation. The ΛLTB models are then constructed so that they are compatible with the given distance-redshift relation. Including conventional parameters for the CMB analysis, we characterize our models by seven parameters in total. The local Hubble measurements are reflected in the prior distribution of the local Hubble parameter. As a result of a Markov chain Monte Carlo analysis of the CMB temperature and polarization anisotropies, we find that inhomogeneous universe models with vanishing cosmological constant are ruled out, as expected. However, a significant under-density around us is still compatible with the angular power spectrum of the CMB and the local Hubble parameter.

  2. Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David

    2015-04-01

    The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
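
    The cited algorithm is the affine-invariant ensemble sampler of Foreman-Mackey et al. (implemented in the emcee package). As a minimal stand-in, a single-walker random-walk Metropolis kernel illustrates the same idea: sampling from exp(-χ²/2) concentrates evaluations near the minima and allowed regions instead of spending them on a full parameter grid. The χ² function in the usage test is a toy, not a neutrino oscillation fit:

```python
import math, random

def metropolis_chi2(chi2, start, step, n_samples, seed=0):
    """Draw samples from the density proportional to exp(-chi2/2)
    using a random-walk Metropolis kernel with Gaussian proposals."""
    rng = random.Random(seed)
    theta = list(start)
    current = chi2(theta)
    samples = []
    for _ in range(n_samples):
        proposal = [t + rng.gauss(0.0, step) for t in theta]
        cand = chi2(proposal)
        # accept with probability min(1, exp(-(cand - current) / 2))
        if math.log(rng.random() + 1e-300) < -(cand - current) / 2.0:
            theta, current = proposal, cand
        samples.append(list(theta))
    return samples
```

Marginal distributions for any subset of mixing parameters then come directly from histogramming the chain, with no explicit marginalization integral.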

  3. Composition differences between organic and conventional meat: a systematic literature review and meta-analysis.

    PubMed

    Średnicka-Tober, Dominika; Barański, Marcin; Seal, Chris; Sanderson, Roy; Benbrook, Charles; Steinshamn, Håvard; Gromadzka-Ostrowska, Joanna; Rembiałkowska, Ewa; Skwarło-Sońta, Krystyna; Eyre, Mick; Cozzi, Giulio; Krogh Larsen, Mette; Jordon, Teresa; Niggli, Urs; Sakowski, Tomasz; Calder, Philip C; Burdge, Graham C; Sotiraki, Smaragda; Stefanakis, Alexandros; Yolcu, Halil; Stergiadis, Sokratis; Chatzidimitriou, Eleni; Butler, Gillian; Stewart, Gavin; Leifert, Carlo

    2016-03-28

    Demand for organic meat is partially driven by consumer perceptions that organic foods are more nutritious than non-organic foods. However, there have been no systematic reviews comparing specifically the nutrient content of organic and conventionally produced meat. In this study, we report results of a meta-analysis based on sixty-seven published studies comparing the composition of organic and non-organic meat products. For many nutritionally relevant compounds (e.g. minerals, antioxidants and most individual fatty acids (FA)), the evidence base was too weak for meaningful meta-analyses. However, significant differences in FA profiles were detected when data from all livestock species were pooled. Concentrations of SFA and MUFA were similar or slightly lower, respectively, in organic compared with conventional meat. Larger differences were detected for total PUFA and n-3 PUFA, which were an estimated 23 (95 % CI 11, 35) % and 47 (95 % CI 10, 84) % higher in organic meat, respectively. However, for these and many other composition parameters, for which meta-analyses found significant differences, heterogeneity was high, and this could be explained by differences between animal species/meat types. Evidence from controlled experimental studies indicates that the high grazing/forage-based diets prescribed under organic farming standards may be the main reason for differences in FA profiles. Further studies are required to enable meta-analyses for a wider range of parameters (e.g. antioxidant, vitamin and mineral concentrations) and to improve both precision and consistency of results for FA profiles for all species. Potential impacts of composition differences on human health are discussed.

  4. Image data-processing system for solar astronomy

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.

    1977-01-01

    The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.

  5. Optical eye simulator for laser dazzle events.

    PubMed

    Coelho, João M P; Freitas, José; Williamson, Craig A

    2016-03-20

    An optical simulator of the human eye and its application to laser dazzle events are presented. The simulator combines optical design software (ZEMAX) with a scientific programming language (MATLAB) and allows the user to implement and analyze a dazzle scenario using practical, real-world parameters. In contrast to conventional analytical glare analysis, this work uses ray tracing together with a scattering model and parameters for each optical element of the eye. The theoretical background of each such element is presented in relation to the model. The simulator's overall calibration, validation, and performance analysis are achieved by comparison with a simpler model based upon CIE disability glare data. Results demonstrate that this kind of advanced optical eye simulation can be used to represent laser dazzle and has the potential to extend the range of applicability of analytical models.

  6. Room-Temperature Determination of Two-Dimensional Electron Gas Concentration and Mobility in Heterostructures

    NASA Technical Reports Server (NTRS)

    Schacham, S. E.; Mena, R. A.; Haugland, E. J.; Alterovitz, S. A.

    1993-01-01

    A technique for determination of room-temperature two-dimensional electron gas (2DEG) concentration and mobility in heterostructures is presented. Using simultaneous fits of the longitudinal and transverse voltages as a function of applied magnetic field, we were able to separate the parameters associated with the 2DEG from those of the parallel layer. Comparison with the Shubnikov-de Haas data derived from measurements at liquid helium temperatures proves that the analysis of the room-temperature data provides an excellent estimate of the 2DEG concentration. In addition, we were able to obtain for the first time the room-temperature mobility of the 2DEG, an important parameter for device applications. Both results are significantly different from those derived from conventional Hall analysis.

  7. Comparative study between laser and conventional techniques for class V cavity preparation in gamma-irradiated teeth (in vitro study).

    PubMed

    Rasmy, Amr H M; Harhash, Tarek A; Ghali, Rami M S; El Maghraby, Eman M F; El Rouby, Dalia H

    2017-01-01

    The purpose of this study was to compare laser with conventional techniques in class V cavity preparation in gamma-irradiated teeth. Forty extracted human teeth with no carious lesions were used for this study and were divided into two main groups: Group I (n = 20) was not subjected to gamma radiation (control) and Group II (n=20) was subjected to gamma radiation of 60 Gray. Standard class V preparation was performed in buccal and lingual sides of each tooth in both groups. Buccal surfaces were prepared by the Er,Cr:YSGG laser (Waterlase iPlus) 2780 nm, using the gold handpiece with MZ10 Tip in non-contact and the "H" mode, following parameters of cavity preparation - power 6 W, frequency 50 Hz, 90% water and 70% air, then shifting to surface treatment laser parameters - power 4.5 W, frequency 50 Hz, 80% water and 50% air. Lingual surfaces were prepared by the conventional high-speed turbine using round diamond bur. Teeth were then sectioned mesio-distally, resulting in 80 specimens: 40 of which were buccal laser-treated (20 control and 20 gamma-irradiated specimens) and 40 were lingual conventional high-speed bur specimens (20 control and 20 gamma-irradiated specimens). Microleakage analysis revealed higher scores in both gamma groups compared with control groups. Chi-square test revealed no significant difference between both control groups and gamma groups (p=1, 0.819, respectively). A significant difference was revealed between all 4 groups (p=0.00018). Both laser and conventional high-speed turbine bur show good bond strength in control (non-gamma) group, while microleakage is evident in gamma group, indicating that gamma radiation had a dramatic negative effect on the bond strength in both laser and bur-treated teeth.

  8. Histogram analysis derived from apparent diffusion coefficient (ADC) is more sensitive to reflect serological parameters in myositis than conventional ADC analysis.

    PubMed

    Meyer, Hans Jonas; Emmer, Alexander; Kornhuber, Malte; Surov, Alexey

    2018-05-01

    Diffusion-weighted imaging (DWI) has the potential to reflect histopathological architecture. A novel imaging approach, namely histogram analysis, is used to further characterize tissues on MRI. The aim of this study was to correlate histogram parameters derived from apparent diffusion coefficient (ADC) maps with serological parameters in myositis. 16 patients with autoimmune myositis were included in this retrospective study. DWI was obtained on a 1.5 T scanner using b-values of 0 and 1000 s mm⁻². Histogram analysis was performed as a whole-muscle measurement using a custom-made Matlab-based application. The following ADC histogram parameters were estimated: ADCmean, ADCmax, ADCmin, ADCmedian, ADCmode, the percentiles ADCp10, ADCp25, ADCp75 and ADCp90, as well as the histogram parameters kurtosis, skewness, and entropy. In all patients, a blood sample was acquired within 3 days of the MRI. The following serological parameters were estimated: alanine aminotransferase, aspartate aminotransferase, creatine kinase, lactate dehydrogenase, C-reactive protein (CRP) and myoglobin. All patients were screened for Jo1-autoantibodies. Kurtosis correlated inversely with CRP (r = -0.55, p = 0.03). Furthermore, ADCp10 and ADCp90 values tended to correlate with creatine kinase (r = -0.43, p = 0.11 and r = -0.42, p = 0.12, respectively). In addition, ADCmean, p10, p25, median, mode, and entropy were different between Jo1-positive and Jo1-negative patients. ADC histogram parameters are sensitive for the detection of muscle alterations in myositis patients. Advances in knowledge: This study identified that kurtosis derived from ADC maps is associated with CRP in myositis patients. Furthermore, several ADC histogram parameters are statistically different between Jo1-positive and Jo1-negative patients.
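
    The histogram parameters listed above can all be computed from the flattened voxel values of a whole-muscle ADC map. The sketch below (nearest-rank percentiles, a fixed 32-bin entropy histogram) is illustrative; the authors' custom Matlab application may define the estimators differently:

```python
import math

def adc_histogram_features(values, bins=32):
    """Illustrative first-order histogram features from flattened ADC voxel values."""
    n = len(values)
    s = sorted(values)
    def pct(p):  # nearest-rank percentile
        return s[min(n - 1, max(0, int(round(p / 100.0 * (n - 1)))))]
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    sd = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in values) / (n * sd ** 3)
    kurt = sum((v - mean) ** 4 for v in values) / (n * sd ** 4)
    # Shannon entropy of a normalized fixed-bin intensity histogram
    lo, hi = s[0], s[-1]
    counts = [0] * bins
    for v in values:
        counts[min(bins - 1, int((v - lo) / (hi - lo) * bins))] += 1
    probs = [c / n for c in counts if c]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"mean": mean, "p10": pct(10), "p90": pct(90),
            "skewness": skew, "kurtosis": kurt, "entropy": entropy}
```

Unlike a plain ADCmean, the percentile, kurtosis and entropy values retain information about the shape of the voxel distribution, which is what makes the histogram approach more sensitive.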

  9. The Value of Data and Metadata Standardization for Interoperability in Giovanni Or: Why Your Product's Metadata Causes Us Headaches!

    NASA Technical Reports Server (NTRS)

    Smit, Christine; Hegde, Mahabaleshwara; Strub, Richard; Bryant, Keith; Li, Angela; Petrenko, Maksym

    2017-01-01

    Giovanni is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization.
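
    The gap described here, that CF covers physical metadata but is silent on provenance, can be illustrated with a per-variable attribute set. standard_name, long_name and units are genuine CF attribute names; the provenance keys (product_short_name, product_version) are illustrative stand-ins for Giovanni's internal schema, not its actual attribute names:

```python
# Hypothetical per-variable metadata: CF attributes plus the machine-friendly
# provenance attributes the CF Conventions alone do not require.
cf_variable = {
    "standard_name": "mass_concentration_of_aerosol_in_air",  # CF-controlled vocabulary
    "long_name": "Aerosol mass concentration",
    "units": "kg m-3",
    # CF is silent on provenance, so an internal format must carry it explicitly:
    "product_short_name": "EXAMPLE_L3_PRODUCT",   # illustrative attribute name
    "product_version": "7.0",                     # illustrative attribute name
}

def missing_provenance(attrs):
    """Report machine-friendly metadata that CF compliance alone would not guarantee."""
    required = ("product_short_name", "product_version", "units")
    return [k for k in required if k not in attrs]
```

A check like this, run at ingest time, is what lets a tool present every parameter with a consistent name, unit, and source product regardless of how the upstream file was written.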

  10. Relationship between Testicular Volume and Conventional or Nonconventional Sperm Parameters

    PubMed Central

    Condorelli, Rosita; Calogero, Aldo E.; La Vignera, Sandro

    2013-01-01

    Background. Reduced testicular volume (TV) (<12 cm3) is associated with lower testicular function. Several studies have explored the conventional sperm parameters (concentration, motility, and morphology) and the endocrine function (gonadotropin and testosterone serum concentrations) in patients with reduced TV; no other parameters have been examined. Aim. This study aims at evaluating some biofunctional sperm parameters, by flow cytometry, in the semen of men with reduced TV compared with that of subjects with normal TV. Methods. 78 patients without primary scrotal disease underwent ultrasound evaluation of the testis. They were divided into two groups according to testicular volume: Group A, including 40 patients with normal testicular volume (TV > 15 cm3), and Group B, including 38 patients with reduced testicular volume (TV ≤ 12 cm3). All patients underwent evaluation of serum hormone concentrations and of conventional and biofunctional (flow cytometry) sperm parameters. Results. With regard to biofunctional sperm parameters, all values (mitochondrial membrane potential, phosphatidylserine externalization, chromatin compactness, and DNA fragmentation) were strongly negatively correlated with testicular volume (P < 0.0001). Conclusions. This study shows, for the first time in the literature, that biofunctional sperm parameters worsen, with a nearly linear correlation, as testicular volume decreases. PMID:24089610

  11. Analysis of a flux-coupling type superconductor fault current limiter with pancake coils

    NASA Astrophysics Data System (ADS)

    Liu, Shizhuo; Xia, Dong; Zhang, Zhifeng; Qiu, Qingquan; Zhang, Guomin

    2017-10-01

    The characteristics of a flux-coupling type superconductor fault current limiter (SFCL) with pancake coils are investigated in this paper. The conventional double-wound non-inductive pancake coil used in AC power systems has an inevitable defect in voltage source converter based high-voltage DC (VSC-HVDC) power systems: owing to its special structure, flashover can easily occur during a fault in a high-voltage environment. Considering the shortcomings of conventional resistive SFCLs with non-inductive coils, a novel flux-coupling type SFCL with pancake coils is proposed. The module connections of the pancake coils are described, and the electromagnetic field and force analyses of the module are compared under different parameters. To ensure proper operation of the module, the impedance of the module under representative operating conditions is calculated. Finally, the feasibility of the flux-coupling type SFCL in VSC-HVDC power systems is discussed.

  12. Dataset on the mean, standard deviation, broad-sense heritability and stability of wheat quality bred in three different ways and grown under organic and low-input conventional systems.

    PubMed

    Rakszegi, Marianna; Löschenberger, Franziska; Hiltbrunner, Jürg; Vida, Gyula; Mikó, Péter

    2016-06-01

    An assessment was previously made of the effects of organic and low-input field management systems on the physical, grain compositional and processing quality of wheat and on the performance of varieties developed using different breeding methods ("Comparison of quality parameters of wheat varieties with different breeding origin under organic and low-input conventional conditions" [1]). Here, accompanying data are provided on the performance and stability analysis of the genotypes using the coefficient of variation and the 'ranking' and 'which-won-where' plots of GGE biplot analysis for the most important quality traits. Broad-sense heritability was also evaluated and is given for the most important physical and quality properties of the seed in organic and low-input management systems, while mean values and standard deviation of the studied properties are presented separately for organic and low-input fields.

  13. Noble-TLBO MPPT Technique and its Comparative Analysis with Conventional methods implemented on Solar Photo Voltaic System

    NASA Astrophysics Data System (ADS)

    Patsariya, Ajay; Rai, Shiwani; Kumar, Yogendra, Dr.; Kirar, Mukesh, Dr.

    2017-08-01

    The energy crisis, particularly in developing economies, has opened a new panorama for sustainable power sources such as solar energy, which has experienced enormous growth. Increasingly high penetration levels of photovoltaic (PV) generation are emerging in smart grids. Solar power is intermittent and variable, as the solar resource at ground level is highly dependent on cloud cover variability, atmospheric aerosol levels, and other weather parameters. The inherent variability of large-scale solar generation introduces significant challenges to smart grid energy management, and accurate forecasting of solar power/irradiance is essential to secure economic operation of the smart grid. In this paper, the Noble-TLBO MPPT technique is proposed to track the maximum power point of a solar PV system. A comparative analysis is presented between the conventional P&O (perturb and observe) and IC (incremental conductance) methods and the proposed MPPT technique. The research was carried out in MATLAB/Simulink (version 2013).

  14. Multivariable Analysis of Gluten-Free Pasta Elaborated with Non-Conventional Flours Based on the Phenolic Profile, Antioxidant Capacity and Color.

    PubMed

    Camelo-Méndez, Gustavo A; Flores-Silva, Pamela C; Agama-Acevedo, Edith; Bello-Pérez, Luis A

    2017-12-01

    The phenolic compounds, color and antioxidant capacity of gluten-free pasta prepared with non-conventional flours such as chickpea (CHF), unripe plantain (UPF), white maize (WMF) and blue maize (BMF) were analyzed. Fifteen phenolic compounds (five anthocyanins, five hydroxybenzoic acids, three hydroxycinnamic acids, one hydroxyphenylacetic acid and one flavonol) were identified in pasta prepared with blue maize, and 10 compounds were identified in samples prepared with white maize. Principal component analysis (PCA) described 98% of the total variance, establishing a clear separation for each pasta. Both the proportion (25, 50 and 75%) and type of maize flour (white and blue) affected the color parameters (L*, C*ab, hab and ΔE*ab) and antioxidant properties (DPPH, ABTS and FRAP methods) of the samples, thus producing gluten-free products with potential health benefits intended for general consumers (including the population with celiac disease).

  15. Associations Between PET Textural Features and GLUT1 Expression, and the Prognostic Significance of Textural Features in Lung Adenocarcinoma.

    PubMed

    Koh, Young Wha; Park, Seong Yong; Hyun, Seung Hyup; Lee, Su Jin

    2018-02-01

    We evaluated the association between positron emission tomography (PET) textural features and glucose transporter 1 (GLUT1) expression level and further investigated the prognostic significance of textural features in lung adenocarcinoma. We evaluated 105 adenocarcinoma patients. We extracted texture-based PET parameters of primary tumors. Conventional PET parameters were also measured. The relationships between PET parameters and GLUT1 expression levels were evaluated. The association between PET parameters and overall survival (OS) was assessed using Cox proportional hazards regression models. In terms of PET textural features, tumors expressing high levels of GLUT1 exhibited significantly lower coarseness, contrast, complexity, and strength, but significantly higher busyness. On univariate analysis, the metabolic tumor volume, total lesion glycolysis, contrast, busyness, complexity, and strength were significant predictors of OS. Multivariate analysis showed that lower complexity (HR=2.017, 95% CI=1.032-3.942, p=0.040) was independently associated with poorer survival. PET textural features may aid risk stratification in lung adenocarcinoma patients. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  16. Fatigue stipulation of bulk-fill composites: An in vitro appraisal.

    PubMed

    Vidhawan, Shruti A; Yap, Adrian U; Ornaghi, Barbara P; Banas, Agnieszka; Banas, Krzysztof; Neo, Jennifer C; Pfeifer, Carmem S; Rosa, Vinicius

    2015-09-01

    The aim of this study was to determine the Weibull and slow crack growth (SCG) parameters of bulk-fill resin-based composites. The strength degradation of the materials over time was also assessed by strength-probability-time (SPT) analysis. Three bulk-fill composites [Tetric EvoCeram Bulk Fill (TBF); X-tra fil (XTR); Filtek Bulk-fill flowable (BFL)] and a conventional composite [Filtek Z250 (Z250)] were studied. Seventy-five disk-shaped specimens (12 mm in diameter and 1 mm thick) were prepared by inserting the uncured composites in a stainless steel split mold followed by photoactivation (1200 mW/cm^2 for 20 s) and storage in distilled water (37 °C/24 h). Degree of conversion was evaluated in five specimens by analysis of FT-IR spectra obtained in the mid-IR region. The SCG parameters n (stress corrosion susceptibility coefficient) and σf0 (scaling parameter) were obtained by testing ten specimens at each of five stress rates (10^-2, 10^-1, 10^0, 10^1 and 10^2 MPa/s) using a piston-on-three-balls device. The Weibull parameters m (Weibull modulus) and σf0 (characteristic strength) were obtained by testing an additional 20 specimens at 1 MPa/s. Strength-probability-time (SPT) diagrams were constructed by merging the SCG and Weibull parameters. BFL and TBF presented the highest n values (40.1 and 25.5, respectively). Z250 showed the highest (157.02 MPa) and TBF the lowest (110.90 MPa) σf0 value. Weibull analysis yielded m values of 9.7, 8.6, 9.7 and 8.9 for TBF, BFL, XTR and Z250, respectively. The SPT diagram for 5% probability of failure showed strength decreases of 18% for BFL, 25% for TBF, 32% for XTR and 36% for Z250 after 5 years as compared to 1 year. The reliability and strength degradation over time of the bulk-fill resin composites studied are at least comparable to those of conventional composites. BFL showed the highest fatigue resistance under all simulations, followed by TBF, while XTR was on par with Z250. Copyright © 2015 Academy of Dental Materials. 
Published by Elsevier Ltd. All rights reserved.
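The Weibull parameters m and σf0 reported above are conventionally estimated by linear regression on the linearized Weibull distribution; a minimal sketch, assuming a median-rank probability estimator (the paper's exact estimation procedure is not specified here):

```python
import math, random

def weibull_fit(strengths):
    """Estimate the Weibull modulus m and characteristic strength sigma0
    from fracture strengths via the linearized form
    ln(ln(1/(1-F))) = m*ln(sigma) - m*ln(sigma0)."""
    s = sorted(strengths)
    n = len(s)
    x, y = [], []
    for i, sigma in enumerate(s, start=1):
        F = (i - 0.5) / n            # median-rank probability estimator
        x.append(math.log(sigma))
        y.append(math.log(-math.log(1.0 - F)))
    xm, ym = sum(x) / n, sum(y) / n  # ordinary least-squares fit
    m = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
    sigma0 = math.exp(xm - ym / m)
    return m, sigma0

# Synthetic strengths drawn from a Weibull(m=10, sigma0=150 MPa) law
random.seed(1)
data = [random.weibullvariate(150.0, 10.0) for _ in range(2000)]
m_est, s0_est = weibull_fit(data)  # recovers roughly m ~ 10, sigma0 ~ 150
```

A higher m indicates a narrower strength distribution and hence more reliable specimens, which is how the m values of 8.6-9.7 above are interpreted.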

  17. Effect of Frequent or Extended Hemodialysis on Cardiovascular Parameters: A Meta-analysis

    PubMed Central

    Susantitaphong, Paweena; Koulouridis, Ioannis; Balk, Ethan M.; Madias, Nicolaos E.; Jaber, Bertrand L.

    2012-01-01

    Background Increased left ventricular (LV) mass is a risk factor for cardiovascular mortality in patients with chronic kidney failure. More frequent or extended hemodialysis (HD) has been hypothesized to have a beneficial effect on LV mass. Study Design Meta-analysis. Setting & Population MEDLINE literature search (inception-April 2011), Cochrane Central Register of Controlled Trials and ClinicalTrials.gov, using the search terms “short daily HD”, “daily HD”, “quotidian HD”, “frequent HD”, “intensive HD”, “nocturnal HD”, and “home HD”. Selection Criteria for Studies Single-arm cohort studies (with pre- and post-study evaluations) and randomized controlled trials examining the effect of frequent or extended HD on cardiac morphology and function, and blood pressure parameters. Studies of hemofiltration, hemodiafiltration and peritoneal dialysis were excluded. Intervention Frequent (2–8 hours, >thrice weekly) or extended (>4 hours, thrice weekly) HD as compared with conventional (≤4 hours, thrice weekly) HD. Outcomes Absolute changes in cardiac morphology and function, including LV mass index (LVMI) (primary), and blood pressure parameters (secondary). Results We identified 38 single-arm studies, 5 crossover trials and 3 randomized controlled trials. By meta-analysis of 23 study arms, frequent or extended HD significantly reduced LVMI from baseline (−31.2 g/m2; 95% CI, −39.8 to −22.5; P<0.001). The 3 randomized trials found a less pronounced net reduction in LVMI (−7.0 g/m2; 95% CI, −10.2 to −3.7; P<0.001). LV ejection fraction improved by 6.7% (95% CI, 1.6 to 11.9; P=0.01). Other cardiac morphological parameters displayed similar improvements. There were also significant decreases in systolic, diastolic, and mean blood pressure, and in the mean number of anti-hypertensive medications. Limitations Paucity of randomized controlled trials. 
Conclusions Conversion from conventional to frequent or extended HD is associated with improvements in cardiac morphology and function, including LVMI and LV ejection fraction, and in several blood pressure parameters, which collectively might confer long-term cardiovascular benefit. Trials with long-term clinical outcomes are needed. PMID:22370022
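The pooling of per-study LVMI changes can be illustrated with a minimal fixed-effect inverse-variance sketch; the effect sizes and standard errors below are invented, and the paper's actual pooling model (e.g., random effects) may differ:

```python
import math

def inverse_variance_pool(effects, std_errs):
    """Fixed-effect inverse-variance pooling: returns the pooled effect
    and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Invented LVMI changes (g/m2) and standard errors for three studies
pooled, ci = inverse_variance_pool([-30.0, -25.0, -35.0], [5.0, 4.0, 6.0])
```

More precise studies (smaller standard errors) receive proportionally larger weight, which is why single large trials can pull the pooled estimate toward a smaller net effect, as with the randomized trials above.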

  18. Estimating the Geocenter from GNSS Observations

    NASA Astrophysics Data System (ADS)

    Dach, Rolf; Michael, Meindl; Beutler, Gerhard; Schaer, Stefan; Lutz, Simon; Jäggi, Adrian

    2014-05-01

    The satellites of the Global Navigation Satellite Systems (GNSS) orbit the Earth according to the laws of celestial mechanics. As a consequence, the satellites are sensitive to the coordinates of the center of mass of the Earth. The coordinates of the (ground) tracking stations refer to the center of figure as the conventional origin of the reference frame. The difference between the center of mass and the center of figure is the instantaneous geocenter. By this definition, global GNSS solutions are sensitive to the geocenter. Several studies demonstrated strong correlations of the GNSS-derived geocenter coordinates with parameters intended to absorb radiation pressure effects acting on the GNSS satellites, and with GNSS satellite clock parameters. One should thus pose the question to what extent these satellite-related parameters absorb (or hide) the geocenter information. A clean simulation study has been performed to answer this question. The simulation environment makes it possible, in particular, to introduce user-defined shifts of the geocenter (systematic inconsistencies between the satellite and station reference frames). These geocenter shifts may be recovered by the mentioned parameters, provided they were set up in the analysis. If the geocenter coordinates are not estimated, one may find out which other parameters absorb the user-defined shifts of the geocenter and to what extent. Furthermore, the simulation environment also allows extraction of the correlation matrix from the a posteriori covariance matrix to study the correlations between different parameter types of the GNSS analysis system. Our results show high degrees of correlation between geocenter coordinates, orbit-related parameters, and satellite clock parameters. These correlations are of the same order of magnitude as the correlations between station heights, troposphere, and receiver clock parameters in any regional or global GNSS network analysis. 
If such correlations are accepted in a GNSS analysis when estimating station coordinates, geocenter coordinates must be considered mathematically estimable in a global GNSS analysis. The geophysical interpretation may of course become difficult, e.g., if insufficient orbit models are used.
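Extracting a correlation matrix from an a posteriori covariance matrix, as described above, is a standard conversion; a minimal sketch with invented values:

```python
import numpy as np

def correlation_from_covariance(C):
    """Convert a covariance matrix C into the correlation matrix
    rho_ij = C_ij / sqrt(C_ii * C_jj)."""
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)

# Invented 3-parameter a posteriori covariance (not actual GNSS values)
C = np.array([[4.0, 1.2, 0.6],
              [1.2, 9.0, 2.1],
              [0.6, 2.1, 1.0]])
R = correlation_from_covariance(C)  # unit diagonal, off-diagonal in [-1, 1]
```

Off-diagonal entries close to ±1 flag parameter pairs, such as geocenter and radiation-pressure parameters, that the estimation can barely separate.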

  19. Matrix-free and material-enhanced laser desorption/ionization mass spectrometry for the analysis of low molecular weight compounds.

    PubMed

    Rainer, Matthias; Qureshi, Muhammad Nasimullah; Bonn, Günther Karl

    2011-06-01

    The application of matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS) for the analysis of low molecular weight (LMW) compounds, such as pharmacologically active constituents or metabolites, is usually hampered by employing conventional MALDI matrices owing to interferences caused by matrix molecules below 700 Da. As a consequence, interpretation of mass spectra remains challenging, although matrix suppression can be achieved under certain conditions. Unlike the conventional MALDI methods, which usually suffer from background signals, matrix-free techniques have become more and more popular for the analysis of LMW compounds. In this review we describe recently introduced materials for laser desorption/ionization (LDI) as alternatives to conventionally applied MALDI matrices. In particular, we want to highlight a new method for LDI which is referred to as matrix-free material-enhanced LDI (MELDI). In matrix-free MELDI it could be clearly shown that, besides chemical functionalities, the material's morphology plays a crucial role in energy-transfer capabilities. Therefore, it is of great interest to also investigate parameters such as particle size and porosity to study their impact on the LDI process. In particular, nanomaterials such as diamond-like carbon, C(60) fullerenes and nanoparticulate silica beads were found to be excellent energy-absorbing materials in matrix-free MELDI.

  20. Quantitative analysis of the anti-noise performance of an m-sequence in an electromagnetic method

    NASA Astrophysics Data System (ADS)

    Yuan, Zhe; Zhang, Yiming; Zheng, Qijia

    2018-02-01

    An electromagnetic method with a transmitted waveform coded by an m-sequence achieved better anti-noise performance compared to the conventional manner with a square-wave. The anti-noise performance of the m-sequence varied with multiple coding parameters; hence, a quantitative analysis of the anti-noise performance for m-sequences with different coding parameters was required to optimize them. This paper proposes the concept of an identification system, with the identified Earth impulse response obtained by measuring the system output with the input of the voltage response. A quantitative analysis of the anti-noise performance of the m-sequence was achieved by analyzing the amplitude-frequency response of the corresponding identification system. The effects of the coding parameters on the anti-noise performance are summarized by numerical simulation, and their optimization is further discussed in our conclusions; the validity of the conclusions is further verified by field experiment. The quantitative analysis method proposed in this paper provides a new insight into the anti-noise mechanism of the m-sequence, and could be used to evaluate the anti-noise performance of artificial sources in other time-domain exploration methods, such as the seismic method.
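An m-sequence of the kind used as the transmitted code can be generated with a linear-feedback shift register; a minimal sketch (the tap set below is one choice that yields a maximal-length sequence for a 4-stage register, not necessarily the coding parameters studied in the paper):

```python
def m_sequence(taps, n_bits, state=1):
    """Generate one period (2**n_bits - 1 bits) of a maximum-length
    sequence from an n_bits-stage Fibonacci LFSR.

    taps are 1-based stage indices, counted from the output stage,
    whose bits are XORed to form the feedback.
    """
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (n_bits - 1))
    return out

# Taps (1, 2) give a maximal length-15 sequence for a 4-stage register
seq = m_sequence(taps=(1, 2), n_bits=4)
```

A defining property exploited for noise rejection is the two-valued periodic autocorrelation of the ±1-mapped sequence: the full period at zero lag and exactly -1 at every other lag.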

  1. Quantum noise in SIS mixers

    NASA Astrophysics Data System (ADS)

    Zorin, A. B.

    1985-03-01

    In the present quantum-statistical analysis of SIS heterodyne mixer performance, the conventional three-port model of the mixer circuit and the microscopic theory of superconducting tunnel junctions are used to derive a general expression for a noise parameter previously used for the case of parametric amplifiers. This expression is numerically evaluated for various quasiparticle current step widths, dc bias voltages, local oscillator powers, signal frequencies, signal source admittances, and operation temperatures.

  2. Swelling-induced optical anisotropy of thermoresponsive hydrogels based on poly(2-(2-methoxyethoxy)ethyl methacrylate): deswelling kinetics probed by quantitative Mueller matrix polarimetry.

    PubMed

    Patil, Nagaraj; Soni, Jalpa; Ghosh, Nirmalya; De, Priyadarsi

    2012-11-29

    Thermodynamically favored polymer-water interactions below the lower critical solution temperature (LCST) caused swelling-induced optical anisotropy (linear retardance) of thermoresponsive hydrogels based on poly(2-(2-methoxyethoxy)ethyl methacrylate). This was exploited to study the macroscopic deswelling kinetics quantitatively by a generalized polarimetry analysis method, based on measurement of the Mueller matrix and its subsequent inverse analysis via the polar decomposition approach. The derived medium polarization parameters, namely, linear retardance (δ), diattenuation (d), and depolarization coefficient (Δ), of the hydrogels showed interesting differences between the gels prepared by conventional free radical polymerization (FRP) and reversible addition-fragmentation chain transfer polymerization (RAFT) and also between dry and swollen state. The effect of temperature, cross-linking density, and polymerization technique employed to synthesize hydrogel on deswelling kinetics was systematically studied via conventional gravimetry and corroborated further with the corresponding Mueller matrix derived quantitative polarimetry characteristics (δ, d, and Δ). The RAFT gels exhibited higher swelling ratio and swelling-induced optical anisotropy compared to FRP gels and also deswelled faster at 30 °C. On the contrary, at 45 °C, deswelling was significantly retarded for the RAFT gels due to formation of a skin layer, which was confirmed and quantified via the enhanced diattenuation and depolarization parameters.

  3. A novel automatic quantification method for high-content screening analysis of DNA double strand-break response.

    PubMed

    Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming

    2017-08-29

    High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.

  4. Development of hazard-compatible building fragility and vulnerability models

    USGS Publications Warehouse

    Karaca, E.; Luco, N.

    2008-01-01

    We present a methodology for transforming the structural and non-structural fragility functions in HAZUS into a format that is compatible with conventional seismic hazard analysis information. The methodology makes use of the building capacity (or pushover) curves and related building parameters provided in HAZUS. Instead of the capacity spectrum method applied in HAZUS, building response is estimated by inelastic response history analysis of corresponding single-degree-of-freedom systems under a large number of earthquake records. Statistics of the building response are used with the damage state definitions from HAZUS to derive fragility models conditioned on spectral acceleration values. Using the developed fragility models for structural and nonstructural building components, with corresponding damage state loss ratios from HAZUS, we also derive building vulnerability models relating spectral acceleration to repair costs. Whereas in HAZUS the structural and nonstructural damage states are treated as if they are independent, our vulnerability models are derived assuming "complete" nonstructural damage whenever the structural damage state is complete. We show the effects of considering this dependence on the final vulnerability models. The use of spectral acceleration (at selected vibration periods) as the ground motion intensity parameter, coupled with the careful treatment of uncertainty, makes the new fragility and vulnerability models compatible with conventional seismic hazard curves and hence useful for extensions to probabilistic damage and loss assessment.

  5. Mathematical modeling of the Stirling engine in terms of applying the composition of the power complex containing non-conventional and renewable energy

    NASA Astrophysics Data System (ADS)

    Gaponenko, A. M.; Kagramanova, A. A.

    2017-11-01

    The possibility of applying the Stirling engine with non-conventional and renewable energy sources is considered, along with the advantages of such use. An expression for the thermal efficiency of the Stirling engine is derived. It is shown that the work per cycle is proportional to the quantity of matter, and hence the pressure of the working fluid, and to the temperature difference, and to a lesser extent depends on the expansion coefficient; the efficiency of the ideal Stirling cycle coincides with that of an ideal engine working on the Carnot cycle, which distinguishes the Stirling cycle from the Otto and Diesel cycles underlying internal combustion engines. It has been established that, of the four input parameters, the only one that can be easily changed during operation, and that effectively affects the operation of the engine, is the phase difference. The dependence of work per cycle on the phase difference, called the phase characteristic, visually illustrates the mode of operation of the Stirling engine. A mathematical model of the Schmidt cycle is presented, and the operation of the Stirling engine in the Schmidt approximation is analyzed numerically. A program was designed in MATLAB to conduct the numerical experiments. The results of the numerical experiments are illustrated by graphical charts.
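The claim that the ideal Stirling cycle matches the Carnot efficiency, with work per cycle proportional to the amount of working fluid and to the temperature difference, can be checked numerically for the ideal cycle with perfect regeneration; the parameter values below are illustrative:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def stirling_ideal(n_mol, t_hot, t_cold, v_ratio):
    """Ideal Stirling cycle with perfect regeneration.

    Heat is absorbed only during the isothermal expansion at t_hot and
    rejected during the isothermal compression at t_cold, so the
    efficiency reduces to the Carnot value 1 - t_cold/t_hot.
    Returns (work_per_cycle, efficiency).
    """
    q_hot = n_mol * R_GAS * t_hot * math.log(v_ratio)
    q_cold = n_mol * R_GAS * t_cold * math.log(v_ratio)
    work = q_hot - q_cold  # proportional to n_mol and to (t_hot - t_cold)
    return work, work / q_hot

work, eta = stirling_ideal(n_mol=0.1, t_hot=900.0, t_cold=300.0, v_ratio=2.0)
```

With these numbers the efficiency equals the Carnot value 1 - 300/900, independent of the amount of gas and the volume ratio.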

  6. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. 
A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref. 1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
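A three-point beta fit of the kind described above can be sketched with the common PERT-style assumptions (mean = (min + 4·mode + max)/6, standard deviation = range/6); this is one standard choice, not necessarily the exact fraction-of-range assumption of the conventional method or the NASA Glenn method referenced in the abstract:

```python
def beta_from_three_points(a, b, m):
    """Fit a beta distribution to a minimum a, maximum b, and most
    likely value m using the common PERT assumptions
    mean = (a + 4m + b)/6 and std dev = (b - a)/6, followed by a
    method-of-moments solution for the shape parameters."""
    mu = (a + 4.0 * m + b) / 6.0
    sigma = (b - a) / 6.0
    x = (mu - a) / (b - a)             # mean on the [0, 1] scale
    v = (sigma / (b - a)) ** 2         # variance on the [0, 1] scale
    common = x * (1.0 - x) / v - 1.0
    return x * common, (1.0 - x) * common, mu, sigma

# Symmetric case: mode at the midpoint gives equal shape parameters
alpha, beta, mu, sigma = beta_from_three_points(0.0, 10.0, 5.0)
```

In the symmetric case the two shape parameters coincide (here alpha = beta = 4), matching the abstract's observation that the conventional and new methods agree for symmetric distributions.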

  7. Analysis of Size Correlations for Microdroplets Produced by Ultrasonic Atomization

    PubMed Central

    Barba, Anna Angela; d'Amore, Matteo

    2013-01-01

    Microencapsulation techniques are widely applied in the field of pharmaceutical production to control drug release over time and in physiological environments. Ultrasonic-assisted atomization is a new technique for producing microencapsulated systems by a mechanical approach. Interest in this technique is due to its evident advantages over more conventional techniques: low mechanical stress on materials, reduced energy demand, and reduced apparatus size. In this paper, the groundwork of atomization is introduced, the role of the relevant parameters in the ultrasonic atomization mechanism is discussed, and correlations to predict droplet size from process parameters and material properties are presented and tested. PMID:24501580

  8. Impact of commercial housing systems and nutrient and energy intake on laying hen performance and egg quality parameters1

    PubMed Central

    Karcher, D. M.; Jones, D. R.; Abdo, Z.; Zhao, Y.; Shepherd, T. A.; Xin, H.

    2015-01-01

    The US egg industry is exploring alternative housing systems for laying hens. However, limited published research on cage-free aviary systems and enriched colony cages exists related to production, egg quality, and hen nutrition. The laying hen's nutritional requirements and resulting productivity are well established for the conventional cage system, but little research is available regarding alternative housing systems. Research is restricted by the limited availability of alternative housing systems in research settings and the considerable expense of the increased bird numbers per replicate required by alternative housing system designs. Therefore, the objective of the current study was to evaluate the impact of nutrient and energy intake on production and egg quality parameters of laying hens housed at a commercial facility. Lohmann LSL laying hens were housed in three systems: enriched colony cage, cage-free aviary, and conventional cage at a single commercial facility. Daily production records were collected along with dietary changes during 15 production periods (28 d each). Eggs were analyzed for shell strength, shell thickness, Haugh unit, vitelline membrane properties, and egg solids each period. An analysis of covariance (ANCOVA) coupled with a principal components analysis (PCA) approach was utilized to assess the impact of nutritional changes on production parameters and monitored egg quality factors. The traits of hen-day production and mortality had a response only in the PCA 2 direction. This indicates that as house temperature and Met intake increase, there is an inflection point at which hen-day egg production is negatively affected. Dietary changes more directly influenced shell parameters, vitelline membrane parameters, and egg total solids than did the laying hen housing system. 
Therefore, further research needs to be conducted in controlled research settings on laying hen nutrient and energy intake in the alternative housing systems and the resulting impact on egg quality measures. PMID:25630672

  9. Impact of commercial housing systems and nutrient and energy intake on laying hen performance and egg quality parameters.

    PubMed

    Karcher, D M; Jones, D R; Abdo, Z; Zhao, Y; Shepherd, T A; Xin, H

    2015-03-01

    The US egg industry is exploring alternative housing systems for laying hens. However, limited published research on cage-free aviary systems and enriched colony cages exists related to production, egg quality, and hen nutrition. The laying hen's nutritional requirements and resulting productivity are well established for the conventional cage system, but little research is available regarding alternative housing systems. Research is restricted by the limited availability of alternative housing systems in research settings and the considerable expense of the increased bird numbers per replicate required by alternative housing system designs. Therefore, the objective of the current study was to evaluate the impact of nutrient and energy intake on production and egg quality parameters of laying hens housed at a commercial facility. Lohmann LSL laying hens were housed in three systems: enriched colony cage, cage-free aviary, and conventional cage at a single commercial facility. Daily production records were collected along with dietary changes during 15 production periods (28 d each). Eggs were analyzed for shell strength, shell thickness, Haugh unit, vitelline membrane properties, and egg solids each period. An analysis of covariance (ANCOVA) coupled with a principal components analysis (PCA) approach was utilized to assess the impact of nutritional changes on production parameters and monitored egg quality factors. The traits of hen-day production and mortality had a response only in the PCA 2 direction. This indicates that as house temperature and Met intake increase, there is an inflection point at which hen-day egg production is negatively affected. Dietary changes more directly influenced shell parameters, vitelline membrane parameters, and egg total solids than did the laying hen housing system. 
Therefore, further research needs to be conducted in controlled research settings on laying hen nutrient and energy intake in the alternative housing systems and the resulting impact on egg quality measures. © The Author 2015. Published by Oxford University Press on behalf of Poultry Science Association.

  10. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    PubMed

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decision-makers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
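The exponential model and the conventional approximation to the beta-Poisson model discussed above have simple closed forms; a minimal sketch with illustrative parameter values (the exact beta-Poisson model requires the Kummer confluent hypergeometric function and is not shown):

```python
import math

def exponential_dose_response(dose, r):
    """Exponential model: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def beta_poisson_approx(dose, alpha, beta):
    """Conventional approximation to the beta-Poisson model:
    P(infection) = 1 - (1 + dose/beta)**(-alpha).
    Generally considered valid only when beta >> 1 and alpha << beta;
    outside that regime the exact (Kummer-function) model is needed."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameter values, not fitted to any data set
p_exp = exponential_dose_response(100.0, r=0.01)
p_bp = beta_poisson_approx(100.0, alpha=0.3, beta=50.0)
```

The validity criteria proposed in the paper address exactly the parameter regions where this closed-form approximation departs from the exact beta-Poisson probabilities.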

  11. Analysis of light emitting diode array lighting system based on human vision: normal and abnormal uniformity condition.

    PubMed

    Qin, Zong; Ji, Chuangang; Wang, Kai; Liu, Sheng

    2012-10-08

    In this paper, the condition for uniform lighting generated by a light emitting diode (LED) array was systematically studied. To take the human vision effect into consideration, the contrast sensitivity function (CSF) was adopted for the first time as the criterion for uniform lighting, instead of the conventionally used Sparrow's criterion (SC). Through the CSF method, design parameters including system thickness, LED pitch, the LEDs' spatial radiation distribution and the viewing condition can be analytically combined. For a specific LED array lighting system (LALS) with a square LED arrangement, different types of LEDs (Lambertian and batwing type) and a given viewing condition, optimum system thicknesses and LED pitches were calculated and compared with those obtained through the SC method. Results show that the CSF method can achieve more appropriate optimum parameters than the SC method. Additionally, an abnormal phenomenon in which uniformity varies non-monotonically with structural parameters in an LALS with non-Lambertian LEDs was found and analyzed. Based on the analysis, a design method for LALS that can bring about better practicability, lower cost and a more attractive appearance was summarized.

  12. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
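The conventional least-squares estimation referenced above can be sketched for the purely autoregressive case; the AR(2) coefficients and noise level below are illustrative simulation choices, not the paper's physiological data:

```python
import numpy as np

def fit_ar_least_squares(y, order):
    """Estimate AR(order) coefficients by conventional least squares:
    y[t] = a1*y[t-1] + ... + ap*y[t-p] + e[t]."""
    Y = y[order:]
    X = np.column_stack([y[order - k : len(y) - k]
                         for k in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef

# Simulate a stable AR(2) process with known coefficients (0.6, -0.3)
rng = np.random.default_rng(0)
n = 5000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + 0.1 * rng.standard_normal()
a = fit_ar_least_squares(y, order=2)  # close to (0.6, -0.3)
```

The paper's point is that a neural network with polynomial activations recovers essentially these same parameters; the least-squares fit serves as the conventional benchmark.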

  13. The effect of texture on the shaft surface on the sealing performance of radial lip seals

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Jia, XiaoHong; Gao, Zhi; Wang, YuMing

    2014-07-01

    On the basis of an elastohydrodynamic model, the present study numerically analyzes the effect of various micro-dimple texture shapes, namely circular, square, and oriented isosceles triangular, on the pumping rate and friction torque of radial lip seals, and determines which micro-dimple texture shape can produce a positive pumping rate. The area ratio, depth, and shape dimension of a single texture are the most important geometric parameters influencing tribological performance. For the selected texture shape, a parameter analysis is conducted to determine the optimal combination of these three parameters. The simulated performance of a radial lip seal with texture on the shaft surface is also compared with that of a conventional lip seal without any texture on the shaft surface.

  14. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

    Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time-history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency-function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The vehicle used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
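
The harmonic-analysis step can be illustrated with a hedged sketch: for a forced roll oscillation, the first-harmonic in-phase and out-of-phase Fourier components of the measured moment coefficient yield stiffness-like and damping-like parameters. The signal model and numbers below are illustrative stand-ins, not the paper's aerodynamic model:

```python
import math

def fourier_components(t, c, omega):
    """First-harmonic in-phase/out-of-phase components of a signal
    c(t) ~ A*sin(omega*t) + B*cos(omega*t), via discrete Fourier sums over an
    integer number of cycles (uniform sampling assumed)."""
    n = len(t)
    A = 2.0 / n * sum(ci * math.sin(omega * ti) for ti, ci in zip(t, c))
    B = 2.0 / n * sum(ci * math.cos(omega * ti) for ti, ci in zip(t, c))
    return A, B

# Synthetic roll-oscillation data: phi = phi_A*sin(omega*t); the moment
# coefficient has an in-phase (stiffness-like) and out-of-phase (damping-like)
# part with illustrative derivative values.
omega, phi_A = 2.0, 0.1
stiff, damp = -0.8, -0.3
N = 1000
T = 2 * math.pi / omega * 10          # ten full cycles
ts = [T * i / N for i in range(N)]
cl = [stiff * phi_A * math.sin(omega * ti)
      + damp * phi_A * omega * math.cos(omega * ti) for ti in ts]
A, B = fourier_components(ts, cl, omega)
est_stiff = A / phi_A
est_damp = B / (phi_A * omega)
```

Sampling an integer number of cycles makes the discrete sine/cosine sums exactly orthogonal, so the two components separate cleanly; with real wind tunnel or CFD data the same projection yields the estimates whose standard errors the study compares.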

  15. Fast Spatial Resolution Analysis of Quadratic Penalized Least-Squares Image Reconstruction With Separate Real and Imaginary Roughness Penalty: Application to fMRI.

    PubMed

    Olafsson, Valur T; Noll, Douglas C; Fessler, Jeffrey A

    2018-02-01

    Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.
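
The FFT-based local impulse response idea can be sketched in one dimension under a circulant approximation (a toy of mine, not the paper's fMRI model): with H the spectrum of A'A and R that of a second-difference roughness penalty C'C, the point spread function of the quadratically penalized reconstruction is F⁻¹[H/(H + βR)]. A naive DFT keeps the sketch dependency-free:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def local_impulse_response(h_col, beta, N):
    """1-D circulant approximation: PSF = F^{-1}[ H / (H + beta*R) ], where
    H is the spectrum of A'A (first column h_col) and R is the spectrum of a
    second-difference roughness penalty C'C (stencil [-1, 2, -1])."""
    H = dft(h_col)
    r_col = [0.0] * N
    r_col[0], r_col[1], r_col[-1] = 2.0, -1.0, -1.0
    R = dft(r_col)
    L = [Hk / (Hk + beta * Rk) for Hk, Rk in zip(H, R)]
    return [v.real for v in idft(L)]

# Toy system: A'A column = small symmetric smoothing kernel on an N=16 grid.
N = 16
h = [0.0] * N
h[0], h[1], h[-1] = 1.0, 0.25, 0.25
psf = local_impulse_response(h, beta=0.5, N=N)
```

Because the penalty has zero DC response, the PSF sums to one (unit DC gain) and peaks at the impulse location; increasing β widens it, which is exactly the FWHM-vs-regularization trade-off the paper's fast calculation evaluates for the separate real/imaginary case.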

  16. Novel technique for ST-T interval characterization in patients with acute myocardial ischemia.

    PubMed

    Correa, Raúl; Arini, Pedro David; Correa, Lorena Sabrina; Valentinuzzi, Max; Laciar, Eric

    2014-07-01

    Novel signal processing techniques have enabled and improved the use of vectorcardiography (VCG) to diagnose and characterize myocardial ischemia. Herein, we studied vectorcardiographic dynamic changes of ventricular repolarization in 80 patients before (control) and during Percutaneous Transluminal Coronary Angioplasty (PTCA). We propose four vectorcardiographic ST-T parameters: (a) ST Vector Magnitude Area (aSTVM); (b) T-wave Vector Magnitude Area (aTVM); (c) ST-T Vector Magnitude Difference (ST-TVD); and (d) T-wave Vector Magnitude Difference (TVD). For comparison, the conventional ST-Change Vector Magnitude (STCVM) and Spatial Ventricular Gradient (SVG) were also calculated. Our results indicate that several vectorcardiographic parameters show significant differences (p-value < 0.05) before and during PTCA. A minute-by-minute statistical comparison of PTCA against the control situation showed that ischemia monitoring reached a sensitivity of 90.5% and a specificity of 92.6% at the 5th minute of PTCA when aSTVM and ST-TVD were used as classifiers. We conclude that the sensitivity and specificity of acute ischemia monitoring can be increased with the use of only two vectorcardiographic parameters. Hence, the proposed technique based on vectorcardiography could be used in addition to conventional ST-T analysis for better monitoring of ischemic patients. Copyright © 2014 Elsevier Ltd. All rights reserved.
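
The vector-magnitude-area construction behind parameters such as aSTVM and aTVM can be sketched as follows; the lead samples, window, and sampling rate are synthetic placeholders, not patient data or the authors' exact definitions:

```python
import math

def vector_magnitude(x, y, z):
    """Spatial vector magnitude of orthogonal VCG leads, sample by sample."""
    return [math.sqrt(a * a + b * b + c * c) for a, b, c in zip(x, y, z)]

def area(vm, fs, start, end):
    """Trapezoidal area of the magnitude signal over samples [start, end)."""
    seg = vm[start:end]
    return sum((seg[i] + seg[i + 1]) / 2.0 for i in range(len(seg) - 1)) / fs

# Synthetic ST-T window on three orthogonal leads (illustrative values only).
fs = 500.0                       # sampling rate, Hz
n = 200                          # 400 ms window
xs = [0.10 * math.sin(math.pi * i / n) for i in range(n)]
ys = [0.05 * math.sin(math.pi * i / n) for i in range(n)]
zs = [0.02 * math.sin(math.pi * i / n) for i in range(n)]
vm = vector_magnitude(xs, ys, zs)
a_stt = area(vm, fs, 0, n)       # aSTVM-like area over the ST-T window
```

Tracking such an area minute by minute during balloon inflation, and thresholding it, is the kind of two-parameter classifier the abstract reports sensitivity and specificity for.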

  17. Validated green high-performance liquid chromatographic methods for the determination of coformulated pharmaceuticals: a comparison with reported conventional methods.

    PubMed

    Elzanfaly, Eman S; Hegazy, Maha A; Saad, Samah S; Salem, Maissa Y; Abd El Fattah, Laila E

    2015-03-01

    The introduction of sustainable development concepts to analytical laboratories has recently gained interest; however, most conventional high-performance liquid chromatography methods consider neither the environmental effect of the chemicals used nor the amount of waste produced. The aim of this work was to prove that conventional methods can be replaced by greener ones with the same analytical parameters. The suggested methods were designed so that they neither use nor produce harmful chemicals and produce minimal waste, allowing routine analysis without harming the environment. This was achieved by using green mobile phases and short run times. Four mixtures were chosen as models for this study: clidinium bromide/chlordiazepoxide hydrochloride, phenobarbitone/pipenzolate bromide, mebeverine hydrochloride/sulpiride, and chlorphenoxamine hydrochloride/caffeine/8-chlorotheophylline, either as bulk powders or in their dosage forms. The methods were validated with respect to linearity, precision, accuracy, system suitability, and robustness. The developed methods were compared to the reported conventional high-performance liquid chromatography methods regarding their greenness profiles. The suggested methods were found to be greener and more time- and solvent-saving than the reported ones; hence, they can be used for routine analysis of the studied mixtures without harming the environment. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Filtering out signals of gauge-mediated supersymmetry breaking: Can we always eliminate conventional supersymmetric effects?

    NASA Astrophysics Data System (ADS)

    Mukhopadhyaya, Biswarup; Roy, Sourov

    1998-06-01

    We investigate the signal γγ + E̸ (two photons plus missing energy) at a high-energy linear e+e- collider, with a view to differentiating between gauge-mediated supersymmetry breaking and conventional supersymmetric models. Prima facie, there is considerable chance of confusion between the two scenarios if the assumption of gaugino mass unification is relaxed. We show that the use of polarized electron beams enables one to distinguish between the two schemes in most cases. There are some regions of the parameter space where this idea does not work, and we suggest some additional methods of distinction. We also analyze some signals in the gauge-mediated model coming from the pair production of the second-lightest neutralino.

  19. Superficial Burn Wound Healing with Intermittent Negative Pressure Wound Therapy Under Limited Access and Conventional Dressings

    PubMed Central

    Honnegowda, Thittamaranahalli Muguregowda; Padmanabha Udupa, Echalasara Govindarama; Rao, Pragna; Kumar, Pramod; Singh, Rekha

    2016-01-01

    BACKGROUND Thermal injury is associated with several biochemical and histopathological alterations in tissue. Analysis of these objective parameters in research and clinical settings is common for determining the healing rate of burn wounds. Negative pressure wound therapy has achieved wide success in treating chronic wounds. This study evaluated superficial burn wound healing with intermittent negative pressure wound therapy under limited access versus conventional dressings. METHODS A total of 50 patients were randomised into two equal groups: limited access and conventional dressing groups. Selected biochemical parameters such as hydroxyproline, hexosamine, total protein, antioxidants, malondialdehyde (MDA), wound surface pH, matrix metalloproteinase-2 (MMP-2), and nitric oxide (NO) were measured in the granulation tissue. Histopathologically, necrotic tissue, the amount of inflammatory infiltrate, angiogenesis, and extracellular matrix (ECM) deposition were studied to determine wound healing under intermittent negative pressure. RESULTS Patients treated with limited access showed a significant increase in mean hydroxyproline, hexosamine, total protein, reduced glutathione (GSH), and glutathione peroxidase (GPx), and a decrease in MDA, MMP-2, wound surface pH, and NO. Histopathologic study showed a significant difference after 10 days of treatment between the limited access and conventional dressing groups, median (Q1, Q3) = 3 (2, 4.25) vs 2 (1.75, 4). CONCLUSION Limited access was shown to exert its beneficial effects on wound healing by increasing ground substance and antioxidants; reducing MMP-2 activity, MDA, and NO; providing optimal pH; decreasing necrotic tissue and the amount of inflammatory infiltrate; and increasing ECM deposition and angiogenesis. PMID:27853690

  20. Computer-aided sperm analysis: a useful tool to evaluate patient's response to varicocelectomy.

    PubMed

    Ariagno, Julia I; Mendeluk, Gabriela R; Furlan, María J; Sardi, M; Chenlo, P; Curi, Susana M; Pugliese, Mercedes N; Repetto, Herberto E; Cohen, Mariano

    2017-01-01

    Preoperative and postoperative sperm parameter values from infertile men with varicocele were analyzed by computer-aided sperm analysis (CASA) to assess whether sperm characteristics improved after varicocelectomy. Semen samples from men with proven fertility (n = 38) and men with varicocele-related infertility (n = 61) were also analyzed. Conventional semen analysis was performed according to WHO (2010) criteria, and a CASA system was employed to assess kinetic parameters and sperm concentration. Seminal parameter values in the fertile group were far superior to those of the patients, both before and after surgery. No significant improvement in the percentage of normal sperm morphology (P = 0.10), sperm concentration (P = 0.52), total sperm count (P = 0.76), subjective motility (P = 0.97), or kinematics (P = 0.30) was observed after varicocelectomy when all groups were compared. Nor was significant improvement found in the percentage of normal sperm morphology (P = 0.91), sperm concentration (P = 0.10), total sperm count (P = 0.89), or percentage motility (P = 0.77) after varicocelectomy in paired comparisons of preoperative and postoperative data. Analysis of paired samples revealed that the total sperm count (P = 0.01) and most sperm kinetic parameters, curvilinear velocity (P = 0.002), straight-line velocity (P = 0.0004), average path velocity (P = 0.0005), linearity (P = 0.02), and wobble (P = 0.006), improved after surgery. CASA offers the potential for accurate quantitative assessment of each patient's response to varicocelectomy.
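
The kinematic parameters reported by CASA systems follow standard definitions that are easy to sketch; the head-position track below is synthetic and for illustration only:

```python
import math

def casa_kinematics(track, fs):
    """Basic CASA kinematic parameters from a 2-D sperm-head position track.
    track: list of (x, y) positions in micrometers; fs: frames per second.
    Returns (VCL, VSL, LIN) following common CASA definitions."""
    dur = (len(track) - 1) / fs
    path = sum(math.dist(track[i], track[i + 1]) for i in range(len(track) - 1))
    straight = math.dist(track[0], track[-1])
    vcl = path / dur                     # curvilinear velocity, um/s
    vsl = straight / dur                 # straight-line velocity, um/s
    lin = vsl / vcl if vcl else 0.0      # linearity, VSL/VCL
    return vcl, vsl, lin

# Synthetic zig-zag track: net forward motion with lateral head oscillation.
fs = 60.0
track = [(i * 1.0, 2.0 * (1 if i % 2 else -1)) for i in range(31)]
vcl, vsl, lin = casa_kinematics(track, fs)
```

Because the curvilinear path is always at least as long as the straight-line displacement, VCL ≥ VSL and LIN lies in (0, 1]; these are the velocity and linearity parameters the study tracks before and after varicocelectomy.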

  1. Determination of morphological parameters of biological cells by analysis of scattered-light distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burger, D.E.

    1979-11-01

    The extraction of morphological parameters from biological cells by analysis of light-scatter patterns is described. A light-scattering measurement system has been designed and constructed that allows one to visually examine and photographically record biological cells or cell models and to measure the light-scatter pattern of an individual cell or cell model. Using laser or conventional illumination, the imaging system consists of a modified microscope with a 35 mm camera attached to record the cell image or light-scatter pattern. Models of biological cells were fabricated. The dynamic range and angular distributions of light scattered from these models were compared to calculated distributions. Spectrum analysis techniques applied to the light-scatter data yield the sought-after morphological cell parameters. These results compared favorably with the shape parameters of the fabricated cell models, confirming the mathematical modeling procedure. For nucleated biological material, the correct nuclear and cell eccentricity as well as the nuclear and cytoplasmic diameters were determined. A method for comparing the flow equivalent of nuclear and cytoplasmic size to the actual dimensions is shown. This light-scattering experiment provides baseline information for automated cytology. In its present application, it involves correlating average size as measured in flow cytology to the actual dimensions determined from this technique.

  2. Time-Efficiency Analysis Comparing Digital and Conventional Workflows for Implant Crowns: A Prospective Clinical Crossover Trial.

    PubMed

    Joda, Tim; Brägger, Urs

    2015-01-01

    To compare time-efficiency in the production of implant crowns using a digital workflow versus the conventional pathway. This prospective clinical study used a crossover design that included 20 study participants receiving single-tooth replacements in posterior sites. Each patient received a customized titanium abutment plus a computer-aided design/computer-assisted manufacture (CAD/CAM) zirconia suprastructure (test group, digital workflow) and a standardized titanium abutment plus a porcelain-fused-to-metal crown (control group, conventional pathway). The start of the implant prosthetic treatment was established as the baseline. Time-efficiency analysis was defined as the primary outcome and was measured for every single clinical and laboratory work step in minutes. Statistical analysis was performed with the Wilcoxon rank sum test. All crowns could be provided within two clinical appointments, independent of the manufacturing process. The mean total production time, as the sum of clinical plus laboratory work steps, was significantly different: the mean ± standard deviation (SD) time was 185.4 ± 17.9 minutes for the digital workflow and 223.0 ± 26.2 minutes for the conventional pathway (P = .0001). Digital processing for overall treatment was therefore 16% faster. Detailed analysis of the clinical treatment revealed a significantly reduced mean ± SD chair time of 27.3 ± 3.4 minutes for the test group compared with 33.2 ± 4.9 minutes for the control group (P = .0001). Similar results were found for the mean laboratory work time, with a significant decrease to 158.1 ± 17.2 minutes for the test group vs 189.8 ± 25.3 minutes for the control group (P = .0001). Only a few studies have investigated efficiency parameters of digital workflows compared with conventional pathways in implant dental medicine. This investigation shows that the digital workflow seems to be more time-efficient than the established conventional production pathway for fixed implant-supported crowns. Both clinical chair time and laboratory manufacturing steps could be effectively shortened with the digital process of intraoral scanning plus CAD/CAM technology.

  3. Guide to the economic analysis of community energy systems

    NASA Astrophysics Data System (ADS)

    Pferdehirt, W. P.; Croke, K. G.; Hurter, A. P.; Kennedy, A. S.; Lee, C.

    1981-08-01

    This guidebook provides a framework for the economic analysis of community energy systems. The analysis facilitates comparison of competing community energy system configurations, as well as comparison with conventional energy systems. The various components of costs and revenues to be considered are discussed in detail. Computational procedures and accompanying worksheets are provided for calculating the net present value, simple and discounted payback periods, the rate of return, and the savings-to-investment ratio for proposed energy system alternatives. These computations are based on a projection of each system's costs and revenues over its economic lifetime. The guidebook also discusses the sensitivity of the results of the economic analysis to changes in various parameters and assumptions.
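
The computational procedures described above can be sketched directly; the discount rate and cash flows below are illustrative placeholders, not figures from the guidebook:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] occurs at the end of year t (t=0 today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def discounted_payback(rate, cashflows):
    """First year in which the cumulative discounted cash flow turns
    non-negative (None if the investment never pays back over the horizon)."""
    cum = 0.0
    for t, cf in enumerate(cashflows):
        cum += cf / (1 + rate) ** t
        if cum >= 0:
            return t
    return None

def savings_to_investment(rate, investment, yearly_savings):
    """SIR: present value of yearly savings divided by the initial investment."""
    pv = sum(s / (1 + rate) ** t for t, s in enumerate(yearly_savings, start=1))
    return pv / investment

# Illustrative community-energy project: 1000 up front, 220/year for 8 years.
flows = [-1000.0] + [220.0] * 8
project_npv = npv(0.05, flows)
payback_year = discounted_payback(0.05, flows)
sir = savings_to_investment(0.05, 1000.0, [220.0] * 8)
```

A sensitivity analysis of the kind the guidebook recommends amounts to re-running these functions over a grid of discount rates and cash-flow projections.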

  4. The landing flare: An analysis and flight-test investigation

    NASA Technical Reports Server (NTRS)

    Seckel, E.

    1975-01-01

    Results are given of an extensive investigation of conventional landing flares in general-aviation-type airplanes. A wide range of parameters influencing flare behavior was simulated in experimental landings in a variable-stability Navion. The most important feature of the flare is found to be the airplane's deceleration in the flare. Various effects on this deceleration are correlated in terms of the average flare load factor. Piloting technique is discussed extensively. Design criteria are presented.

  5. A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2017-01-01

    We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.

  6. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jufeng; Xia, Bing; Shang, Yunlong

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
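
The rest-period fitting step can be illustrated with a first-order RC sketch (a simplification of the paper's model; the OCV, amplitude, and time constant below are invented): after a current interrupt, the terminal voltage relaxes as V(t) = OCV - ΔV·e^(-t/τ), and a log-linear least-squares fit recovers ΔV and τ:

```python
import math

def fit_rc_relaxation(t, v, ocv):
    """Fit V(t) = OCV - dV*exp(-t/tau) to rest-period data by linear least
    squares on ln(OCV - V(t)); assumes OCV is known, e.g. from the fully
    rested terminal voltage. Returns (dV, tau)."""
    ys = [math.log(ocv - vi) for vi in v]
    n = len(t)
    st, sy = sum(t), sum(ys)
    stt = sum(ti * ti for ti in t)
    sty = sum(ti * yi for ti, yi in zip(t, ys))
    slope = (n * sty - st * sy) / (n * stt - st * st)
    intercept = (sy - slope * st) / n
    return math.exp(intercept), -1.0 / slope

# Synthetic rest-period voltage: OCV = 3.6 V, dV = 0.05 V, tau = 30 s,
# sampled at 1 Hz for two minutes.
ts = [i * 1.0 for i in range(120)]
vs = [3.6 - 0.05 * math.exp(-ti / 30.0) for ti in ts]
dV, tau = fit_rc_relaxation(ts, vs, ocv=3.6)
```

The paper's "unsaturated phenomenon" corresponds to a long-term RC branch whose relaxation has not settled within the rest window, which is why its improved initial-voltage expressions matter; this sketch assumes a single, fully observable time constant.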

  7. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE PAGES

    Yang, Jufeng; Xia, Bing; Shang, Yunlong; ...

    2016-12-22

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.

  8. Physically-Based Probabilistic Seismic Hazard Analysis Using Broad-Band Ground Motion Simulation: a Case Study for Prince Islands Fault, Marmara Sea

    NASA Astrophysics Data System (ADS)

    Mert, A.

    2016-12-01

    The main motivation of this study is the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in İstanbul. This study provides the results of a physically based probabilistic seismic hazard analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, due to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We include the effects of all earthquakes of considerable magnitude. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real small-magnitude earthquakes recorded by a local seismic array are used as empirical Green's functions (EGF). For frequencies below 0.5 Hz, the simulations are obtained using synthetic Green's functions (SGF), synthetic seismograms calculated by an explicit 2D/3D elastic finite-difference wave propagation routine. Using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we provide a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here follows the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes the full rupture of earthquakes along faults. Further, conventional PSHA predicts ground-motion parameters using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for earthquakes of all magnitudes to obtain ground-motion parameters. PSHA results are produced for 2%, 10%, and 50% hazard levels for all studied sites in the Marmara region.
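
The aggregation step shared by conventional and physically based PSHA can be sketched as follows. Here lognormal exceedance probabilities stand in for whichever ground-motion model is used (empirical attenuation relations or statistics of simulated seismograms); all rates, medians, and sigmas are illustrative:

```python
import math

def exceed_prob(level, median, beta):
    """P(IM > level) for a lognormal intensity measure with the given median
    and logarithmic standard deviation beta."""
    z = (math.log(level) - math.log(median)) / beta
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def hazard_curve(levels, scenarios):
    """Annual exceedance rate lambda(IM > level), summed over scenarios;
    each scenario is (annual_rate, median_IM, beta)."""
    return [sum(rate * exceed_prob(lv, med, beta)
                for rate, med, beta in scenarios) for lv in levels]

def poe(lam, years):
    """Probability of at least one exceedance in a time window (Poisson)."""
    return 1.0 - math.exp(-lam * years)

# Illustrative rupture scenarios: (annual rate, median PGA in g, sigma_ln).
scenarios = [(0.01, 0.30, 0.6), (0.002, 0.50, 0.6)]
levels = [0.1, 0.2, 0.4, 0.8]
lams = hazard_curve(levels, scenarios)
p50 = [poe(lam, 50.0) for lam in lams]   # 50-year exceedance probabilities
```

Reading the ground-motion level at a 2%, 10%, or 50% probability of exceedance in 50 years off such a curve gives the hazard levels quoted in the abstract.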

  9. Sleep Stage Transition Dynamics Reveal Specific Stage 2 Vulnerability in Insomnia.

    PubMed

    Wei, Yishul; Colombo, Michele A; Ramautar, Jennifer R; Blanken, Tessa F; van der Werf, Ysbrand D; Spiegelhalder, Kai; Feige, Bernd; Riemann, Dieter; Van Someren, Eus J W

    2017-09-01

    Objective sleep impairments in insomnia disorder (ID) are insufficiently understood. The present study evaluated whether whole-night sleep stage dynamics derived from polysomnography (PSG) differ between people with ID and matched controls, and whether sleep stage dynamic features discriminate them better than conventional sleep parameters. Eighty-eight participants aged 21-70 years, including 46 with ID and 42 age- and sex-matched controls without sleep complaints, were recruited through www.sleepregistry.nl and completed two nights of laboratory PSG. Data from 100 people with ID and 100 age- and sex-matched controls from a previously reported study were used to validate the generalizability of the findings. The second night was used to obtain, in addition to conventional sleep parameters, probabilities of transitions between stages and bout duration distributions of each stage. Group differences were evaluated with nonparametric tests. People with ID showed higher empirical probabilities of transitioning from stage N2 to the lighter sleep stage N1 or wakefulness, and a faster-decaying stage N2 bout survival function. The increased transition probability from stage N2 to stage N1 discriminated people with ID better than any of their deviations in conventional sleep parameters, including less total sleep time, lower sleep efficiency, more stage N1, and more wake after sleep onset. Moreover, adding this transition probability significantly improved the discriminating power of a multiple logistic regression model based on conventional sleep parameters. Quantification of sleep stage dynamics revealed a particular vulnerability of stage N2 in insomnia. This feature characterizes insomnia better than, and independently of, any conventional sleep parameter. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
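
Empirical stage-transition probabilities of the kind analyzed above can be computed directly from an epoch-by-epoch hypnogram; the fragment below is synthetic, not study data:

```python
from collections import Counter, defaultdict

def transition_probabilities(hypnogram):
    """Empirical P(next stage | current stage) from an epoch-by-epoch
    hypnogram (e.g. 30-s epochs labeled W, N1, N2, N3, R)."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(hypnogram, hypnogram[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

# Synthetic hypnogram fragment (illustrative, not patient data).
hyp = ["W", "N1", "N2", "N2", "N1", "N2", "N2", "N3", "N2", "W",
       "N1", "N2", "N2", "N2", "R", "R", "N2", "N2", "N1", "N2"]
P = transition_probabilities(hyp)
```

The study's key feature is then simply P["N2"]["N1"], the fraction of N2 epochs followed by N1, which was elevated in people with ID; bout survival functions come from the run lengths of each stage in the same sequence.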

  10. Transumbilical Thoracoscopy Versus Conventional Thoracoscopy for Lung Wedge Resection: Safety and Efficacy in a Live Canine Model.

    PubMed

    Chen, Tzu-Ping; Yen-Chu; Wu, Yi-Cheng; Yeh, Chi-Ju; Liu, Chien-Ying; Hsieh, Ming-Ju; Yuan, Hsu-Chia; Ko, Po-Jen; Liu, Yun-Hen

    2015-12-01

    Transumbilical single-port surgery has been associated with less postoperative pain and offers better cosmetic outcomes than conventional 3-port laparoscopic surgery. This study compares the safety and efficacy of transumbilical thoracoscopy and conventional thoracoscopy for lung wedge resection. The animals (n = 16) were randomly assigned to the transumbilical thoracoscopic approach group (n = 8) or conventional thoracoscopic approach group (n = 8). Transumbilical lung resection was performed via an umbilical incision and a diaphragmatic incision. In the conventional thoracoscopic group, lung resection was completed through a thoracic incision. For both procedures, we compared the surgical outcomes, for example, operating time and operative complications; physiologic parameters, for example, respiratory rate and body temperature; inflammatory parameters, for example, white blood cell count; and pulmonary parameters, for example, arterial blood gas levels. The animals were euthanized 2 weeks after the surgery for gross and histologic evaluations. The lung wedge resection was successfully performed in all animals. There was no significant difference in the mean operating times or complications between the transumbilical and the conventional thoracoscopic approach groups. With regard to the physiologic impact of the surgeries, the transumbilical approach was associated with significant elevations in body temperature on postoperative day 1, when compared with the standard thoracoscopic approach. This study suggests that both approaches for performing lung wedge resection were comparable in efficacy and postoperative complications. © The Author(s) 2014.

  11. No benefit of patient-specific instrumentation in TKA on functional and gait outcomes: a randomized clinical trial.

    PubMed

    Abdel, Matthew P; Parratte, Sébastien; Blanc, Guillaume; Ollivier, Matthieu; Pomero, Vincent; Viehweger, Elke; Argenson, Jean-Noël A

    2014-08-01

    Although some clinical reports suggest patient-specific instrumentation in TKA may improve alignment, reduce surgical time, and lower hospital costs, it is unknown whether it improves pain- and function-related outcomes and gait. We hypothesized that TKA performed with patient-specific instrumentation would improve patient-reported outcomes, measured by validated scoring tools, and level-walking gait, as ascertained with three-dimensional (3-D) analysis, compared with conventional instrumentation 3 months after surgery. We randomized 40 patients into two groups using either patient-specific instrumentation or conventional instrumentation. Patients were evaluated preoperatively and 3 months after surgery. Assessment tools included subjective functional outcome and quality-of-life (QOL) scores using validated questionnaires (New Knee Society Score© [KSS], Knee Injury and Osteoarthritis Outcome Score [KOOS], and SF-12). In addition, gait was evaluated with a 3-D system during level walking. The study was powered a priori at 90% to detect a difference in walking speed of 0.1 m/second, which was considered a clinically important difference, and in a post hoc analysis at 80% to detect a difference of 10 points in KSS. There were improvements from preoperatively to 3 months postoperatively in functional scores, QOL, and knee kinematic and kinetic gait parameters during level walking. However, there was no difference between the patient-specific instrumentation and conventional instrumentation groups in KSS, KOOS, SF-12, or 3-D gait parameters. Our observations suggest that patient-specific instrumentation does not confer a substantial advantage in early functional or gait outcomes after TKA. Differences may yet emerge in the intermediate follow-up period from 6 months to 1 year postoperatively, which this study cannot predict; however, the goal of the study was to investigate the recovery period, as early pain and functional outcomes are becoming increasingly important to patients and surgeons. Level I, therapeutic study. See the Instructions to Authors for a complete description of levels of evidence.

  12. A New Method for Extubation: Comparison between Conventional and New Methods.

    PubMed

    Yousefshahi, Fardin; Barkhordari, Khosro; Movafegh, Ali; Tavakoli, Vida; Paknejad, Omalbanin; Bina, Payvand; Yousefshahi, Hadi; Sheikh Fathollahi, Mahmood

    2012-08-01

    Extubation is associated with the risk of complications such as accumulation of secretions above the endotracheal tube cuff, eventual atelectasis following a reduction in pulmonary volumes caused by the lack of physiological positive end-expiratory pressure, and intra-tracheal suction. In order to reduce these complications, and based on basic physiological principles, a new practical extubation method is presented in this article. The study was designed as a six-month prospective cross-sectional clinical trial. Two hundred fifty-seven patients undergoing coronary artery bypass grafting (CABG) were divided into two groups based on their scheduled surgery time. The first group underwent the conventional extubation method, while the other group was extubated according to the newly described method. Arterial blood gas (ABG) analysis results before and after extubation were compared between the two groups to determine the effect of the extubation method on the ABG parameters and the oxygenation profile. At all time intervals, the partial pressure of oxygen in arterial blood / fraction of inspired oxygen (PaO(2)/FiO(2)) ratio in the new-method group was improved compared with that in the conventional-method group; some differences, such as PaO(2)/FiO(2) four hours after extubation, were statistically significant (p value = 0.0063). The new extubation method improved some respiratory parameters and thus attenuated oxygenation complications and amplified oxygenation after extubation.

  13. [COMPARISON OF EFFECTIVENESS AND CHANGE OF SAGITTAL SPINO-PELVIC PARAMETERS BETWEEN MINIMALLY INVASIVE TRANSFORAMINAL AND CONVENTIONAL OPEN POSTERIOR LUMBAR INTERBODY FUSIONS IN TREATMENT OF LOW-DEGREE ISTHMIC LUMBAR SPONDYLOLISTHESIS].

    PubMed

    Sun, Xin; Zeng, Rong; Li, Guangsheng; Wei, Bo; Hu, Zibing; Lin, Hao; Chen, Guanghua; Chen, Siyuan; Sun, Jiecong

    2015-12-01

    To compare the effectiveness and changes of sagittal spino-pelvic parameters between minimally invasive transforaminal lumbar interbody fusion and conventional open posterior lumbar interbody fusion in treatment of the low-degree isthmic lumbar spondylolisthesis. Between May 2012 and May 2013, 86 patients with single segmental isthmic lumbar spondylolisthesis (Meyerding degree I or II) were treated by minimally invasive transforaminal lumbar interbody fusion (minimally invasive group) in 39 cases, and by open posterior lumbar interbody fusion in 47 cases (open group). There was no significant difference in gender, age, disease duration, degree of lumbar spondylolisthesis, preoperative visual analogue scale (VAS) score, and Oswestry disability index (ODI) between 2 groups (P>0.05). The following sagittal spino-pelvic parameters were compared between 2 groups before and after operation: the percentage of slipping (PS), intervertebral height, angle of slip (AS), thoracolumbar junction (TLJ), thoracic kyphosis (TK), lumbar lordosis (LL), sagittal vertical axis (SVA), spino-sacral angle (SSA), sacral slope (SS), pelvic tilt (PT), and pelvic incidence (PI). Pearson correlation analysis of the changes between pre- and post-operation was done. Primary healing of incision was obtained in all patients of 2 groups. The postoperative hospital stay of minimally invasive group [(5.1 ± 1.6) days] was significantly shorter than that of open group [(7.2 ± 2.1) days] (t = 2.593, P = 0.017). The patients were followed up 11-20 months (mean, 15 months). The reduction rate was 68.53% ± 20.52% in minimally invasive group, and was 64.21% ± 30.21% in open group, showing no significant difference (t = 0.725, P = 0.093). The back and leg pain VAS scores, and ODI at 3 months after operation were significantly reduced when compared with preoperative ones (P < 0.05), but no significant difference was found between 2 groups (P > 0.05). 
The other postoperative sagittal spino-pelvic parameters were significantly improved (P < 0.05) except PI (P > 0.05), with no significant difference between the 2 groups (P > 0.05). Correlation analysis showed that the ODI value was related to SVA, SSA, PT, and LL (P < 0.05). Both minimally invasive transforaminal lumbar interbody fusion and conventional open posterior lumbar interbody fusion can significantly improve the sagittal spino-pelvic parameters in the treatment of low-degree isthmic lumbar spondylolisthesis. The reconstruction of SVA, SSA, PT, and LL is related to the quality of life.

  14. Variation among conventional cultivars could be used as a criterion for environmental safety assessment of Bt rice on nontarget arthropods

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Dang, Cong; Chang, Xuefei; Tian, Junce; Lu, Zengbin; Chen, Yang; Ye, Gongyin

    2017-02-01

The current difficulty facing risk evaluations of Bacillus thuringiensis (Bt) crops on nontarget arthropods (NTAs) is the lack of criteria for determining what represents unacceptable risk. In this study, we investigated the biological parameters in the laboratory and the field population abundance of Nilaparvata lugens (Hemiptera: Delphacidae) on two Bt rice lines and the non-Bt parent, together with 14 other conventional rice cultivars. Significant differences were found in nymphal duration and fecundity of N. lugens fed on Bt rice KMD2, as well as in field population density on 12 October, compared with the non-Bt parent. However, compared with the variation among conventional rice cultivars, the variation of each parameter between Bt rice and the non-Bt parent was much smaller, as is readily seen from low-high bar graphs and from the coefficient of variation (CV). The variation among conventional cultivars is proposed as a criterion for the safety assessment of Bt rice on NTAs, particularly when statistically significant differences in several parameters are found between Bt rice and its non-Bt parent. The coefficient of variation is suggested as a promising parameter for the ecological risk judgement of IRGM rice on NTAs.

  15. Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package

    NASA Astrophysics Data System (ADS)

    Cheng, L.; AghaKouchak, A.; Gilleland, E.

    2013-12-01

Numerous studies show that climatic extremes have increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimating return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for the analysis of climate extremes under both stationary and nonstationary assumptions. The Nonstationary Extreme Value Analysis package (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC), which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach, and has the advantages of simplicity, speed of calculation, and convergence over conventional MCMC. NEVA also provides confidence intervals and uncertainty bounds for the estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization, and visualization, explicitly designed to facilitate the analysis of extremes in the geosciences. The generalized input and output files of this software package make it attractive for users across different fields. Both the stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
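As an illustrative sketch of the return-level concept underlying NEVA (not the package's own MATLAB/Bayesian code), the snippet below fits a stationary GEV distribution to synthetic annual maxima and computes a 100-year return level; the data and parameter values are invented for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic annual maxima standing in for observed climate extremes.
rng = np.random.default_rng(0)
annual_max = stats.genextreme.rvs(c=-0.1, loc=30.0, scale=5.0,
                                  size=80, random_state=rng)

# Fit a stationary GEV by maximum likelihood and estimate the T-year
# return level: the value exceeded with probability 1/T in any year.
shape, loc, scale = stats.genextreme.fit(annual_max)
T = 100
return_level = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
print(f"estimated 100-year return level: {return_level:.1f}")
```

NEVA itself samples the GEV parameters with DE-MC and reports uncertainty bounds from the posterior, rather than the single point estimate shown here.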

  16. Radar fall detection using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over conventional feature extraction methods.
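A minimal sketch of the eigen-image idea (with randomly generated stand-in data rather than radar measurements): flattened motion "images" are projected onto their principal components and a new image is classified by the nearest class centroid in the reduced space:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for flattened 8x8 time-frequency "images" of two motion
# classes (fall vs. non-fall); real radar data would replace these.
falls     = rng.normal( 1.0, 0.3, size=(20, 64))
non_falls = rng.normal(-1.0, 0.3, size=(20, 64))
X = np.vstack([falls, non_falls])
y = np.array([1] * 20 + [0] * 20)

# Principal components ("eigen images") of the mean-centred training set.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
eigen_images = Vt[:5]                     # keep the top 5 components

# Class centroids in the reduced space.
Z = (X - mean) @ eigen_images.T
centroids = {c: Z[y == c].mean(axis=0) for c in (0, 1)}

def classify(img):
    """Project a new image onto the eigen images; nearest-centroid label."""
    z = eigen_images @ (img - mean)
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))
```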

  17. A user oriented computer program for the analysis of microwave mixers, and a study of the effects of the series inductance and diode capacitance on the performance of some simple mixers

    NASA Technical Reports Server (NTRS)

    Siegel, P. H.; Kerr, A. R.

    1979-01-01

    A user oriented computer program for analyzing microwave and millimeter wave mixers with a single Schottky barrier diode of known I-V and C-V characteristics is described. The program first performs a nonlinear analysis to determine the diode conductance and capacitance waveforms produced by the local oscillator. A small signal linear analysis is then used to find the conversion loss, port impedances, and input noise temperature of the mixer. Thermal noise from the series resistance of the diode and shot noise from the periodically pumped current in the diode conductance are considered. The effects of the series inductance and diode capacitance on the performance of some simple mixer circuits using a conventional Schottky diode, a Schottky diode in which there is no capacitance variation, and a Mott diode are studied. It is shown that the parametric effects of the voltage dependent capacitance of a conventional Schottky diode may be either detrimental or beneficial depending on the diode and circuit parameters.
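The nonlinear step the program performs, determining the conductance waveform of a diode pumped by the local oscillator, can be sketched generically (illustrative diode constants, not the paper's program):

```python
import numpy as np

# Exponential Schottky I-V law: I(V) = Is * (exp(V/(n*Vt)) - 1), so the
# small-signal conductance is g(V) = dI/dV = Is/(n*Vt) * exp(V/(n*Vt)).
Is, n, Vt = 1e-12, 1.1, 0.0259        # saturation current (A), ideality, kT/q (V)

# One local-oscillator period of the diode voltage: DC bias plus LO swing.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
v_lo = 0.65 + 0.05 * np.cos(2 * np.pi * t)

# Periodic conductance waveform produced by the LO pump (siemens).
g = (Is / (n * Vt)) * np.exp(v_lo / (n * Vt))

# Fourier coefficients of g feed a small-signal conversion analysis.
g_harmonics = np.fft.rfft(g) / len(g)
```

In the program described above, the capacitance waveform is obtained analogously from the C-V characteristic, and a small-signal linear analysis then yields the conversion loss, port impedances, and input noise temperature.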

  18. Generalized modeling of the fractional-order memcapacitor and its character analysis

    NASA Astrophysics Data System (ADS)

    Guo, Zhang; Si, Gangquan; Diao, Lijie; Jia, Lixin; Zhang, Yanbin

    2018-06-01

Memcapacitor is a new type of memory device generalized from the memristor. This paper proposes a generalized fractional-order memcapacitor model by introducing fractional calculus into the model. The generalized formulas are studied and two fractional-order parameters α and β are introduced, where α mostly affects the fractional calculus value of the charge q within the generalized Ohm's law, and β generalizes the state equation, which simulates the physical mechanism of a memcapacitor, into the fractional sense. This model reduces to the conventional memcapacitor for α = 1, β = 0 and to the conventional memristor for α = 0, β = 1. A numerical analysis of the fractional-order memcapacitor is then carried out, and the characteristics and output behaviors of the fractional-order memcapacitor driven by a sinusoidal charge are derived. The analysis shows that there are four basic v-q and v-i curve patterns when the fractional orders α and β each equal 0 or 1; moreover, all v-q and v-i curves of the other fractional-order models are transition curves between these four basic patterns.
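The fractional orders enter through fractional derivatives. A generic Grünwald-Letnikov discretization (an illustration of the fractional calculus involved, not the paper's specific memcapacitor model) shows how an order-α derivative interpolates between the identity (α = 0) and the ordinary derivative (α = 1):

```python
import numpy as np

def gl_derivative(f_vals, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative of a
    uniformly sampled signal (zero history assumed before the first sample)."""
    n = len(f_vals)
    # Recursive binomial weights: w_0 = 1, w_k = w_{k-1} * (1 - (alpha+1)/k).
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return np.array([np.dot(w[: i + 1], f_vals[i::-1])
                     for i in range(n)]) / h ** alpha

h = 0.01
t = np.arange(0.0, 10.0, h)
q = np.sin(t)                        # a sinusoidal charge waveform
d1    = gl_derivative(q, 1.0, h)     # alpha = 1: ordinary derivative, ~cos(t)
dhalf = gl_derivative(q, 0.5, h)     # alpha = 0.5: half-derivative, in between
```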

  19. A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks

    PubMed Central

    Hammad, Karim; El Bakly, Ahmed M.

    2018-01-01

    A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem—subject to various Quality-of-Service (QoS) constraints—represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms. PMID:29509760

  20. A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks.

    PubMed

    Ramadan, Rahab M; Gasser, Safa M; El-Mahallawy, Mohamed S; Hammad, Karim; El Bakly, Ahmed M

    2018-01-01

    A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem-subject to various Quality-of-Service (QoS) constraints-represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms.
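A toy sketch of the memetic idea, a genetic algorithm combined with local search and a self-adjusting mutation rate, on a simple continuous test function; the paper's actual encoding, update scheme, and QoS objectives are not reproduced here:

```python
import random

def memetic_minimize(f, dim=8, pop_size=30, gens=60, seed=42):
    """Toy memetic algorithm: GA with an adaptive mutation rate plus a
    greedy local-search step (the "meme") applied to the best individual."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    mut_rate, best_prev = 0.3, float("inf")
    for _ in range(gens):
        pop.sort(key=f)
        # Adaptive mutation: shrink while improving, grow when stalled.
        mut_rate = mut_rate * 0.9 if f(pop[0]) < best_prev else min(0.5, mut_rate * 1.1)
        best_prev = f(pop[0])
        # The "meme": greedy coordinate-wise local search on the elite.
        elite = pop[0][:]
        for i in range(dim):
            for step in (0.1, -0.1):
                trial = elite[:]
                trial[i] += step
                if f(trial) < f(elite):
                    elite = trial
        pop[0] = elite
        # Refill the lower half with averaged-crossover + mutated children.
        half = pop_size // 2
        children = []
        while len(children) < pop_size - half:
            a, b = rng.sample(pop[:half], 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]
            if rng.random() < mut_rate:
                child[rng.randrange(dim)] += rng.gauss(0.0, 1.0)
            children.append(child)
        pop = pop[:half] + children
    return min(pop, key=f)

# Toy objective standing in for a routing cost function.
sphere = lambda v: sum(x * x for x in v)
best = memetic_minimize(sphere)
```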

  1. Parallel simulations of Grover's algorithm for closest match search in neutron monitor data

    NASA Astrophysics Data System (ADS)

    Kussainov, Arman; White, Yelena

We are studying parallel implementations of Grover's closest match search algorithm for neutron monitor data analysis. This includes data formatting and the matching of quantum parameters to the conventional structures of a chosen programming language and the selected experimental data type. We have employed several workload distribution models based on the acquired data and search parameters. As a result of these simulations, we have an understanding of potential problems that may arise during configuration of real quantum computational devices and the way they could run tasks in parallel. The work was supported by the Science Committee of the Ministry of Science and Education of the Republic of Kazakhstan Grant #2532/GF3.
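As a classical statevector sketch of the amplitude-amplification loop being simulated (the generic Grover iteration with a single marked item, not the group's parallel implementation):

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Classical statevector simulation of Grover's search for one marked
    basis state out of N = 2**n_qubits."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))           # uniform superposition
    iters = int(np.floor(np.pi / 4 * np.sqrt(N)))  # near-optimal iteration count
    for _ in range(iters):
        state[marked] *= -1.0                      # oracle: phase-flip marked item
        state = 2.0 * state.mean() - state         # diffusion: invert about mean
    return int(np.argmax(state ** 2)), iters       # most probable outcome

found, iters = grover_search(4, marked=5)
print(found, iters)
```

For closest-match search, the oracle would instead mark states sufficiently close to the query value.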

  2. A Foot-Arch Parameter Measurement System Using a RGB-D Camera.

    PubMed

    Chun, Sungkuk; Kong, Sejin; Mun, Kyung-Ryoul; Kim, Jinwook

    2017-08-04

The conventional method of measuring foot-arch parameters is highly dependent on the measurer's skill level, so accurate measurements are difficult to obtain. To solve this problem, we propose an autonomous geometric foot-arch analysis platform that is capable of capturing the sole of the foot and yields three foot-arch parameters: arch index (AI), arch width (AW) and arch height (AH). The proposed system captures 3D geometric and color data on the plantar surface of the foot in a static standing pose using a commercial RGB-D camera. It detects the region of the foot surface in contact with the footplate by applying clustering and Markov random field (MRF)-based image segmentation methods. The system computes the foot-arch parameters by analyzing the 2/3D shape of the contact region. Validation experiments were carried out to assess the accuracy and repeatability of the system. The average errors for AI, AW, and AH estimation on 99 data sets collected from 11 subjects over 3 days were -0.17%, 0.95 mm, and 0.52 mm, respectively. Reliability and statistical analyses of the estimated foot-arch parameters, along with assessments of robustness to changes in the MRF weights and of processing time, were also performed to show the feasibility of the system.

  3. A Foot-Arch Parameter Measurement System Using a RGB-D Camera

    PubMed Central

    Kong, Sejin; Mun, Kyung-Ryoul; Kim, Jinwook

    2017-01-01

The conventional method of measuring foot-arch parameters is highly dependent on the measurer’s skill level, so accurate measurements are difficult to obtain. To solve this problem, we propose an autonomous geometric foot-arch analysis platform that is capable of capturing the sole of the foot and yields three foot-arch parameters: arch index (AI), arch width (AW) and arch height (AH). The proposed system captures 3D geometric and color data on the plantar surface of the foot in a static standing pose using a commercial RGB-D camera. It detects the region of the foot surface in contact with the footplate by applying clustering and Markov random field (MRF)-based image segmentation methods. The system computes the foot-arch parameters by analyzing the 2/3D shape of the contact region. Validation experiments were carried out to assess the accuracy and repeatability of the system. The average errors for AI, AW, and AH estimation on 99 data sets collected from 11 subjects over 3 days were −0.17%, 0.95 mm, and 0.52 mm, respectively. Reliability and statistical analyses of the estimated foot-arch parameters, along with assessments of robustness to changes in the MRF weights and of processing time, were also performed to show the feasibility of the system. PMID:28777349
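The arch index mentioned above is conventionally defined as the contact area of the middle third of the footprint (toes excluded) divided by the total contact area. A minimal sketch on a synthetic binary footprint mask (illustrative geometry, not the system's RGB-D pipeline):

```python
import numpy as np

def arch_index(contact_mask):
    """Arch index on a binary footprint mask (rows run heel -> forefoot,
    toes assumed already excluded): contact area of the middle third of
    the foot length divided by the whole contact area."""
    rows = np.flatnonzero(contact_mask.any(axis=1))
    length = rows[-1] - rows[0] + 1
    third = length // 3
    mid = contact_mask[rows[0] + third : rows[0] + 2 * third]
    return mid.sum() / contact_mask.sum()

# Toy footprint: full-width heel and forefoot, narrow midfoot (high arch).
foot = np.zeros((30, 10), dtype=bool)
foot[0:10, :] = True          # heel
foot[10:20, 4:6] = True       # narrow midfoot
foot[20:30, :] = True         # forefoot
print(round(arch_index(foot), 3))  # → 0.091
```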

  4. Effect of pulsed current GTA welding parameters on the fusion zone microstructure of AA 6061 aluminium alloy

    NASA Astrophysics Data System (ADS)

    Kumar, T. Senthil; Balasubramanian, V.; Babu, S.; Sanavullah, M. Y.

    2007-08-01

    AA6061 aluminium alloy (Al-Mg-Si alloy) has gathered wide acceptance in the fabrication of food processing equipment, chemical containers, passenger cars, road tankers, and railway transport systems. The preferred process for welding these aluminium alloys is frequently Gas Tungsten Arc (GTA) welding due to its comparatively easy applicability and lower cost. In the case of single pass GTA welding of thinner sections of this alloy, the pulsed current has been found beneficial due to its advantages over the conventional continuous current processes. The use of pulsed current parameters has been found to improve the mechanical properties of the welds compared to those of continuous current welds of this alloy due to grain refinement occurring in the fusion zone. In this investigation, an attempt has been made to develop a mathematical model to predict the fusion zone grain diameter incorporating pulsed current welding parameters. Statistical tools such as design of experiments, analysis of variance, and regression analysis are used to develop the mathematical model. The developed model can be effectively used to predict the fusion grain diameter at a 95% confidence level for the given pulsed current parameters. The effect of pulsed current GTA welding parameters on the fusion zone grain diameter of AA 6061 aluminium alloy welds is reported in this paper.
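The modelling approach described, an ordinary-least-squares regression model over designed-experiment factors, can be sketched as follows; the factor names, true coefficients, and noise level below are hypothetical placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical coded factor settings and grain-diameter responses; the
# paper's actual design points and coefficients are not reproduced here.
pulse_freq = rng.uniform(-1.0, 1.0, 40)
pulse_curr = rng.uniform(-1.0, 1.0, 40)
grain = (60.0 - 8.0 * pulse_freq - 5.0 * pulse_curr
         + 3.0 * pulse_freq * pulse_curr + rng.normal(0.0, 1.0, 40))

# Second-order response-surface model fitted by ordinary least squares.
X = np.column_stack([np.ones_like(pulse_freq), pulse_freq, pulse_curr,
                     pulse_freq * pulse_curr, pulse_freq ** 2, pulse_curr ** 2])
coef, *_ = np.linalg.lstsq(X, grain, rcond=None)

def predict(f, c):
    """Predicted grain diameter at coded factor levels (f, c)."""
    return np.array([1.0, f, c, f * c, f ** 2, c ** 2]) @ coef
```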

  5. Synthesis of Monodisperse Chitosan Nanoparticles and in Situ Drug Loading Using Active Microreactor.

    PubMed

    Kamat, Vivek; Marathe, Ila; Ghormade, Vandana; Bodas, Dhananjay; Paknikar, Kishore

    2015-10-21

    Chitosan nanoparticles are promising drug delivery vehicles. However, the conventional method of unregulated mixing during ionic gelation limits their application because of heterogeneity in size and physicochemical properties. Therefore, a detailed theoretical analysis of conventional and active microreactor models was simulated. This led to design and fabrication of a polydimethylsiloxane microreactor with magnetic micro needles for the synthesis of monodisperse chitosan nanoparticles. Chitosan nanoparticles synthesized conventionally, using 0.5 mg/mL chitosan, were 250 ± 27 nm with +29.8 ± 8 mV charge. Using similar parameters, the microreactor yielded small size particles (154 ± 20 nm) at optimized flow rate of 400 μL/min. Further optimization at 0.4 mg/mL chitosan concentration yielded particles (130 ± 9 nm) with higher charge (+39.8 ± 5 mV). The well-controlled microreactor-based mixing generated highly monodisperse particles with tunable properties including antifungal drug entrapment (80%), release rate, and effective activity (MIC, 1 μg/mL) against Candida.

  6. The utility of indocyanine green fluorescence imaging during robotic adrenalectomy.

    PubMed

    Colvin, Jennifer; Zaidi, Nisar; Berber, Eren

    2016-08-01

Indocyanine green (ICG) has been used for medical imaging since the 1950s, but has more recently become available for use in minimally invasive surgery owing to improvements in technology. This study investigates the use of ICG fluorescence to guide an accurate dissection by delineating the borders of adrenal tumors during robotic adrenalectomy (RA). This prospective study compared the conventional robotic view with ICG fluorescence imaging in 40 consecutive patients undergoing RA. Independent, non-blinded observers assessed how accurately ICG fluorescence delineated the borders of adrenal tumors compared to the conventional robotic view. A total of 40 patients underwent 43 adrenalectomies. ICG imaging was superior, equivalent, or inferior to the conventional robotic view in 46.5% (n = 20), 25.6% (n = 11), and 27.9% (n = 12) of the procedures, respectively. On univariate analysis, the only parameter that predicted the superiority of ICG imaging over the conventional robotic view was tumor type, with adrenocortical tumors being delineated more accurately on ICG imaging. This study demonstrates the utility of ICG to guide the dissection and removal of adrenal tumors during RA. A simple reproducible method is reported, with a detailed description of the utility based on tumor type, approach, and side. J. Surg. Oncol. 2016;114:153-156. © 2016 Wiley Periodicals, Inc.

  7. Simulation and economic analysis of a liquid-based solar system with a direct-contact liquid-liquid heat exchanger, in comparison to a system with a conventional heat exchanger

    NASA Astrophysics Data System (ADS)

    Brothers, P.; Karaki, S.

Using a solar computer simulation package called TRNSYS, simulations of the direct contact liquid-liquid heat exchanger (DCLLHE) solar system and a system with a conventional shell-and-tube heat exchanger were developed, based in part on performance measurements of the actual systems. The two systems were simulated over a full year on an hour-by-hour basis at five locations: Boston, Massachusetts; Charleston, South Carolina; Dodge City, Kansas; Madison, Wisconsin; and Phoenix, Arizona. Typically the direct-contact system supplies slightly more heat for domestic hot water and space heating in all locations, and about 5 percentage points more cooling, as compared to the conventional system. Using a common set of economic parameters and the appropriate federal and state income tax credits, as well as property tax legislation for solar systems in the corresponding states, the results of the study indicate that for heating-only systems, the DCLLHE system has a slight life-cycle cost disadvantage compared to the conventional system. For combined solar heating and cooling systems, the DCLLHE has a slight life-cycle cost advantage which varies with location and amounts to a one to three percent difference from the conventional system.

  8. Life-cycle greenhouse gas emissions of shale gas, natural gas, coal, and petroleum.

    PubMed

    Burnham, Andrew; Han, Jeongwoo; Clark, Corrie E; Wang, Michael; Dunn, Jennifer B; Palou-Rivera, Ignasi

    2012-01-17

The technologies and practices that have enabled the recent boom in shale gas production have also brought attention to the environmental impacts of its use. It has been debated whether the fugitive methane emissions during natural gas production and transmission outweigh the lower carbon dioxide emissions during combustion when compared to coal and petroleum. Using the current state of knowledge of methane emissions from shale gas, conventional natural gas, coal, and petroleum, we estimated up-to-date life-cycle greenhouse gas emissions. In addition, we developed distribution functions for key parameters in each pathway to examine uncertainty and identify data gaps, such as methane emissions from shale gas well completions and conventional natural gas liquid unloadings, that need to be further addressed. Our base case results show that shale gas life-cycle emissions are 6% lower than conventional natural gas, 23% lower than gasoline, and 33% lower than coal. However, the ranges of values for shale and conventional gas overlap, so there is statistical uncertainty as to whether shale gas emissions are indeed lower than conventional gas. Moreover, this life-cycle analysis, among other work in this area, provides insight into critical stages that the natural gas industry and government agencies can work together on to reduce the greenhouse gas footprint of natural gas.
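The uncertainty treatment described, propagating distribution functions for key parameters through to the life-cycle total, can be sketched generically by Monte Carlo sampling; the stage names and distribution parameters below are illustrative placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(11)
N = 100_000

# Hypothetical per-stage emission factors in g CO2e per MJ of fuel;
# the distributions are placeholders, not the paper's fitted ones.
combustion   = rng.normal(56.0, 1.0, N)            # fairly well characterized
fugitive_ch4 = rng.lognormal(np.log(6.0), 0.5, N)  # skewed, highly uncertain
processing   = rng.triangular(2.0, 4.0, 7.0, N)    # expert min/mode/max bounds

total = combustion + fugitive_ch4 + processing
p5, p50, p95 = np.percentile(total, [5, 50, 95])
print(f"median {p50:.1f} g CO2e/MJ (90% interval {p5:.1f}-{p95:.1f})")
```

Comparing such sampled totals across pathways is what reveals whether two pathways' ranges overlap, as reported for shale versus conventional gas.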

  9. The reliability and reproducibility of cephalometric measurements: a comparison of conventional and digital methods

    PubMed Central

    AlBarakati, SF; Kula, KS; Ghoneima, AA

    2012-01-01

Objective The aim of this study was to assess the reliability and reproducibility of angular and linear measurements of conventional and digital cephalometric methods. Methods A total of 13 landmarks and 16 skeletal and dental parameters were defined and measured on pre-treatment cephalometric radiographs of 30 patients. The conventional and digital tracings and measurements were performed twice by the same examiner with a 6 week interval between measurements. The reliability within each method was determined using Pearson's correlation coefficient (r2). The reproducibility between methods was calculated by paired t-test. The level of statistical significance was set at p < 0.05. Results All measurements for each method had r2 values above 0.90 (strong correlation) except maxillary length, which had a correlation of 0.82 for conventional tracing. Significant differences between the two methods were observed in most angular and linear measurements except for ANB angle (p = 0.5), angle of convexity (p = 0.09), anterior cranial base (p = 0.3) and the lower anterior facial height (p = 0.6). Conclusion In general, both conventional and digital methods of cephalometric analysis are highly reliable. Although the reproducibility of the two methods showed some statistically significant differences, most differences were not clinically significant. PMID:22184624

  10. The application of neural networks to myoelectric signal analysis: a preliminary study.

    PubMed

    Kelly, M F; Parker, P A; Scott, R N

    1990-03-01

    Two neural network implementations are applied to myoelectric signal (MES) analysis tasks. The motivation behind this research is to explore more reliable methods of deriving control for multidegree of freedom arm prostheses. A discrete Hopfield network is used to calculate the time series parameters for a moving average MES model. It is demonstrated that the Hopfield network is capable of generating the same time series parameters as those produced by the conventional sequential least squares (SLS) algorithm. Furthermore, it can be extended to applications utilizing larger amounts of data, and possibly to higher order time series models, without significant degradation in computational efficiency. The second neural network implementation involves using a two-layer perceptron for classifying a single site MES based on two features, specifically the first time series parameter, and the signal power. Using these features, the perceptron is trained to distinguish between four separate arm functions. The two-dimensional decision boundaries used by the perceptron classifier are delineated. It is also demonstrated that the perceptron is able to rapidly compensate for variations when new data are incorporated into the training set. This adaptive quality suggests that perceptrons may provide a useful tool for future MES analysis.
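A minimal sketch of the two-feature perceptron classifier described (synthetic features standing in for the first time-series parameter and the signal power, and two classes rather than four):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic two-feature samples (stand-ins for the first time-series
# parameter and the signal power) for two arm functions.
class_a = rng.normal([ 1.0,  1.0], 0.2, size=(50, 2))
class_b = rng.normal([-1.0, -1.0], 0.2, size=(50, 2))
X = np.vstack([class_a, class_b])
y = np.array([1] * 50 + [-1] * 50)

# Classic perceptron rule: on a mistake, w <- w + lr*y*x, b <- b + lr*y.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                       # epochs over the training set
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:        # misclassified (or on the boundary)
            w += lr * yi * xi
            b += lr * yi

predict = lambda x: 1 if w @ x + b > 0 else -1
```

The learned (w, b) define the linear decision boundary in the two-dimensional feature space; a four-function classifier, as in the study, would use one output unit per class.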

  11. SU-E-T-269: Differential Hazard Analysis For Conventional And New Linac Acceptance Testing Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harry, T; Yaddanapudi, S; Mutic, S

Purpose: New techniques and materials have recently been developed to expedite the conventional Linac Acceptance Testing Procedure (ATP). The new ATP method uses the Electronic Portal Imaging Device (EPID) for data collection and is presented separately. This new procedure is meant to be more efficient than conventional methods. While not yet clinically implemented, a prospective risk assessment is warranted for any new technique. The purpose of this work is to investigate the risks and establish the pros and cons of the conventional approach and the new ATP method. Methods: ATP tests that were modified and performed with the EPID were analyzed. Five domain experts (medical physicists) comprised the core analysis team. Ranking scales were adopted from previous publications related to TG 100. The number of failure pathways for each ATP test procedure was compared, as was the number of risk priority numbers (RPNs) greater than 100. Results: There were fewer failure pathways with the new ATP than with the conventional one (262 versus 556), and fewer RPNs > 100 (41 versus 115). Failure pathways and RPNs > 100 for individual ATP tests were, on average, 2 and 3.5 times higher in the conventional ATP than in the new one, respectively. The pixel sensitivity map of the EPID was identified as a key hazard to the new ATP procedure, with an RPN of 288 for verifying beam parameters. Conclusion: The significant decrease in failure pathways and RPNs > 100 for the new ATP mitigates the possibility of a catastrophic error occurring. The pixel sensitivity map determining the response and inherent characteristics of the EPID is crucial, as all data, and hence results, depend on that process. Grant from Varian Medical Systems Inc.

  12. Resolving the Effects of Maternal and Offspring Genotype on Dyadic Outcomes in Genome Wide Complex Trait Analysis (“M-GCTA”)

    PubMed Central

    Pourcain, Beate St.; Smith, George Davey; York, Timothy P.; Evans, David M.

    2014-01-01

    Genome wide complex trait analysis (GCTA) is extended to include environmental effects of the maternal genotype on offspring phenotype (“maternal effects”, M-GCTA). The model includes parameters for the direct effects of the offspring genotype, maternal effects and the covariance between direct and maternal effects. Analysis of simulated data, conducted in OpenMx, confirmed that model parameters could be recovered by full information maximum likelihood (FIML) and evaluated the biases that arise in conventional GCTA when indirect genetic effects are ignored. Estimates derived from FIML in OpenMx showed very close agreement to those obtained by restricted maximum likelihood using the published algorithm for GCTA. The method was also applied to illustrative perinatal phenotypes from ∼4,000 mother-offspring pairs from the Avon Longitudinal Study of Parents and Children. The relative merits of extended GCTA in contrast to quantitative genetic approaches based on analyzing the phenotypic covariance structure of kinships are considered. PMID:25060210

  13. The Value of Data and Metadata Standardization for Interoperability in Giovanni

    NASA Astrophysics Data System (ADS)

    Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.

    2017-12-01

    Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have both simplified our code base and improved our users' experiences.

  14. Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.

    2001-01-01

The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis, and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.

  15. Analysis of LDPE-ZnO-clay nanocomposites using novel cumulative rheological parameters

    NASA Astrophysics Data System (ADS)

    Kracalik, Milan

    2017-05-01

    Polymer nanocomposites exhibit complex rheological behaviour due to physical and possibly also chemical interactions between individual phases. Up to now, the rheology of dispersive polymer systems has usually been described by evaluation of the viscosity curve (shear thinning phenomenon), the storage modulus curve (formation of a secondary plateau) or by plotting information about damping behaviour (e.g. the Van Gurp-Palmen plot, comparison of the loss factor tan δ). In contrast to the evaluation of damping behaviour, values of cot δ were calculated and termed the "storage factor", by analogy with the loss factor. Values of the storage factor were then integrated over a specific frequency range and termed the "cumulative storage factor". In this contribution, LDPE-ZnO-clay nanocomposites with different dispersion grades (physical networks) have been prepared and characterized by both the conventional and the novel analysis approach. In addition to the cumulative storage factor, further cumulative rheological parameters such as the cumulative complex viscosity, cumulative complex modulus and cumulative storage modulus have been introduced.
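
    As described above, the cumulative storage factor reduces to a numerical integral of cot δ over the measured frequency range. The sketch below uses made-up modulus data and assumes a simple trapezoidal rule on a linear frequency axis (the abstract does not specify the integration details):

```python
import numpy as np

# Hypothetical frequency sweep (rad/s) with storage (G') and loss (G'')
# moduli in Pa; real data would come from an oscillatory rheometer.
omega = np.array([0.1, 1.0, 10.0, 100.0])
G_storage = np.array([50.0, 400.0, 2500.0, 9000.0])
G_loss = np.array([200.0, 800.0, 2000.0, 4500.0])

tan_delta = G_loss / G_storage        # loss factor
storage_factor = G_storage / G_loss   # cot(delta), the "storage factor"

def cumulative(y, x):
    """Trapezoidal integration of y over x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

cumulative_storage_factor = cumulative(storage_factor, omega)
cumulative_complex_modulus = cumulative(np.hypot(G_storage, G_loss), omega)
```

    The same `cumulative` helper applies to any of the other cumulative parameters named in the abstract (complex viscosity, storage modulus, and so on).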

  16. Boundary condition at a two-phase interface in the lattice Boltzmann method for the convection-diffusion equation.

    PubMed

    Yoshida, Hiroaki; Kobayashi, Takayuki; Hayashi, Hidemitsu; Kinjo, Tomoyuki; Washizu, Hitoshi; Fukuzawa, Kenji

    2014-07-01

    A boundary scheme in the lattice Boltzmann method (LBM) for the convection-diffusion equation, which correctly realizes the internal boundary condition at the interface between two phases with different transport properties, is presented. The difficulty in satisfying the continuity of flux at the interface in a transient analysis, which is inherent in the conventional LBM, is overcome by modifying the collision operator and the streaming process of the LBM. An asymptotic analysis of the scheme is carried out in order to clarify the role played by the adjustable parameters involved in the scheme. As a result, the internal boundary condition is shown to be satisfied with second-order accuracy with respect to the lattice interval, if we assign appropriate values to the adjustable parameters. In addition, two specific problems are numerically analyzed, and comparison with the analytical solutions of the problems numerically validates the proposed scheme.
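
    The collision-and-streaming structure that the boundary scheme modifies can be illustrated with a minimal, single-phase D1Q2 lattice Boltzmann solver for the pure-diffusion limit of the convection-diffusion equation. This is a textbook sketch, not the authors' two-phase interface scheme; lattice units with Δx = Δt = 1 are assumed, giving D = c_s²(τ − 1/2) with c_s² = 1:

```python
import numpy as np

# Minimal single-phase D1Q2 lattice Boltzmann scheme for 1-D diffusion.
# Lattice units: dx = dt = 1, so the diffusivity is D = tau - 1/2.
N, tau, steps = 400, 1.0, 200
x = np.arange(N)

# Initial concentration: a narrow Gaussian far from the periodic boundaries.
C = np.exp(-0.5 * ((x - N / 2) / 10.0) ** 2)
f_plus = 0.5 * C    # population moving in +x
f_minus = 0.5 * C   # population moving in -x

mass0 = C.sum()
mean0 = np.sum(x * C) / mass0
var0 = np.sum((x - mean0) ** 2 * C) / mass0

for _ in range(steps):
    C = f_plus + f_minus
    feq = 0.5 * C                      # equilibrium: isotropic split of C
    f_plus += -(f_plus - feq) / tau    # BGK collision
    f_minus += -(f_minus - feq) / tau
    f_plus = np.roll(f_plus, 1)        # streaming
    f_minus = np.roll(f_minus, -1)

C = f_plus + f_minus
mean = np.sum(x * C) / C.sum()
var = np.sum((x - mean) ** 2 * C) / C.sum()  # grows as var0 + 2*D*steps
```

    The interface problem of the paper arises when τ (and hence D) differs between two regions; the modified collision and streaming are what restore flux continuity there.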

  17. Specialized sperm function tests in varicocele and the future of andrology laboratory.

    PubMed

    Majzoub, Ahmad; Esteves, Sandro C; Gosálvez, Jaime; Agarwal, Ashok

    2016-01-01

    Varicocele is a common medical condition entangled with many controversies. Though it is highly prevalent in men with infertility, it is also found in men with normal fertility. Determining which patients are negatively affected by varicocele would enable clinicians to better select the men who would benefit most from surgery. Since conventional semen analysis has been limited in its ability to evaluate the negative effects of varicocele on fertility, a multitude of specialized laboratory tests have emerged. In this review, we examine the role and significance of specialized sperm function tests with regard to varicocele. Among the various tests, analysis of sperm DNA fragmentation and measurement of oxidative stress markers provide an independent measure of fertility in men with varicocele. These modalities offer diagnostic and prognostic information complementary to, but distinct from, conventional sperm parameters. Test results can guide management and aid in monitoring intervention outcomes. Proteomics, metabolomics, and genomics are areas that, though still developing, hold promise to revolutionize our understanding of reproductive physiology, including varicocele.

  18. Parametric study of a canard-configured transport using conceptual design optimization

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.

    1985-01-01

    Constrained-parameter optimization is used to perform optimal conceptual design of both canard and conventional configurations of a medium-range transport. A number of design constants and design constraints are systematically varied to compare the sensitivities of canard and conventional configurations to a variety of technology assumptions. Main-landing-gear location and canard surface high-lift performance are identified as critical design parameters for a statically stable, subsonic, canard-configured transport.

  19. Ultrasound assisted synthesis of iron doped TiO2 catalyst.

    PubMed

    Ambati, Rohini; Gogate, Parag R

    2018-01-01

    The present work deals with the synthesis of Fe (III) doped TiO2 catalyst using the ultrasound assisted approach and the conventional sol-gel approach, with the objective of establishing the process intensification benefits. The effect of operating parameters such as Fe doping, type of solvent, solvent to precursor ratio and initial temperature has been investigated to obtain the best catalyst with minimum particle size. Comparison of the catalysts obtained using the conventional and ultrasound assisted approaches under the optimized conditions has been performed using characterization techniques including DLS, XRD, BET, SEM, EDS, TEM, FTIR and UV-Vis band gap analysis. It was established that the catalyst synthesized by the ultrasound assisted approach under optimized conditions of 0.4 mol% doping, irradiation time of 60 min, propan-2-ol as the solvent with a solvent to precursor ratio of 10 and initial temperature of 30°C was the best one, with a minimum particle size of 99 nm and surface area of 49.41 m²/g. SEM, XRD and TEM analyses also confirmed the superiority of the catalyst obtained using the ultrasound assisted approach as compared to the conventional approach. EDS analysis confirmed the presence of 4.05 mol% of Fe in the sample of 0.4 mol% iron doped TiO2. UV-Vis band gap results showed a reduction in band gap from 3.2 eV to 2.9 eV. Photocatalytic experiments performed to check the activity also confirmed that the ultrasonically synthesized Fe doped TiO2 catalyst resulted in a higher degradation of Acid Blue 80 (38%) than the conventionally synthesized catalyst (31.1%). Overall, the work has clearly established the importance of ultrasound in giving better catalyst characteristics as well as activity for degradation of the Acid Blue 80 dye. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Comparisons between conventional optical imaging and parametric indirect microscopic imaging on human skin detection

    NASA Astrophysics Data System (ADS)

    Liu, Guoyan; Gao, Kun; Liu, Xuefeng; Ni, Guoqiang

    2016-10-01

    We report a new method, polarization parameters indirect microscopic imaging with a high transmission infrared light source, to detect the morphology and composition of human skin. A conventional reflection microscopic system is used as the basic optical system, into which a polarization-modulation mechanism is inserted and a high transmission infrared light source is utilized. The near-field structural characteristics of human skin can be delivered by infrared waves and material coupling. According to coupling and conduction physics, changes of the optical wave parameters can be calculated and curves of the image intensity can be obtained. By analyzing the near-field polarization parameters at the nanoscale, we can finally obtain the inversion images of human skin. Compared with the conventional direct optical microscope, this method can break the diffraction limit and achieve a super resolution of sub-100 nm. Besides, the method is more sensitive to edges, wrinkles, boundaries and impurity particles.

  1. Comparative analysis of the outflow water quality of two sustainable linear drainage systems.

    PubMed

    Andrés-Valeri, V C; Castro-Fresno, D; Sañudo-Fontaneda, L A; Rodriguez-Hernandez, J

    2014-01-01

    Three different drainage systems were built in a roadside car park located on the outskirts of Oviedo (Spain): two sustainable urban drainage systems (SUDS), a swale and a filter drain; and one conventional drainage system, a concrete ditch, which is representative of the most frequently used roadside drainage system in Spain. The concentrations of pollutants were analyzed in the outflow of all three systems in order to compare their capacity to improve water quality. Physicochemical water quality parameters such as dissolved oxygen, total suspended solids, pH, electrical conductivity, turbidity and total petroleum hydrocarbons were monitored and analyzed for 25 months. Results are presented in detail showing significantly smaller amounts of outflow pollutants in SUDS than in conventional drainage systems, especially in the filter drain which provided the best performance.

  2. Hybrid Higgs inflation: The use of disformal transformation

    NASA Astrophysics Data System (ADS)

    Sato, Seiga; Maeda, Kei-ichi

    2018-04-01

    We propose a hybrid type of the conventional Higgs inflation and new Higgs inflation models. We perform a disformal transformation into the Einstein frame and analyze the background dynamics and the cosmological perturbations in the truncated model, in which we ignore the higher-derivative terms of the Higgs field. From the observed power spectrum of the density perturbations, we obtain the constraint on the nonminimal coupling constant ξ and the mass parameter M in the derivative coupling. Although the primordial tilt ns in the hybrid model barely changes, the tensor-to-scalar ratio r moves from the value in the new Higgs inflationary model to that in the conventional Higgs inflationary model as |ξ| increases. We confirm our results by numerical analysis using the ADM formalism of the full theory in the Jordan frame.

  3. Study on the marine ejector refrigeration-rotary desiccant air-conditioning system

    NASA Astrophysics Data System (ADS)

    Zheng, C. Y.; Zheng, G. J.; Yu, W. S.; Chen, W.

    2017-08-01

    A newly developed ejector refrigeration-rotary desiccant air-conditioning (ERRD A/C) system is proposed to recover ship waste heat as far as possible. Its configuration is first established and its advantages are analyzed; then, with the help of a psychrometric chart, important parameters such as the power consumption, steam consumption and COP of the ERRD A/C system are calculated theoretically under the design conditions of a real marine A/C system, and a comparative analysis with a conventional A/C system is carried out. The results show that the power consumption of the ERRD A/C system is only 32.87% of that of a conventional A/C system, which means that the ERRD A/C system has the potential to make full use of ship waste heat to realize energy saving and environmental protection when using a green refrigerant such as water.
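
    The psychrometric-chart calculations mentioned above can be mimicked numerically with the standard moist-air enthalpy approximation h = 1.006·t + w·(2501 + 1.86·t) kJ per kg of dry air. The state points, airflow, and generator heat below are invented for illustration and are not taken from the paper:

```python
def moist_air_enthalpy(t_c, w):
    """Specific enthalpy of moist air in kJ per kg of dry air.

    t_c: dry-bulb temperature in deg C; w: humidity ratio in kg water per
    kg dry air. Standard psychrometric approximation.
    """
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Hypothetical design points: air cooled and dehumidified from 35 degC,
# w = 0.020, down to a supply state of 18 degC, w = 0.009.
m_dot = 2.0  # kg dry air / s, assumed airflow
h_in = moist_air_enthalpy(35.0, 0.020)
h_out = moist_air_enthalpy(18.0, 0.009)
cooling_load_kw = m_dot * (h_in - h_out)

# Thermal COP of the waste-heat-driven system: cooling delivered per unit
# of waste heat supplied to the ejector generator (Q_gen assumed).
q_gen_kw = 160.0
cop_thermal = cooling_load_kw / q_gen_kw
```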

  4. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    PubMed

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
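
    The matrix-based Euclidean averaging advocated above can be sketched as follows: average the rotation matrices element-wise, then project the result back onto SO(3) with an SVD. This is a common construction for the chordal mean; the authors' exact formulation may differ:

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z-axis."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def chordal_mean(rotations):
    """Euclidean (chordal) mean of rotation matrices: average element-wise,
    then project back onto SO(3) via SVD so the result is a true rotation."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    return U @ D @ Vt

# Coaxial example: the chordal mean of 10, 20 and 30 degree z-rotations is
# the 20 degree z-rotation, i.e. the circular mean of the angles, whereas
# naive per-angle averaging of Euler parameters breaks down in general.
R_mean = chordal_mean([rot_z(10), rot_z(20), rot_z(30)])
angle = np.degrees(np.arctan2(R_mean[1, 0], R_mean[0, 0]))
```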

  5. Combining Diffusion Tensor Metrics and DSC Perfusion Imaging: Can It Improve the Diagnostic Accuracy in Differentiating Tumefactive Demyelination from High-Grade Glioma?

    PubMed

    Hiremath, S B; Muraleedharan, A; Kumar, S; Nagesh, C; Kesavadas, C; Abraham, M; Kapilamoorthy, T R; Thomas, B

    2017-04-01

    Tumefactive demyelinating lesions with atypical features can mimic high-grade gliomas on conventional imaging sequences. The aim of this study was to assess the role of conventional imaging, DTI metrics (p:q tensor decomposition), and DSC perfusion in differentiating tumefactive demyelinating lesions and high-grade gliomas. Fourteen patients with tumefactive demyelinating lesions and 21 patients with high-grade gliomas underwent brain MR imaging with conventional, DTI, and DSC perfusion imaging. Imaging sequences were assessed for differentiation of the lesions. DTI metrics in the enhancing areas and perilesional hyperintensity were obtained by ROI analysis, and the relative CBV values in enhancing areas were calculated on DSC perfusion imaging. Conventional imaging sequences had a sensitivity of 80.9% and specificity of 57.1% in differentiating high-grade gliomas (P = .049) from tumefactive demyelinating lesions. DTI metrics (p:q tensor decomposition) and DSC perfusion demonstrated a statistically significant difference in the mean values of ADC, the isotropic component of the diffusion tensor, the anisotropic component of the diffusion tensor, the total magnitude of the diffusion tensor, and rCBV among enhancing portions in tumefactive demyelinating lesions and high-grade gliomas (P ≤ .02), with the highest specificity for ADC, the anisotropic component of the diffusion tensor, and relative CBV (92.9%). Mean fractional anisotropy values showed no significant statistical difference between tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI and DSC parameters improved the diagnostic accuracy (area under the curve = 0.901). Addition of a heterogeneous enhancement pattern to DTI and DSC parameters improved it further (area under the curve = 0.966). The sensitivity increased from 71.4% to 85.7% after the addition of the enhancement pattern.
DTI and DSC perfusion add profoundly to conventional imaging in differentiating tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI metrics and DSC perfusion markedly improved diagnostic accuracy. © 2017 by American Journal of Neuroradiology.

  6. Utility of Modified Ultrafast Papanicolaou Stain in Cytological Diagnosis

    PubMed Central

    Arakeri, Surekha Ulhas

    2017-01-01

    Introduction Need for minimal turnaround time in assessing Fine Needle Aspiration Cytology (FNAC) has encouraged innovations in staining techniques that require less staining time while preserving unequivocal cell morphology. The standard protocol for the conventional Papanicolaou (PAP) stain requires about 40 minutes. To overcome this, the Ultrafast Papanicolaou (UFP) stain was introduced, which reduces staining time to 90 seconds and also enhances quality. However, the reagents required for this were not easily available; hence, the Modified Ultrafast Papanicolaou (MUFP) stain was subsequently introduced. Aim To assess the efficacy of MUFP staining by comparing the quality of the MUFP stain with the conventional PAP stain. Materials and Methods The FNAC procedure was performed using a 10 ml disposable syringe and a 22-23 G needle. A total of 131 FNAC cases were studied: lymph node (30), thyroid (38), breast (22), skin and soft tissue (24), salivary gland (11) and visceral organs (6). Two smears were prepared and stained by MUFP and conventional PAP stain. Scores were given on four parameters: background of smears, overall staining pattern, cell morphology and nuclear staining. A Quality Index (QI) was calculated as the ratio of the total score achieved to the maximum possible score. Statistical analysis using the chi-square test was applied to each of the four parameters before obtaining the QI for both stains. Student's t-test was applied to evaluate the efficacy of MUFP in comparison with the conventional PAP stain. Results The QI of MUFP for thyroid, breast, lymph node, skin and soft tissue, salivary gland and visceral organs was 0.89, 0.85, 0.89, 0.83, 0.92, and 0.78 respectively. Compared to the conventional PAP stain, the QI of MUFP smears was better in all except the visceral organ cases, and the difference was statistically significant. MUFP showed a clear red blood cell background, transparent cytoplasm and crisp nuclear features. 
Conclusion MUFP is fast, reliable and can be done with locally available reagents with unequivocal morphology which is the need of the hour for a cytopathology set-up. PMID:28511391

  7. Utility of Modified Ultrafast Papanicolaou Stain in Cytological Diagnosis.

    PubMed

    Sinkar, Prachi; Arakeri, Surekha Ulhas

    2017-03-01

    Need for minimal turnaround time in assessing Fine Needle Aspiration Cytology (FNAC) has encouraged innovations in staining techniques that require less staining time while preserving unequivocal cell morphology. The standard protocol for the conventional Papanicolaou (PAP) stain requires about 40 minutes. To overcome this, the Ultrafast Papanicolaou (UFP) stain was introduced, which reduces staining time to 90 seconds and also enhances quality. However, the reagents required for this were not easily available; hence, the Modified Ultrafast Papanicolaou (MUFP) stain was subsequently introduced. To assess the efficacy of MUFP staining by comparing the quality of the MUFP stain with the conventional PAP stain. The FNAC procedure was performed using a 10 ml disposable syringe and a 22-23 G needle. A total of 131 FNAC cases were studied: lymph node (30), thyroid (38), breast (22), skin and soft tissue (24), salivary gland (11) and visceral organs (6). Two smears were prepared and stained by MUFP and conventional PAP stain. Scores were given on four parameters: background of smears, overall staining pattern, cell morphology and nuclear staining. A Quality Index (QI) was calculated as the ratio of the total score achieved to the maximum possible score. Statistical analysis using the chi-square test was applied to each of the four parameters before obtaining the QI for both stains. Student's t-test was applied to evaluate the efficacy of MUFP in comparison with the conventional PAP stain. The QI of MUFP for thyroid, breast, lymph node, skin and soft tissue, salivary gland and visceral organs was 0.89, 0.85, 0.89, 0.83, 0.92, and 0.78 respectively. Compared to the conventional PAP stain, the QI of MUFP smears was better in all except the visceral organ cases, and the difference was statistically significant. MUFP showed a clear red blood cell background, transparent cytoplasm and crisp nuclear features. 
MUFP is fast, reliable and can be done with locally available reagents with unequivocal morphology which is the need of the hour for a cytopathology set-up.

  8. A Novel Protocol for Model Calibration in Biological Wastewater Treatment

    PubMed Central

    Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen

    2015-01-01

    Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme: i) global sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis for detection of non-identifiable factors; and iii) estimation of the resulting parameter subset using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be useful for the automatic calibration of ASMs and to be potentially applicable to other ordinary differential equation models. PMID:25682959
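
    The estimation step iii) can be sketched with a toy example. Here SciPy's differential evolution (a related evolutionary optimizer) stands in for the genetic algorithm, fitting a one-state Monod-type decay model to synthetic noiseless data; a real ASM would have many more states and parameters:

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulate(k, K, S0=50.0, n_steps=300, dt=0.1):
    """Euler integration of toy Monod-type substrate decay dS/dt = -k*S/(K+S)."""
    S = np.empty(n_steps + 1)
    S[0] = S0
    for i in range(n_steps):
        S[i + 1] = S[i] - dt * k * S[i] / (K + S[i])
    return S[::10]  # one sample per time unit

k_true, K_true = 2.0, 5.0
data = simulate(k_true, K_true)  # synthetic noiseless "measurements"

def sse(theta):
    """Sum-of-squares misfit between simulated and observed substrate curves."""
    k, K = theta
    return float(np.sum((simulate(k, K) - data) ** 2))

result = differential_evolution(sse, bounds=[(0.1, 10.0), (0.1, 20.0)],
                                seed=1, maxiter=200)
k_hat, K_hat = result.x
```

    Because the synthetic curve spans both the zero-order (S >> K) and first-order (S << K) regimes, both parameters are identifiable here; the correlation analysis of step ii) exists precisely to catch cases where they are not.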

  9. Massive Weight Loss Obtained by Bariatric Surgery Affects Semen Quality in Morbid Male Obesity: a Preliminary Prospective Double-Armed Study.

    PubMed

    Samavat, Jinous; Cantini, Giulia; Lotti, Francesco; Di Franco, Alessandra; Tamburrino, Lara; Degl'Innocenti, Selene; Maseroli, Elisa; Filimberti, Erminio; Facchiano, Enrico; Lucchese, Marcello; Muratori, Monica; Forti, Gianni; Baldi, Elisabetta; Maggi, Mario; Luconi, Michaela

    2018-01-01

    The aim of this study is to evaluate the effect of massive weight loss on seminal parameters at 6 months after bariatric surgery. A two-armed prospective study was performed in 31 morbidly obese men, either undergoing laparoscopic Roux-en-Y gastric bypass (n = 23) or non-operated (n = 8), assessing sex hormones and conventional (sperm motility, morphology, number, semen volume) and non-conventional (DNA fragmentation and seminal interleukin-8) semen parameters at baseline and 6 months after surgery or patients' recruitment. In operated patients only, a statistically significant improvement in sex hormones was confirmed. Similarly, a positive trend in progressive/total sperm motility and number was observed, though only the increases in semen volume and viability were statistically significant (Δ = 0.6 ml and 10%, P < 0.05, respectively). A decrease in seminal interleukin-8 levels and in sperm DNA fragmentation was also present after bariatric surgery, whereas these parameters even increased in non-operated subjects. Age-adjusted multivariate analysis showed that BMI variations significantly correlated with changes in sperm morphology (β = -0.675, P = 0.025), sperm number (β = 0.891, P = 0.000), and semen volume (r = 0.618, P = 0.015). The massive weight loss obtained with bariatric surgery was associated with an improvement in some semen parameters. The correlations found between weight loss and semen parameter variations after surgery suggest that these might occur early downstream of the testis and more slowly than the changes in sex hormones.

  10. The use of SESK as a trend parameter for localized bearing fault diagnosis in induction machines.

    PubMed

    Saidi, Lotfi; Ben Ali, Jaouher; Benbouzid, Mohamed; Bechhoefer, Eric

    2016-07-01

    A critical task in bearing fault diagnosis is locating the optimum frequency band that contains the faulty bearing signal, which is usually buried in background noise. Envelope analysis is now commonly used to obtain the bearing defect harmonics from the envelope signal spectrum and has shown fine results in identifying incipient failures occurring in the different parts of a bearing. However, the main step in implementing envelope analysis is to determine a frequency band that contains the faulty bearing signal component with the highest signal-to-noise ratio. Conventionally, the choice of the band is made by manual spectrum comparison, identifying the resonance frequency where the largest change occurred. In this paper, we present a squared-envelope-based spectral kurtosis method to determine optimum envelope analysis parameters, including the filtering band and center frequency, through a short-time Fourier transform. We have verified the potential of the spectral kurtosis diagnostic strategy in performance improvements for single-defect diagnosis using real laboratory-collected vibration data sets. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
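
    The band-selection idea can be sketched as follows: compute an STFT, estimate the spectral kurtosis of each frequency bin, and pick the bin where it peaks. The synthetic fault signal, resonance frequency, and SK estimator below are illustrative assumptions, not the paper's exact SESK formulation:

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)

# Synthetic vibration: repetitive fault impacts exciting a 3 kHz structural
# resonance, buried in white noise (all values illustrative, not from the paper).
fs = 20_000
t = np.arange(fs) / fs                 # 1 s of data
x = 0.1 * rng.standard_normal(fs)
for t0 in np.arange(0.0, 1.0, 0.1):    # fault period 0.1 s (10 Hz impact rate)
    tau = np.clip(t - t0, 0.0, None)   # time since the impact
    x += np.where(t >= t0,
                  2.0 * np.exp(-2000 * tau) * np.sin(2 * np.pi * 3000 * tau),
                  0.0)

# STFT, then spectral kurtosis per frequency bin:
# SK(f) = E|X(f,t)|^4 / (E|X(f,t)|^2)^2 - 2, which is ~0 for stationary
# Gaussian noise and large in bands carrying impulsive (fault) energy.
f, _, Z = stft(x, fs=fs, nperseg=256)
mag2 = np.abs(Z) ** 2
sk = (mag2 ** 2).mean(axis=1) / mag2.mean(axis=1) ** 2 - 2.0

center_freq = f[np.argmax(sk)]  # candidate center of the demodulation band
```

    The selected bin would then set the center frequency of the band-pass filter used before envelope demodulation.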

  11. Membrane filtration device for studying compression of fouling layers in membrane bioreactors

    PubMed Central

    Bugge, Thomas Vistisen; Larsen, Poul; Nielsen, Per Halkjær; Christensen, Morten Lykkegaard

    2017-01-01

    A filtration device was developed to assess the compressibility of fouling layers in membrane bioreactors. The system consists of a flat sheet membrane with air scouring operated at constant transmembrane pressure to assess the influence of pressure on the resistance of fouling layers. By fitting a mathematical model, three model parameters were obtained: a back-transport parameter describing the kinetics of fouling layer formation, a specific fouling layer resistance, and a compressibility parameter. This stands out from other on-site filterability tests because model parameters for simulating filtration performance are obtained together with a characterization of compressibility. Tests on membrane bioreactor sludge showed high reproducibility. The methodology’s ability to assess compressibility was tested by filtration of sludges from membrane bioreactors and conventional activated sludge wastewater treatment plants at three different sites. These tests showed that membrane bioreactor sludge had higher compressibility than conventional activated sludge. In addition, detailed information on the underlying mechanisms of the difference in fouling propensity was obtained, as conventional activated sludge showed slower fouling formation, lower specific resistance and lower compressibility of fouling layers, which is explained by a higher degree of flocculation. PMID:28749990
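
    Compressibility of a fouling or cake layer is commonly quantified by a power law for the specific resistance, α = α₀(ΔP/ΔP₀)ˢ, with the exponent s recovered from filtrations at several pressures. The sketch below fits s to synthetic data by log-log regression; the power-law form and all numbers are assumptions, not the authors' exact model:

```python
import numpy as np

# Synthetic specific fouling-layer resistances "measured" at several
# transmembrane pressures (illustrative values, not from the paper).
dP = np.array([0.1e5, 0.2e5, 0.4e5, 0.8e5])  # Pa
alpha0, s_true, dP0 = 1e12, 0.7, 1e5         # assumed power-law model
rng = np.random.default_rng(3)
alpha = alpha0 * (dP / dP0) ** s_true * rng.lognormal(0.0, 0.02, dP.size)

# Compressibility index s from the slope of log(alpha) vs log(dP):
# alpha = alpha0 * (dP/dP0)**s  =>  log(alpha) = s*log(dP) + const.
s_hat, _intercept = np.polyfit(np.log(dP), np.log(alpha), 1)
```

    An incompressible layer gives s ≈ 0; the higher compressibility reported for membrane bioreactor sludge corresponds to a larger s than for conventional activated sludge.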

  12. RMB identification based on polarization parameters inversion imaging

    NASA Astrophysics Data System (ADS)

    Liu, Guoyan; Gao, Kun; Liu, Xuefeng; Ni, Guoqiang

    2016-10-01

    Social order is threatened by counterfeit money, and conventional anti-counterfeiting technology is often unable to determine a note's authenticity. The intrinsic difference between genuine and counterfeit notes lies in their paper tissue. In this paper a new technology for detecting RMB is introduced: the polarization parameter indirect microscopic imaging technique. A conventional reflection microscopic system is used as the basic optical system, into which a polarization-modulation mechanism is inserted. The near-field structural characteristics can be delivered by optical wave and material coupling. According to coupling and conduction physics, the changes of the optical wave parameters are calculated and curves of the image intensity are obtained. By analyzing the near-field polarization parameters at the nanoscale, indirect polarization parameter images of the fibers of the paper tissue are finally calculated in order to identify a note's authenticity.

  13. Thermal optimum design for tracking primary mirror of Space Telescope

    NASA Astrophysics Data System (ADS)

    Pan, Hai-jun; Ruan, Ping; Li, Fu; Wang, Hong-Wei

    2011-08-01

    In the conventional method, the structural parameters of a primary mirror are usually optimized only for mechanical performance. Because the influences of the structural parameters on thermal stability are not fully taken into account in this simple method, the lightweight optimum design of a primary mirror often results in poor thermal stability, especially in a complex environment. In order to obtain better thermal stability, a new method for the structure-thermal optimum design of a tracking primary mirror is discussed. During the optimization, both the lightweight ratio and thermal stability are taken into account. The structure-thermal optimization is introduced into the analysis process and commenced after the lightweight design as a secondary optimization. Using the engineering analysis software ANSYS, a parametric finite element analysis (FEA) model of the mirror is built. On the premise of an appropriate lightweight ratio, the RMS of the structure-thermal deformation of the mirror surface and the lightweight ratio are assigned as state variables, and the maximal RMS under the temperature gradient load as the objective variable. The results show that certain structural parameters of a tracking primary mirror have different influences on mechanical performance and thermal stability, and these influences can even be opposite. By structure-thermal optimization, the optimized mirror model discussed in this paper has better thermal stability than the old one under the same thermal loads, which can drastically reduce the difficulty of thermal control.

  14. Echocardiographic assessment of right ventricular function in routine practice: Which parameters are useful to predict one-year outcome in advanced heart failure patients with dilated cardiomyopathy?

    PubMed

    Kawata, Takayuki; Daimon, Masao; Kimura, Koichi; Nakao, Tomoko; Lee, Seitetsu L; Hirokawa, Megumi; Kato, Tomoko S; Watanabe, Masafumi; Yatomi, Yutaka; Komuro, Issei

    2017-10-01

    Right ventricular (RV) function has recently gained attention as a prognostic predictor of outcome even in patients who have left-sided heart failure. Since several conventional echocardiographic parameters of RV systolic function have been proposed, our aim was to determine if any of these parameters (tricuspid annular plane systolic excursion: TAPSE, tissue Doppler derived systolic tricuspid annular motion velocity: S', fractional area change: FAC) are associated with outcome in advanced heart failure patients with dilated cardiomyopathy (DCM). We retrospectively enrolled 68 DCM patients, who were New York Heart Association (NYHA) Class III or IV and had a left ventricular (LV) ejection fraction <35%. All patients were undergoing evaluation for heart transplantation or management of heart failure. Primary outcomes were defined as LV assist device implantation or cardiac death within one year. Thirty-nine events occurred (5 deaths, 32 LV assist devices implanted). Univariate analysis showed that age, systolic blood pressure, heart rate, NYHA functional class IV, plasma brain natriuretic peptide concentration, intravenous inotrope use, left atrial volume index, and FAC were associated with outcome, whereas TAPSE and S' were not. Receiver-operating characteristic curve analysis showed that the optimal FAC cut-off value to identify patients with an event was <26.7% (area under the curve=0.74). The event-free rate determined by Kaplan-Meier analysis was significantly higher in patients with FAC≥26.7% than in those with FAC<26.7% (log-rank, p=0.0003). Moreover, the addition of FAC<26.7% improved the prognostic utility of a model containing clinical variables and conventional echocardiographic indexes. FAC may provide better prognostic information than TAPSE or S' in advanced heart failure patients with DCM. Copyright © 2017 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.
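
    The ROC-based cut-off determination above can be sketched with the Youden index J = sensitivity + specificity − 1, maximized over candidate cutoffs. The FAC values below are invented for illustration; as in the study, lower FAC is taken to predict the event:

```python
import numpy as np

# Hypothetical FAC values (%): patients with an event tend to have lower FAC.
fac = np.array([18.0, 22.0, 25.0, 24.0, 30.0, 33.0, 36.0, 41.0])
event = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = LVAD or cardiac death

# Sweep candidate cutoffs; classify "event" when FAC < cutoff, and keep the
# cutoff maximizing the Youden index J = sensitivity + specificity - 1.
best_J, best_cutoff = -1.0, None
for cutoff in np.unique(fac):
    pred = fac < cutoff
    sens = np.mean(pred[event == 1])    # true positive rate
    spec = np.mean(~pred[event == 0])   # true negative rate
    J = sens + spec - 1.0
    if J > best_J:
        best_J, best_cutoff = J, float(cutoff)
```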

  15. Metabonomic analysis of urine from rats after low-dose exposure to 3-chloro-1,2-propanediol using UPLC-MS.

    PubMed

    Liu, Liyan; He, Yujie; Lu, Huimin; Wang, Maoqing; Sun, Changhao; Na, Lixin; Li, Ying

    2013-05-15

    To study the toxic effect of chronic exposure to 3-chloro-1,2-propanediol (3-MCPD) at low doses, a metabonomics approach based on ultrahigh-performance liquid chromatography and quadrupole time-of-flight mass spectrometry (UPLC-Q-TOF-MS) was performed. Two different doses of 3-MCPD (1.1 and 5.5 mg/kg bw/d) were administered to Wistar rats for 120 days (1.1 mg/kg bw/d: lowest observed adverse effect level [LOAEL]). The metabolite profiles and biochemical parameters were obtained at five time points after treatment. For the 3-MCPD-treated groups, a significant change in urinary N-acetyl-β-d-glucosaminidase and β-d-galactosidase was detected on day 90, while some biomarkers based on the metabonomics, such as N-acetylneuraminic acid, N-acetyl-l-tyrosine, and gulonic acid, were detected on day 30. These results suggest that these biomarkers changed more sensitively and earlier than conventional biochemical parameters and were thus considered early and sensitive biomarkers of exposure to 3-MCPD; these biomarkers provide more information on toxicity than conventional biochemical parameters. These results might be helpful for investigating the toxic mechanisms of 3-MCPD and provide a scientific basis for assessing the effect of chronic exposure to low-dose 3-MCPD on human health. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  16. Reliable enumeration of malaria parasites in thick blood films using digital image analysis.

    PubMed

    Frean, John A

    2009-09-23

    Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve, because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image) signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration. 
The requirements of a digital microscope camera, a personal computer, and good-quality staining of slides are reasonably easy to meet.
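
    The semi-automatic counting step can be illustrated with open-access tools: threshold the image, label connected components, and keep only objects whose areas fall inside the parasite size-frequency range. A minimal sketch (function name, threshold, and size range are illustrative, not the paper's settings):

```python
import numpy as np
from scipy import ndimage

def count_parasites(image, threshold, size_range):
    """Count stained objects whose pixel areas fall within `size_range`,
    a (min, max) tuple taken from the parasite size-frequency
    distribution. Names and parameters are illustrative."""
    mask = image > threshold                     # segment stained pixels
    labels, n = ndimage.label(mask)              # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))  # area per object
    lo, hi = size_range
    return int(np.sum((sizes >= lo) & (sizes <= hi)))

# Synthetic "thick film": two parasite-sized blobs and one noise speck.
img = np.zeros((10, 10))
img[1:3, 1:3] = 1.0        # area 4
img[5:7, 5:7] = 1.0        # area 4
img[8, 8] = 1.0            # area 1, rejected as noise
count = count_parasites(img, threshold=0.5, size_range=(2, 10))
```

    Rejecting objects outside the size window is one simple way to raise the signal/noise ratio without operator decisions.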

  17. Squeezed light from conventionally pumped multi-level lasers

    NASA Technical Reports Server (NTRS)

    Ralph, T. C.; Savage, C. M.

    1992-01-01

    We have calculated the amplitude squeezing in the output of several conventionally pumped multi-level lasers. We present results which show that standard laser models can produce significantly squeezed outputs in certain parameter ranges.

  18. Economic Evaluation of Complementary and Alternative Medicine in Oncology: Is There a Difference Compared to Conventional Medicine?

    PubMed

    Huebner, Jutta; Prott, Franz J; Muecke, Ralph; Stoll, Christoph; Buentzel, Jens; Muenstedt, Karsten; Micke, Oliver

    2017-01-01

    To analyze the financial burden of complementary and alternative medicine (CAM) in cancer treatment. Based on a systematic search of the literature (Medline and the Cochrane Library, combining the MeSH terms 'complementary therapies', 'neoplasms', 'costs', 'cost analysis', and 'cost-benefit analysis'), an expert panel discussed different types of analyses and their significance for CAM in oncology. Of 755 publications, 43 met our criteria. The types of economic analyses and their parameters discussed for CAM in oncology were cost, cost-benefit, cost-effectiveness, and cost-utility analyses. Only a few articles included arguments in favor of or against these different methods, and only a few arguments were specific to CAM: because most CAM methods address a broad range of treatment aims, parameters to assess effectiveness are hard to define. Additionally, the choice of comparative treatments is difficult. To evaluate utility, healthy subjects may not be adequate, as patients with a life-threatening disease may judge utility differently, especially with respect to a holistic treatment approach. We did not find any arguments in the literature that were directed at the economic analysis of CAM in oncology. Therefore, a comprehensive assessment approach based on criteria from evidence-based medicine evaluating direct and indirect costs is recommended. The usual approaches of conventional medicine to assessing costs, benefits, and effectiveness seem adequate in the field of CAM in oncology. Additionally, a thorough deliberation on the comparator, endpoints, and instruments is mandatory for designing studies. © 2016 S. Karger AG, Basel.

  19. Ultrasound assisted crystallization of mefenamic acid: Effect of operating parameters and comparison with conventional approach.

    PubMed

    Iyer, Sneha R; Gogate, Parag R

    2017-01-01

    The current work investigates the application of low intensity ultrasonic irradiation for improving the cooling crystallization of mefenamic acid for the first time. The crystal shape and size have been analyzed with the help of an optical microscope and image analysis software, respectively. The effect of ultrasonic irradiation on crystal size, particle size distribution (PSD) and yield has been investigated, also establishing the comparison with the conventional approach. It has been observed that application of ultrasound not only enhances the yield but also reduces the induction time for crystallization as compared to the conventional cooling crystallization technique. In the presence of ultrasound, the maximum yield was obtained at optimum conditions of power dissipation of 30W and ultrasonic irradiation time of 10min. The yield was further improved by application of ultrasound in cycles where the formed crystals are allowed to grow in the absence of ultrasonic irradiation. It was also observed that the desired crystal morphology was obtained for the ultrasound assisted crystallization. The conventionally obtained needle shaped crystals transformed into plate shaped crystals for the ultrasound assisted crystallization. The particle size distribution was analyzed using statistical means on the basis of skewness and kurtosis values. It was observed that the skewness and excess kurtosis values for ultrasound assisted crystallization were significantly lower as compared to the conventional approach. XRD analysis also revealed better crystal properties for the processed mefenamic acid using the ultrasound assisted approach. The overall process intensification benefits of mefenamic acid crystallization using the ultrasound assisted approach were reduced particle size, increase in the yield and uniform PSD coupled with desired morphology. Copyright © 2016 Elsevier B.V. All rights reserved.
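
    The PSD comparison rests on standard sample skewness and excess kurtosis; a minimal sketch with simulated (not measured) size distributions, where a narrower, more symmetric distribution yields lower values of both statistics:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
# Illustrative particle-size samples (micrometres), not measured data:
conventional = rng.lognormal(mean=4.0, sigma=0.6, size=5000)  # broad, right-skewed
sonicated = rng.lognormal(mean=3.5, sigma=0.2, size=5000)     # narrow, near-symmetric

skew_conv, skew_us = skew(conventional), skew(sonicated)
kurt_conv, kurt_us = kurtosis(conventional), kurtosis(sonicated)  # excess kurtosis
```

    `scipy.stats.kurtosis` returns excess kurtosis (Fisher definition) by default, matching the "excess kurtosis" quoted in the abstract.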

  20. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
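
    The filtering idea can be sketched with NumPy's Legendre tools: rescale time onto [-1, 1], project the noisy trace onto a low-order Legendre basis, and evaluate the truncated series. The signal parameters below are illustrative, not from the paper:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 500)
clean = 2.0 * np.exp(-t / 1.3)                # single exponential, illustrative
noisy = clean + rng.normal(0.0, 0.1, t.size)  # additive Gaussian noise

x = 2.0 * t / t[-1] - 1.0                     # map time onto [-1, 1]
coef = L.legfit(x, noisy, deg=8)              # project onto low-order basis
filtered = L.legval(x, coef)                  # truncated-series reconstruction
```

    Because the reconstruction is a projection rather than a causal filter, it removes noise without the phase shift of a conventional lowpass filter.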

  1. Recent developments in fast kurtosis imaging

    NASA Astrophysics Data System (ADS)

    Hansen, Brian; Jespersen, Sune N.

    2017-09-01

    Diffusion kurtosis imaging (DKI) is an extension of the popular diffusion tensor imaging (DTI) technique. DKI takes into account leading deviations from Gaussian diffusion stemming from a number of effects related to the microarchitecture and compartmentalization in biological tissues. DKI therefore offers increased sensitivity to subtle microstructural alterations over conventional diffusion imaging such as DTI, as has been demonstrated in numerous reports. For this reason, interest in routine clinical application of DKI is growing rapidly. In an effort to facilitate more widespread use of DKI, recent work by our group has focused on developing experimentally fast and robust estimates of DKI metrics. A significant increase in speed is made possible by a reduction in data demand achieved through rigorous analysis of the relation between the DKI signal and the kurtosis tensor based metrics. The fast DKI methods therefore need only 13 or 19 images for DKI parameter estimation compared to more than 60 for the most modest DKI protocols applied today. Closed form solutions also ensure rapid calculation of most DKI metrics. Some parameters can even be reconstructed in real time, which may be valuable in the clinic. The fast techniques are based on conventional diffusion sequences and are therefore easily implemented on almost any clinical system, in contrast to a range of other recently proposed advanced diffusion techniques. In addition to its general applicability, this also ensures that any acceleration achieved in conventional DKI through sequence or hardware optimization will also translate directly to fast DKI acquisitions. In this review, we recapitulate the theoretical basis for the fast kurtosis techniques and their relation to conventional DKI. We then discuss the currently available variants of the fast DKI methods, their strengths and weaknesses, as well as their respective realms of application. 
These range from whole body applications to methods mostly suited for spinal cord or peripheral nerve, and analysis specific to brain white matter. Having covered these technical aspects, we proceed to review the fast kurtosis literature including validation studies, organ specific optimization studies and results from clinical applications.
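
    The data-demand reduction builds on the standard kurtosis signal representation along a given direction, ln S(b) = ln S0 - bD + b²D²K/6. A minimal single-direction sketch with illustrative tissue values (this is the underlying model, not the 13- or 19-image fast schemes themselves):

```python
import numpy as np

# Cumulant expansion of the diffusion signal along one direction:
#   ln S(b) = ln S0 - b*D + (b**2 * D**2 * K) / 6
b = np.array([0.0, 1.0, 2.0])     # b-values in ms/um^2 (illustrative)
S0, D, K = 1.0, 1.0, 0.8          # illustrative tissue values
S = S0 * np.exp(-b * D + b ** 2 * D ** 2 * K / 6.0)

# Three b-values suffice for the one-direction quadratic in b:
c2, c1, _ = np.polyfit(b, np.log(S), 2)
D_est = -c1                        # apparent diffusivity
K_est = 6.0 * c2 / D_est ** 2      # apparent kurtosis
```

    The closed-form character of this per-direction estimate is what makes real-time reconstruction of some parameters feasible.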

  2. On wings and keels (2)

    NASA Astrophysics Data System (ADS)

    Slooff, J. W.

    1985-05-01

    The physical mechanisms governing the hydrodynamics of sailing yacht keels and the parameters that, through these mechanisms, determine keel performance are discussed. It is concluded that due to the presence of the free water surface optimum keel shapes differ from optimum shapes for aircraft wings. Utilizing computational fluid dynamics analysis and optimization it is found that the performance of conventional keels can be improved significantly by reducing taper or even applying inverse taper (upside-down keel) and that decisive improvements in performance can be realized through keels with winglets.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
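
    The ordinary-least-squares calibration studied here amounts to choosing the model parameter that minimizes the squared misfit to the physical data; a minimal sketch with an illustrative one-parameter computer model (not the paper's examples):

```python
import numpy as np
from scipy.optimize import curve_fit

def computer_model(x, theta):
    """Illustrative one-parameter computer model (not from the paper)."""
    return np.exp(-theta * x)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0, 40)
# Noisy "physical" observations generated at theta = 1.5:
y = computer_model(x, 1.5) + rng.normal(0.0, 0.02, x.size)

theta_hat, _ = curve_fit(computer_model, x, y, p0=[1.0])  # OLS calibration
```

    For a perfect model this recovers the true parameter; the paper's point is that for imperfect models such OLS estimates remain consistent but are not efficient.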

  4. Technology for design of transport aircraft. Lecture notes for MIT courses: Seminar 1.61 freshman seminar in air transportation and graduate course 1.201, transportation systems analysis

    NASA Technical Reports Server (NTRS)

    Simpson, R. W.

    1972-01-01

    The design parameters which determine cruise performance for a conventional subsonic jet transport are discussed. It is assumed that the aircraft burns climb fuel to reach cruising altitude and that aeronautical technology determines the ability to carry a given payload at cruising altitude. It is shown that different sizes of transport aircraft are needed to provide the cost optimal vehicle for different given payload-range objectives.

  5. Comet assay: a prognostic tool for DNA integrity assessment in infertile men opting for assisted reproduction.

    PubMed

    Shamsi, M B; Venkatesh, S; Tanwar, M; Singh, G; Mukherjee, S; Malhotra, N; Kumar, R; Gupta, N P; Mittal, S; Dada, R

    2010-05-01

    The growing concern about transmission of genetic diseases in assisted reproduction techniques (ART) and the lacunae in conventional semen analysis to accurately predict semen quality have led to the need for new techniques to identify the best quality sperm for use in assisted procreation techniques. This study analyzes sperm parameters in the context of DNA damage, assessed by comet assay, in cytogenetically normal, AZF non-deleted infertile men. Seventy infertile men and 40 fertile controls were evaluated for semen quality by conventional semen parameters, and the sperm were also analyzed for DNA integrity by comet assay. The patients were classified into oligozoospermic (O), asthenozoospermic (A), teratozoospermic (T) and oligoasthenoteratozoospermic (OAT) categories and infertile men with normal semen profile. The extent of DNA damage was assessed by the visual scoring method of comets. Idiopathic infertile men with a normal semen profile (n=18) according to the conventional method and patients with a history of spontaneous abortions and normal semen profile (n=10) had a high degree of DNA damage (29 and 47% respectively) as compared to fertile controls (7%). The O, A, T and OAT categories of patients had a variably higher DNA damage load as compared to fertile controls. A normal range and threshold for DNA damage as a predictor of male fertility potential, and a technique that can assess sperm DNA damage, are necessary to lower the trauma of couples experiencing recurrent spontaneous abortion or failure in ART.

  6. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback with IG fitting was a loss of precision, approximately 30% worse than the three parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
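
    The conventional three-parameter model that IG fitting extends is S(TI) = A - B·exp(-TI/T1*), with the Look-Locker correction T1 = T1*(B/A - 1). A minimal sketch of that baseline fit on simulated data (all values illustrative; the IG parameterization itself is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def molli(ti, A, B, T1_star):
    # Conventional three-parameter inversion-recovery model
    return A - B * np.exp(-ti / T1_star)

T1_true = 1200.0                    # ms, illustrative myocardial T1
A, B = 1.0, 1.9                     # apparent amplitudes
T1_star = T1_true / (B / A - 1.0)   # Look-Locker relation T1 = T1*(B/A - 1)
ti = np.linspace(100.0, 4000.0, 11) # inversion times in ms
signal = molli(ti, A, B, T1_star)   # noiseless simulated samples

(A_h, B_h, T1s_h), _ = curve_fit(molli, ti, signal, p0=[1.0, 2.0, 1000.0])
T1_fit = T1s_h * (B_h / A_h - 1.0)  # Look-Locker-corrected T1 estimate
```

    This model implicitly assumes full recovery between groupings, which is exactly the assumption the IG parameterization relaxes.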

  7. Improved Conjugate Gradient Bundle Adjustment of Dunhuang Wall Painting Images

    NASA Astrophysics Data System (ADS)

    Hu, K.; Huang, X.; You, H.

    2017-09-01

    Bundle adjustment with additional parameters is identified as a critical step for precise orthoimage generation and 3D reconstruction of Dunhuang wall paintings. Due to the introduction of self-calibration parameters and quasi-planar constraints, the coefficient matrix of the reduced normal equation has a banded-bordered structure, making the solving process of bundle adjustment complex. In this paper, the Conjugate Gradient Bundle Adjustment (CGBA) method is derived by calculus of variations. A preconditioning method based on improved incomplete Cholesky factorization is adopted to reduce the condition number of the coefficient matrix, as well as to accelerate the iteration rate of CGBA. Both theoretical analysis and experimental comparison with the conventional method indicate that the proposed method can effectively overcome the ill-conditioned problem of the normal equation and considerably improve the calculation efficiency of bundle adjustment with additional parameters, while maintaining the actual accuracy.
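
    The role of preconditioning can be illustrated on a small synthetic normal-equation system; for brevity this sketch uses a diagonal (Jacobi) preconditioner in place of the paper's improved incomplete Cholesky factorization, and all data are synthetic:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients for an SPD system A x = b.
    A diagonal (Jacobi) preconditioner stands in here for the paper's
    improved incomplete Cholesky factorization (simplified sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                 # apply preconditioner
    p = z.copy()
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * b_norm:   # relative residual test
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Ill-conditioned SPD "reduced normal equations" (synthetic):
rng = np.random.default_rng(3)
J = rng.normal(size=(50, 10)) * np.logspace(0, 2, 10)  # badly scaled columns
A = J.T @ J
x_true = np.ones(10)
b = A @ x_true
x_sol = pcg(A, b, 1.0 / np.diag(A))
```

    Even this crude preconditioner absorbs the column scaling of the design matrix, which is the mechanism by which the incomplete Cholesky variant lowers the condition number and accelerates convergence.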

  8. Theoretical study of the local structures and the EPR parameters for RLNKB glasses with VO2+ and Cu2+ dopants

    NASA Astrophysics Data System (ADS)

    Ding, Chang-Chun; Wu, Shao-Yi; Wu, Li-Na; Zhang, Li-Juan; Peng, Li; Wu, Ming-He; Teng, Bao-Hua

    2018-02-01

    The electron paramagnetic resonance (EPR) parameters and local structures for impurities VO2+ and Cu2+ in RO-Li2O-Na2O-K2O-B2O3 (RLNKB; R = Zn, Mg, Sr and Ba) glasses are theoretically investigated by using the perturbation formulas of the EPR parameters for tetragonally compressed octahedral 3d1 and tetragonally elongated octahedral 3d9 clusters, respectively. The VO2+ and Cu2+ dopants are found to undergo tetragonal compression (characterized by the negative relative distortion ratios ρ ≈ -3%, -0.98%, -1% and -0.8% for R = Zn, Mg, Sr and Ba) and elongation (characterized by the positive relative distortion ratios ρ ≈ 29%, 17%, 16% and 28%), respectively, due to the Jahn-Teller effect. Both dopants show similar overall decreasing trends of the cubic field parameter Dq and covalency factor N with decreasing electronegativity of the alkaline earth cation R. The conventional optical basicities Λth and local optical basicities Λloc are calculated for both systems; the local Λloc are higher for Cu2+ than for VO2+ in the same RLNKB glass, despite the opposite relationship for the conventional Λth. This point is supported by the weaker covalency or stronger ionicity for Cu2+ than for VO2+ in the same RLNKB system, characterized by the larger N in the former. The above comparative analysis of the spectral and local structural properties would be helpful to understand the structures and spectroscopic properties of similar oxide glasses with transition-metal dopants of complementary electronic configurations.

  9. Dose comparison between conventional and quasi-monochromatic systems for diagnostic radiology

    NASA Astrophysics Data System (ADS)

    Baldelli, P.; Taibi, A.; Tuffanelli, A.; Gambaccini, M.

    2004-09-01

    Several techniques have been introduced in recent years to reduce the dose to the patient, thereby minimizing the risk of radiation-induced tumours. In this work the radiological potential of dose reduction with quasi-monochromatic spectra produced via mosaic crystal Bragg diffraction has been evaluated, and a comparison with conventional spectra has been performed for four standard examinations: head, chest, abdomen and lumbar sacral spine. We have simulated quasi-monochromatic x-rays with the Shadow code, and conventional spectra with the Spectrum Processor. By means of the PCXMC software, we have simulated four examinations according to parameters established by the European Guidelines, and calculated the absorbed dose for the principal organs and the effective dose. Simulations of quasi-monochromatic laminar beams have been performed without an anti-scatter grid, because of their inherent scatter geometry, and compared with simulations of conventional beams with anti-scatter grids. Results have shown that the dose reduction due to the introduction of quasi-monochromatic x-rays depends on different parameters related to the quality of the beam, the organ composition and the anti-scatter grid. With the parameters chosen in this study a significant dose reduction can be achieved for two out of four kinds of examination.

  10. Low-level laser therapy of myofascial pain syndromes of patients with osteoarthritis of knee and hip joints

    NASA Astrophysics Data System (ADS)

    Gasparyan, Levon V.

    2001-04-01

    The purpose of this research is the comparison of the efficiency of conventional treatment of myofascial pain syndromes in patients with osteoarthritis (OA) of the hip and knee joints and therapy with additional application of low level laser therapy (LLLT), under dynamic control of the clinical picture, rheovasographic and electromyographic examinations, and parameters of lipid peroxidation. The investigation was made on 143 patients with OA of the hip and knee joints. Patients were randomized in 2 groups: the basic group included 91 patients receiving conventional therapy with a course of LLLT; the control group included 52 patients receiving conventional treatment only. Transcutaneous (λ = 890 nm, output peak power 5 W, frequency 80 - 3000 Hz) and intravenous (λ = 633 nm, output 2 mW in the vein) laser irradiation were used for LLLT. The study showed that the clinical efficiency of LLLT combined with conventional treatment of myofascial pain syndromes in patients with OA is associated with attenuation of the pain syndrome, normalization of the parameters of the myofascial syndrome, normalization of vascular tension and of the parameters of the rheographic curves, as well as activation of the antioxidant protection system.

  11. Head-to-Head Comparison of Global Longitudinal Strain Measurements among Nine Different Vendors: The EACVI/ASE Inter-Vendor Comparison Study.

    PubMed

    Farsalinos, Konstantinos E; Daraban, Ana M; Ünlü, Serkan; Thomas, James D; Badano, Luigi P; Voigt, Jens-Uwe

    2015-10-01

    This study was planned by the EACVI/ASE/Industry Task Force to Standardize Deformation Imaging to (1) test the variability of speckle-tracking global longitudinal strain (GLS) measurements among different vendors and (2) compare GLS measurement variability with conventional echocardiographic parameters. Sixty-two volunteers were studied using ultrasound systems from seven manufacturers. Each volunteer was examined by the same sonographer on all machines. Inter- and intraobserver variability was determined in a true test-retest setting. Conventional echocardiographic parameters were acquired for comparison. Using the software packages of the respective manufacturers and of two software-only vendors, endocardial GLS was measured because it was the only GLS parameter that could be provided by all manufacturers. We compared GLSAV (the average from the three apical views) and GLS4CH (measured in the four-chamber view) measurements among vendors and with the conventional echocardiographic parameters. Absolute values of GLSAV ranged from 18.0% to 21.5%, while GLS4CH ranged from 17.9% to 21.4%. The absolute difference between vendors for GLSAV was up to 3.7% strain units (P < .001). The interobserver relative mean errors were 5.4% to 8.6% for GLSAV and 6.2% to 11.0% for GLS4CH, while the intraobserver relative mean errors were 4.9% to 7.3% and 7.2% to 11.3%, respectively. These errors were lower than for left ventricular ejection fraction and most other conventional echocardiographic parameters. Reproducibility of GLS measurements was good and in many cases superior to conventional echocardiographic measurements. The small but statistically significant variation among vendors should be considered in performing serial studies and reflects a reference point for ongoing standardization efforts. Copyright © 2015 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  12. [The safety and effect of transhepatic hilar approach for the treatment of bismuth type Ⅲ and Ⅳ hilar cholangiocarcinoma].

    PubMed

    He, M; Wang, H L; Yan, J Y; Xu, S W; Chen, W; Wang, J

    2018-05-01

    Objective: To compare the efficiency of the transhepatic hilar approach and the conventional approach for the surgical treatment of Bismuth type Ⅲ and Ⅳ hilar cholangiocarcinoma. Methods: There were 42 consecutive patients with hilar cholangiocarcinoma of Bismuth type Ⅲ and Ⅳ who underwent surgical treatment at the Department of Biliary-Pancreatic Surgery, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University from January 2008 to December 2013. The transhepatic hilar approach was used in 19 patients and the conventional approach was performed in 23 patients. There were no differences in clinical parameters between the two groups (all P>0.05). The t-test was used to analyze the measurement data, and the χ² test was used to analyze the count data. Kaplan-Meier analysis was used to analyze the survival period. Multivariate Cox regression analysis was used to analyze the prognostic factors. Results: Among the 19 patients who underwent the transhepatic hilar approach, 3 patients had their operative planning changed after reevaluation by exposing the hepatic hilus. The intraoperative blood loss was 300 (250-400) ml in the transhepatic hilar approach group, which was significantly less than in the conventional approach group, 800 (450-1 300) ml (t=4.276, P=0.001); meanwhile, the R0 resection rate was significantly higher in the transhepatic hilar approach group than in the conventional approach group (89.4% vs. 52.2%; χ²=6.773, P=0.009), and the 3-year and 5-year cumulative survival rates were better in the transhepatic hilar approach group than in the conventional approach group (63.2% vs. 47.8%, 26.3% vs. 0; χ²=66.363, 127.185, P=0.000). On univariate analysis, transhepatic hilar approach, intraoperative blood loss, intraoperative blood transfusion, R0 resection and lymph node metastasis were significant risk factors for patient survival (all P<0.05). On multivariate analysis, use of the transhepatic hilar approach, intraoperative blood loss, R0 resection and lymph node metastasis were significant independent risk factors for patient survival (all P<0.05). Conclusion: The transhepatic hilar approach is the preferred technique for the surgical treatment of hilar cholangiocarcinoma because it can improve the accuracy of surgical planning, the safety of the operation, the R0 resection rate and the survival rate compared with the conventional approach.

  13. Pharmacovigilance in Space: Stability Payload Compliance Procedures

    NASA Technical Reports Server (NTRS)

    Daniels, Vernie R.; Putcha, Lakshmi

    2007-01-01

    Pharmacovigilance is the science of, and activities relating to, the detection, assessment, understanding, and prevention of drug-related problems. Over the last decade, pharmacovigilance activities have contributed to the development of numerous technological and conventional advances focused on medication safety and regulatory intervention. The topics discussed include: 1) Proactive Pharmacovigilance; 2) A New Frontier; 3) Research Activities; 4) Project Purpose; 5) Methods; 6) Flight Stability Kit Components; 7) Experimental Conditions; 8) Research Project Logistics; 9) Research Plan; 10) Pharmaceutical Stability Research Project Pharmacovigilance Aspects; 11) Security / Control; 12) Packaging/Containment Actions; 13) Shelf-Life Assessments; 14) Stability Assessment Parameters; 15) Chemical Content Analysis; 16) Preliminary Results; 17) Temperature/Humidity; 18) Changes in Physical and Chemical Assessment Parameters; 19) Observations; and 20) Conclusions.

  14. [Significance of considering hemoglobin derivatives and acid-base balance in the evaluation of the blood oxygen-transport system].

    PubMed

    Matiushichev, V B; Shamratova, V G; Krapivko, Iu K

    2009-12-01

    Factor analysis was used to study the pattern of relationships of a number of hematological parameters in clinically healthy young subjects and in patients with moderate anemia. The level of total hemoglobin and the concentration of red blood cells were found to reflect the blood oxygen-transport function only incompletely, and these might be referred to as basic characteristics only by convention. To clarify the picture, these criteria should be supplemented by information on other parameters. It is concluded that introducing the ratios of a number of hemoglobin derivatives, the blood oxygen regimen and the acid-base balance can substantially increase the validity of clinical opinions as to this blood function.

  15. Renormalization Group Tutorial

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.

    2004-01-01

    Complex physical systems sometimes have statistical behavior characterized by power- law dependence on the parameters of the system and spatial variability with no particular characteristic scale as the parameters approach critical values. The renormalization group (RG) approach was developed in the fields of statistical mechanics and quantum field theory to derive quantitative predictions of such behavior in cases where conventional methods of analysis fail. Techniques based on these ideas have since been extended to treat problems in many different fields, and in particular, the behavior of turbulent fluids. This lecture will describe a relatively simple but nontrivial example of the RG approach applied to the diffusion of photons out of a stellar medium when the photons have wavelengths near that of an emission line of atoms in the medium.

  16. Renormalization group evolution of the universal theories EFT

    DOE PAGES

    Wells, James D.; Zhang, Zhengkang

    2016-06-21

    The conventional oblique parameters analyses of precision electroweak data can be consistently cast in the modern framework of the Standard Model effective field theory (SMEFT) when restrictions are imposed on the SMEFT parameter space so that it describes universal theories. However, the usefulness of such analyses is challenged by the fact that universal theories at the scale of new physics, where they are matched onto the SMEFT, can flow to nonuniversal theories with renormalization group (RG) evolution down to the electroweak scale, where precision observables are measured. The departure from universal theories at the electroweak scale is not arbitrary, but dictated by the universal parameters at the matching scale. But to define oblique parameters, and more generally universal parameters at the electroweak scale that directly map onto observables, additional prescriptions are needed for the treatment of RG-induced nonuniversal effects. Finally, we perform an RG analysis of the SMEFT description of universal theories, and discuss the impact of RG on simplified, universal-theories-motivated approaches to fitting precision electroweak and Higgs data.

  18. Passive Fully Polarimetric W-Band Millimeter-Wave Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernacki, Bruce E.; Kelly, James F.; Sheen, David M.

    2012-04-01

    We present the theory, design, and experimental results obtained from a scanning passive W-band fully polarimetric imager. Passive millimeter-wave imaging offers persistent day/night imaging and the ability to penetrate dust, clouds and other obscurants, including clothing and dry soil. The single-pixel scanning imager includes both far-field and near-field fore-optics for investigation of polarization phenomena. Using both fore-optics, a variety of scenes including natural and man-made objects were imaged, and the results are presented to show the utility of polarimetric imaging for anomaly detection. The analysis includes conventional Stokes-parameter-based approaches as well as multivariate image analysis methods.
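For illustration, the Stokes parameters at one pixel can be formed from six polarized intensity measurements; this is a generic sketch of Stokes-parameter analysis, not the authors' processing chain:

```python
# Minimal Stokes-parameter sketch (illustrative; not the paper's pipeline).
# Inputs are intensities measured behind six polarization analyzers:
# horizontal, vertical, +45 deg, -45 deg, right- and left-circular.
import math

def stokes(i_h, i_v, i_45, i_135, i_rcp, i_lcp):
    s0 = i_h + i_v            # total intensity
    s1 = i_h - i_v            # horizontal vs. vertical preference
    s2 = i_45 - i_135         # +45 vs. -45 deg preference
    s3 = i_rcp - i_lcp        # right vs. left circular preference
    return s0, s1, s2, s3

def degree_of_polarization(s0, s1, s2, s3):
    return math.sqrt(s1**2 + s2**2 + s3**2) / s0

# Fully horizontally polarized light:
s = stokes(1.0, 0.0, 0.5, 0.5, 0.5, 0.5)
dop = degree_of_polarization(*s)
```

For fully horizontally polarized light the sketch yields S = (1, 1, 0, 0) and a degree of polarization of 1; unpolarized thermal emission would give S1 = S2 = S3 = 0.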

  19. Correlations Between the Contributions of Individual IVS Analysis Centers

    NASA Technical Reports Server (NTRS)

    Bockmann, Sarah; Artz, Thomas; Nothnagel, Axel

    2010-01-01

    Within almost all space-geodetic techniques, contributions of different analysis centers (ACs) are combined in order to improve the robustness of the final product. So far, the contributing series have been assumed to be independent, as each AC processes the observations in different ways. However, the series cannot be completely independent, as each analyst uses the same set of original observations and many applied models are subject to conventions used by each AC. In this paper, it is shown that neglecting correlations between the contributing series yields overly optimistic formal errors and small, but insignificant, differences in the estimated parameters derived from the adjustment of the combined solution.
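The effect described here can be made concrete with the textbook formula for the standard error of a mean of n equally correlated series, Var = (sigma^2/n)(1 + (n-1)*rho); the numbers below are hypothetical, not IVS values:

```python
# Formal error of a combined mean under equi-correlated contributions (sketch).
def combined_sigma(sigma, n, rho):
    # Variance of the mean of n equally correlated series:
    # Var = sigma^2/n * (1 + (n-1)*rho); rho = 0 recovers the independent case.
    return (sigma ** 2 / n * (1.0 + (n - 1) * rho)) ** 0.5

naive = combined_sigma(sigma=1.0, n=5, rho=0.0)    # independence assumed
actual = combined_sigma(sigma=1.0, n=5, rho=0.6)   # positively correlated ACs
```

With five positively correlated series the true formal error is nearly double the naive independent-series value, which is exactly the "too optimistic" effect the abstract describes.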

  20. Stochastic Investigation of Natural Frequency for Functionally Graded Plates

    NASA Astrophysics Data System (ADS)

    Karsh, P. K.; Mukhopadhyay, T.; Dey, S.

    2018-03-01

    This paper presents the stochastic natural frequency analysis of functionally graded material (FGM) plates by applying an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson's ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the required sample size and is found to be computationally efficient compared to conventional Monte Carlo simulation.
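Latin hypercube sampling, used here to train the ANN, splits each input range into N equal strata and draws exactly one sample per stratum per dimension. A minimal sketch with hypothetical FGM parameter ranges (not the paper's values or code):

```python
# Latin hypercube sampling sketch (illustrative ranges, not the paper's data).
import random

def latin_hypercube(n_samples, bounds, seed=0):
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)                      # random stratum order per dimension
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples   # one point inside stratum s
            samples[i][d] = lo + u * (hi - lo)
    return samples

# Hypothetical FGM input ranges: elastic modulus (GPa), mass density (kg/m^3).
pts = latin_hypercube(10, [(60.0, 80.0), (2500.0, 4000.0)])
```

Unlike plain Monte Carlo, every stratum of every input is guaranteed to be covered, which is why far fewer samples suffice to train a surrogate model.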

  1. Winglet effects on the flutter of twin-engine-transport type wing

    NASA Technical Reports Server (NTRS)

    Bhatia, K. G.; Nagaraja, K. S.; Ruhlin, C. L.

    1984-01-01

    Flutter characteristics of a cantilevered high-aspect-ratio wing with a winglet were investigated. The configuration represented a current-technology, twin-engine airplane. A low-speed and a high-speed model were used to evaluate compressibility effects through transonic Mach numbers and a wide range of mass-density ratios. Four flutter mechanisms were obtained, in both test and analysis, from various combinations of configuration parameters. The coupling between wing-tip vertical and chordwise motions was shown to have a significant effect under some conditions. It is concluded that, for the flutter model configurations studied, winglet-related flutter was amenable to conventional flutter analysis techniques.

  2. Soil biota and agriculture production in conventional and organic farming

    NASA Astrophysics Data System (ADS)

    Schrama, Maarten; de Haan, Joj; Carvalho, Sabrina; Kroonen, Mark; Verstegen, Harry; Van der Putten, Wim

    2015-04-01

    Sustainable food production for a growing world population requires a healthy soil that can buffer environmental extremes and minimize its losses. There are currently two views on how to achieve this: by intensifying conventional agriculture or by developing organically based agriculture. It has been established that yields of conventional agriculture can be 20% higher than those of organic agriculture. However, the high yields of intensified conventional agriculture trade off against loss of soil biodiversity, leaching of nutrients, and other unwanted ecosystem dis-services. One of the key explanations for the loss of nutrients and greenhouse gases from intensive agriculture is that it results in highly dynamic nutrient losses, and policy has aimed at reducing temporal variation. However, little is known about how different agricultural practices affect spatial variation, and it is unknown how soil fauna affects this. In this study we compare the spatial and temporal variation of physical, chemical and biological parameters in a long-term (13-year) field experiment with two conventional farming systems (low and medium organic matter input) and one organic farming system (high organic matter input), and we evaluate the impact on the ecosystem services that these farming systems provide. Soil chemical parameters (N availability, N mineralization, pH) and soil biological parameters (nematode abundance, bacterial and fungal biomass) show considerably higher spatial variation under conventional farming than under organic farming. Higher variation in soil chemical and biological parameters coincides with the presence of 'leaky' spots (high nitrate leaching) in the conventional farming systems, which shift unpredictably over the course of one season. Variation in soil physical factors (soil organic matter, soil aggregation, soil moisture) was similar between treatments, but averages were higher under organic farming, indicating more buffered conditions for nutrient cycling.
All these changes coincide with pronounced shifts in soil fauna composition (nematodes, earthworms) and an increase in earthworm activity. Hence, more buffered conditions and shifts in soil fauna composition under organic farming may underlie the observed reduction in spatial variation of soil chemical and biological parameters, which in turn correlates positively with a long-term increase in yield. Our study highlights the need for both policymakers and farmers to support farming practices that increase spatial stability.

  3. Hetero-Material Gate Doping-Less Tunnel FET and Its Misalignment Effects on Analog/RF Parameters

    NASA Astrophysics Data System (ADS)

    Anand, Sunny; Sarin, R. K.

    2018-03-01

    In this paper, with the use of a hetero-material gate technique, a tunnel field-effect transistor (TFET) based on the charge plasma technique is proposed, named the hetero-material gate doping-less tunnel FET (HMG-DLTFET), and a brief study has been done on the effects of misalignment of the bottom gate towards the drain (GMAD) and towards the source (GMAS). The proposed device provides better performance, with the drive current increased threefold compared to the conventional doping-less TFET (DLTFET). The results are then analyzed and compared with the conventional doped hetero-material gate double-gate tunnel FET (HMG-DGTFET). The analog/radiofrequency (RF) performance has been studied for both devices, and a comparative analysis has been done for different parameters such as drain current (ID), transconductance (gm), output conductance (gd), total gate capacitance (Cgg) and cutoff frequency (fT). Both devices performed similarly in different misalignment configurations. When the bottom gate is perfectly aligned, the best performance is observed for both devices, but the doping-less device gives slightly more freedom to fabrication engineers, as the misalignment tolerance of the HMG-DLTFET is better than that of the HMG-DGTFET.
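Among the listed RF figures of merit, the cutoff frequency follows from transconductance and total gate capacitance via the textbook relation fT = gm / (2*pi*Cgg); the device values below are hypothetical, not from the paper:

```python
import math

# Cutoff-frequency sketch from the textbook small-signal relation
# f_T = g_m / (2*pi*C_gg). Device values are hypothetical placeholders.
def cutoff_frequency(gm, cgg):
    return gm / (2.0 * math.pi * cgg)

f_t = cutoff_frequency(gm=1e-4, cgg=1e-15)   # 100 uS, 1 fF (illustrative)
```

This is why a misalignment that degrades gm or inflates Cgg shows up directly as a lower fT in the comparative analysis.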

  4. Taguchi optimization of bismuth-telluride based thermoelectric cooler

    NASA Astrophysics Data System (ADS)

    Anant Kishore, Ravi; Kumar, Prashant; Sanghadasa, Mohan; Priya, Shashank

    2017-07-01

    In the last few decades, considerable effort has been made to enhance the figure-of-merit (ZT) of thermoelectric (TE) materials. However, the performance of commercial TE devices still remains low due to the fact that the module figure-of-merit depends not only on the material ZT, but also on the operating conditions and configuration of TE modules. This study takes into account a comprehensive set of parameters to conduct a numerical performance analysis of the thermoelectric cooler (TEC) using the Taguchi optimization method. The Taguchi method is a statistical tool that predicts the optimal performance with far fewer experimental runs than conventional experimental techniques. Taguchi results are also compared with the optimized parameters obtained by a full factorial optimization method, which reveals that the Taguchi method provides an optimum or near-optimum TEC configuration using only 25 experiments against the 3125 experiments needed by the conventional optimization method. This study also shows that environmental factors such as ambient temperature and cooling coefficient do not significantly affect the optimum geometry and optimum operating temperature of TECs. The optimum TEC configuration for simultaneous optimization of cooling capacity and coefficient of performance is also provided.
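The economy of the Taguchi approach comes from orthogonal arrays: factor effects are estimated from level means over a small fraction of the full factorial. A toy two-level L4(2^3) sketch (4 runs instead of 8; not the paper's 25-run design):

```python
# Taguchi-style main-effects sketch on an L4(2^3) orthogonal array (illustrative).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # 4 runs instead of 2^3 = 8

def response(a, b, c):
    # Hypothetical additive response standing in for a TEC simulation.
    return 5.0 + 2.0 * a - 1.0 * b + 0.5 * c

ys = [response(*row) for row in L4]

def main_effect(factor):
    # Difference between mean response at level 1 and at level 0.
    hi = [y for row, y in zip(L4, ys) if row[factor] == 1]
    lo = [y for row, y in zip(L4, ys) if row[factor] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(f) for f in range(3)]
```

Because the columns are orthogonal, the level means recover the true additive effects exactly, which is the mechanism that lets 25 Taguchi runs stand in for 3125 full-factorial runs.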

  5. Advances in tribological testing of artificial joint biomaterials using multidirectional pin-on-disk testers

    PubMed Central

    Baykal, D.; Siskey, R.S.; Haider, H.; Saikko, V.; Ahlroos, T.; Kurtz, S.M.

    2013-01-01

    The introduction of numerous formulations of ultra-high molecular weight polyethylene (UHMWPE), which is widely used as a bearing material in orthopedic implants, necessitated screening of bearing couples to identify promising iterations for expensive joint simulations. Pin-on-disk (POD) testers capable of multidirectional sliding can correctly rank formulations of UHMWPE with respect to their in vivo wear behavior. However, there are still uncertainties regarding POD test parameters for facilitating clinically relevant wear mechanisms of UHMWPE. Studies on the development of POD testing were briefly summarized. We systematically reviewed wear rate data of UHMWPE generated by POD testers. To determine whether POD testing was capable of correctly ranking bearings and whether the test parameters outlined in ASTM F732 enabled differentiation between the wear behavior of various formulations, mean wear rates of non-irradiated, conventional (25–50 kGy) and highly crosslinked (≥90 kGy) UHMWPE were grouped and compared. The mean wear rates of non-irradiated, conventional and highly crosslinked UHMWPEs were 7.03, 5.39 and 0.67 mm3/MC, respectively. Based on studies that complied with the guidelines of ASTM F732, the mean wear rates of non-irradiated, conventional and highly crosslinked UHMWPEs were 0.32, 0.21 and 0.04 mm3/km, respectively. In both sets of results, the mean wear rate of highly crosslinked UHMWPE was smaller than those of both conventional and non-irradiated UHMWPEs (p<0.05). Thus, POD testers can compare highly crosslinked and conventional UHMWPEs despite different test parameters. Narrowing the allowable range for standardized test parameters could improve the sensitivity of multi-axial testers in correctly ranking materials. PMID:23831149

  6. Bayesian structural equation modeling: a more flexible representation of substantive theory.

    PubMed

    Muthén, Bengt; Asparouhov, Tihomir

    2012-09-01

    This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.
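The "approximate zero" idea can be illustrated in the simplest conjugate case: a parameter given a N(0, tau^2) prior with normal data is shrunk toward zero in proportion to the prior precision. This one-parameter stand-in is not Mplus's estimator, only the shrinkage mechanism it relies on:

```python
# Conjugate-normal sketch of a small-variance ("approximate zero") prior.
# Data model: n observations with mean theta and known noise variance sigma2;
# prior theta ~ N(0, tau2). The posterior mean shrinks the sample mean toward 0.
def posterior_mean(xbar, n, sigma2, tau2):
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)  # shrinkage weight in [0, 1)
    return w * xbar

tight = posterior_mean(xbar=0.3, n=100, sigma2=1.0, tau2=0.01)   # informative prior
loose = posterior_mean(xbar=0.3, n=100, sigma2=1.0, tau2=100.0)  # near-flat prior
```

With tau^2 = 0.01 a cross-loading of 0.3 is pulled halfway to zero, while a near-flat prior leaves it essentially at the maximum-likelihood value; unlike an exact zero, a genuinely large loading can still escape the prior as n grows.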

  7. Automated Voxel-Based Analysis of Volumetric Dynamic Contrast-Enhanced CT Data Improves Measurement of Serial Changes in Tumor Vascular Biomarkers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coolens, Catherine, E-mail: catherine.coolens@rmp.uhn.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario

    2015-01-01

    Objectives: Development of perfusion imaging as a biomarker requires more robust methodologies for quantification of tumor physiology that allow assessment of volumetric tumor heterogeneity over time. This study proposes a parametric method for automatically analyzing perfused tissue from volumetric dynamic contrast-enhanced (DCE) computed tomography (CT) scans and assesses whether this 4-dimensional (4D) DCE approach is more robust and accurate than conventional, region-of-interest (ROI)-based CT methods in quantifying tumor perfusion with preliminary evaluation in metastatic brain cancer. Methods and Materials: Functional parameter reproducibility and analysis of sensitivity to imaging resolution and arterial input function were evaluated in image sets acquired from a 320-slice CT with a controlled flow phantom and patients with brain metastases, whose treatments were planned for stereotactic radiation surgery and who consented to a research ethics board-approved prospective imaging biomarker study. A voxel-based temporal dynamic analysis (TDA) methodology was used at baseline, at day 7, and at day 20 after treatment. The ability to detect changes in kinetic parameter maps in clinical data sets was investigated for both 4D TDA and conventional 2D ROI-based analysis methods. Results: A total of 7 brain metastases in 3 patients were evaluated over the 3 time points. The 4D TDA method showed improved spatial efficacy and accuracy of perfusion parameters compared to ROI-based DCE analysis (P<.005), with a reproducibility error of less than 2% when tested with DCE phantom data. Clinically, changes in transfer constant from the blood plasma into the extracellular extravascular space (K_trans) were seen when using TDA, with substantially smaller errors than the 2D method on both day 7 post radiation surgery (±13%; P<.05) and by day 20 (±12%; P<.04). Standard methods showed a decrease in K_trans but with large uncertainty (111.6 ± 150.5) %.
Conclusions: Parametric voxel-based analysis of 4D DCE CT data resulted in greater accuracy and reliability in measuring changes in perfusion CT-based kinetic metrics, which have the potential to be used as biomarkers in patients with metastatic brain cancer.

  8. Behaviour of fractional loop delay zero crossing digital phase locked loop (FR-ZCDPLL)

    NASA Astrophysics Data System (ADS)

    Nasir, Qassim

    2018-01-01

    This article analyses the performance of a first-order zero-crossing digital phase-locked loop (FR-ZCDPLL) when a fractional loop delay is added to the loop. The non-linear dynamics of the loop are presented, analysed and examined through bifurcation behaviour. Numerical simulation of the loop is conducted to verify the mathematical analysis of the loop operation. The simulation results show that the proposed FR-ZCDPLL enhances performance compared to the conventional zero-crossing DPLL in terms of a wider lock range, capture range and stable operation region. In addition, extensive simulation was conducted to find the optimum loop parameters for different loop environmental conditions. The addition of the fractional loop delay network to the conventional loop also reduces the phase jitter and its variance, especially when the signal-to-noise ratio is low.
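The first-order loop dynamics can be illustrated with the standard sinusoidal phase-error map phi_{k+1} = phi_k + W - K*sin(phi_k), which locks at sin(phi*) = W/K whenever |W/K| <= 1. This textbook sketch omits the fractional delay that is the article's actual subject:

```python
import math

# First-order DPLL phase-error map (illustrative; the paper's loop adds a
# fractional delay that this sketch omits). W is the normalized frequency
# offset and K the loop gain; lock requires |W/K| <= 1.
def iterate_dpll(W, K, phi0=0.0, steps=200):
    phi = phi0
    for _ in range(steps):
        phi = phi + W - K * math.sin(phi)
    return phi

phi_star = iterate_dpll(W=0.2, K=0.8)
```

For W = 0.2 and K = 0.8 the iteration settles at sin(phi*) = 0.25; pushing |W/K| past 1 destroys the fixed point, which is the loss-of-lock boundary the bifurcation analysis explores.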

  9. Cost Effectiveness Analysis of Quasi-In-Motion Wireless Power Transfer for Plug-In Hybrid Electric Transit Buses from Fleet Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeff; Brooker, Aaron

    This study evaluated the costs and benefits associated with the use of stationary-wireless-power-transfer-enabled plug-in hybrid electric buses and determined their cost effectiveness relative to conventional buses and hybrid electric buses. A factorial design was performed over a number of different battery sizes, charging power levels, and numbers of bus stop charging stations. The net present costs were calculated for each vehicle design and provided the basis for design evaluation. In all cases, given the assumed economic conditions, the conventional bus achieved the lowest net present cost, while the optimal plug-in hybrid electric bus scenario beat out the hybrid electric comparison scenario. The parameter sensitivity was also investigated under favorable and unfavorable market penetration assumptions.
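Net present cost ranking can be sketched by discounting each design's annual costs; the capital and operating figures below are hypothetical, chosen only so that the ordering matches the study's qualitative finding:

```python
# Net present cost sketch with hypothetical fleet numbers (not the study's data).
def net_present_cost(capital, annual_cost, years, rate):
    return capital + sum(annual_cost / (1.0 + rate) ** t for t in range(1, years + 1))

# Illustrative designs: (upfront capital $, annual operating cost $/year).
designs = {
    "conventional":  (450_000, 85_000),
    "hybrid":        (600_000, 70_000),
    "wireless_phev": (750_000, 55_000),
}
npc = {name: net_present_cost(cap, op, years=12, rate=0.07)
       for name, (cap, op) in designs.items()}
best = min(npc, key=npc.get)
```

With these placeholder numbers the fuel savings of the electrified designs do not amortize their extra capital over a 12-year life at a 7% discount rate, so the conventional bus wins, mirroring the study's conclusion under its assumed economic conditions.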

  10. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
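The Metropolis-Hastings kernel at the core of this approach accepts a symmetric proposal with probability min(1, pi(x')/pi(x)). A toy random-walk sampler targeting a standard normal (not the streamflow-model posterior):

```python
import math
import random

# Random-walk Metropolis-Hastings sketch targeting a standard normal density
# (illustrative; the paper targets a streamflow-model posterior instead).
def log_target(x):
    return -0.5 * x * x  # log N(0, 1) up to an additive constant

def metropolis_hastings(n, step=1.0, seed=1):
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)          # symmetric proposal
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop                              # accept; otherwise keep x
        chain.append(x)
    return chain

chain = metropolis_hastings(20_000)
mean = sum(chain) / len(chain)
```

The empirical mean and variance of the chain approximate those of the target; credible intervals are then read off the chain's quantiles, which is how the narrower Bayesian intervals in the abstract are obtained.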

  11. Investigation on gas medium parameters for an ArF excimer laser through orthogonal experimental design

    NASA Astrophysics Data System (ADS)

    Song, Xingliang; Sha, Pengfei; Fan, Yuanyuan; Jiang, R.; Zhao, Jiangshan; Zhou, Yi; Yang, Junhong; Xiong, Guangliang; Wang, Yu

    2018-02-01

    Due to the complex kinetics of formation and loss mechanisms, such as the ion-ion recombination reaction, the neutral-species harpoon reaction, excited-state quenching and photon absorption, as well as their interactions, the influence of different gas medium parameters on excimer laser performance varies greatly. Therefore, the effects of gas composition and total gas pressure on excimer laser performance attract continual research studies. In this work, orthogonal experimental design (OED) is used to investigate quantitative and qualitative correlations between output laser energy characteristics and gas medium parameters for an ArF excimer laser operating with a plano-plano optical resonator. Optimized output laser energy with good pulse-to-pulse stability can be obtained effectively by proper selection of the gas medium parameters, which makes the most of the ArF excimer laser device. A simple and efficient method for gas medium optimization is proposed and demonstrated experimentally, which provides a global and systematic solution. By detailed statistical analysis, the significance sequence of the relevant parameter factors and the optimized composition of the gas medium parameters are obtained. Compared with the conventional route of varying a single gas parameter factor sequentially, this paper presents a more comprehensive way of considering multiple variables simultaneously, which seems promising for striking an appropriate balance among various complicated parameters in power-scaling studies of an excimer laser.

  12. Two-compartment modeling of tissue microcirculation revisited.

    PubMed

    Brix, Gunnar; Salehi Ravesh, Mona; Griebel, Jürgen

    2017-05-01

    Conventional two-compartment modeling of tissue microcirculation is used for tracer kinetic analysis of dynamic contrast-enhanced (DCE) computed tomography or magnetic resonance imaging studies although it is well-known that the underlying assumption of an instantaneous mixing of the administered contrast agent (CA) in capillaries is far from being realistic. It was thus the aim of the present study to provide theoretical and computational evidence in favor of a conceptually alternative modeling approach that makes it possible to characterize the bias inherent to compartment modeling and, moreover, to approximately correct for it. Starting from a two-region distributed-parameter model that accounts for spatial gradients in CA concentrations within blood-tissue exchange units, a modified lumped two-compartment exchange model was derived. It has the same analytical structure as the conventional two-compartment model, but indicates that the apparent blood flow identifiable from measured DCE data is substantially overestimated, whereas the three other model parameters (i.e., the permeability-surface area product as well as the volume fractions of the plasma and interstitial distribution space) are unbiased. Furthermore, a simple formula was derived to approximately compute a bias-corrected flow from the estimates of the apparent flow and permeability-surface area product obtained by model fitting. To evaluate the accuracy of the proposed modeling and bias correction method, representative noise-free DCE curves were analyzed. They were simulated for 36 microcirculation and four input scenarios by an axially distributed reference model. As analytically proven, the considered two-compartment exchange model is structurally identifiable from tissue residue data. The apparent flow values estimated for the 144 simulated tissue/input scenarios were considerably biased. 
After bias-correction, the deviations between estimated and actual parameter values were (11.2 ± 6.4) % (vs. (105 ± 21) % without correction) for the flow, (3.6 ± 6.1) % for the permeability-surface area product, (5.8 ± 4.9) % for the vascular volume and (2.5 ± 4.1) % for the interstitial volume; with individual deviations of more than 20% being the exception and just marginal. Increasing the duration of CA administration only had a statistically significant but opposite effect on the accuracy of the estimated flow (declined) and intravascular volume (improved). Physiologically well-defined tissue parameters are structurally identifiable and accurately estimable from DCE data by the conceptually modified two-compartment model in combination with the bias correction. The accuracy of the bias-corrected flow is nearly comparable to that of the three other (theoretically unbiased) model parameters. As compared to conventional two-compartment modeling, this feature constitutes a major advantage for tracer kinetic analysis of both preclinical and clinical DCE imaging studies. © 2017 American Association of Physicists in Medicine.
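The two-compartment exchange structure referred to above is commonly written as a pair of coupled ODEs; below is a generic forward-Euler sketch with made-up parameter values, not the paper's distributed-parameter reference model or its bias-correction formula:

```python
# Generic two-compartment exchange model (2CXM) sketch, forward-Euler integration.
#   vp * dCp/dt = Fp * (Ca - Cp) - PS * (Cp - Ce)
#   ve * dCe/dt = PS * (Cp - Ce)
# Fp: plasma flow, PS: permeability-surface area product, vp/ve: plasma and
# interstitial volume fractions (all per unit tissue volume; values made up).
def simulate_2cxm(ca, dt, Fp=0.5, PS=0.1, vp=0.05, ve=0.25):
    cp, ce, tissue = 0.0, 0.0, []
    for a in ca:
        dcp = (Fp * (a - cp) - PS * (cp - ce)) / vp
        dce = PS * (cp - ce) / ve
        cp += dt * dcp
        ce += dt * dce
        tissue.append(vp * cp + ve * ce)   # measured tissue concentration
    return cp, ce, tissue

# Constant arterial input: both compartments should equilibrate to Ca.
cp, ce, tissue = simulate_2cxm([1.0] * 20_000, dt=0.001)
```

For a constant input both compartments approach the arterial concentration and the tissue signal approaches (vp + ve) * Ca, a useful sanity check before fitting such a model to DCE data.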

  13. Reliability and Reproducibility of Advanced ECG Parameters in Month-to-Month and Year-to-Year Recordings in Healthy Subjects

    NASA Technical Reports Server (NTRS)

    Starc, Vito; Abughazaleh, Ahmed S.; Schlegel, Todd T.

    2014-01-01

    Advanced resting ECG parameters such as the spatial mean QRS-T angle and the QT variability index (QTVI) have important diagnostic and prognostic utility, but their reliability and reproducibility (R&R) are not well characterized. We hypothesized that the spatial QRS-T angle would have relatively higher R&R than parameters such as QTVI that are more responsive to transient changes in the autonomic nervous system. The R&R of several conventional and advanced ECG parameters were studied via intraclass correlation coefficients (ICCs) and coefficients of variation (CVs) in: (1) 15 supine healthy subjects from month-to-month; (2) 27 supine healthy subjects from year-to-year; and (3) 25 subjects after transition from the supine to the seated posture. As hypothesized, for the spatial mean QRS-T angle and many conventional ECG parameters, ICCs were higher and CVs lower than for QTVI, suggesting that the former parameters are more reliable and reproducible.
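The two R&R statistics can be computed as in this sketch on synthetic repeat measurements (illustrative numbers, not the study's recordings): a one-way ICC from between- and within-subject mean squares, and a per-subject coefficient of variation:

```python
# One-way ICC(1,1) and coefficient-of-variation sketch (synthetic repeat data).
data = [  # rows: subjects, columns: repeated measurements (e.g. QRS-T angle, deg)
    [20.0, 21.0, 19.5],
    [35.0, 34.0, 36.0],
    [50.0, 51.5, 49.0],
    [65.0, 64.0, 66.5],
]

def icc_oneway(rows):
    n, k = len(rows), len(rows[0])
    grand = sum(sum(r) for r in rows) / (n * k)
    means = [sum(r) / k for r in rows]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)   # between subjects
    msw = sum((x - m) ** 2 for r, m in zip(rows, means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def cv(row):
    m = sum(row) / len(row)
    sd = (sum((x - m) ** 2 for x in row) / (len(row) - 1)) ** 0.5
    return sd / m

icc = icc_oneway(data)
cvs = [cv(r) for r in data]
```

Here between-subject spread dwarfs the repeat-to-repeat scatter, so the ICC is close to 1 and every CV is small, which is the pattern the study reports for the spatial QRS-T angle.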

  14. Stochastic seismic response of building with super-elastic damper

    NASA Astrophysics Data System (ADS)

    Gur, Sourav; Mishra, Sudib Kumar; Roy, Koushik

    2016-05-01

    Hysteretic yield dampers are widely employed for seismic vibration control of buildings. An improved version of such a damper has been proposed recently by exploiting the superelastic force-deformation characteristics of shape-memory alloy (SMA). Although a number of studies have illustrated the performance of such dampers, a precise estimate of the optimal parameters and performance, along with a comparison with the conventional yield damper, is lacking. Presently, the optimal parameters for the superelastic damper are proposed by conducting systematic design optimization, in which the stochastic response serves as the objective function, evaluated through nonlinear random vibration analysis. These optimal parameters can be employed to establish an initial design for the SMA damper. Further, a comparison among the optimal responses is also presented in order to assess the improvement that can be achieved by the superelastic damper over the yield damper. The consistency of the improvements is also checked by considering the anticipated variation in the system parameters as well as in the seismic loading condition. In spite of the improved performance of the superelastic damper, the available variants of SMA are quite expensive, which limits their applicability. However, recently developed ferrous SMAs are expected to offer even superior performance along with improved cost effectiveness, which can be studied through a life-cycle cost analysis in future work.

  15. Formulation of dynamical theory of X-ray diffraction for perfect crystals in the Laue case using the Riemann surface.

    PubMed

    Saka, Takashi

    2016-05-01

    The dynamical theory for perfect crystals in the Laue case was reformulated using the Riemann surface, as used in complex analysis. In the two-beam approximation, each branch of the dispersion surface is specified by one sheet of the Riemann surface. The characteristic features of the dispersion surface are analytically revealed using four parameters, which are the real and imaginary parts of two quantities specifying the degree of departure from the exact Bragg condition and the reflection strength. By representing these parameters on complex planes, these characteristics can be graphically depicted on the Riemann surface. In the conventional case, the absorption is small and the real part of the reflection strength is large, so the formulation is the same as the traditional analysis. However, when the real part of the reflection strength is small or zero, the two branches of the dispersion surface cross, and the dispersion relationship becomes similar to that of the Bragg case. This is because the geometrical relationships among the parameters are similar in both cases. The present analytical method is generally applicable, irrespective of the magnitudes of the parameters. Furthermore, the present method analytically revealed many characteristic features of the dispersion surface and will be quite instructive for further numerical calculations of rocking curves.

  16. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
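Using a response surface in lieu of finite element solutions can be sketched with a two-level factorial fit: on an orthogonal +/-1 design the least-squares coefficients reduce to simple averages, and squared coefficients rank parameter importance (hypothetical response, not the Ares I-X model):

```python
# Response-surface sketch: fit y = b0 + b1*x1 + b2*x2 on a 2^2 factorial design.
# On an orthogonal +/-1 design the least-squares coefficients reduce to averages.
design = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def expensive_model(x1, x2):
    # Stand-in for a finite element frequency solve (hypothetical stiffness terms).
    return 10.0 + 3.0 * x1 + 0.5 * x2

ys = [expensive_model(*p) for p in design]
b0 = sum(ys) / 4
b1 = sum(y * p[0] for p, y in zip(design, ys)) / 4
b2 = sum(y * p[1] for p, y in zip(design, ys)) / 4

# Variance-based importance ranking: contribution of each parameter ~ b_i^2.
importance = sorted([("x1", b1 ** 2), ("x2", b2 ** 2)], key=lambda t: -t[1])
```

Once fit, the cheap surface replaces the finite element model inside the ANOVA loop, which is exactly the computational shortcut the abstract describes.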

  17. Meta-analysis using Dirichlet process.

    PubMed

    Muthukumarana, Saman; Tiwari, Ram C

    2016-02-01

    This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity, or variation in the underlying effects across studies, while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study-effect parameters have support on a discrete space and enable borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset on the Program for International Student Assessment covering 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods. © The Author(s) 2012.
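The Dirichlet process prior on study effects can be sketched through its stick-breaking construction, w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha); the concentration value below is illustrative:

```python
import random

# Stick-breaking sketch of a (truncated) Dirichlet process DP(alpha, G0).
# The weights attach to atoms drawn from the base measure G0 (omitted here);
# shared atoms are what induce clustering among study effects.
def stick_breaking(alpha, n_atoms, seed=3):
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)   # break off a Beta(1, alpha) fraction
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

w = stick_breaking(alpha=2.0, n_atoms=200)
```

Small alpha concentrates mass on a few atoms (strong clustering of studies); large alpha spreads it thinly, approaching a parametric random-effects model.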

  18. Linear least squares approach for evaluating crack tip fracture parameters using isochromatic and isoclinic data from digital photoelasticity

    NASA Astrophysics Data System (ADS)

    Patil, Prataprao; Vyasarayani, C. P.; Ramji, M.

    2017-06-01

    In this work, the digital photoelasticity technique is used to estimate crack tip fracture parameters for different crack configurations. Conventionally, only isochromatic data surrounding the crack tip are used for SIF estimation, but with the advent of digital photoelasticity, the pixel-wise availability of both isoclinic and isochromatic data can be exploited for SIF estimation in a novel way. A linear least squares approach is proposed to estimate the mixed-mode crack tip fracture parameters by solving the multi-parameter stress field equation. The stress intensity factor (SIF) is extracted from those estimated fracture parameters. The isochromatic and isoclinic data around the crack tip are estimated using the ten-step phase shifting technique. To obtain the unwrapped data, the adaptive quality guided phase unwrapping algorithm (AQGPU) is used. The mixed-mode fracture parameters, especially the SIF, are estimated for specimen configurations such as single edge notch (SEN), center crack, and straight crack ahead of an inclusion using the proposed algorithm. The experimental SIF values estimated using the proposed method are compared with analytical/finite element analysis (FEA) results and are found to be in good agreement.
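In its simplest single-parameter form the least-squares idea has a closed-form solution: for a mode I crack along theta = 0, sigma = K_I / sqrt(2*pi*r). This toy version stands in for the authors' multi-parameter photoelastic formulation:

```python
import math

# Linear least-squares sketch for a single fracture parameter (illustrative).
# Mode I crack-tip stress directly ahead of the tip (theta = 0):
#   sigma(r) = K / sqrt(2*pi*r)
def fit_K(radii, sigmas):
    g = [1.0 / math.sqrt(2.0 * math.pi * r) for r in radii]
    # One-parameter least squares: K = sum(sigma_i * g_i) / sum(g_i^2).
    return sum(s * gi for s, gi in zip(sigmas, g)) / sum(gi * gi for gi in g)

# Synthetic noise-free "measurements" generated from a known K.
K_true = 25.0
radii = [0.001 * i for i in range(1, 11)]                 # 1-10 mm from the tip
sigmas = [K_true / math.sqrt(2.0 * math.pi * r) for r in radii]
K_est = fit_K(radii, sigmas)
```

The multi-parameter case replaces the single basis function g(r) with a matrix of stress-field terms and solves the normal equations, but the structure of the fit is the same.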

  19. Search-based model identification of smart-structure damage

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Macalou, A.

    1991-01-01

    This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart structure example is representative of a class of structures commonly found in aerospace systems and next-generation space structures. Artificial intelligence techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input. Newly encountered models can be incorporated into the model space. This adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. For the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of AI-based model identification with parameter identification is shown to lead to smaller parameter corrections than required by the use of parameter identification alone.

  20. The association between the maximum step length test and the walking efficiency in children with cerebral palsy.

    PubMed

    Kimoto, Minoru; Okada, Kyoji; Sakamoto, Hitoshi; Kondou, Takanori

    2017-05-01

    [Purpose] Improving walking efficiency could be useful for reducing fatigue and extending the possible walking period in children with cerebral palsy (CP). For this purpose, the current study compared conventional parameters of gross motor performance, step length, and cadence in the evaluation of walking efficiency in children with CP. [Subjects and Methods] Thirty-one children with CP (21 boys, 10 girls; mean age, 12.3 ± 2.7 years) participated. Parameters of gross motor performance, including the maximum step length (MSL), maximum side step length, step number, lateral step-up number, and single-leg standing time, were measured on both the dominant and non-dominant sides. Spatio-temporal parameters of walking, including speed, step length, and cadence, were calculated. The total heart beat index (THBI), a parameter of walking efficiency, was also calculated from the heartbeats and distance covered during 10 minutes of walking. To analyze the relationships between these parameters and the THBI, the coefficients of determination were calculated using stepwise analysis. [Results] The MSL of the dominant side best accounted for the THBI (R² = 0.759). [Conclusion] The MSL of the dominant side was the best explanatory parameter for walking efficiency in children with CP.
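
    Taking the abstract's description at face value, the THBI can be computed as total heartbeats divided by the distance walked; the sketch below is a minimal illustration (the function name and example values are hypothetical, not study data).

```python
def total_heart_beat_index(total_beats, distance_m):
    """Total Heart Beat Index: heartbeats per metre walked, computed
    from heartbeats and distance over a fixed walking period."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return total_beats / distance_m

# Hypothetical example: 1200 beats over 600 m in 10 minutes of walking
print(total_heart_beat_index(1200, 600))  # 2.0 beats per metre
```

    A lower index means fewer heartbeats per metre walked, i.e. more efficient walking.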

  1. Cattaneo-Christov based study of {TiO}_2 -CuO/EG Casson hybrid nanofluid flow over a stretching surface with entropy generation

    NASA Astrophysics Data System (ADS)

    Jamshed, Wasim; Aziz, Asim

    2018-06-01

    In the present research, a simplified mathematical model is presented to study the heat transfer and entropy generation of a thermal system containing a hybrid nanofluid. The nanofluid occupies the space over an infinite horizontal surface, and the flow is induced by the non-linear stretching of the surface. A uniform transverse magnetic field, the Cattaneo-Christov heat flux model, and thermal radiation effects are also included in the present study. The similarity technique is employed to reduce the governing non-linear partial differential equations to a set of ordinary differential equations. The Keller Box numerical scheme is then used to approximate the solutions for the thermal analysis. Results are presented for conventional copper oxide-ethylene glycol (CuO-EG) and hybrid titanium-copper oxide/ethylene glycol ({TiO}_2 -CuO/EG) nanofluids. Spherical, hexahedron, tetrahedron, cylindrical, and lamina-shaped nanoparticles are considered in the analysis. The significant findings of the study are the enhanced heat transfer capability of hybrid nanofluids over conventional nanofluids, the greatest heat transfer rate occurring at the smallest value of the shape factor parameter, and the increase in the overall entropy of the system with increasing Reynolds number and Brinkman number.

  2. Transient analysis of a superconducting AC generator using the compensated 2-D model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chun, Y.D.; Lee, H.W.; Lee, J.

    1999-09-01

    A superconducting AC generator (SCG) has many advantages over conventional generators, such as reduced width and size, improved efficiency, and better steady-state stability. The paper presents a 2-D transient analysis of an SCG using the finite element method (FEM). The compensated 2-D model, obtained by lengthening the airgap of the original 2-D model, is proposed for accurate and efficient transient analysis. The accuracy of the compensated 2-D model is verified by a small error of 6.4% relative to experimental data. The transient characteristics of the 30 kVA SCG model have been investigated in detail, and the damper performance for various design parameters is examined.

  3. Principal Component Analysis for pulse-shape discrimination of scintillation radiation detectors

    NASA Astrophysics Data System (ADS)

    Alharbi, T.

    2016-01-01

    In this paper, we report on the application of Principal Component Analysis (PCA) to pulse-shape discrimination (PSD) with scintillation radiation detectors. The details of the method are described, and its performance is experimentally examined by discriminating between neutrons and gamma-rays with a liquid scintillation detector in a mixed radiation field. The performance of the method is also compared against that of the conventional charge-comparison method, demonstrating superior performance, particularly in the low light output range. PCA has the important advantage of automatically extracting the pulse-shape characteristics, which makes the PSD method directly applicable to various scintillation detectors without the need to adjust a PSD parameter.
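
    A minimal sketch of PCA-based pulse-shape discrimination on synthetic pulses: the two-exponential pulse model, its time constants, and the tail fractions are illustrative assumptions, not the detector data from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100)

def make_pulses(tail_fraction, n):
    """Toy scintillation pulses: a fast component plus a slow tail whose
    relative weight differs between particle types."""
    shape = (1 - tail_fraction) * np.exp(-t / 5.0) + tail_fraction * np.exp(-t / 40.0)
    pulses = shape + rng.normal(0, 0.01, size=(n, t.size))
    return pulses / pulses.sum(axis=1, keepdims=True)  # area-normalize

gammas = make_pulses(0.1, 200)    # small slow-tail fraction
neutrons = make_pulses(0.3, 200)  # larger slow-tail fraction
X = np.vstack([gammas, neutrons])

# PCA via SVD: project mean-centred pulses onto the first principal component
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]

# The leading PC score separates the two populations without any
# hand-tuned PSD parameter
print(scores[:200].mean(), scores[200:].mean())
```

    The leading component is learned from the data themselves, which mirrors the "automatic extraction of pulse-shape characteristics" the abstract highlights.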

  4. Metabolomics of Breast Cancer Using High-Resolution Magic Angle Spinning Magnetic Resonance Spectroscopy: Correlations with 18F-FDG Positron Emission Tomography-Computed Tomography, Dynamic Contrast-Enhanced and Diffusion-Weighted Imaging MRI.

    PubMed

    Yoon, Haesung; Yoon, Dahye; Yun, Mijin; Choi, Ji Soo; Park, Vivian Youngjean; Kim, Eun-Kyung; Jeong, Joon; Koo, Ja Seung; Yoon, Jung Hyun; Moon, Hee Jung; Kim, Suhkmann; Kim, Min Jung

    2016-01-01

    Our goal in this study was to find correlations between breast cancer metabolites and conventional quantitative imaging parameters using high-resolution magic angle spinning (HR-MAS) magnetic resonance spectroscopy (MRS), and to find breast cancer subgroups that show high correlations between metabolites and imaging parameters. Between August 2010 and December 2013, we included 53 female patients (mean age 49.6 years; age range 32-75 years) with a total of 53 breast lesions assessed by the Breast Imaging Reporting and Data System. They were enrolled under the following criteria: breast lesions larger than 1 cm in diameter that 1) were suspicious for malignancy on mammography or ultrasound (US), 2) were pathologically confirmed to be breast cancer with US-guided core-needle biopsy (CNB), 3) underwent 3 Tesla MRI with dynamic contrast-enhanced (DCE) and diffusion-weighted imaging (DWI) as well as positron emission tomography-computed tomography (PET-CT), and 4) had an attainable immunohistochemistry profile from CNB. We acquired spectral data by HR-MAS MRS with CNB specimens and expressed the data as relative metabolite concentrations. We compared the metabolites with the signal enhancement ratio (SER), maximum standardized FDG uptake value (SUVmax), apparent diffusion coefficient (ADC), and histopathologic prognostic factors. We calculated Spearman correlations and performed a partial least squares-discriminant analysis (PLS-DA) to further classify patients into subgroups and to find differences in the correlations between HR-MAS spectroscopic values and conventional imaging parameters. In the multivariate analysis, the PLS-DA models built with HR-MAS MRS metabolic profiles showed visible discrimination between high and low SER, SUV, and ADC. In luminal subtype breast cancer, compared with all cases, high SER, ADC, and SUV were more closely clustered on visual assessment. Multiple metabolites were correlated with SER and SUV in all cases. Multiple metabolites also showed correlations with SER and SUV in the ER-positive, HER2-negative, and Ki-67-negative groups. High levels of PC, choline, and glycine acquired from HR-MAS MRS using CNB specimens were noted in the high-SER group on DCE MRI and the high-SUV group on PET-CT, with significant correlations between choline and SER and between PC and SUV. Further studies should investigate whether HR-MAS MRS using CNB specimens can provide similar or greater prognostic information than conventional quantitative imaging parameters.

  5. Sperm with large nuclear vacuoles and semen quality in the evaluation of male infertility.

    PubMed

    Komiya, Akira; Watanabe, Akihiko; Kawauchi, Yoko; Fuse, Hideki

    2013-02-01

    This study compared sperm nuclear vacuoles and semen quality in the evaluation of male infertility. One hundred and forty-two semen samples were obtained from patients who visited the Male Infertility Clinic at Toyama University Hospital. Semen samples were evaluated by conventional semen analyses and the Sperm Motility Analysis System (SMAS). In addition, spermatozoa were analyzed at 3,700-6,150x magnification on an inverted microscope equipped with Nomarski differential interference contrast (DIC) optics. A large nuclear vacuole (LNV) was defined as one or more vacuoles with a maximum diameter exceeding 50% of the width of the sperm head. The percentage of spermatozoa with LNV (% LNV) was calculated for each sample. Correlations between the % LNV and the parameters of SMAS and conventional semen analyses were analyzed. Processed motile spermatozoa from each sample were evaluated. The mean age of the patients was 35 years. Semen volume was 2.9 ± 1.6 mL (0.1-11.0; mean ± standard deviation, minimum-maximum), sperm count was 39.3 ± 54.9 (x10(6)/mL, 0.01-262.0), sperm motility was 25.1 ± 17.8% (0-76.0), and normal sperm morphology was 10.3 ± 10.1% (0-49.0). After motile spermatozoa selection, we could evaluate the % LNV in 125 ejaculates (88.0%), and at least one spermatozoon with an LNV was observed in 118 ejaculates (94.4%). The percentage of spermatozoa with LNV was 28.0 ± 22.4% (0-100), and the % LNV increased significantly as semen quality decreased. The correlation between the % LNV and the semen parameters was weak to moderate; correlation coefficients were -0.3577 for sperm count (p < 0.0001), -0.2368 for sperm motility (p = 0.0084), -0.2769 for motile sperm count (p = 0.019), -0.2419 for total motile sperm count (p = 0.0070), and -0.1676 for normal sperm morphology (p = 0.0639). The % LNV did not show a significant correlation with the SMAS parameters except for a weak correlation with beat/cross frequency (r = -0.2414, p = 0.0071). The percentage of spermatozoa with LNV did not correlate strongly with the parameters of conventional semen analysis or SMAS in patients with male infertility; however, a certain negative influence of LNVs on sperm quality cannot be excluded.

  6. Probabilistic seismic demand analysis using advanced ground motion intensity measures

    USGS Publications Warehouse

    Tothong, P.; Luco, N.

    2007-01-01

    One of the objectives in performance-based earthquake engineering is to quantify the seismic reliability of a structure at a site. For that purpose, probabilistic seismic demand analysis (PSDA) is used as a tool to estimate the mean annual frequency of exceeding a specified value of a structural demand parameter (e.g. interstorey drift). This paper compares and contrasts the use, in PSDA, of certain advanced scalar versus vector and conventional scalar ground motion intensity measures (IMs). One of the benefits of using a well-chosen IM is that more accurate evaluations of seismic performance are achieved without the need to perform detailed ground motion record selection for the nonlinear dynamic structural analyses involved in PSDA (e.g. record selection with respect to seismic parameters such as earthquake magnitude, source-to-site distance, and ground motion epsilon). For structural demands that are dominated by a first mode of vibration, using inelastic spectral displacement (Sdi) can be advantageous relative to the conventionally used elastic spectral acceleration (Sa) and the vector IM consisting of Sa and epsilon (ε). This paper demonstrates that this is true for ordinary and for near-source pulse-like earthquake records. The latter ground motions cannot be adequately characterized by either Sa alone or the vector of Sa and ε. For structural demands with significant higher-mode contributions (under either of the two types of ground motions), even Sdi (alone) is not sufficient, so an advanced scalar IM that additionally incorporates higher modes is used.
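
    The PSDA estimate described above can be sketched numerically: the probability of the demand exceeding a threshold, conditioned on the IM, is weighted by increments of the IM hazard curve and summed. The hazard curve, the lognormal demand model, and all parameter values below are hypothetical stand-ins (SciPy is assumed for the normal survival function).

```python
import numpy as np
from scipy.stats import norm

def hazard(im):
    """Toy IM hazard curve: mean annual frequency of exceeding im."""
    return 1e-4 * im ** -2.5

def p_exceed(edp, im, b=1.0, beta=0.4):
    """P(EDP > edp | IM = im): lognormal demand model (illustrative)."""
    return norm.sf((np.log(edp) - np.log(b * im)) / beta)

# Discretize the hazard curve into increments |d(lambda_IM)|
im_grid = np.linspace(0.05, 2.0, 400)
d_lambda = -np.diff(hazard(im_grid))
im_mid = 0.5 * (im_grid[:-1] + im_grid[1:])

def mafe(edp):
    """Mean annual frequency of EDP > edp via the PSDA summation."""
    return float(np.sum(p_exceed(edp, im_mid) * d_lambda))

print(mafe(0.5), mafe(1.0))  # larger thresholds are exceeded less often
```

    The choice of IM enters through the demand model: a more efficient IM reduces the dispersion beta, tightening the resulting demand hazard curve.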

  7. Comparison of prostate contours between conventional stepping transverse imaging and Twister-based sagittal imaging in permanent interstitial prostate brachytherapy.

    PubMed

    Kawakami, Shogo; Ishiyama, Hiromichi; Satoh, Takefumi; Tsumura, Hideyasu; Sekiguchi, Akane; Takenaka, Kouji; Tabata, Ken-Ichi; Iwamura, Masatsugu; Hayakawa, Kazushige

    2017-08-01

    To compare prostate contours on conventional stepping transverse image acquisitions with those on Twister-based sagittal image acquisitions. Twenty prostate cancer patients scheduled for permanent interstitial prostate brachytherapy were prospectively accrued. A transrectal ultrasonography probe was inserted with the patient in the lithotomy position. Transverse images were obtained with stepping movement of the transverse transducer. In the same patients, sagittal images were also obtained through rotation of the sagittal transducer using the "Twister" mode. The differences in prostate size between the two types of image acquisitions were compared. The relationships among the differences between the two types of image acquisitions, dose-volume histogram (DVH) parameters on the post-implant computed tomography (CT) analysis, and other factors were analyzed. The sagittal image acquisitions showed a larger prostate size than the transverse image acquisitions, especially in the anterior-posterior (AP) direction (p < 0.05). Interestingly, the relative size of the prostate apex in the AP direction on sagittal image acquisitions compared with that on transverse image acquisitions was correlated with DVH parameters such as D90 (R = 0.518, p = 0.019) and V100 (R = 0.598, p = 0.005). There were small but significant differences in the prostate contours between the transverse and the sagittal planning image acquisitions. Furthermore, our study suggested that the differences between the two types of image acquisitions might be correlated with the dosimetric results on CT analysis.

  8. Waters in Croatia between practice and needs: public health challenge.

    PubMed

    Vitale, Ksenija; Marijanović Rajcić, Marija; Senta, Ankica

    2002-08-01

    To describe the monitoring of waters in Croatia and the status of legislation for their evaluation, and to present health-relevant data and a long-term analysis of the Drava river water, which is used in drinking water production. Survey of the databanks of various Croatian institutions related to waters, and physical and chemical analysis of 13 surface water pollutants, applying HRN ISO laboratory methods. From 1992 to 2000, water systems had 10% contaminated samples, whereas local community and private water sources had 30% such samples. Since 1981, 84 waterborne epidemics have been registered, affecting 7,581 people with predominantly gastrointestinal problems. The Drava river monitoring revealed that lead, cadmium, and mercury concentrations have constantly exceeded the allowed values, whereas nickel and copper have remained within the values required for the Drava river to be classified into the second category of surface waters. Both nitrates and nitrites have been increasing over time, with nitrates exceeding and nitrites remaining within guideline values. Total phosphorus and nitrogen concentrations have also increased over time, while still remaining below the maximum allowed values. Chemical oxygen demand has been decreasing. Alkalinity has been satisfactory. Salt burden has been increasing. Both drinking water quality assessment and surface water monitoring in Croatia use fewer parameters than recommended by the World Health Organization or the signed conventions. The quality of the Drava water has been improving, but it still does not fully conform to the second category of surface water. More parameters should be used in its monitoring, as recommended by EU conventions and laws.

  9. Histogram Analysis of Apparent Diffusion Coefficients for Occult Tonsil Cancer in Patients with Cervical Nodal Metastasis from an Unknown Primary Site at Presentation.

    PubMed

    Choi, Young Jun; Lee, Jeong Hyun; Kim, Hye Ok; Kim, Dae Yoon; Yoon, Ra Gyoung; Cho, So Hyun; Koh, Myeong Ju; Kim, Namkug; Kim, Sang Yoon; Baek, Jung Hwan

    2016-01-01

    To explore the added value of histogram analysis of apparent diffusion coefficient (ADC) values over magnetic resonance (MR) imaging and fluorine 18 ((18)F) fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) for the detection of occult palatine tonsil squamous cell carcinoma (SCC) in patients with cervical nodal metastasis from a cancer of an unknown primary site. The institutional review board approved this retrospective study, and the requirement for informed consent was waived. Differences in the bimodal histogram parameters of the ADC values were assessed among occult palatine tonsil SCC (n = 19), overt palatine tonsil SCC (n = 20), and normal palatine tonsils (n = 20). One-way analysis of variance was used to analyze differences among the three groups. Receiver operating characteristic curve analysis was used to determine the best differentiating parameters. The increased sensitivity of histogram analysis over MR imaging and (18)F-FDG PET/CT for the detection of occult palatine tonsil SCC was evaluated as added value. Histogram analysis showed statistically significant differences in the mean, standard deviation, and 50th and 90th percentile ADC values among the three groups (P < .0045). Occult palatine tonsil SCC had a significantly higher standard deviation for the overall curves, mean and standard deviation of the higher curves, and 90th percentile ADC value, compared with normal palatine tonsils (P < .0167). Receiver operating characteristic curve analysis showed that the standard deviation of the overall curve best delineated occult palatine tonsil SCC from normal palatine tonsils, with a sensitivity of 78.9% (15 of 19 patients) and a specificity of 60% (12 of 20 patients). The added value of ADC histogram analysis was 52.6% over MR imaging alone and 15.8% over combined conventional MR imaging and (18)F-FDG PET/CT. 
Adding ADC histogram analysis to conventional MR imaging can improve the detection sensitivity for occult palatine tonsil SCC in patients with a cervical nodal metastasis originating from a cancer of an unknown primary site. © RSNA, 2015.
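
    The histogram parameters reported above (mean, standard deviation, and 50th/90th percentile ADC values) are straightforward to compute; the sketch below uses simulated ADC values, not the study's data, and does not attempt the bimodal (overall/higher-curve) decomposition the study also reports.

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated ADC values from a region of interest (arbitrary units)
adc = rng.normal(1100, 250, size=500)

# Histogram parameters of the kind compared among the three groups
params = {
    "mean": adc.mean(),
    "sd": adc.std(ddof=1),
    "p50": np.percentile(adc, 50),
    "p90": np.percentile(adc, 90),
}
print(params)
```

    In the study, thresholds on such parameters (e.g. the standard deviation of the overall curve) were then evaluated with ROC analysis to separate occult tumours from normal tonsils.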

  10. A novel poly(acrylic acid-co-acrylamide)/diatomite composite flocculant with outstanding flocculation performance.

    PubMed

    Xu, Kun; Liu, Yao; Wang, Yang; Tan, Ying; Liang, Xuecheng; Lu, Cuige; Wang, Haiwei; Liu, Xiusheng; Wang, Pixin

    2015-01-01

    A series of anionic flocculants with outstanding flocculation performance, poly(acrylic acid-co-acrylamide)/diatomite composite flocculants (PAAD), were successfully prepared through aqueous solution copolymerization and applied to the flocculation of oil-field fracturing wastewater. The structure of PAAD was characterized by Fourier transform infrared spectroscopy, (13)C nuclear magnetic resonance, and X-ray diffraction tests, and its properties were systematically evaluated by viscometry, thermogravimetric analysis, and flocculation measurements. Furthermore, the influences of various reaction parameters on the apparent viscosity of the flocculant solution were studied, and the optimum synthesis condition was determined. The novel composite flocculants exhibited outstanding flocculation properties. Specifically, the dosage of composite flocculants needed to raise the transmittance of treated wastewater above 90% was only approximately 12-35 ppm, far lower than that of conventional flocculants. Meanwhile, the settling time was below 5 s, similar to that of conventional flocculants. This was because the PAAD flocculants had a higher absorption capacity and a larger chain-extending space than conventional linear flocculants, which prevented the entanglement of linear polymer chains and significantly improved the flocculation capacity.

  11. Effects of barefoot and barefoot inspired footwear on knee and ankle loading during running.

    PubMed

    Sinclair, Jonathan

    2014-04-01

    Recreational runners frequently suffer from chronic pathologies. The knee and ankle have been highlighted as common injury sites. Barefoot and barefoot-inspired footwear have been cited as treatment modalities for running injuries, as opposed to more conventional running shoes. This investigation examined knee and ankle loading in barefoot and barefoot-inspired footwear in relation to conventional running shoes. Thirty recreational male runners underwent 3D running analysis at 4.0 m·s(-1). Joint moments, patellofemoral contact force and pressure, and Achilles tendon forces were compared between footwear conditions. At the knee, the results show that barefoot and barefoot-inspired footwear were associated with significant reductions in patellofemoral kinetic parameters. The ankle kinetics indicate that barefoot and barefoot-inspired footwear were associated with significant increases in Achilles tendon force compared with conventional shoes. Barefoot and barefoot-inspired footwear may serve to reduce the incidence of knee injuries in runners, although the corresponding increases in Achilles tendon loading may pose an injury risk at this tendon. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Usefulness of Speckle-Tracking Imaging for Right Ventricular Assessment after Acute Myocardial Infarction: A Magnetic Resonance Imaging/Echocardiographic Comparison within the Relation between Aldosterone and Cardiac Remodeling after Myocardial Infarction Study.

    PubMed

    Lemarié, Jérémie; Huttin, Olivier; Girerd, Nicolas; Mandry, Damien; Juillière, Yves; Moulin, Frédéric; Lemoine, Simon; Beaumont, Marine; Marie, Pierre-Yves; Selton-Suty, Christine

    2015-07-01

    Right ventricular (RV) dysfunction after acute myocardial infarction (AMI) is frequent and associated with poor prognosis. The complex anatomy of the right ventricle makes its echocardiographic assessment challenging. Quantification of RV deformation by speckle-tracking echocardiography is a widely available and reproducible technique that readily provides an integrated analysis of all segments of the right ventricle. The aim of this study was to investigate the accuracy of conventional echocardiographic parameters and speckle-tracking echocardiographic strain parameters in assessing RV function after AMI, in comparison with cardiac magnetic resonance imaging (CMR). A total of 135 patients admitted for AMI (73 anterior, 62 inferior) were prospectively studied. Right ventricular function was assessed by echocardiography and CMR within 2 to 4 days of hospital admission. Right ventricular dysfunction was defined as a CMR RV ejection fraction < 50%. Right ventricular global peak longitudinal systolic strain (GLPSS) was calculated by averaging the strain values of the septal, lateral, and inferior walls. Right ventricular dysfunction was documented in 20 patients. Right ventricular GLPSS was the best echocardiographic correlate of CMR RV ejection fraction (r = -0.459, P < .0001) and possessed good diagnostic value for RV dysfunction (area under the receiver operating characteristic curve [AUROC], 0.724; 95% CI, 0.590-0.857), which was comparable with that of RV fractional area change (AUROC, 0.756; 95% CI, 0.647-0.866). In patients with inferior myocardial infarctions, the AUROCs for RV GLPSS (0.822) and inferolateral strain (0.877) were greater than that observed for RV fractional area change (0.760). Other conventional echocardiographic parameters performed poorly (all AUROCs < 0.700). After AMI, RV GLPSS is the best correlate of CMR RV ejection fraction. In patients with inferior AMIs, RV GLPSS displays even higher diagnostic value than conventional echocardiographic parameters. Copyright © 2015 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  13. Probability weighted moments: Definition and relation to parameters of several distributions expressable in inverse form

    USGS Publications Warehouse

    Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.

    1979-01-01

    Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
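
    One common unbiased estimator of the probability weighted moment for an ordered sample x_(1) <= ... <= x_(n) is b_r = n^-1 * sum_i [C(i-1, r) / C(n-1, r)] x_(i). The sketch below computes b_0, b_1, b_2 for hypothetical data (b_0 reduces to the sample mean).

```python
import numpy as np
from math import comb

def pwm_b(x, r):
    """Unbiased sample probability weighted moment b_r of order r,
    computed from the ascending order statistics of x."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # comb(i - 1, r) is zero whenever i - 1 < r, as required
    weights = np.array([comb(i - 1, r) for i in range(1, n + 1)]) / comb(n - 1, r)
    return float(weights @ x / n)

data = [4.2, 1.1, 3.5, 2.8, 5.0, 2.2, 3.9]
b0, b1, b2 = (pwm_b(data, r) for r in range(3))
print(b0, b1, b2)  # b0 equals the sample mean
```

    For distributions defined in inverse form, such as Tukey's lambda, these sample moments can be matched to their population counterparts to estimate the distribution's parameters.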

  14. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  15. Development of a Screening Model for Design and Costing of an Innovative Tailored Granular Activated Carbon Technology to Treat Perchlorate-Contaminated Water

    DTIC Science & Technology

    2007-03-01

    Column experiments were used to obtain model parameters. Cost data used in the model were based on conventional GAC installations, as modified to...

  16. VLBI Analysis with the Multi-Technique Software GEOSAT

    NASA Technical Reports Server (NTRS)

    Kierulf, Halfdan Pascal; Andersen, Per-Helge; Boeckmann, Sarah; Kristiansen, Oddgeir

    2010-01-01

    GEOSAT is a multi-technique geodetic analysis software package developed at Forsvarets Forsknings Institutt (the Norwegian defence research establishment). The Norwegian Mapping Authority has now installed the software and has, together with Forsvarets Forsknings Institutt, adapted it to deliver datum-free normal equation systems in SINEX format. The goal is to be accepted as an IVS Associate Analysis Center and to provide contributions to the IVS EOP combination on a routine basis. GEOSAT is based on an upper-diagonal factorized Kalman filter, which allows estimation of time-variable parameters, such as the troposphere and clocks, as stochastic parameters. The tropospheric delays in various directions are mapped to the tropospheric zenith delay using ray-tracing. Meteorological data from ECMWF with a resolution of six hours are used to perform the ray-tracing, which depends on both elevation and azimuth. Other models follow the IERS and IVS conventions. The Norwegian Mapping Authority has submitted test SINEX files produced with GEOSAT to the IVS. The results have been compared with the existing IVS combined products. In this paper the outcome of these comparisons is presented.

  17. Clinical and microbiological parameters in patients with self-ligating and conventional brackets during early phase of orthodontic treatment.

    PubMed

    Pejda, Slavica; Varga, Marina Lapter; Milosevic, Sandra Anic; Mestrovic, Senka; Slaj, Martina; Repic, Dario; Bosnjak, Andrija

    2013-01-01

    To determine the effect of different bracket designs (conventional brackets and self-ligating brackets) on periodontal clinical parameters and periodontal pathogens in subgingival plaque. The following inclusion criteria were used: requirement of an orthodontic treatment plan starting with alignment and leveling, good general health, a healthy periodontium, no antibiotic therapy in the 6 months preceding the study, and no smoking. The study sample totaled 38 patients (13 male, 25 female; mean age, 14.6 ± 2.0 years). Patients were divided into two groups with random assignment of brackets. Clinical parameters were recorded before placement of the orthodontic appliance (T0) and at 6 weeks (T1), 12 weeks (T2), and 18 weeks (T3) after full bonding of the orthodontic appliances. Periodontal pathogens of the subgingival microflora were detected at T3 using a commercially available polymerase chain reaction test (micro-Dent test) that contains probes for Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis, Prevotella intermedia, Tannerella forsythia, and Treponema denticola. There was a statistically significantly higher prevalence of A. actinomycetemcomitans in patients with conventional brackets than in patients with self-ligating brackets, but there was no statistically significant difference for the other putative periodontal pathogens. The two types of brackets did not show statistically significant differences in periodontal clinical parameters. Bracket design does not seem to have a strong influence on periodontal clinical parameters or on periodontal pathogens in subgingival plaque. The correlation between some periodontal pathogens and clinical periodontal parameters was weak.

  18. Physically based probabilistic seismic hazard analysis using broadband ground motion simulation: a case study for the Prince Islands Fault, Marmara Sea

    NASA Astrophysics Data System (ADS)

    Mert, Aydin; Fahjan, Yasin M.; Hutchings, Lawrence J.; Pınar, Ali

    2016-08-01

    The main motivation for this study was the impending occurrence of a catastrophic earthquake along the Prince Islands Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in Istanbul. This study provides the results of a physically based probabilistic seismic hazard analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, that may be vulnerable to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We included the effects of all considerable-magnitude earthquakes. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real, small-magnitude earthquakes recorded by a local seismic array were used as empirical Green's functions. For the frequencies below 0.5 Hz, the simulations were obtained by using synthetic Green's functions, which are synthetic seismograms calculated by an explicit 2D/3D elastic finite difference wave propagation routine. By using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we produced a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here followed the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes the full rupture of earthquakes along faults. Furthermore, conventional PSHA predicts ground motion parameters by using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground motion parameters. PSHA results were produced for 2%, 10%, and 50% hazard levels for all sites studied in the Marmara region.

  19. Influence of footwear designed to boost energy return on running economy in comparison to a conventional running shoe.

    PubMed

    Sinclair, J; Mcgrath, R; Brook, O; Taylor, P J; Dillon, S

    2016-01-01

    Running economy is a reflection of the amount of inspired oxygen required to maintain a given velocity and is considered a determining factor for running performance. Athletic footwear has been advocated as a mechanism by which running economy can be enhanced. New commercially available footwear has been developed in order to increase energy return, although their efficacy has not been investigated. This study aimed to examine the effects of energy return footwear on running economy in relation to conventional running shoes. Twelve male runners completed 6-min steady-state runs in conventional and energy return footwear. Overall, oxygen consumption (VO2), heart rate, respiratory exchange ratio, shoe comfort and rating of perceived exertion were assessed. Moreover, participants subjectively indicated which shoe condition they preferred for running. Differences in shoe comfort and physiological parameters were examined using Wilcoxon signed-rank tests, whilst shoe preferences were tested using a chi-square analysis. The results showed that VO2 and respiratory exchange ratio were significantly lower, and shoe comfort was significantly greater, in the energy return footwear. Given the relationship between running economy and running performance, these observations indicate that the energy return footwear may be associated with enhanced running performance in comparison to conventional shoes.
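
    The paired non-parametric comparison used in this study can be reproduced with a self-contained Wilcoxon signed-rank test. The sketch below uses the normal approximation to the test statistic; the VO2 values are hypothetical, not the study's data.

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test for paired samples,
    using the normal approximation for the p-value."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero diffs
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Hypothetical paired VO2 data (ml/kg/min), 12 runners, two shoes.
vo2_conventional = [48.2, 50.1, 46.7, 52.3, 49.5, 47.8,
                    51.0, 45.9, 50.6, 48.8, 47.2, 49.9]
vo2_energy_return = [47.0, 49.2, 45.6, 51.0, 48.6, 46.5,
                     50.1, 44.8, 49.8, 47.7, 46.0, 48.9]
p_value = wilcoxon_signed_rank(vo2_conventional, vo2_energy_return)
```

    With every assumed pair lower in the energy return shoe, the test rejects the null hypothesis of no difference, mirroring the pattern the abstract reports.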

  20. Systematic methods for the design of a class of fuzzy logic controllers

    NASA Astrophysics Data System (ADS)

    Yasin, Saad Yaser

    2002-09-01

    Fuzzy logic control, a relatively new branch of control, can be used effectively whenever conventional control techniques become inapplicable or impractical. Various attempts have been made to create a generalized fuzzy control system and to formulate an analytically based fuzzy control law. In this study, two methods, the left and right parameterization method and the normalized spline-base membership function method, were utilized for formulating analytical fuzzy control laws in important practical control applications. The first model was used to design an idle speed controller, while the second was used to control an inverted pendulum. The results of both showed that a fuzzy logic control system based on the developed models could be used effectively to control highly nonlinear and complex systems. This study also investigated the application of fuzzy control in areas not fully utilizing fuzzy logic control. Three important practical applications pertaining to the automotive industry were studied. The first automotive-related application was the idle speed control of spark ignition engines, using two fuzzy control methods: (1) left and right parameterization, and (2) fuzzy clustering techniques and experimental data. The simulation and experimental results showed that a fuzzy controller with performance comparable to a conventional controller could be designed based only on experimental data and intuitive knowledge of the system. In the second application, the automotive cruise control problem, a fuzzy control model was developed using a parameter-adaptive proportional-plus-integral-plus-derivative (PID)-type fuzzy logic controller. Results were comparable to those using linearized conventional PID and linear quadratic regulator (LQR) controllers and, in certain cases and conditions, the developed controller outperformed the conventional PID and LQR controllers.
The third application involved the air/fuel ratio control problem, using fuzzy clustering techniques, experimental data, and a conversion algorithm to develop a fuzzy-based control algorithm. Results were similar to those obtained by recently published conventional control based studies. The influence of the fuzzy inference operators and parameters on the performance and stability of the fuzzy logic controller was studied. Results indicated that the selection of certain parameters, or combinations of parameters, greatly affects the performance and stability of the fuzzy controller. Diagnostic guidelines for tuning or changing certain factors or parameters to improve controller performance were developed based on knowledge gained from conventional control methods and from the experimental and simulation results of this study.
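
    As a minimal illustration of the kind of rule-based controller discussed above (not the thesis's left-right parameterization or clustering methods), the sketch below implements a zero-order Sugeno fuzzy controller with three triangular membership functions on the error and singleton consequents; all set boundaries and consequent values are assumed for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b.
    Degenerate edges (a == b or b == c) act as shoulders."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_controller(error):
    """Zero-order Sugeno controller: fuzzy sets Negative/Zero/Positive
    on the normalized error, singleton consequents (-1, 0, +1),
    weighted-average defuzzification. Boundaries are assumed values."""
    mu_n = tri(error, -1.0, -1.0, 0.0)   # Negative
    mu_z = tri(error, -1.0, 0.0, 1.0)    # Zero
    mu_p = tri(error, 0.0, 1.0, 1.0)     # Positive
    num = -1.0 * mu_n + 0.0 * mu_z + 1.0 * mu_p
    den = mu_n + mu_z + mu_p
    return num / den if den else 0.0
```

    With these symmetric sets the control surface is smooth and odd in the error, which is the analytical tractability that parameterized membership-function methods aim to preserve.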

  1. Probabilistic image modeling with an extended chain graph for human activity recognition and image segmentation.

    PubMed

    Zhang, Lei; Zeng, Zhi; Ji, Qiang

    2011-09-01

    Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis is very limited due to lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to CG model with more general topology and the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.

  2. Specialized sperm function tests in varicocele and the future of andrology laboratory

    PubMed Central

    Majzoub, Ahmad; Esteves, Sandro C; Gosálvez, Jaime; Agarwal, Ashok

    2016-01-01

    Varicocele is a common medical condition entangled in many controversies. Though it is highly prevalent in men with infertility, it is also found in men with normal fertility. Determining which patients are negatively affected by varicocele would enable clinicians to better select those men who would benefit most from surgery. Since conventional semen analysis has been limited in its ability to evaluate the negative effects of varicocele on fertility, a multitude of specialized laboratory tests have emerged. In this review, we examine the role and significance of specialized sperm function tests with regard to varicocele. Among the various tests, analysis of sperm DNA fragmentation and measurement of oxidative stress markers provide an independent measure of fertility in men with varicocele. These diagnostic modalities offer both diagnostic and prognostic information complementary to, but distinct from, conventional sperm parameters. Test results can guide management and aid in monitoring intervention outcomes. Proteomics, metabolomics, and genomics are areas that, though still developing, hold promise to revolutionize our understanding of reproductive physiology, including varicocele. PMID:26780873

  3. Use of solid phase extraction (SPE) to evaluate in vitro skin permeation of aescin.

    PubMed

    Montenegro, L; Carbone, C; Giannone, I; Puglisi, G

    2007-05-01

    The aim of this work was to evaluate the feasibility of assessing aescin in vitro permeation through human skin by determining the amount of aescin permeated using conventional HPLC procedures after extraction of skin permeation samples by means of solid phase extraction (SPE). Aescin in vitro skin permeation was assessed from aqueous solutions and gels using both Franz-type diffusion cells and flow-through diffusion cells. The SPE method used was highly accurate (mean accuracy 99.66%), highly reproducible (intra-day and inter-day variations lower than 2.3% and 2.2%, respectively), and aescin recovery from normal saline was greater than 99%. The use of Franz-type diffusion cells did not allow us to determine aescin flux values through excised human skin; therefore, aescin skin permeation parameters could be calculated only using flow-through diffusion cells. Plotting the cumulative amount of aescin permeated as a function of time, linear relationships were obtained from both the aqueous solution and the gel using flow-through diffusion cells. Aescin flux values through excised human skin from the aqueous gel were significantly lower than those observed from the aqueous solution (p < 0.05). Calculation of aescin percutaneous absorption parameters showed that the aescin partition coefficient was lower from the aqueous gel than from the aqueous solution. Therefore, the SPE method used in this study was suitable to determine aescin in vitro skin permeation parameters from aqueous solutions and gels using a conventional HPLC method for the analysis of the skin permeation samples.

  4. Influence of ocean tides on the diurnal and semidiurnal earth rotation variations from VLBI observations

    NASA Astrophysics Data System (ADS)

    Gubanov, V. S.; Kurdubov, S. L.

    2015-05-01

    The International astrogeodetic standard IERS Conventions (2010) contains a model of the diurnal and semidiurnal variations in Earth rotation parameters (ERPs), the pole coordinates and the Universal Time, arising from lunisolar tides in the world ocean. This model was constructed in the mid-1990s through a global analysis of Topex/Poseidon altimetry. The goal of this study is to estimate the parameters of this model by processing all the available VLBI observations on a global network of stations over the last 35 years performed within the framework of IVS (International VLBI Service) geodetic programs. The complexity of the problem lies in the fact that the sought-for corrections to the parameters of this model lie within 1 mm and, thus, are at the limit of their detectability by all currently available methods of ground-based positional measurements. This requires applying universal software packages with a high accuracy of reduction calculations and a well-developed system of controlling the simultaneous adjustment of observational data to analyze long series of VLBI observations. This study has been performed with the QUASAR software package developed at the Institute of Applied Astronomy of the Russian Academy of Sciences. Although the results obtained, on the whole, confirm the high accuracy of the basic model in the IERS Conventions (2010), statistically significant corrections that allow this model to be refined have been detected for some harmonics of the ERP variations.

  5. The impact of new partial AUC parameters for evaluating the bioequivalence of prolonged-release formulations.

    PubMed

    Boily, Michaël; Dussault, Catherine; Massicotte, Julie; Guibord, Pascal; Lefebvre, Marc

    2015-01-23

    To demonstrate bioequivalence (BE) between two prolonged-release (PR) drug formulations, single dose studies under fasting and fed state as well as at least one steady-state study are currently required by the European Medicines Agency (EMA). Recently, however, there have been debates regarding the relevance of steady-state studies. New requirements in single-dose investigations have also been suggested by the EMA to address the absence of a parameter that can adequately assess the equivalence of the shape of the curves. In the draft guideline issued in 2013, new partial area under the curve (pAUC) pharmacokinetic (PK) parameters were introduced to that effect. In light of these potential changes, there is a need for supportive clinical evidence to evaluate the impact of pAUCs on the evaluation of BE between PR formulations. In this retrospective analysis, it was investigated whether the newly defined parameters were associated with an increase in discriminatory ability or a change in variability compared to the conventional PK parameters. Among the single dose studies that met the requirements already in place, 20% were found unable to meet the EMA's new requirements with regard to the pAUC PK parameters. When pairing fasting and fed studies for the same formulation, the failure rate increased to 40%. In some cases, due to the high variability of these parameters, an increase of the sample size would be required to prove BE. In other cases, however, the pAUC parameters demonstrated a robust ability to detect differences between the shapes of the curves of PR formulations. The present analysis should help to better understand the impact of the upcoming changes in European regulations on PR formulations and in the design of future BE studies. Copyright © 2014 Elsevier B.V. All rights reserved.
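
    A partial AUC of the kind introduced in the EMA draft guideline is simply the concentration-time curve integrated over a sub-interval. The sketch below uses the linear trapezoidal rule with interpolation at the cut-off times; the sampling schedule, concentrations, and cut-off window are hypothetical, and the actual pAUC boundaries are defined by the guideline, not by this code.

```python
def partial_auc(times, concs, t_start, t_end):
    """Partial area under the concentration-time curve (linear
    trapezoidal rule), with linear interpolation at the cut-offs."""
    def interp(t):
        # Linear interpolation between the bracketing samples.
        for (t0, c0), (t1, c1) in zip(zip(times, concs),
                                      zip(times[1:], concs[1:])):
            if t0 <= t <= t1:
                return c0 + (c1 - c0) * (t - t0) / (t1 - t0)
        raise ValueError("cut-off outside the sampling interval")
    # Interpolated endpoints plus samples strictly inside the window.
    pts = [(t_start, interp(t_start))]
    pts += [(t, c) for t, c in zip(times, concs) if t_start < t < t_end]
    pts.append((t_end, interp(t_end)))
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for (t0, c0), (t1, c1) in zip(pts, pts[1:]))

# Hypothetical sampling times (h) and concentrations (ng/mL).
times = [0.0, 1.0, 2.0, 4.0]
concs = [0.0, 10.0, 20.0, 10.0]
pauc_early = partial_auc(times, concs, 0.0, 2.0)  # early-exposure window
```

    Comparing such windowed areas between test and reference formulations is what gives the pAUC metrics their sensitivity to the shape of the curve, beyond what total AUC and Cmax capture.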

  6. Cost and cost effectiveness of long-lasting insecticide-treated bed nets - a model-based analysis

    PubMed Central

    2012-01-01

    Background The World Health Organization recommends that national malaria programmes universally distribute long-lasting insecticide-treated bed nets (LLINs). LLINs provide effective insecticide protection for at least three years while conventional nets must be retreated every 6-12 months. LLINs may also promise longer physical durability (lifespan), but at a higher unit price. No prospective data currently available is sufficient to calculate the comparative cost effectiveness of different net types. We thus constructed a model to explore the cost effectiveness of LLINs, asking how a longer lifespan affects the relative cost effectiveness of nets, and if, when and why LLINs might be preferred to conventional insecticide-treated nets. An innovation of our model is that we also considered the replenishment need i.e. loss of nets over time. Methods We modelled the choice of net over a 10-year period to facilitate the comparison of nets with different lifespan (and/or price) and replenishment need over time. Our base case represents a large-scale programme which achieves high coverage and usage throughout the population by distributing either LLINs or conventional nets through existing health services, and retreats a large proportion of conventional nets regularly at low cost. We identified the determinants of bed net programme cost effectiveness and parameter values for usage rate, delivery and retreatment cost from the literature. One-way sensitivity analysis was conducted to explicitly compare the differential effect of changing parameters such as price, lifespan, usage and replenishment need. Results If conventional and long-lasting bed nets have the same physical lifespan (3 years), LLINs are more cost effective unless they are priced at more than USD 1.5 above the price of conventional nets. 
Because a longer lifespan brings delivery cost savings, each one year increase in lifespan can be accompanied by a USD 1 or more increase in price without the cheaper net (of the same type) becoming more cost effective. Distributing replenishment nets each year in addition to the replacement of all nets every 3-4 years increases the number of under-5 deaths averted by 5-14% at a cost of USD 17-25 per additional person protected per annum or USD 1080-1610 per additional under-5 death averted. Conclusions Our results support the World Health Organization recommendation to distribute only LLINs, while giving guidance on the price thresholds above which this recommendation will no longer hold. Programme planners should be willing to pay a premium for nets which have a longer physical lifespan, and if planners are willing to pay USD 1600 per under-5 death averted, investing in replenishment is cost effective. PMID:22475679
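
    The price-versus-lifespan trade-off above can be illustrated by annualizing the cost of keeping one net in service. The sketch below uses hypothetical unit costs, not the paper's parameter values; with these numbers the breakeven LLIN premium equals the lifespan multiplied by the annual retreatment cost, mirroring the roughly USD 1.5 threshold reported.

```python
def annual_cost_per_net_year(price, delivery_cost, lifespan_years,
                             retreatment_cost_per_year=0.0):
    """Annualized cost of keeping one net in service: purchase plus
    delivery amortized over the lifespan, plus yearly retreatment."""
    return (price + delivery_cost) / lifespan_years + retreatment_cost_per_year

# Hypothetical unit costs in USD: conventional net at 3.00 with
# 0.50/year retreatment; LLIN at a 1.50 premium, same 3-year
# physical lifespan, no retreatment needed.
conv_cost = annual_cost_per_net_year(3.00, 1.50, 3,
                                     retreatment_cost_per_year=0.50)
llin_cost = annual_cost_per_net_year(4.50, 1.50, 3)
```

    At exactly a 1.50 premium the two options cost the same per net-year under these assumptions; any smaller premium favors the LLIN, which is the qualitative conclusion of the model above.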

  7. Analysis of heat transfer for unsteady MHD free convection flow of rotating Jeffrey nanofluid saturated in a porous medium

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Nor Athirah; Khan, Ilyas; Shafie, Sharidan; Alshomrani, Ali Saleh

    In this article, the influence of thermal radiation on unsteady magnetohydrodynamic (MHD) free convection flow of rotating Jeffrey nanofluid passing through a porous medium is studied. The silver nanoparticles (AgNPs) are dispersed in kerosene oil (KO), which is chosen as the conventional base fluid. Appropriate dimensionless variables are used and the system of equations is transformed into dimensionless form. The resulting problem is solved using the Laplace transform technique. The impact of pertinent parameters, including volume fraction φ, material parameters of Jeffrey fluid λ1, λ, rotation parameter r, Hartmann number Ha, permeability parameter K, Grashof number Gr, Prandtl number Pr, radiation parameter Rd and dimensionless time t, on velocity and temperature profiles is presented graphically with comprehensive discussions. It is observed that the rotation parameter, due to the Coriolis force, tends to decrease the primary velocity, but the reverse effect is observed for the secondary velocity. It is also observed that the Lorentz force retards the fluid flow for both primary and secondary velocities. The expressions for skin friction and Nusselt number are also evaluated for different values of the emerging parameters. A comparative study with existing published work is provided in order to verify the present results. An excellent agreement is found.

  8. On standardization of low symmetry crystal fields

    NASA Astrophysics Data System (ADS)

    Gajek, Zbigniew

    2015-07-01

    Standardization methods for low symmetry - orthorhombic, monoclinic and triclinic - crystal fields are formulated and discussed. Two alternative approaches are presented: the conventional one, based on the second-rank parameters, and the standardization based on the fourth-rank parameters. Mainly f-electron systems are considered, but some guidelines for d-electron systems and the spin Hamiltonian describing the zero-field splitting are given. The discussion focuses on premises for choosing the most suitable method, in particular on the inadequacy of the conventional one. A few examples from the literature illustrate this situation.

  9. Investigating the complexity of precipitation sets within California via the fractal-multifractal method

    NASA Astrophysics Data System (ADS)

    Puente, Carlos E.; Maskey, Mahesh L.; Sivakumar, Bellie

    2017-04-01

    A deterministic geometric approach, the fractal-multifractal (FM) method, is adapted in order to encode highly intermittent daily rainfall records observed over a year. Using such a notion, this research investigates the complexity of rainfall in various stations within the State of California. Specifically, records gathered at (from South to North) Cherry Valley, Merced, Sacramento and Shasta Dam, containing 59, 116, 115 and 72 years, all ending at water year 2015, were encoded and analyzed in detail. The analysis reveals that: (a) the FM approach yields faithful encodings of all records, by years, with mean square and maximum errors in accumulated rain that are less than a mere 2% and 10%, respectively; (b) the evolution of the corresponding "best" FM parameters, allowing visualization of the inter-annual rainfall dynamics from a reduced vantage point, exhibit implicit variability that precludes discriminating between sites and extrapolating to the future; (c) the evolution of the FM parameters, restricted to specific regions within space, allows finding sensible future simulations; and (d) the rain signals at all sites may be termed "equally complex," as usage of k-means clustering and conventional phase space analysis of FM parameters yields comparable results for all sites.

  10. A comprehensive method for GNSS data quality determination to improve ionospheric data analysis.

    PubMed

    Kim, Minchan; Seo, Jiwon; Lee, Jiyun

    2014-08-14

    Global Navigation Satellite Systems (GNSS) are now recognized as cost-effective tools for ionospheric studies by providing the global coverage through worldwide networks of GNSS stations. While GNSS networks continue to expand to improve the observability of the ionosphere, the amount of poor quality GNSS observation data is also increasing and the use of poor-quality GNSS data degrades the accuracy of ionospheric measurements. This paper develops a comprehensive method to determine the quality of GNSS observations for the purpose of ionospheric studies. The algorithms are designed especially to compute key GNSS data quality parameters which affect the quality of ionospheric product. The quality of data collected from the Continuously Operating Reference Stations (CORS) network in the conterminous United States (CONUS) is analyzed. The resulting quality varies widely, depending on each station and the data quality of individual stations persists for an extended time period. When compared to conventional methods, the quality parameters obtained from the proposed method have a stronger correlation with the quality of ionospheric data. The results suggest that a set of data quality parameters when used in combination can effectively select stations with high-quality GNSS data and improve the performance of ionospheric data analysis.

  12. Computational Electrocardiography: Revisiting Holter ECG Monitoring.

    PubMed

    Deserno, Thomas M; Marx, Nikolaus

    2016-08-05

    Since 1942, when Goldberger introduced the 12-lead electrocardiography (ECG), this diagnostic method has remained essentially unchanged. After 70 years of technologic developments, we revisit Holter ECG from recording to understanding. A fundamental change is foreseen towards "computational ECG" (CECG), where continuous monitoring produces big data volumes that are impossible to inspect conventionally but require efficient computational methods. We draw parallels between CECG and computational biology, in particular with respect to computed tomography, computed radiology, and computed photography. From that, we identify the technology and methodology needed for CECG. Real-time transfer of raw data into meaningful parameters that are tracked over time will allow prediction of serious events, such as sudden cardiac death. Evolved from Holter's technology, portable smartphones with Bluetooth-connected textile-embedded sensors will capture noisy raw data (recording), process meaningful parameters over time (analysis), and transfer them to cloud services for sharing (handling), predicting serious events, and alarming (understanding). To make this happen, the following fields need more research: i) signal processing, ii) cycle decomposition, iii) cycle normalization, iv) cycle modeling, v) clinical parameter computation, vi) physiological modeling, and vii) event prediction. We shall start immediately developing methodology for CECG analysis and understanding.

  13. Bayesian characterization of micro-perforated panels and multi-layer absorbers

    NASA Astrophysics Data System (ADS)

    Schmitt, Andrew Alexander Joseph

    First described by the late acoustician Dah-You Maa, micro-perforated panel (MPP) absorbers produce extremely high acoustic absorption coefficients. This is achieved without the conventional fibrous or porous materials often used in acoustic treatments, meaning MPP absorbers can be implemented in, and withstand, critical situations where traditional absorbers do not suffice. A single micro-perforated panel yields high but relatively narrow-band absorption around certain frequencies, although wide-band absorption can be designed by stacking multiple MPP absorbers with different characteristic parameters. Using Bayesian analysis, the physical properties of panel thickness, pore diameter, perforation ratio, and air depth are estimated inversely from experimental data of acoustic absorption, based on theoretical models for the design of micro-perforated panels. Furthermore, this analysis helps to understand the interdependence and uncertainties of the parameters and how each affects the performance of the panel. Various micro-perforated panels are manufactured and tested in single- and double-layer absorber constructions.

  14. Multiobjective robust design of the double wishbone suspension system based on particle swarm optimization.

    PubMed

    Cheng, Xianfu; Lin, Yuqun

    2014-01-01

    The performance of the suspension system is one of the most important factors in the vehicle design. For the double wishbone suspension system, the conventional deterministic optimization does not consider any deviations of design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established by software ADAMS. The sensitivity analysis is utilized to determine main design variables. Then, the simulation experiment is arranged and the Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied and the tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.
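
    A minimal particle swarm optimizer of the kind applied above can be sketched as follows. This is a generic single-objective PSO run on a smooth test function, not the paper's multiobjective, Kriging-based robust design; the inertia and acceleration coefficients are common textbook values.

```python
import random

def pso(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer with inertia and the usual
    cognitive (personal-best) and social (global-best) terms."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = pbest[min(range(n_particles), key=pbest_val.__getitem__)][:]
    g_val = min(pbest_val)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < g_val:
                    g, g_val = pos[i][:], val
    return g, g_val

# Sanity check on the 2D sphere function (not the suspension model).
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

    In the robust design setting, `f` would instead evaluate a weighted combination of the Kriging-predicted mean and variance of the quality characteristics at a candidate set of key-point positions.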

  15. Power Law Versus Exponential Form of Slow Crack Growth of Advanced Structural Ceramics: Dynamic Fatigue

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.

    2002-01-01

    The life prediction analysis based on an exponential crack velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress-rate ("dynamic fatigue") and preload testing at ambient and elevated temperatures. The data fit to the strength versus ln(stress rate) relation was found to be very reasonable for most of the materials. It was also found that the preloading technique was equally applicable for the case of slow crack growth (SCG) parameter n > 30. The major limitation of the exponential crack velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback as compared to the conventional power-law crack velocity formulation.
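
    Under the conventional power-law crack velocity formulation, dynamic fatigue data give strength proportional to (stress rate)^(1/(n+1)), so the SCG parameter n follows directly from the slope of a log-log fit with no inert strength required. The sketch below recovers n from synthetic, noise-free data; all numbers are illustrative, not from the report.

```python
import math

def scg_exponent_from_dynamic_fatigue(stress_rates, strengths):
    """Estimate the slow-crack-growth exponent n from constant
    stress-rate ('dynamic fatigue') data.  Power-law crack growth
    implies log(strength) = [1/(n+1)] * log(stress rate) + const,
    so n is recovered from the least-squares slope."""
    xs = [math.log10(r) for r in stress_rates]
    ys = [math.log10(s) for s in strengths]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return 1.0 / slope - 1.0

# Synthetic check: strengths generated with n = 20 (slope 1/21),
# stress rates in MPa/s, prefactor 300 chosen arbitrarily.
rates = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
n_true = 20.0
strengths = [300.0 * r ** (1.0 / (n_true + 1.0)) for r in rates]
n_est = scg_exponent_from_dynamic_fatigue(rates, strengths)
```

    The exponential formulation, by contrast, linearizes strength against ln(stress rate) itself, which is why its slope alone cannot yield n without an independently measured inert strength.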

  16. Polar decomposition of the Mueller matrix: a polarimetric rule of thumb for square-profile surface structure recognition.

    PubMed

    Sanz, J M; Saiz, J M; González, F; Moreno, F

    2011-07-20

    In this research, the polar decomposition (PD) method is applied to experimental Mueller matrices (MMs) measured on two-dimensional microstructured surfaces. Polarization information is expressed through a set of parameters of easier physical interpretation. It is shown that evaluating the first derivative of the retardation parameter, δ, a clear indication of the presence of defects either built on or dug in the scattering flat surface (a silicon wafer in our case) can be obtained. Although the rule of thumb thus obtained is established through PD, it can be easily implemented on conventional surface polarimetry. These results constitute an example of the capabilities of the PD approach to MM analysis, and show a direct application in surface characterization. © 2011 Optical Society of America

  17. Treatment of complicated urinary tract infection and acute pyelonephritis by short-course intravenous levofloxacin (750 mg/day) or conventional intravenous/oral levofloxacin (500 mg/day): prospective, open-label, randomized, controlled, multicenter, non-inferiority clinical trial.

    PubMed

    Ren, Hong; Li, Xiao; Ni, Zhao-Hui; Niu, Jian-Ying; Cao, Bin; Xu, Jie; Cheng, Hong; Tu, Xiao-Wen; Ren, Ai-Min; Hu, Ying; Xing, Chang-Ying; Liu, Ying-Hong; Li, Yan-Feng; Cen, Jun; Zhou, Rong; Xu, Xu-Dong; Qiu, Xiao-Hui; Chen, Nan

    2017-03-01

    To compare the efficacy and safety of short-course intravenous levofloxacin (LVFX) 750 mg with a conventional intravenous/oral regimen of LVFX 500 mg in patients from China with complicated urinary tract infections (cUTIs) and acute pyelonephritis (APN). This was a prospective, open-label, randomized, controlled, multicenter, non-inferiority clinical trial. Patients with cUTI and APN were randomly assigned to a short-course therapy group (intravenous LVFX at 750 mg/day for 5 days) or a conventional therapy group (intravenous/oral regimen of LVFX at 500 mg/day for 7-14 days). The clinical, laboratory, and microbiological results were evaluated for efficacy and safety. The median dose of LVFX was 3555.4 mg in the short-course therapy group and 4874.2 mg in the conventional therapy group. Intention-to-treat analysis indicated that the clinical effectiveness in the short-course therapy group (89.87%, 142/158) was non-inferior to that in the conventional therapy group (89.31%, 142/159). The microbiological effectiveness rates were also similar (short-course therapy: 89.55%, 60/67; conventional therapy: 86.30%, 63/73; p > 0.05). There were no significant differences in other parameters, including clinical and microbiological recurrence rates. The incidences of adverse effects and drug-related adverse effects were also similar for the short-course therapy group (21.95%, 36/164; 18.90%, 31/164) and the conventional therapy group (23.03%, 38/165; 15.76%, 26/165). Patients with cUTIs and APN who were given short-course LVFX therapy and conventional LVFX therapy had similar outcomes in clinical and microbiological efficacy, tolerance, and safety. The short-course therapy described here is a more convenient alternative to the conventional regimen, with potential implications for resistance prevention and cost savings.

  18. Selective extra levator versus conventional abdomino perineal resection: experience from a tertiary-care center

    PubMed Central

    Pai, Vishwas D.; Engineer, Reena; Patil, Prachi S.; Arya, Supreeta; Desouza, Ashwin L.

    2016-01-01

    Background To compare extra levator abdomino perineal resection (ELAPER) with conventional abdominoperineal resection (APER) in terms of short-term oncological and clinical outcomes. Methods This is a retrospective review of a prospectively maintained database including all patients with rectal cancer who underwent APER at Tata Memorial Center between July 1, 2013, and January 31, 2015. Short-term oncological parameters evaluated included circumferential resection margin (CRM) involvement, tumor site perforation, and number of nodes harvested. Perioperative outcomes included blood loss, length of hospital stay, postoperative perineal wound complications, and 30-day mortality. The χ2-test was used to compare the results between the two groups. Results Forty-two cases of ELAPER and 78 cases of conventional APER were included in the study. Levator involvement was significantly higher in the ELAPER group than in the conventional group; otherwise, the two groups were comparable in all aspects. CRM involvement was seen in seven patients (8.9%) in the conventional group compared with three patients (7.14%) in the ELAPER group. Median hospital stay was significantly longer with ELAPER. The univariate analysis of the factors influencing CRM positivity did not show any significance. Conclusions ELAPER should be the preferred approach for low rectal tumors with involvement of the levators. For those cases in which the levators are not involved, as shown on preoperative magnetic resonance imaging (MRI), the current evidence is insufficient to recommend ELAPER over conventional APER. This stresses the importance of preoperative MRI in determining the best approach for an individual patient. PMID:27284466

  19. Parametric study and performance analysis of hybrid rocket motors with double-tube configuration

    NASA Astrophysics Data System (ADS)

    Yu, Nanjia; Zhao, Bo; Lorente, Arnau Pons; Wang, Jue

    2017-03-01

    The practical implementation of hybrid rocket motors has historically been hampered by the slow regression rate of the solid fuel. In recent years, research on advanced injector designs has achieved notable results in enhancing the regression rate and combustion efficiency of hybrid rockets. Following this path, this work studies a new configuration, called double-tube, characterized by injecting the gaseous oxidizer through a head-end injector and an inner tube with injector holes distributed along the motor's longitudinal axis. This design has demonstrated significant potential for improving the performance of hybrid rockets by means of better mixing of the species, achieved through a customized injection of the oxidizer. Indeed, CFD analysis of the double-tube configuration has revealed that this design may increase the regression rate by over 50% with respect to the same motor with a conventional axial showerhead injector. However, in order to fully exploit the advantages of the double-tube concept, it is necessary to acquire a deeper understanding of the influence of the different design parameters on the overall performance. Accordingly, a parametric study is carried out taking into account the variation of the oxidizer mass flux rate, the ratio of the oxidizer mass flow rate injected through the inner tube to the total oxidizer mass flow rate, and the injection angle. The data for the analysis have been gathered from a large series of three-dimensional numerical simulations covering the changes in the design parameters. The propellant combination adopted consists of gaseous oxygen as oxidizer and high-density polyethylene as solid fuel. Furthermore, the numerical model comprises the Navier-Stokes equations, the k-ε turbulence model, the eddy-dissipation combustion model, and solid-fuel pyrolysis, which is computed through user-defined functions. This numerical model was previously validated against computational and experimental results obtained for conventional hybrid rocket designs. In addition, a performance analysis is conducted to evaluate the influence on performance of the possible growth of the diameter of the inner fuel grain holes during motor operation, a phenomenon known as burn-through holes. Finally, after a statistical analysis of the data, a regression rate expression as a function of the design parameters is obtained.

  20. A comparison between conventional and LANDSAT based hydrologic modeling: The Four Mile Run case study

    NASA Technical Reports Server (NTRS)

    Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.

    1976-01-01

    Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT-based methods for estimating the land-cover-based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run watershed in Virginia. Results of the study indicate that the LANDSAT-based approach is highly cost-effective for planning model studies. The conventional approach to defining inputs, based on 1:3600 aerial photos, required 110 man-days and a total cost of $14,000; the LANDSAT-based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT-based models gave similar results for discharges and for the estimated annual damages expected under the no-flood-control, channelization, and detention storage alternatives.

  1. SWI or T2*: which MRI sequence to use in the detection of cerebral microbleeds? The Karolinska Imaging Dementia Study.

    PubMed

    Shams, S; Martola, J; Cavallin, L; Granberg, T; Shams, M; Aspelin, P; Wahlund, L O; Kristoffersen-Wiberg, M

    2015-06-01

    Cerebral microbleeds are thought to have potentially important clinical implications in dementia and stroke. However, the use of both T2* and SWI MR imaging sequences for microbleed detection has complicated the cross-comparison of study results. We aimed to determine the impact of microbleed sequences on microbleed detection and associated clinical parameters. Patients from our memory clinic (n = 246; 53% female; mean age, 62 years) prospectively underwent 3T MR imaging with conventional thick-section T2*, thick-section SWI, and conventional thin-section SWI. Microbleeds were assessed separately on thick-section SWI, thin-section SWI, and T2* by 3 raters with varying neuroradiologic experience. Clinical and radiologic parameters from the dementia investigation were analyzed in association with the number of microbleeds in negative binomial regression analyses. Prevalence and number of microbleeds were higher on thick-/thin-section SWI (20%/21%) than on T2* (17%). There was no difference in microbleed prevalence/number between thick- and thin-section SWI. Interrater agreement was excellent for all raters and sequences. Univariate comparisons of clinical parameters between patients with and without microbleeds yielded no difference across sequences. In the regression analysis, only minor differences in clinical associations with the number of microbleeds were noted across sequences. Given the increased detection of microbleeds, we recommend SWI as the sequence of choice for microbleed detection. Microbleeds and their association with clinical parameters are robust to the effects of varying MR imaging sequences, suggesting that comparison of results across studies is possible despite differing microbleed sequences. © 2015 by American Journal of Neuroradiology.

  2. Update on laser vision correction using wavefront analysis with the CustomCornea system and LADARVision 193-nm excimer laser

    NASA Astrophysics Data System (ADS)

    Maguen, Ezra I.; Salz, James J.; McDonald, Marguerite B.; Pettit, George H.; Papaioannou, Thanassis; Grundfest, Warren S.

    2002-06-01

    A study was undertaken to assess whether results of laser vision correction with the LADARVision 193-nm excimer laser (Alcon/Autonomous Technologies) can be improved with the use of wavefront analysis generated by a proprietary system incorporating a Hartmann-Shack sensor and expressed using Zernike polynomials. A total of 82 eyes underwent LASIK at several centers with an improved algorithm, using the CustomCornea system. A subgroup of 48 eyes of 24 patients was randomized so that one eye underwent conventional treatment and the other underwent treatment based on wavefront analysis. Treatment parameters were equal for each type of refractive error. 83% of all eyes had uncorrected vision of 20/20 or better, and 95% were 20/25 or better. In all groups, uncorrected visual acuities did not improve significantly in eyes treated with wavefront analysis compared to conventional treatments. Higher-order aberrations were consistently better corrected at 6 months postop in eyes whose LASIK treatment was based on wavefront analysis. In addition, the number of eyes with reduced RMS was significantly higher in the subset of eyes treated with the wavefront algorithm (38% vs. 5%). Wavefront technology may improve the outcomes of laser vision correction with the LADARVision excimer laser. Further refinements of the technology and clinical trials will contribute to this goal.

  3. Performance assessment of conventional and base-isolated nuclear power plants for earthquake and blast loadings

    NASA Astrophysics Data System (ADS)

    Huang, Yin-Nan

    Nuclear power plants (NPPs) and spent nuclear fuel (SNF) are required by code and regulations to be designed for a family of extreme events, including very rare earthquake shaking, loss of coolant accidents, and tornado-borne missile impacts. Blast loading due to malevolent attack became a design consideration for NPPs and SNF after the terrorist attacks of September 11, 2001. The studies presented in this dissertation assess the performance of sample conventional and base isolated NPP reactor buildings subjected to seismic effects and blast loadings. The response of the sample reactor building to tornado-borne missile impacts and internal events (e.g., loss of coolant accidents) will not change if the building is base isolated and so these hazards were not considered. The sample NPP reactor building studied in this dissertation is composed of containment and internal structures with a total weight of approximately 75,000 tons. Four configurations of the reactor building are studied, including one conventional fixed-base reactor building and three base-isolated reactor buildings using Friction Pendulum(TM), lead rubber and low damping rubber bearings. The seismic assessment of the sample reactor building is performed using a new procedure proposed in this dissertation that builds on the methodology presented in the draft ATC-58 Guidelines and the widely used Zion method, which uses fragility curves defined in terms of ground-motion parameters for NPP seismic probabilistic risk assessment. The new procedure improves the Zion method by using fragility curves that are defined in terms of structural response parameters since damage and failure of NPP components are more closely tied to structural response parameters than to ground motion parameters. Alternate ground motion scaling methods are studied to help establish an optimal procedure for scaling ground motions for the purpose of seismic performance assessment. 
The proposed performance assessment procedure is used to evaluate the vulnerability of the conventional and base-isolated NPP reactor buildings. The seismic performance assessment confirms the utility of seismic isolation at reducing spectral demands on secondary systems. Procedures to reduce the construction cost of secondary systems in isolated reactor buildings are presented. A blast assessment of the sample reactor building is performed for an assumed threat of 2000 kg of TNT explosive detonated on the surface with a closest distance to the reactor building of 10 m. The air and ground shock waves produced by the design threat are generated and used for performance assessment. The air blast loading to the sample reactor building is computed using a Computational Fluid Dynamics code Air3D and the ground shock time series is generated using an attenuation model for soil/rock response. Response-history analysis of the sample conventional and base isolated reactor buildings to external blast loadings is performed using the hydrocode LS-DYNA. The spectral demands on the secondary systems in the isolated reactor building due to air blast loading are greater than those for the conventional reactor building but much smaller than those spectral demands associated with Safe Shutdown Earthquake shaking. The isolators are extremely effective at filtering out high acceleration, high frequency ground shock loading.

  4. Family history and risk of breast cancer: an analysis accounting for family structure.

    PubMed

    Brewer, Hannah R; Jones, Michael E; Schoemaker, Minouk J; Ashworth, Alan; Swerdlow, Anthony J

    2017-08-01

    Family history is an important risk factor for breast cancer incidence, but the parameters conventionally used to categorize it are based solely on the numbers and/or ages of breast cancer cases in the family and take no account of the size and age-structure of the woman's family. Using data from the Generations Study, a cohort of over 113,000 women from the general UK population, we analyzed breast cancer risk in relation to first-degree family history using a family history score (FHS) that takes account of the expected number of family cases based on the family's age-structure and national cancer incidence rates. Breast cancer risk increased significantly (P-trend < 0.0001) with greater FHS. There was a 3.5-fold (95% CI 2.56-4.79) range of risk between the lowest and highest FHS groups, whereas women who had two or more relatives with breast cancer, the strongest conventional familial risk factor, had a 2.5-fold (95% CI 1.83-3.47) increase in risk. Using likelihood ratio tests, the best model for determining breast cancer risk due to family history was that combining FHS and age of relative at diagnosis. A family history score based on expected as well as observed breast cancers in a family can give greater risk discrimination on breast cancer incidence than conventional parameters based solely on cases in affected relatives. Our modeling suggests that a yet stronger predictor of risk might be a combination of this score and age at diagnosis in relatives.
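The idea of an expected-case-adjusted score can be illustrated with a toy observed-over-expected calculation. This is a hypothetical sketch: the Generations Study's exact FHS formula is not given in the abstract, and the incidence figures below are illustrative, not real rates.

```python
def family_history_score(relatives, cum_incidence):
    """Hypothetical observed/expected family history score.
    relatives: list of (age, affected_bool) for first-degree relatives.
    cum_incidence: cumulative breast cancer incidence by age decade.
    A single case in a small, young family scores higher than the same
    single case in a large, elderly family, where one case is expected."""
    observed = sum(1 for _, affected in relatives if affected)
    expected = sum(cum_incidence[min(age // 10 * 10, 80)] for age, _ in relatives)
    return observed / expected if expected > 0 else 0.0

# Illustrative (not real) cumulative incidence by age decade
cum_inc = {0: 0.0001, 10: 0.0001, 20: 0.0005, 30: 0.004, 40: 0.015,
           50: 0.04, 60: 0.07, 70: 0.10, 80: 0.13}

small_young_family = [(45, True)]                                    # 1 observed case
large_old_family = [(78, True), (82, False), (75, False), (80, False)]
print(family_history_score(small_young_family, cum_inc))
print(family_history_score(large_old_family, cum_inc))
```

The same observed count yields a much higher score in the young family, which is exactly the discrimination a purely case-count-based parameter misses.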

  5. The enhanced value of combining conventional and 'omics' analyses in early assessment of drug-induced hepatobiliary injury

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellinger-Ziegelbauer, Heidrun, E-mail: heidrun.ellinger-ziegelbauer@bayerhealthcare.com; Adler, Melanie; Amberg, Alexander

    2011-04-15

    The InnoMed PredTox consortium was formed to evaluate whether conventional preclinical safety assessment can be significantly enhanced by incorporation of molecular profiling ('omics') technologies. In short-term toxicological studies in rats, transcriptomics, proteomics, and metabolomics data were collected and analyzed in relation to routine clinical chemistry and histopathology. Four of the sixteen hepato- and/or nephrotoxicants given to rats for 1, 3, or 14 days at two dose levels induced similar histopathological effects. These were characterized by bile duct necrosis and hyperplasia and/or increased bilirubin and cholestasis, in addition to hepatocyte necrosis and regeneration, hepatocyte hypertrophy, and hepatic inflammation. Combined analysis of liver transcriptomics data from these studies revealed common gene expression changes which allowed the development of a potential sequence of events on a mechanistic level in accordance with classical endpoint observations. This included genes implicated in early stress responses, regenerative processes, inflammation with inflammatory cell immigration, fibrotic processes, and cholestasis encompassing deregulation of certain membrane transporters. Furthermore, a preliminary classification analysis using transcriptomics data suggested that prediction of cholestasis may be possible based on gene expression changes seen at earlier time-points. Targeted bile acid analysis, based on LC-MS metabonomics data demonstrating increased levels of conjugated or unconjugated bile acids in response to individual compounds, did not provide earlier detection of toxicity as compared to conventional parameters, but may allow distinction of different types of hepatobiliary toxicity. Overall, liver transcriptomics data delivered mechanistic and molecular details in addition to the classical endpoint observations, which were further enhanced by targeted bile acid analysis using LC-MS metabonomics.

  6. Manufacturing of hybrid aluminum copper joints by electromagnetic pulse welding - Identification of quantitative process windows

    NASA Astrophysics Data System (ADS)

    Psyk, Verena; Scheffler, Christian; Linnemann, Maik; Landgrebe, Dirk

    2017-10-01

    Compared to conventional joining techniques, electromagnetic pulse welding offers important advantages, especially for dissimilar material connections such as copper-aluminum welds. However, due to missing guidelines and tools for process design, the process has not yet been widely implemented in industrial production. In order to contribute to overcoming this obstacle, a combined numerical and experimental process analysis for electromagnetic pulse welding of Cu-DHP and EN AW-1050 was carried out, and the results were consolidated in a quantitative, collision-parameter-based process window.

  7. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this approach to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  8. Model Predictive Flight Control System with Full State Observer using H∞ Method

    NASA Astrophysics Data System (ADS)

    Sanwale, Jitu; Singh, Dhan Jeet

    2018-03-01

    This paper presents the application of the model predictive approach to the design of a flight control system (FCS) for the longitudinal dynamics of a fixed-wing aircraft. The longitudinal dynamics are derived for a conventional aircraft, and an open-loop aircraft response analysis is carried out. Simulation studies illustrate the efficacy of the proposed model predictive controller with an H∞ state observer. The estimation criterion used in the H∞ observer design is to minimize the worst possible effects of the modelling errors and additive noise on the parameter estimation.

  9. Flutter parametric studies of cantilevered twin-engine-transport type wing with and without winglet. Volume 1: Low-speed investigations

    NASA Technical Reports Server (NTRS)

    Bhatia, K. G.; Nagaraja, K. S.

    1984-01-01

    The flutter characteristics of a cantilevered high-aspect-ratio wing with winglet were investigated. The configuration represented a current-technology, twin-engine airplane. Low-speed and high-speed models were used to evaluate compressibility effects through transonic Mach numbers and a wide range of mass-density ratios. Four flutter mechanisms were obtained, in both test and analysis, from various combinations of configuration parameters. The coupling between wing-tip vertical and chordwise motions was shown to have a significant effect under some conditions. It is concluded that, for the flutter model configurations studied, winglet-related flutter was amenable to conventional flutter analysis techniques.

  10. The limits of direct satellite tracking with the Global Positioning System (GPS)

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Yunck, T. P.

    1988-01-01

    Recent advances in high-precision differential Global Positioning System-based satellite tracking can be applied to the more conventional direct tracking of low Earth satellites. To properly evaluate the limiting accuracy of direct GPS-based tracking, it is necessary to account for the correlations between the a priori errors in GPS states, Y-bias, and solar pressure parameters. These can be obtained by careful analysis of the GPS orbit determination process. The analysis indicates that sub-meter accuracy can be readily achieved for a user above 1000 km altitude, even when the user solution is obtained with data taken 12 hours after the data used in the GPS orbit solutions.

  11. Quality Control of Laser-Beam-Melted Parts by a Correlation Between Their Mechanical Properties and a Three-Dimensional Surface Analysis

    NASA Astrophysics Data System (ADS)

    Grimm, T.; Wiora, G.; Witt, G.

    2017-03-01

    Good correlations were found between three-dimensional surface analyses of laser-beam-melted parts of nickel alloy HX and their mechanical properties. The surface analyses were performed with a confocal microscope, which offers a more profound surface data basis than conventional two-dimensional tactile profilometry. This approach yields a wide range of three-dimensional surface parameters, each of which was evaluated with respect to its feasibility for quality control in additive manufacturing. Because the surface analysis is automated by means of the confocal microscope and an industrial six-axis robot, the method represents an innovative approach to quality control in additive manufacturing.

  12. Whole-tumour diffusion kurtosis MR imaging histogram analysis of rectal adenocarcinoma: Correlation with clinical pathologic prognostic factors.

    PubMed

    Cui, Yanfen; Yang, Xiaotang; Du, Xiaosong; Zhuo, Zhizheng; Xin, Lei; Cheng, Xintao

    2018-04-01

    To investigate potential relationships between diffusion kurtosis imaging (DKI)-derived parameters using whole-tumour volume histogram analysis and clinicopathological prognostic factors in patients with rectal adenocarcinoma. 79 consecutive patients with rectal adenocarcinoma who underwent MRI examination were retrospectively evaluated. The parameters D, K, and conventional ADC were measured using whole-tumour volume histogram analysis. Student's t-test or the Mann-Whitney U-test, receiver operating characteristic curves, and Spearman's correlation were used for statistical analysis. Almost all the percentile metrics of K correlated positively with nodal involvement, higher histological grades, the presence of lymphangiovascular invasion (LVI), and circumferential resection margin (CRM) involvement (p < 0.05), with the exception of the correlations between K10th, K90th and histological grades. In contrast, significant negative correlations were observed between the 25th and 50th percentiles and mean values of ADC and D, as well as ADC10th, and tumour T stage (p < 0.05). Meanwhile, lower 75th and 90th percentiles of ADC and D values also correlated inversely with nodal involvement (p < 0.05). Kmean showed a relatively higher area under the curve (AUC) and higher specificity than other percentiles for differentiation of lesions with nodal involvement. DKI metrics with whole-tumour volume histogram analysis, especially the K parameters, were associated with important prognostic factors of rectal cancer. • K correlated positively with some important prognostic factors of rectal cancer. • Kmean showed higher AUC and specificity for differentiation of nodal involvement. • DKI metrics with whole-tumour volume histogram analysis depicted tumour heterogeneity.
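Whole-tumour histogram metrics of the kind reported above (10th-90th percentiles plus the mean over all voxels in the tumour mask) can be sketched as follows. The voxel values are toy numbers, and a simple nearest-rank percentile stands in for whatever interpolation the study's analysis software used.

```python
def percentile(values, q):
    """Nearest-rank percentile (q in [0, 100]) of a list of voxel values."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100 * (len(s) - 1))))
    return s[k]

def histogram_metrics(voxels):
    """Whole-lesion histogram metrics of the kind used for the DKI maps:
    selected percentiles plus the mean of all voxel values in the mask."""
    m = {f"p{q}": percentile(voxels, q) for q in (10, 25, 50, 75, 90)}
    m["mean"] = sum(voxels) / len(voxels)
    return m

# Toy 'K map' voxel values inside a tumour mask (illustrative only)
k_voxels = [0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
print(histogram_metrics(k_voxels))
```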

  13. Community characteristics that attract physicians in Japan: a cross-sectional analysis of community demographic and economic factors.

    PubMed

    Matsumoto, Masatoshi; Inoue, Kazuo; Noguchi, Satomi; Toyokawa, Satoshi; Kajii, Eiji

    2009-02-18

    In many countries, there is a surplus of physicians in some communities and a shortage in others. Population size is known to be correlated with the number of physicians in a community, and is conventionally considered to represent the power of communities to attract physicians. However, associations between other demographic/economic variables and the number of physicians in a community have not been fully evaluated. This study seeks other parameters that correlate with the physician population and shows which characteristics of a community determine its "attractiveness" to physicians. Associations between the number of physicians and selected demographic/economic/life-related variables of all of Japan's 3132 municipalities were examined. In order to exclude the confounding effect of community size, correlations between the physician-to-population ratio and other variable-to-population ratios or variable-to-area ratios were evaluated with simple correlation and multiple regression analyses. The equity of physician distribution against each variable was evaluated by the Lorenz curve and Gini index. Among the 21 variables selected, the service industry workers-to-population ratio (0.543), commercial land price (0.527), sales of goods per person (0.472), and daytime population density (0.451) were better correlated with the physician-to-population ratio than was population density (0.409). Multiple regression analysis showed that the service industry worker-to-population ratio, the daytime population density, and the elderly rate were each independently correlated with the physician-to-population ratio (standardized regression coefficients 0.393, 0.355, and 0.089, respectively; each p < 0.001). Equity of physician distribution was higher against service industry population (Gini index = 0.26) and daytime population (0.28) than against population (0.33).
Daytime population and service industry population in a municipality are better parameters of community attractiveness to physicians than population. Because attractiveness is supposed to consist of medical demand and the amenities of urban life, these two parameters may represent the amount of medical demand and/or the extent of urban amenities of a community more precisely than population does. A conventional demand-supply analysis based solely on population as the demand parameter may overestimate the inequity of physician distribution among communities.
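A minimal sketch of the Gini computation via the Lorenz curve, for an unweighted list of per-capita values. The study's version pairs each municipality's physician count with its population (or daytime population) share, which adds a weighting step not shown here.

```python
def gini(values):
    """Gini index via the Lorenz curve: 1 minus twice the area under the
    curve, with the area computed by the trapezoidal rule over the sorted
    cumulative shares (each trapezoid has width 1/n)."""
    s = sorted(values)
    total = sum(s)
    cum, area, prev = 0.0, 0.0, 0.0
    for v in s:
        cum += v / total
        area += (prev + cum) / 2 / len(s)
        prev = cum
    return 1 - 2 * area

equal = [1, 1, 1, 1]      # perfectly even distribution -> Gini 0
unequal = [0, 0, 0, 4]    # all physicians in one community -> Gini (n-1)/n
print(gini(equal), gini(unequal))
```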

  14. Coupled structural, thermal, phase-change and electromagnetic analysis for superconductors, volume 2

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Farhat, Charbel; Park, K. C.; Militello, Carmelo; Schuler, James J.

    1993-01-01

    Two families of parametrized mixed variational principles for linear electromagnetodynamics are constructed. The first family is applicable when the current density distribution is known a priori. Its six independent fields are magnetic intensity and flux density, magnetic potential, electric intensity and flux density and electric potential. Through appropriate specialization of parameters the first principle reduces to more conventional principles proposed in the literature. The second family is appropriate when the current density distribution and a conjugate Lagrange multiplier field are adjoined, giving a total of eight independently varied fields. In this case it is shown that a conventional variational principle exists only in the time-independent (static) case. Several static functionals with reduced number of varied fields are presented. The application of one of these principles to construct finite elements with current prediction capabilities is illustrated with a numerical example.

  15. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm and applies it to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. As a result, the Bayesian MCMC method may be more favorable for uncertainty analysis and risk management.
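A random-walk Metropolis-Hastings sampler can be sketched in a few lines. This is a toy illustration, not the paper's hydrological model: the data are made up, and the target is the posterior of the mean under a Gaussian likelihood with known unit variance and a flat prior.

```python
import math
import random

def metropolis_hastings(log_post, theta0, n_steps=5000, step=0.5, seed=42):
    """Random-walk Metropolis-Hastings: propose theta' ~ N(theta, step^2),
    accept with probability min(1, post(theta') / post(theta))."""
    rng = random.Random(seed)
    theta, samples = theta0, []
    lp = log_post(theta)
    for _ in range(n_steps):
        prop = theta + rng.gauss(0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject in log space
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# Toy 'daily flow rate' observations (illustrative, not the paper's data)
data = [4.8, 5.1, 5.3, 4.9, 5.0, 5.2]
log_post = lambda mu: -0.5 * sum((x - mu) ** 2 for x in data)
draws = metropolis_hastings(log_post, theta0=0.0)
burned = draws[1000:]                 # discard burn-in
print(sum(burned) / len(burned))      # close to the sample mean (~5.05)
```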

  16. Ultrasonic dyeing of cellulose nanofibers.

    PubMed

    Khatri, Muzamil; Ahmed, Farooq; Jatoi, Abdul Wahab; Mahar, Rasool Bux; Khatri, Zeeshan; Kim, Ick Soo

    2016-07-01

    Textile dyeing assisted by ultrasonic energy has attracted greater interest in recent years. We report ultrasonic dyeing of nanofibers for the first time. We chose cellulose nanofibers and dyed them with two reactive dyes, CI Reactive Black 5 and CI Reactive Red 195. The cellulose nanofibers were prepared by electrospinning of cellulose acetate (CA) followed by deacetylation. The FTIR results confirmed complete conversion of CA into cellulose nanofibers. The dyeing parameters optimized were dyeing temperature, dyeing time, and dye concentration for each class of dye used. Results revealed that ultrasonic dyeing produced a higher color yield (K/S values) than conventional dyeing. The color fastness test results indicated good dye fixation. SEM analysis showed that ultrasonic energy during dyeing does not affect the surface morphology of the nanofibers. These results demonstrate successful dyeing of cellulose nanofibers using ultrasonic energy, with better color yield and color fastness than conventional dyeing. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Simulation and analysis of plasmonic sensor in NIR with fluoride glass and graphene layer

    NASA Astrophysics Data System (ADS)

    Pandey, Ankit Kumar; Sharma, Anuj K.

    2018-02-01

    A calcium fluoride (CaF2) prism-based plasmonic biosensor with a graphene layer, operating in the near-infrared (NIR) region, is proposed. A stacked multilayer of graphene is considered, with a dielectric interlayer sandwiched between two graphene layers. The excellent optical properties of CaF2 glass and the enhanced field at the graphene-analyte interface are exploited in the proposed sensor structure in the NIR spectral region. Performance parameters in terms of field enhancement at the interface and figure of merit (FOM) are analyzed and compared with those of a conventional SPR-based sensor. It is demonstrated that the same sensor probe can also be used for gas sensing, with nearly 3.5-4 times enhancement in FOM compared with the conventional sensor. The results show that the CaF2-based SPR sensor provides much better sensitivity than sensors based on other glasses.

  18. Security analysis of quadratic phase based cryptography

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Healy, John J.; Sheridan, John T.

    2016-09-01

    The linear canonical transform (LCT) is essential in modeling coherent light field propagation through first-order optical systems. Recently, a generic optical system, known as a Quadratic Phase Encoding System (QPES), for encrypting a two-dimensional (2D) image has been reported, in which the individual LCT parameters, together with two phase keys, serve as the keys of the cryptosystem. However, it is important that such encryption systems also satisfy certain dynamic security properties. Therefore, in this work, we apply cryptographic evaluation methods that indicate the degree of security of a cryptographic algorithm, such as the Avalanche Criterion and Bit Independence, to QPES. We compare our simulation results with those of the conventional Fourier- and Fresnel-transform-based DRPE systems. The results show that the LCT-based DRPE has better avalanche and bit-independence characteristics than the conventional Fourier- and Fresnel-based encryption systems.
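The Avalanche Criterion itself is easy to state in code: flip one plaintext bit and count how many ciphertext bits change (ideally about half). In the sketch below, SHA-256 merely stands in for the encryption step, since the optical QPES transform is not reproduced here; the statistic computed is the same.

```python
import hashlib

def bits(b):
    """Expand a bytes object into a flat list of bits (LSB first per byte)."""
    return [(byte >> i) & 1 for byte in b for i in range(8)]

def avalanche_fraction(msg: bytes, bit_index: int) -> float:
    """Fraction of output bits that flip when one input bit flips.
    An ideal cipher-like transform yields a fraction near 0.5."""
    flipped = bytearray(msg)
    flipped[bit_index // 8] ^= 1 << (bit_index % 8)
    c1 = bits(hashlib.sha256(msg).digest())
    c2 = bits(hashlib.sha256(bytes(flipped)).digest())
    return sum(a != b for a, b in zip(c1, c2)) / len(c1)

frac = avalanche_fraction(b"plaintext image block", 3)
print(f"avalanche fraction: {frac:.3f}")   # expected to be close to 0.5
```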

  19. Model-based spectral estimation of Doppler signals using parallel genetic algorithms.

    PubMed

    Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F

    2000-05-01

    Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach that implements a parametric spectral estimation method in real time, using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimizes the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity and parallel characteristics of GAs. This allows the implementation of higher-order filters, increasing the spectral resolution and opening greater scope for using more complex methods.
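The parametric idea can be illustrated with a closed-form AR(2) fit in place of the paper's GA-driven adaptive filter: estimate two model coefficients from the signal's autocorrelations, then evaluate the model spectrum on a fine frequency grid. The synthetic signal and all names below are illustrative.

```python
import cmath
import math

def autocorr(x, lag):
    """Biased sample autocorrelation (divides by N, which keeps the fit stable)."""
    return sum(x[i] * x[i + lag] for i in range(len(x) - lag)) / len(x)

def yule_walker_ar2(x):
    """Closed-form AR(2) fit from the Yule-Walker equations (2x2 solve).
    The paper's GA searches filter coefficients instead; this tiny
    parametric model just illustrates model-based spectral estimation."""
    r0, r1, r2 = (autocorr(x, k) for k in range(3))
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

def ar2_spectrum(a1, a2, f):
    """AR(2) power spectrum (up to a gain factor) at normalized frequency f."""
    den = 1 - a1 * cmath.exp(-2j * math.pi * f) - a2 * cmath.exp(-4j * math.pi * f)
    return 1.0 / abs(den) ** 2

# Synthetic narrowband 'Doppler' tone at normalized frequency 0.1
x = [math.sin(2 * math.pi * 0.1 * n) for n in range(256)]
a1, a2 = yule_walker_ar2(x)
peak = max(range(1, 50), key=lambda k: ar2_spectrum(a1, a2, k / 100))
print(f"AR(2) spectral peak near f = {peak / 100:.2f}")
```

Two coefficients suffice to place a sharp spectral peak at the tone frequency, whereas an FFT of a short segment would smear it over several bins.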

  20. Design Space Toolbox V2: Automated Software Enabling a Novel Phenotype-Centric Modeling Strategy for Natural and Synthetic Biological Systems

    PubMed Central

    Lomnitz, Jason G.; Savageau, Michael A.

    2016-01-01

    Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. 
In one example, inspection of the basins of attraction reveals that the circuit can count among three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346

  1. Parameter constraints from weak-lensing tomography of galaxy shapes and cosmic microwave background fluctuations

    NASA Astrophysics Data System (ADS)

    Merkel, Philipp M.; Schäfer, Björn Malte

    2017-08-01

    Recently, it has been shown that cross-correlating cosmic microwave background (CMB) lensing and three-dimensional (3D) cosmic shear makes it possible to considerably tighten cosmological parameter constraints. We investigate whether a similar improvement can be achieved in a conventional tomographic setup. We present Fisher parameter forecasts for a Euclid-like galaxy survey in combination with different ongoing and forthcoming CMB experiments. In contrast to a fully 3D analysis, we find only marginal improvement. Assuming Planck-like CMB data, we show that including the full covariance of the combined CMB and cosmic shear data improves the dark energy figure of merit (FOM) by only 3 per cent. The marginalized error on the sum of neutrino masses is reduced by a similar amount. For a next-generation CMB satellite mission such as Prism, the predicted improvement of the dark energy FOM amounts to approximately 25 per cent. Furthermore, we show that the small improvement is contrasted by an increased bias in the dark energy parameters when the intrinsic alignment of galaxies is not correctly accounted for in the full covariance matrix.
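
    The forecast machinery can be illustrated in miniature: Fisher matrices of independent probes add, and the DETF dark energy figure of merit is the inverse area of the marginalized w0-wa error ellipse. The matrices below are invented numbers, not the paper's forecasts.

```python
import numpy as np

# Toy 2-parameter (w0, wa) Fisher matrices for two probes (made-up values).
F_shear = np.array([[40.0, 8.0], [8.0, 4.0]])
F_cmb = np.array([[10.0, 2.0], [2.0, 1.5]])

def dark_energy_fom(F):
    """DETF figure of merit: inverse square-root area of the w0-wa covariance."""
    cov = np.linalg.inv(F)
    return 1.0 / np.sqrt(np.linalg.det(cov))

fom_alone = dark_energy_fom(F_shear)
fom_joint = dark_energy_fom(F_shear + F_cmb)  # independent probes add in Fisher space
print(fom_alone, fom_joint)  # the joint FOM is larger
```

    A "full covariance" analysis, as in the abstract, would instead build one joint Fisher matrix including the cross-correlation blocks, which is where the few-per-cent difference arises.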

  2. Degree of Ice Particle Surface Roughness Inferred from Polarimetric Observations

    NASA Technical Reports Server (NTRS)

    Hioki, Souichiro; Yang, Ping; Baum, Bryan A.; Platnick, Steven; Meyer, Kerry G.; King, Michael D.; Riedi, Jerome

    2016-01-01

    The degree of surface roughness of ice particles within thick, cold ice clouds is inferred from multidirectional, multi-spectral satellite polarimetric observations over oceans, assuming a column-aggregate particle habit. An improved roughness inference scheme is employed that provides a more noise-resilient roughness estimate than the conventional best-fit approach. The improvements include the introduction of a quantitative roughness parameter based on empirical orthogonal function analysis and proper treatment of polarization due to atmospheric scattering above clouds. A global 1-month data sample supports the use of a severely roughened ice habit to simulate the polarized reflectivity associated with ice clouds over ocean. The density distribution of the roughness parameter inferred from the global 1-month data sample and further analyses of a few case studies demonstrate the significant variability of ice cloud single-scattering properties. However, the present theoretical results do not agree with observations in the tropics. In the extra-tropics, the roughness parameter is inferred, but 74% of the sample is out of the expected parameter range. Potential improvements are discussed to enhance the depiction of the natural variability on a global scale.

  3. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters. PMID:24603904
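
    A minimal sketch of the filtering idea, assuming NumPy's Legendre utilities and an invented single-exponential signal: project the noisy trace onto a low-order Legendre basis and reconstruct, which suppresses broadband noise without the phase lag of a causal lowpass filter.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 500)               # Legendre polynomials live on [-1, 1]
clean = 2.0 * np.exp(-1.5 * (t + 1))      # assumed single exponential on the mapped axis
noisy = clean + rng.normal(0, 0.1, t.size)

# Represent the noisy trace in a low-dimensional Legendre basis (degree 10)
# and reconstruct: truncating the expansion discards most of the noise power.
coeffs = np.polynomial.legendre.legfit(t, noisy, deg=10)
filtered = np.polynomial.legendre.legval(t, coeffs)

rms_before = np.sqrt(np.mean((noisy - clean) ** 2))
rms_after = np.sqrt(np.mean((filtered - clean) ** 2))
print(rms_before, rms_after)  # rms_after is markedly smaller
```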

  4. Focal-Plane Alignment Sensing

    DTIC Science & Technology

    1993-02-01

    amplification induced by the inverse filter. The problem of noise amplification that arises in conventional image deblurring problems has often been... noise sensitivity, and strategies for selecting a regularization parameter have been developed. The probability of convergence to within a prescribed...Strategies in Image Deblurring .................. 12 2.2.2 CLS Parameter Selection ........................... 14 2.2.3 Wiener Parameter Selection

  5. Do virtual reality games improve mobility skills and balance measurements in community-dwelling older adults? Systematic review and meta-analysis.

    PubMed

    Neri, Silvia Gr; Cardoso, Jefferson R; Cruz, Lorena; Lima, Ricardo M; de Oliveira, Ricardo J; Iversen, Maura D; Carregaro, Rodrigo L

    2017-10-01

    To summarize evidence on the effectiveness of virtual reality games compared with conventional therapy or no intervention for fall prevention in the elderly. An electronic data search (last searched December 2016) was performed on 10 databases (Web of Science, EMBASE, PUBMED, CINAHL, LILACS, SPORTDiscus, Cochrane Library, Scopus, SciELO, PEDro) and retained only randomized controlled trials. Sample characteristics and intervention parameters were compared, focusing on clinical homogeneity of demographic characteristics, type/duration of interventions, outcomes (balance, reaction time, mobility, lower limb strength and fear of falling) and low risk of bias. Based on homogeneity, a meta-analysis was considered. Two independent reviewers assessed the risk of bias. A total of 28 studies met the inclusion criteria and were appraised (n = 1121 elderly participants). We found that virtual reality games presented positive effects on balance and fear of falling compared with no intervention. Virtual reality games were also superior to conventional interventions for balance improvements and fear of falling. The six studies included in the meta-analysis demonstrated that virtual reality games significantly improved mobility and balance after 3-6 and 8-12 weeks of intervention when compared with no intervention. The risk of bias assessment revealed that less than one-third of the studies correctly described the random sequence generation and allocation concealment procedures. Our review suggests positive clinical effects of virtual reality games for balance and mobility improvements compared with no treatment and conventional interventions. However, owing to the high risk of bias and large variability of intervention protocols, the evidence remains inconclusive and further research is warranted.

  6. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    NASA Astrophysics Data System (ADS)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    In this study, we introduced an alternative approach for the analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference with conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km2 to 238 km2 in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, since a given flow amount is accumulated in a shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
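
    The inter-amount sampling idea can be sketched directly: walk along the flow record accumulating volume, and emit the time needed to collect each successive fixed amount. The toy record and the fixed amount below are assumptions for illustration.

```python
def inter_amount_times(flows, dt, amount):
    """Times needed to accumulate each successive fixed `amount` of flow.

    flows: flow rates sampled every dt; returns one IAT per completed amount."""
    iats, cum, t_last, t = [], 0.0, 0.0, 0.0
    for q in flows:
        cum += q * dt
        t += dt
        while cum >= amount:  # one or more amounts completed during this step
            # linear back-interpolation of the exact crossing time
            overshoot = (cum - amount) / (q if q > 0 else 1e-12)
            crossing = t - overshoot
            iats.append(crossing - t_last)
            t_last = crossing
            cum -= amount
    return iats

# Toy record: a long low-flow period, a short peak, then low flow again.
flows = [0.1] * 100 + [5.0] * 10 + [0.1] * 100
iats = inter_amount_times(flows, dt=1.0, amount=5.0)
# Low flows produce a few long IATs; the peak produces many short IATs, so the
# peak is oversampled relative to a fixed-time-window analysis.
print(len(iats), min(iats), max(iats))
```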

  7. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate model parameters. Here, we address the question whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we use the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals structural and practical non-identifiability. Our model-based approach with integration of psychophysical measurements can be useful for a reliable assessment of states of the nociceptive system.
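
    The fit-versus-complexity comparison uses the Bayesian Information Criterion, BIC = k ln n - 2 ln L; a small sketch with invented log-likelihoods (not the paper's values) shows how a better-fitting model can win despite extra parameters.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower values indicate a better balance
    between goodness of fit and model complexity."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Made-up illustration: a 6-parameter mechanistic model vs. a 2-parameter
# logistic model fitted to the same 300 stimulus-response pairs.
bic_mechanistic = bic(log_likelihood=-110.0, n_params=6, n_obs=300)
bic_logistic = bic(log_likelihood=-124.0, n_params=2, n_obs=300)
print(bic_mechanistic < bic_logistic)  # True: the better fit outweighs the extra parameters
```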

  8. Mathematical modeling and fuzzy availability analysis for serial processes in the crystallization system of a sugar plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram

    2017-03-01

    The binary-state (i.e., success or failed) assumptions used in conventional reliability analysis are inappropriate for the reliability analysis of complex industrial systems due to a lack of sufficient probabilistic information. For large complex systems, the uncertainty of each individual parameter enhances the uncertainty of the system reliability. In this paper, the concept of fuzzy reliability has been used for reliability analysis of the system, and the effect of the coverage factor and the failure and repair rates of subsystems on fuzzy availability for the fault-tolerant crystallization system of a sugar plant is analyzed. Mathematical modeling of the system is carried out using the mnemonic rule to derive Chapman-Kolmogorov differential equations. These governing differential equations are solved with the Runge-Kutta fourth-order method.
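
    The numerical step, integrating the Chapman-Kolmogorov equations with the fourth-order Runge-Kutta method, can be illustrated on the simplest repairable unit with assumed failure and repair rates; the actual crystallization-system model has many more states but uses the same machinery.

```python
def ck_rhs(p, lam, mu):
    """Chapman-Kolmogorov ODEs for a 2-state (up/down) repairable unit."""
    p_up, p_down = p
    return [-lam * p_up + mu * p_down, lam * p_up - mu * p_down]

def rk4_step(p, h, lam, mu):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = ck_rhs(p, lam, mu)
    k2 = ck_rhs([pi + h / 2 * ki for pi, ki in zip(p, k1)], lam, mu)
    k3 = ck_rhs([pi + h / 2 * ki for pi, ki in zip(p, k2)], lam, mu)
    k4 = ck_rhs([pi + h * ki for pi, ki in zip(p, k3)], lam, mu)
    return [pi + h / 6 * (a + 2 * b + 2 * c + d)
            for pi, a, b, c, d in zip(p, k1, k2, k3, k4)]

lam, mu = 0.01, 0.2    # assumed failure and repair rates (per hour)
p = [1.0, 0.0]         # start in the working state
for _ in range(20000): # integrate to t = 2000 h with h = 0.1
    p = rk4_step(p, 0.1, lam, mu)
print(p[0])  # long-run availability -> mu / (lam + mu) = 0.952...
```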

  9. Using spin-label W-band EPR to study membrane fluidity profiles in samples of small volume

    NASA Astrophysics Data System (ADS)

    Mainali, Laxman; Hyde, James S.; Subczynski, Witold K.

    2013-01-01

    Conventional and saturation-recovery (SR) EPR at W-band (94 GHz) using phosphatidylcholine spin labels (labeled at the alkyl chain [n-PC] and headgroup [T-PC]) to obtain profiles of membrane fluidity has been demonstrated. Dimyristoylphosphatidylcholine (DMPC) membranes with and without 50 mol% cholesterol have been studied, and the results have been compared with similar studies at X-band (9.4 GHz) (L. Mainali, J.B. Feix, J.S. Hyde, W.K. Subczynski, J. Magn. Reson. 212 (2011) 418-425). Profiles of the spin-lattice relaxation rate (T1⁻¹) obtained from SR EPR measurements for n-PCs and T-PC were used as a convenient quantitative measure of membrane fluidity. Additionally, spectral analysis using Freed's MOMD (microscopic-order macroscopic-disorder) model (E. Meirovitch, J.H. Freed J. Phys. Chem. 88 (1984) 4995-5004) provided rotational diffusion coefficients (R⊥ and R||) and order parameters (S0). Spectral analysis at X-band provided one rotational diffusion coefficient, R⊥. T1⁻¹, R⊥, and R|| profiles reflect local membrane dynamics of the lipid alkyl chain, while the order parameter shows only the amplitude of the wobbling motion of the lipid alkyl chain. Using these dynamic parameters, namely T1⁻¹, R⊥, and R||, one can discriminate the different effects of cholesterol at different depths, showing that cholesterol has a rigidifying effect on alkyl chains to the depth occupied by the rigid steroid ring structure and a fluidizing effect at deeper locations. The nondynamic parameter, S0, shows that cholesterol has an ordering effect on alkyl chains at all depths. Conventional and SR EPR measurements with T-PC indicate that cholesterol has a fluidizing effect on phospholipid headgroups. EPR at W-band provides more detailed information about the depth-dependent dynamic organization of the membrane compared with information obtained at X-band. 
EPR at W-band has the potential to be a powerful tool for studying membrane fluidity in samples of small volume, ~30 nL, compared with a representative sample volume of ~3 μL at X-band.

  10. The Conventional and Unconventional about Disability Conventions: A Reflective Analysis of United Nations Convention on the Rights of Persons with Disabilities

    ERIC Educational Resources Information Center

    Umeasiegbu, Veronica I.; Bishop, Malachy; Mpofu, Elias

    2013-01-01

    This article presents an analysis of the United Nations Convention on the Rights of Persons with Disabilities (CRPD) in relation to prior United Nations conventions on disability and U.S. disability policy law with a view to identifying the conventional and also the incremental advances of the CRPD. Previous United Nations conventions related to…

  11. Perceptive rehabilitation and trunk posture alignment in patients with Parkinson disease: a single blind randomized controlled trial.

    PubMed

    Morrone, Michelangelo; Miccinilli, Sandra; Bravi, Marco; Paolucci, Teresa; Melgari, Jean M; Salomone, Gaetano; Picelli, Alessandro; Spadini, Ennio; Ranavolo, Alberto; Saraceni, Vincenzo M; DI Lazzaro, Vincenzo; Sterzi, Silvia

    2016-12-01

    Recent studies aimed to evaluate the potential effects of perceptive rehabilitation in Parkinson Disease, reporting promising preliminary results for postural balance and pain symptoms. To date, no randomized controlled trial has been carried out to compare the effects of perceptive rehabilitation and conventional treatment in patients with Parkinson Disease. To evaluate whether a perceptive rehabilitation treatment could be more effective than a conventional physical therapy program in improving postural control and gait pattern in patients with Parkinson Disease. Single blind, randomized controlled trial. Department of Physical and Rehabilitation Medicine of a University Hospital. Twenty outpatients affected by idiopathic Parkinson Disease at Hoehn and Yahr stage ≤3. Recruited patients were divided into two groups: the first one underwent individual treatment with Surfaces for Perceptive Rehabilitation (Su-Per), consisting of rigid wood surfaces supporting deformable latex cones of various dimensions, and the second one received conventional group physical therapy treatment. Each patient underwent a training program consisting of ten 45-minute sessions, three days a week for 4 consecutive weeks. Each subject was evaluated before treatment, immediately after treatment and at one month of follow-up, by an optoelectronic stereophotogrammetric system for gait and posture analysis, and by a computerized platform for stabilometric assessment. Kyphosis angle decreased after ten sessions of perceptive rehabilitation, thus showing a substantial difference with respect to the control group. No significant differences were found for gait parameters (cadence, gait speed and stride length) within the Su-Per group and between groups. Parameters of static and dynamic evaluation on the stabilometric platform failed to demonstrate any statistically significant difference either within or between groups. 
Perceptive training may help patients affected by Parkinson Disease restore a correct midline perception and, in turn, improve postural control. Perceptive surfaces represent an alternative to conventional rehabilitation of postural disorders in Parkinson Disease. Further studies are needed to determine whether the association of perceptive treatment and active motor training would also be useful in improving gait dexterity.

  12. LETTER TO THE EDITOR: On the relations between the zero-field splitting parameters in the extended Stevens operator notation and the conventional ones used in EMR for orthorhombic and lower symmetry

    NASA Astrophysics Data System (ADS)

    Rudowicz, C.

    2000-06-01

    Electron magnetic resonance (EMR) studies of paramagnetic species with the spin S ≥ 1 at orthorhombic symmetry sites require an axial zero-field splitting (ZFS) parameter and a rhombic one of the second order (k = 2), whereas at triclinic sites all five ZFS (k = 2) parameters are expressed in the crystallographic axis system. For the spin S ≥ 2 also the higher-order ZFS terms must be considered. In the principal axis system, instead of the five ZFS (k = 2) parameters, the two principal ZFS values can be used, as for orthorhombic symmetry; however, then the orientation of the principal axes with respect to the crystallographic axis system must be provided. Recently three serious cases of incorrect relations between the extended Stevens ZFS parameters and the conventional ones have been identified in the literature. The first case concerns a controversy over the second-order rhombic ZFS parameters and was found to have led to misinterpretation, in a review article, of several values of either E or b22 published earlier. The second case concerns the set of five relations between the extended Stevens ZFS parameters bkq and the conventional ones Dij for triclinic symmetry, four of which turn out to be incorrect. The third case concerns the omission of the scaling factors fk for the extended Stevens ZFS parameters bkq. In all cases the incorrect relations in question have been published in spite of the earlier existence of the correct relations in the literature. The incorrect relations are likely to lead to further misinterpretation of the published values of the ZFS parameters for orthorhombic and lower symmetry. The purpose of this paper is to make spectroscopists working in the area of EMR (including EPR and ESR) and related spectroscopies aware of the problem and to reduce the proliferation of the incorrect relations.

  13. Analysis of temperature in conventional and ultrasonically-assisted drilling of cortical bone with infrared thermography.

    PubMed

    Alam, K; Silberschmidt, Vadim V

    2014-01-01

    Bone drilling is widely used in orthopaedics, dental surgery and neurosurgery for repair and fixation purposes. One of the major concerns in the drilling of bone is thermal necrosis, which may seriously affect healing at interfaces with fixtures and implants. Ultrasonically-assisted drilling (UAD) has recently been introduced as an alternative to conventional drilling (CD) to minimize the invasiveness of the procedure. This paper studies the temperature rise in bovine cortical bone drilled with CD and UAD techniques and compares the two using infrared thermography. A parametric investigation was carried out to evaluate the effects of drilling conditions (drilling speed and feed rate) and parameters of ultrasonic vibration (frequency and amplitude) on the temperature elevation in bone. Higher levels of the drilling speed and feed rate were found to be responsible for generating temperatures above a thermal threshold level in both types of drilling. UAD with a frequency below 20 kHz resulted in lower temperatures compared to CD with the same drilling parameters. The temperatures generated in cases with vibration frequency exceeding 20 kHz were significantly higher than those in CD for the range of drilling speeds and feed rates. The amplitude of vibration was found to have no significant effect on bone temperature. UAD may be investigated further to explore its benefits over the existing CD techniques.

  14. Clinical Factors Associated with Sperm DNA Fragmentation in Male Patients with Infertility

    PubMed Central

    Komiya, Akira; Kato, Tomonori; Kawauchi, Yoko; Watanabe, Akihiko; Fuse, Hideki

    2014-01-01

    Objective. The clinical factors associated with sperm DNA fragmentation (SDF) were investigated in male patients with infertility. Materials and Methods. Fifty-four ejaculates from infertile Japanese males were used. Thirty-three and twenty-one were from the patients with varicoceles and idiopathic causes of infertility, respectively. We performed blood tests, including the serum sex hormone levels, and conventional and computer-assisted semen analyses. The sperm nuclear vacuolization (SNV) was evaluated using a high-magnification microscope. The SDF was evaluated using the sperm chromatin dispersion test (SCDt) to determine the SDF index (SDFI). The SDFI was compared with semen parameters and other clinical variables, including lifestyle factors. Results. The SDFI was 41.3 ± 22.2% (mean ± standard deviation) and did not depend on the cause of infertility. Chronic alcohol use increased the SDFI to 49.6 ± 23.3% compared with 33.9 ± 18.0% in nondrinkers. The SDFI was related to adverse conventional semen parameters and sperm motion characteristics and correlated with the serum FSH level. The SNV showed a tendency to increase with the SDFI. The multivariate analysis revealed that the sperm progressive motility and chronic alcohol use were significant predictors of the SDF. Conclusion. The SCDt should be offered to chronic alcohol users and those with decreased sperm progressive motility. PMID:25165747

  15. Modified neural networks for rapid recovery of tokamak plasma parameters for real time control

    NASA Astrophysics Data System (ADS)

    Sengupta, A.; Ranjan, P.

    2002-07-01

    Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real time plasma control. In the first method, unlike the conventional structure where a single network with the optimum number of processing elements calculates the outputs, a multinetwork system connected in parallel performs the calculations; this is called the double neural network. The accuracy of the recovered parameters is clearly higher than with the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. The principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the recovered parameters in the latter type of modified network is found to be a further improvement over the accuracy of the double neural network. This result differs from that obtained in an earlier work, where the double neural network showed better performance. The conventional network and the function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with that of the principal component based network. The fault tolerance of the neural networks has been tested. The double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared. 
The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
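
    The principal component preprocessing described above can be sketched as follows; the "measurements" are synthetic stand-ins for the magnetic diagnostics, and the 99% variance threshold is an assumed design choice.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "magnetic measurements": 200 samples of 30 correlated sensors
# driven by only 3 underlying plasma quantities plus a little sensor noise.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 30))
X = latent @ mixing + rng.normal(scale=0.01, size=(200, 30))

# Principal component transformation: centre the data, eigendecompose the
# covariance, keep the components carrying almost all the variance.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
evals, evecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
order = np.argsort(evals)[::-1]
explained = evals[order] / evals.sum()
n_keep = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
Z = Xc @ evecs[:, order[:n_keep]]             # reduced network input

print(n_keep, Z.shape)  # the input dimension drops from 30 to a handful
```

    The reduced matrix Z, rather than the raw 30-column X, would then be fed to the network, which is the dimensionality reduction the abstract describes.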

  16. The case for regime-based water quality standards

    Treesearch

    G.C. Poole; J.B. Dunham; D.M. Keenan; S.T. Sauter; D.A. McCullough; C. Mebane; J.C. Lockwood; D.A. Essig; M.P. Hicks; D.J. Sturdevant; E.J. Materna; S.A. Spalding; J. Risley; M. Deppman

    2004-01-01

    Conventional water quality standards have been successful in reducing the concentration of toxic substances in US waters. However, conventional standards are based on simple thresholds and are therefore poorly structured to address human-caused imbalances in dynamic, natural water quality parameters, such as nutrients, sediment, and temperature. A more applicable type...

  17. A Systematic Approach of Employing Quality by Design Principles: Risk Assessment and Design of Experiments to Demonstrate Process Understanding and Identify the Critical Process Parameters for Coating of the Ethylcellulose Pseudolatex Dispersion Using Non-Conventional Fluid Bed Process.

    PubMed

    Kothari, Bhaveshkumar H; Fahmy, Raafat; Claycamp, H Gregg; Moore, Christine M V; Chatterjee, Sharmista; Hoag, Stephen W

    2017-05-01

    The goal of this study was to utilize risk assessment techniques and statistical design of experiments (DoE) to gain process understanding and to identify critical process parameters for the manufacture of controlled release multiparticulate beads using a novel disk-jet fluid bed technology. The material attributes and process parameters were systematically assessed using the Ishikawa fish bone diagram and failure mode and effect analysis (FMEA) risk assessment methods. To gain an understanding of the processing parameters, the high-risk attributes identified by the FMEA analysis were first explored using a resolution V fractional factorial design. Using knowledge gained from the resolution V study, a resolution IV fractional factorial study was then conducted to identify the critical process parameters (CPPs) that impact the critical quality attributes and to understand the influence of these parameters on film formation. For both studies, the microclimate, atomization pressure, inlet air volume, product temperature (during spraying and curing), curing time, and percent solids in the coating solutions were studied. The responses evaluated were percent agglomeration, percent fines, percent yield, bead aspect ratio, median particle size diameter (d50), assay, and drug release rate. Pyrobuttons® were used to record real-time temperature and humidity changes in the fluid bed. The risk assessment methods and process analytical tools helped to understand the novel disk-jet technology and to systematically develop models of coating process parameters like process efficiency and the extent of curing during the coating process.
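
    A two-level fractional factorial design of the kind used here can be generated by multiplying base columns; the 2^(5-1) example below with generator E = ABCD is a standard resolution V illustration, not this study's actual design.

```python
from itertools import product

def fractional_factorial(n_base, generators):
    """Two-level fractional factorial design: a full factorial in the base
    factors, plus generated columns (each generator lists the base-column
    indices whose +/-1 levels are multiplied together)."""
    runs = []
    for levels in product((-1, 1), repeat=n_base):
        row = list(levels)
        for gen in generators:
            col = 1
            for idx in gen:
                col *= levels[idx]
            row.append(col)
        runs.append(row)
    return runs

# Hypothetical 2^(5-1) design, generator E = ABCD: 5 factors in 16 runs.
# Resolution V means main effects are clear of two-factor interactions.
design = fractional_factorial(4, [(0, 1, 2, 3)])
print(len(design), len(design[0]))  # 16 runs, 5 columns
```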

  18. Conditions for l =1 Pomeranchuk instability in a Fermi liquid

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Ming; Klein, Avraham; Chubukov, Andrey V.

    2018-04-01

    We perform a microscopic analysis of how the constraints imposed by conservation laws affect q = 0 Pomeranchuk instabilities in a Fermi liquid. The conventional view is that these instabilities are determined by the static interaction between low-energy quasiparticles near the Fermi surface, in the limit of vanishing momentum transfer q. The condition for a Pomeranchuk instability is set by F_l^c(s) = -1, where F_l^c(s) (a Landau parameter) is a properly normalized partial component of the antisymmetrized static interaction F(k, k+q; p, p-q) in a charge (c) or spin (s) subchannel with angular momentum l. However, it is known that conservation laws for total spin and charge prevent Pomeranchuk instabilities for l = 1 spin- and charge-current order parameters. Our study aims to understand whether this holds only for these special forms of l = 1 order parameters or is a more generic result. To this end we perform a diagrammatic analysis of spin and charge susceptibilities for charge and spin density order parameters, as well as perturbative calculations to second order in the Hubbard U. We argue that for l = 1 spin-current and charge-current order parameters, certain vertex functions, which are determined by high-energy fermions, vanish at F_{l=1}^c(s) = -1, preventing a Pomeranchuk instability from taking place. For an order parameter with a generic l = 1 form factor, the vertex function is not expressed in terms of F_{l=1}^c(s), and a Pomeranchuk instability may occur when F_{l=1}^c(s) = -1. We argue that for other values of l, a Pomeranchuk instability may occur at F_l^c(s) = -1 for an order parameter with any form factor.

  19. Pitch features of environmental sounds

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Kang, Jian

    2016-07-01

    A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement, in addition to the conventional acoustic parameters. This paper explores the applicability to environmental sounds of pitch features, and their algorithms, that are often used in music analysis. Based on the existing alternative pitch algorithms for simulating the perception of the auditory system, and on simplified algorithms for practical applications in music and speech, the applicable algorithms have been determined, considering common types of sound in everyday soundscapes. Using a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, distinct pitch characteristics of various environmental sounds are shown. Among the four sound categories, i.e. water, wind, birdsongs, and urban sounds, water and wind sounds generally have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.
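
    As a sketch of the kind of pitch algorithm evaluated here, the following pure-Python autocorrelation estimator (a deliberately simplified method, not one of the paper's candidate algorithms) picks the lag at which the signal best matches a shifted copy of itself and converts that lag to a pitch value in Hz:

    ```python
    import math

    def estimate_pitch(signal, sr, fmin=50.0, fmax=2000.0):
        """Autocorrelation pitch estimate: the lag (within the plausible
        pitch range fmin..fmax) with the largest autocorrelation
        corresponds to one pitch period."""
        lo = max(1, int(sr / fmax))
        hi = min(int(sr / fmin), len(signal) - 1)
        best_lag, best_r = lo, float("-inf")
        for lag in range(lo, hi + 1):
            r = sum(signal[i] * signal[i + lag]
                    for i in range(len(signal) - lag))
            if r > best_r:
                best_r, best_lag = r, lag
        return sr / best_lag

    sr = 8000
    tone = [math.sin(2 * math.pi * 440.0 * n / sr) for n in range(1024)]
    print(round(estimate_pitch(tone, sr)))  # close to 440 for this test tone
    ```

    Pitch strength, in this simplified picture, would correspond to how pronounced the autocorrelation peak is relative to the signal energy; the perceptually motivated algorithms the paper considers refine both steps.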

  20. Analysis and Design of High-Order Parallel Resonant Converters

    NASA Astrophysics Data System (ADS)

    Batarseh, Issa Eid

    1990-01-01

    In this thesis, a special state variable transformation technique has been derived for the analysis of high-order dc-to-dc resonant converters. Converters built around high-order resonant tanks have the advantage of utilizing the parasitic elements by making them part of the resonant tank. A new set of state variables is defined in order to make use of two-dimensional state-plane diagrams in the analysis of high-order converters. Such a method has been successfully used for the analysis of the conventional Parallel Resonant Converter (PRC). Consequently, two-dimensional state-plane diagrams are used to analyze the steady-state response of third- and fourth-order PRCs when these converters are operated in the continuous conduction mode. Based on this analysis, a set of control characteristic curves for the LCC-, LLC- and LLCC-type PRCs is presented, from which various converter design parameters are obtained. Various design curves for component value selection and device ratings are given. This analysis of high-order resonant converters shows that the addition of reactive components to the resonant tank results in converters with better performance characteristics than the conventional second-order PRC. A complete design procedure, along with design examples for 2nd-, 3rd- and 4th-order converters, is presented. Practical power supply units, of the kind normally used for computer applications, were built and tested using the LCC-, LLC- and LLCC-type commutation schemes. In addition, computer simulation results are presented for these converters in order to verify the theoretical results.

  1. Prediction of phospholipidosis-inducing potential of drugs by in vitro biochemical and physicochemical assays followed by multivariate analysis.

    PubMed

    Kuroda, Yukihiro; Saito, Madoka

    2010-03-01

    An in vitro method to predict the phospholipidosis-inducing potential of cationic amphiphilic drugs (CADs) was developed using biochemical and physicochemical assays. The following parameters were applied to principal component analysis: the physicochemical parameters pK(a) and clogP, the dissociation constant of CADs from phospholipid, the inhibition of enzymatic phospholipid degradation, and the metabolic stability of CADs. In the score plot, phospholipidosis-inducing drugs (amiodarone, propranolol, imipramine, chloroquine) clustered locally, forming the subspace for positive CADs, while non-inducing drugs (chlorpromazine, chloramphenicol, disopyramide, lidocaine) scattered outside this subspace, allowing a clear discrimination between the two classes of CADs. CADs that often produce false results with conventional physicochemical or cell-based assay methods were accurately classified by our method. Basic and lipophilic disopyramide could be accurately predicted as a nonphospholipidogenic drug. Moreover, chlorpromazine, which is often falsely predicted as a phospholipidosis-inducing drug by in vitro methods, could be accurately classified. Because this method uses the pharmacokinetic parameters pK(a), clogP, and metabolic stability, which are usually obtained in the early stages of drug development, it newly requires only two measurements: binding to phospholipid and inhibition of the lipid-degrading enzyme. Therefore, this method provides a cost-effective approach to predicting the phospholipidosis-inducing potential of a drug. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  2. High-Frequency Oscillatory Ventilation Use and Severe Pediatric ARDS in the Pediatric Hematopoietic Cell Transplant Recipient.

    PubMed

    Rowan, Courtney M; Loomis, Ashley; McArthur, Jennifer; Smith, Lincoln S; Gertz, Shira J; Fitzgerald, Julie C; Nitu, Mara E; Moser, Elizabeth As; Hsing, Deyin D; Duncan, Christine N; Mahadeo, Kris M; Moffet, Jerelyn; Hall, Mark W; Pinos, Emily L; Tamburro, Robert F; Cheifetz, Ira M

    2018-04-01

    The effectiveness of high-frequency oscillatory ventilation (HFOV) in the pediatric hematopoietic cell transplant patient has not been established. We sought to identify current practice patterns of HFOV, investigate parameters during HFOV and their association with mortality, and compare the use of HFOV to conventional mechanical ventilation in severe pediatric ARDS. This is a retrospective analysis of a multi-center database of pediatric and young adult allogeneic hematopoietic cell transplant subjects requiring invasive mechanical ventilation for critical illness from 2009 through 2014. Twelve United States pediatric centers contributed data. Continuous variables were compared using a Wilcoxon rank-sum test or a Kruskal-Wallis analysis. For categorical variables, univariate analysis with logistic regression was performed. The database contains 222 patients, of whom 85 were managed with HFOV. Of this HFOV cohort, the overall pediatric ICU survival was 23.5% (n = 20). HFOV survivors were transitioned to HFOV at a lower oxygenation index than nonsurvivors (25.6, interquartile range 21.1-36.8, vs 37.2, interquartile range 26.5-52.2, P = .046). Survivors were transitioned to HFOV earlier in the course of mechanical ventilation (day 0 vs day 2, P = .002). No subject survived who was transitioned to HFOV after 1 week of invasive mechanical ventilation. We compared subjects with severe pediatric ARDS treated only with conventional mechanical ventilation versus early HFOV (within 2 d of invasive mechanical ventilation) versus late HFOV. There was a trend toward a difference in survival (conventional mechanical ventilation 24%, early HFOV 30%, and late HFOV 9%, P = .08). 
In this large database of pediatric allogeneic hematopoietic cell transplant subjects who had acute respiratory failure requiring invasive mechanical ventilation for critical illness with severe pediatric ARDS, early use of HFOV was associated with improved survival compared to late implementation of HFOV, and the subjects had outcomes similar to those treated only with conventional mechanical ventilation. Copyright © 2018 by Daedalus Enterprises.
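
    The oxygenation index thresholds discussed above follow the standard formula OI = (FiO2 × 100 × mean airway pressure) / PaO2. A quick sketch; the example values are hypothetical, not drawn from the study's cohort:

    ```python
    def oxygenation_index(fio2, mean_airway_pressure, pao2):
        """OI = (FiO2 x 100 x mean airway pressure [cm H2O]) / PaO2 [mm Hg].
        Higher values indicate worse oxygenation."""
        return fio2 * 100.0 * mean_airway_pressure / pao2

    # Hypothetical patient: FiO2 1.0, MAP 18 cm H2O, PaO2 70 mm Hg
    print(round(oxygenation_index(1.0, 18.0, 70.0), 1))  # 25.7
    ```

    An OI in the mid-20s, as in this hypothetical example, sits near the median at which survivors in this cohort were reportedly transitioned to HFOV.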

  3. Impact of assimilation of conventional and satellite meteorological observations on the numerical simulation of a Bay of Bengal Tropical Cyclone of November 2008 near Tamilnadu using WRF model

    NASA Astrophysics Data System (ADS)

    Srinivas, C. V.; Yesubabu, V.; Venkatesan, R.; Ramarkrishna, S. S. V. S.

    2010-12-01

    The objective of this study is to examine the impact of assimilation of conventional and satellite data on the prediction of a severe cyclonic storm that formed in the Bay of Bengal during November 2008, using the four-dimensional data assimilation (FDDA) technique. The Weather Research and Forecasting (WRF ARW) model was used to study the structure, evolution, and intensification of the storm. Five sets of numerical simulations were performed using the WRF model. In the first, called the Control run, the National Centers for Environmental Prediction (NCEP) Final Analysis (FNL) was used for the initial and boundary conditions. In the remaining experiments, available observations were used to obtain an improved analysis, and FDDA grid nudging was performed for a pre-forecast period of 24 h. The second simulation (FDDAALL) was performed with all the data: Quick Scatterometer (QSCAT) winds, Special Sensor Microwave Imager (SSM/I) winds, and conventional surface and upper-air meteorological observations. QSCAT winds alone were used in the third simulation (FDDAQSCAT), SSM/I winds alone in the fourth (FDDASSMI), and the conventional observations alone in the fifth (FDDAAWS). The FDDAALL run, with assimilation of all observations, produced a sea-level pressure pattern closely agreeing with the analysis. Examination of various parameters indicated that the Control run overpredicted the intensity of the storm, with large errors in its track and landfall position. The assimilation experiment with QSCAT winds performed marginally better than the one with SSM/I winds due to better representation of surface wind vectors. The FDDAALL run outperformed all the other simulations for the intensity, movement, and rainfall associated with the storm. These results suggest that the combination of land-based surface and upper-air observations along with satellite winds produced better predictions than assimilation of the individual data sets.

  4. Modeling the Relative GHG Emissions of Conventional and Shale Gas Production

    PubMed Central

    2011-01-01

    Recent reports show growing reserves of unconventional gas are available and that there is an appetite from policy makers, industry, and others to better understand the GHG impact of exploiting reserves such as shale gas. There is little publicly available data comparing unconventional and conventional gas production. Existing studies rely on national inventories, but it is not generally possible to separate emissions from unconventional and conventional sources within these totals. Even if unconventional and conventional sites had been listed separately, it would not be possible to eliminate site-specific factors to compare gas production methods on an equal footing. To address this difficulty, the emissions of gas production have instead been modeled. In this way, parameters common to both methods of production can be held constant, while allowing those parameters which differentiate unconventional gas and conventional gas production to vary. The results are placed into the context of power generation, to give a "well-to-wire" (WtW) intensity. It was estimated that shale gas typically has a WtW emissions intensity about 1.8–2.4% higher than conventional gas, arising mainly from higher methane releases in well completion. Even using extreme assumptions, it was found that WtW emissions from shale gas need be no more than 15% higher than conventional gas if flaring or recovery measures are used. In all cases considered, the WtW emissions of shale gas powergen are significantly lower than those of coal. PMID:22085088

  5. Modeling the relative GHG emissions of conventional and shale gas production.

    PubMed

    Stephenson, Trevor; Valle, Jose Eduardo; Riera-Palou, Xavier

    2011-12-15

    Recent reports show growing reserves of unconventional gas are available and that there is an appetite from policy makers, industry, and others to better understand the GHG impact of exploiting reserves such as shale gas. There is little publicly available data comparing unconventional and conventional gas production. Existing studies rely on national inventories, but it is not generally possible to separate emissions from unconventional and conventional sources within these totals. Even if unconventional and conventional sites had been listed separately, it would not be possible to eliminate site-specific factors to compare gas production methods on an equal footing. To address this difficulty, the emissions of gas production have instead been modeled. In this way, parameters common to both methods of production can be held constant, while allowing those parameters which differentiate unconventional gas and conventional gas production to vary. The results are placed into the context of power generation, to give a "well-to-wire" (WtW) intensity. It was estimated that shale gas typically has a WtW emissions intensity about 1.8-2.4% higher than conventional gas, arising mainly from higher methane releases in well completion. Even using extreme assumptions, it was found that WtW emissions from shale gas need be no more than 15% higher than conventional gas if flaring or recovery measures are used. In all cases considered, the WtW emissions of shale gas powergen are significantly lower than those of coal.
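
    The well-to-wire bookkeeping described in both versions of this abstract reduces to simple arithmetic. In the sketch below, the upstream production intensities and plant efficiency are placeholders chosen for illustration (the 2.0% result lands inside the paper's reported 1.8-2.4% range only because the placeholder was picked that way); the ~56 gCO2e/MJ direct-combustion figure is a typical natural-gas value:

    ```python
    def wtw_intensity(production, efficiency, combustion=56.0):
        """Well-to-wire emissions in gCO2e per MJ of electricity.

        production: upstream emissions per MJ of delivered gas (gCO2e/MJ,
                    placeholder values below); combustion: direct combustion
                    emissions (~56 gCO2e/MJ is typical for natural gas);
                    efficiency: power-plant conversion efficiency.
        """
        return (production + combustion) / efficiency

    conventional = wtw_intensity(production=6.0, efficiency=0.55)
    shale = wtw_intensity(production=7.25, efficiency=0.55)
    print(round(100 * (shale / conventional - 1), 1))  # 2.0 (% increase)
    ```

    Holding the combustion term and efficiency constant while varying only the production term is exactly the modeling trick the abstract describes for comparing the two production methods on an equal footing.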

  6. Prognostic value of fluorine-18 fludeoxyglucose positron emission tomography parameters differs according to primary tumour location in small-cell lung cancer.

    PubMed

    Nobashi, Tomomi; Koyasu, Sho; Nakamoto, Yuji; Kubo, Takeshi; Ishimori, Takayoshi; Kim, Young H; Yoshizawa, Akihiko; Togashi, Kaori

    2016-01-01

    To investigate the prognostic value of fluorine-18 fludeoxyglucose (FDG) positron emission tomography (PET) parameters for small-cell lung cancer (SCLC), according to the primary tumour location, adjusted by conventional prognostic factors. From 2008 to 2013, we enrolled consecutive patients with histologically proven SCLC, who had undergone FDG-PET/CT prior to initial therapy. The primary tumour location was categorized into central or peripheral types. PET parameters and clinical variables were evaluated using univariate and multivariate analysis. A total of 69 patients were enrolled in this study; 28 of these patients were categorized as having the central type and 41 patients as having the peripheral type. In univariate analysis, stage, serum neuron-specific enolase, whole-body metabolic tumour volume (WB-MTV) and whole-body total lesion glycolysis (WB-TLG) were found to be significant in both types of patients. In multivariate analysis, the independent prognostic factor was found to be stage in the central type, but WB-MTV and WB-TLG in the peripheral type. Kaplan-Meier analysis demonstrated that patients with peripheral type with limited disease and low WB-MTV or WB-TLG showed significantly better overall survival than all of the other groups (p < 0.0083). The FDG-PET volumetric parameters were demonstrated to be significant and independent prognostic factors in patients with peripheral type of SCLC, while stage was the only independent prognostic factor in patients with central type of SCLC. FDG-PET is a non-invasive method that could potentially be used to estimate the prognosis of patients, especially those with peripheral-type SCLC.

  7. Identification of metabolites, clinical chemistry markers and transcripts associated with hepatotoxicity.

    PubMed

    Buness, Andreas; Roth, Adrian; Herrmann, Annika; Schmitz, Oliver; Kamp, Hennicke; Busch, Kristina; Suter, Laura

    2014-01-01

    Early and accurate pre-clinical and clinical biomarkers of hepatotoxicity facilitate the drug development process and the safety monitoring in clinical studies. We selected eight known model compounds to be administered to male Wistar rats to identify biomarkers of drug induced liver injury (DILI) using transcriptomics, metabolite profiling (metabolomics) and conventional endpoints. We specifically explored early biomarkers in serum and liver tissue associated with histopathologically evident acute hepatotoxicity. A tailored data analysis strategy was implemented to better differentiate animals with no treatment-related findings in the liver from animals showing evident hepatotoxicity as assessed by histopathological analysis. From the large number of assessed parameters, our data analysis strategy allowed us to identify five metabolites in serum and five in liver tissue, 58 transcripts in liver tissue and seven clinical chemistry markers in serum that were significantly associated with acute hepatotoxicity. The identified markers comprised metabolites such as taurocholic acid and putrescine (measured as sum parameter together with agmatine), classical clinical chemistry markers like AST (aspartate aminotransferase), ALT (alanine aminotransferase), and bilirubin, as well as gene transcripts like Igfbp1 (insulin-like growth factor-binding protein 1) and Egr1 (early growth response protein 1). The response pattern of the identified biomarkers was concordant across all types of parameters and sample matrices. Our results suggest that a combination of several of these biomarkers could significantly improve the robustness and accuracy of an early diagnosis of hepatotoxicity.

  8. Propagating modes in gain-guided optical fibers.

    PubMed

    Siegman, A E

    2003-08-01

    Optical fibers in which gain-guiding effects are significant or even dominant compared with conventional index guiding may become of practical interest for future high-power single-mode fiber lasers. I derive the propagation characteristics of symmetrical slab waveguides and cylindrical optical fibers having arbitrary amounts of mixed gain and index guiding, assuming a single uniform transverse profile for both the gain and the refractive-index steps. Optical fibers of this type are best characterized by using a complex-valued v-squared parameter in place of the real-valued v parameter commonly used to describe conventional index-guided optical fibers.

  9. No major differences found between the effects of microwave-based and conventional heat treatment methods on two different liquid foods.

    PubMed

    Géczi, Gábor; Horváth, Márk; Kaszab, Tímea; Alemany, Gonzalo Garnacho

    2013-01-01

    Extension of shelf life and preservation of products are both very important for the food industry. However, just as with other processes, speed and higher manufacturing performance are also beneficial. Although microwave heating is utilized in a number of industrial processes, there are many unanswered questions about its effects on foods. Here we analyze whether the effects of microwave heating with continuous flow are equivalent to those of traditional heat transfer methods. In our study, the effects of heating of liquid foods by conventional and continuous flow microwave heating were studied. Among other properties, we compared the stability of the liquid foods between the two heat treatments. Our goal was to determine whether continuous flow microwave heating and conventional heating methods have the same effects on liquid foods and, therefore, whether microwave heat treatment can effectively replace conventional heat treatments. We compared the colour and phase-separation phenomena of the samples treated by the different methods. For milk, we also monitored the total viable cell count and, for orange juice, the vitamin C content, in addition to assessing the taste of the products by sensory analysis. The majority of the results indicate that the circulating coil microwave method used here is equivalent to the conventional heating method based on thermal conduction and convection. However, some results of the analysis of the milk samples show clear differences between heat transfer methods. According to our results, the colour parameters (lightness, red-green and blue-yellow values) of the microwave-treated samples differed not only from the untreated control, but also from the traditionally heat-treated samples. The differences are visually undetectable; however, they become evident through analytical measurement with a spectrophotometer. This finding suggests that besides thermal effects, microwave-based food treatment can alter product properties in other ways as well.
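
    The spectrophotometric colour comparison mentioned above (lightness, red-green, blue-yellow) is conventionally quantified as a CIELAB colour difference. A minimal sketch with hypothetical L*, a*, b* readings, not the study's measurements:

    ```python
    def delta_e_76(lab1, lab2):
        """CIE76 colour difference between two (L*, a*, b*) triplets.
        Differences below roughly 2-3 units are commonly considered
        visually undetectable, consistent with the finding above."""
        return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

    control = (82.1, -1.8, 9.0)    # hypothetical untreated milk sample
    treated = (81.2, -1.5, 10.1)   # hypothetical microwave-treated sample
    print(round(delta_e_76(control, treated), 2))  # 1.45
    ```

    A difference of this size would be real and instrument-measurable yet invisible to the eye, matching the pattern the abstract reports.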

  10. Soil Biological Activity Contributing to Phosphorus Availability in Vertisols under Long-Term Organic and Conventional Agricultural Management

    PubMed Central

    Bhat, Nisar A.; Riar, Amritbir; Ramesh, Aketi; Iqbal, Sanjeeda; Sharma, Mahaveer P.; Sharma, Sanjay K.; Bhullar, Gurbir S.

    2017-01-01

    Mobilization of unavailable phosphorus (P) to plant-available P is a prerequisite to sustain crop productivity. Although most agricultural soils have sufficient amounts of phosphorus, low availability of native soil P remains a key limiting factor to increasing crop productivity. Solubilization and mineralization of applied and native P to the plant-available form is mediated through a number of biological and biochemical processes that are strongly influenced by soil carbon/organic matter, besides other biotic and abiotic factors. Soils rich in organic matter are expected to have higher P availability, potentially due to higher biological activity. In conventional agricultural systems mineral fertilizers are used to supply P for plant growth, whereas organic systems largely rely on inputs of organic origin. The soils under organic management are supposed to be biologically more active and thus possess a higher capability to mobilize native or applied P. In this study we compared biological activity in the soil of a long-term farming systems comparison field trial in vertisols under a subtropical (semi-arid) environment. Soil samples were collected from plots under 7 years of organic and conventional management at five different time points in a soybean (Glycine max)-wheat (Triticum aestivum) crop sequence, including the crop growth stages of reproductive significance. Upon analysis of various soil biological properties, such as dehydrogenase, β-glucosidase, and acid and alkaline phosphatase activities, microbial respiration, substrate-induced respiration, and soil microbial biomass carbon, organically managed soils were found to be biologically more active, particularly at the R2 stage in soybean and the panicle initiation stage in wheat. We also determined the synergies between these biological parameters using principal component analysis. At all sampling points, P availability in organic and conventional systems was comparable. 
Our findings clearly indicate that owing to higher biological activity, organic systems possess equal capabilities of supplying P for crop growth as are conventional systems with inputs of mineral P fertilizers. PMID:28928758

  11. No Major Differences Found between the Effects of Microwave-Based and Conventional Heat Treatment Methods on Two Different Liquid Foods

    PubMed Central

    Géczi, Gábor; Horváth, Márk; Kaszab, Tímea; Alemany, Gonzalo Garnacho

    2013-01-01

    Extension of shelf life and preservation of products are both very important for the food industry. However, just as with other processes, speed and higher manufacturing performance are also beneficial. Although microwave heating is utilized in a number of industrial processes, there are many unanswered questions about its effects on foods. Here we analyze whether the effects of microwave heating with continuous flow are equivalent to those of traditional heat transfer methods. In our study, the effects of heating of liquid foods by conventional and continuous flow microwave heating were studied. Among other properties, we compared the stability of the liquid foods between the two heat treatments. Our goal was to determine whether continuous flow microwave heating and conventional heating methods have the same effects on liquid foods and, therefore, whether microwave heat treatment can effectively replace conventional heat treatments. We compared the colour and phase-separation phenomena of the samples treated by the different methods. For milk, we also monitored the total viable cell count and, for orange juice, the vitamin C content, in addition to assessing the taste of the products by sensory analysis. The majority of the results indicate that the circulating coil microwave method used here is equivalent to the conventional heating method based on thermal conduction and convection. However, some results of the analysis of the milk samples show clear differences between heat transfer methods. According to our results, the colour parameters (lightness, red-green and blue-yellow values) of the microwave-treated samples differed not only from the untreated control, but also from the traditionally heat-treated samples. The differences are visually undetectable; however, they become evident through analytical measurement with a spectrophotometer. This finding suggests that besides thermal effects, microwave-based food treatment can alter product properties in other ways as well. 
PMID:23341982

  12. Performance of the high-sensitivity troponin assay in diagnosing acute myocardial infarction: systematic review and meta-analysis

    PubMed Central

    Al-Saleh, Ayman; Alazzoni, Ashraf; Al Shalash, Saleh; Ye, Chenglin; Mbuagbaw, Lawrence; Thabane, Lehana; Jolly, Sanjit S.

    2014-01-01

    Background High-sensitivity cardiac troponin assays have been adopted by many clinical centres worldwide; however, clinicians are uncertain how to interpret the results. We sought to assess the utility of these assays in diagnosing acute myocardial infarction (MI). Methods We carried out a systematic review and meta-analysis of studies comparing high-sensitivity with conventional assays of cardiac troponin levels among adults with suspected acute MI in the emergency department. We searched MEDLINE, EMBASE and Cochrane databases up to April 2013 and used bivariable random-effects modelling to obtain summary parameters for diagnostic accuracy. Results We identified 9 studies that assessed the use of high-sensitivity troponin T assays (n = 9186 patients). The summary sensitivity of these tests in diagnosing acute MI at presentation to the emergency department was estimated to be 0.94 (95% confidence interval [CI] 0.89–0.97); for conventional tests, it was 0.72 (95% CI 0.63–0.79). The summary specificity was 0.73 (95% CI 0.64–0.81) for the high-sensitivity assay compared with 0.95 (95% CI 0.93–0.97) for the conventional assay. The differences in estimates of the summary sensitivity and specificity between the high-sensitivity and conventional assays were statistically significant (p < 0.01). The area under the curve was similar for both tests carried out 3–6 hours after presentation. Three studies assessed the use of high-sensitivity troponin I assays and showed similar results. Interpretation Used at presentation to the emergency department, the high-sensitivity cardiac troponin assay has improved sensitivity, but reduced specificity, compared with the conventional troponin assay. With repeated measurements over 6 hours, the area under the curve is similar for both tests, indicating that the major advantage of the high-sensitivity test is early diagnosis. PMID:25295240
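
    The summary sensitivity and specificity quoted above are pooled across studies by bivariable random-effects modelling (not shown here), but the per-study quantities come from 2x2 diagnostic tables. A sketch with hypothetical counts constructed to echo the reported summary estimates for the high-sensitivity assay:

    ```python
    def diagnostic_accuracy(tp, fp, fn, tn):
        """Sensitivity and specificity from a single 2x2 diagnostic table."""
        sensitivity = tp / (tp + fn)   # true positives among the diseased
        specificity = tn / (tn + fp)   # true negatives among the healthy
        return sensitivity, specificity

    # Hypothetical counts: 100 patients with MI, 100 without
    sens, spec = diagnostic_accuracy(tp=94, fp=27, fn=6, tn=73)
    print(round(sens, 2), round(spec, 2))  # 0.94 0.73
    ```

    The trade-off visible here (more true positives at the cost of more false positives) is exactly the sensitivity-for-specificity exchange the meta-analysis reports for the high-sensitivity assay at presentation.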

  13. Optimization of T2-weighted imaging for shoulder magnetic resonance arthrography by synthetic magnetic resonance imaging.

    PubMed

    Lee, Seung Hyun; Lee, Young Han; Hahn, Seok; Yang, Jaemoon; Song, Ho-Taek; Suh, Jin-Suck

    2017-01-01

    Background Synthetic magnetic resonance imaging (MRI) allows reformatting of various synthetic images by adjustment of scanning parameters such as repetition time (TR) and echo time (TE). Optimized MR images can be reformatted from T1, T2, and proton density (PD) values to achieve maximum tissue contrast between joint fluid and adjacent soft tissue. Purpose To demonstrate the method for optimization of TR and TE by synthetic MRI and to validate the optimized images by comparison with conventional shoulder MR arthrography (MRA) images. Material and Methods Thirty-seven shoulder MRA images acquired by synthetic MRI were retrospectively evaluated for PD, T1, and T2 values at the joint fluid and glenoid labrum. Differences in signal intensity between the fluid and labrum were observed between TR of 500-6000 ms and TE of 80-300 ms in T2-weighted (T2W) images. Conventional T2W and synthetic images were analyzed for diagnostic agreement of supraspinatus tendon abnormalities (kappa statistics) and image quality scores (one-way analysis of variance with post-hoc analysis). Results Optimized mean values of TR and TE were 2724.7 ± 1634.7 and 80.1 ± 0.4, respectively. Diagnostic agreement for supraspinatus tendon abnormalities between conventional and synthetic MR images was excellent (κ = 0.882). The mean image quality score of the joint space in optimized synthetic images was significantly higher compared with those in conventional and synthetic images (2.861 ± 0.351 vs. 2.556 ± 0.607 vs. 2.750 ± 0.439; P < 0.05). Conclusion Synthetic MRI with optimized TR and TE for shoulder MRA enables optimization of soft-tissue contrast.
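
    The optimization this abstract describes amounts to evaluating a signal model over a TR/TE grid and keeping the setting that maximizes fluid-labrum contrast. The sketch below uses the standard spin-echo signal equation; the PD/T1/T2 tissue values are rough guesses for illustration, so the optimum it finds will not match the study's (TR ≈ 2725 ms, TE ≈ 80 ms), which came from measured tissue values:

    ```python
    import math

    def se_signal(pd, t1, t2, tr, te):
        """Spin-echo signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
        return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

    def best_tr_te(tissue_a, tissue_b, trs, tes):
        """Return the (TR, TE) pair maximizing the signal difference
        between two tissues (a grid search over synthetic images)."""
        return max(((tr, te) for tr in trs for te in tes),
                   key=lambda p: se_signal(*tissue_a, *p)
                               - se_signal(*tissue_b, *p))

    fluid = (1.0, 3000.0, 1000.0)   # (PD, T1 ms, T2 ms): guessed joint fluid
    labrum = (0.7, 1000.0, 40.0)    # guessed fibrocartilage-like labrum
    tr, te = best_tr_te(fluid, labrum,
                        trs=range(500, 6001, 250), tes=range(80, 301, 10))
    print(tr, te)
    ```

    Because synthetic MRI stores PD, T1, and T2 maps per voxel, this search can be run retrospectively on any reformatted contrast without rescanning the patient, which is the method's practical appeal.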

  14. Comparison of the performance of tracer kinetic model-driven registration for dynamic contrast enhanced MRI using different models of contrast enhancement.

    PubMed

    Buonaccorsi, Giovanni A; Roberts, Caleb; Cheung, Sue; Watson, Yvonne; O'Connor, James P B; Davies, Karen; Jackson, Alan; Jayson, Gordon C; Parker, Geoff J M

    2006-09-01

    The quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data is subject to model fitting errors caused by motion during the time-series data acquisition. However, the time-varying features that occur as a result of contrast enhancement can confound motion correction techniques based on conventional registration similarity measures. We have therefore developed a heuristic, locally controlled tracer kinetic model-driven registration procedure, in which the model accounts for contrast enhancement, and applied it to the registration of abdominal DCE-MRI data at high temporal resolution. Using severely motion-corrupted data sets that had been excluded from analysis in a clinical trial of an antiangiogenic agent, we compared the results obtained when using different models to drive the tracer kinetic model-driven registration with those obtained when using a conventional registration against the time series mean image volume. Using tracer kinetic model-driven registration, it was possible to improve model fitting by reducing the sum of squared errors but the improvement was only realized when using a model that adequately described the features of the time series data. The registration against the time series mean significantly distorted the time series data, as did tracer kinetic model-driven registration using a simpler model of contrast enhancement. When an appropriate model is used, tracer kinetic model-driven registration influences motion-corrupted model fit parameter estimates and provides significant improvements in localization in three-dimensional parameter maps. This has positive implications for the use of quantitative DCE-MRI for example in clinical trials of antiangiogenic or antivascular agents.

  15. The dependence of Islamic and conventional stocks: A copula approach

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2015-09-01

    Recent studies have found that Islamic stocks are dependent on conventional stocks and appear to be more risky. In Asia, particularly in Islamic countries, research on dependence involving Islamic and non-Islamic stock markets is limited. The objective of this study is to investigate the dependence between the Financial Times Stock Exchange (FTSE) Hijrah Shariah index and conventional stocks (the EMAS and KLCI indices). Using the copula approach and a time series model for each marginal distribution function, the copula parameters were estimated. The elliptical copula was selected to represent the dependence structure of each pairing of the Islamic stock and a conventional stock. Specifically, the Islamic versus conventional pairings (Shariah-EMAS and Shariah-KLCI) had lower dependence than the conventional versus conventional pairing (EMAS-KLCI). These findings suggest that the occurrence of shocks in a conventional stock will not have a strong impact on the Islamic stock.
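
    One standard, moment-style route from ranked return data to an elliptical (Gaussian) copula parameter is via Kendall's tau, sketched below; the study's own estimation may differ, and the return series here are invented for illustration:

    ```python
    import math
    from itertools import combinations

    def kendall_tau(x, y):
        """Kendall's rank correlation (no tie handling; illustrative only)."""
        pairs = list(combinations(range(len(x)), 2))
        s = sum(1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
                for i, j in pairs)
        return s / len(pairs)

    def gaussian_copula_rho(tau):
        """Invert tau = (2/pi) * arcsin(rho) for the Gaussian copula."""
        return math.sin(math.pi * tau / 2.0)

    # Invented daily returns for two hypothetical indices
    a = [0.4, -0.1, 0.3, -0.5, 0.2, 0.1, -0.3, 0.6]
    b = [0.3, -0.2, 0.1, -0.4, 0.2, 0.0, -0.1, 0.5]
    print(round(gaussian_copula_rho(kendall_tau(a, b)), 2))  # strong dependence
    ```

    Working through ranks rather than raw returns is what lets the copula parameter capture dependence separately from the shape of each marginal distribution, which is the appeal of the approach for comparing index pairings.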

  16. Paresthesia-Free High-Density Spinal Cord Stimulation for Postlaminectomy Syndrome in a Prescreened Population: A Prospective Case Series.

    PubMed

    Sweet, Jennifer; Badjatiya, Anish; Tan, Daniel; Miller, Jonathan

    2016-04-01

    Spinal cord stimulation (SCS) is traditionally thought to require paresthesia, but there is evidence that paresthesia-free stimulation using high-density (HD) parameters might also be effective. The purpose of this study was to evaluate the relative effectiveness of conventional, subthreshold HD, and sham stimulation on pain intensity and quality of life. Fifteen patients with a response to conventional stimulation (60 Hz/350 μsec) were screened with a one-week trial of subthreshold HD stimulation (1200 Hz/200 μsec/amplitude at 90% of paresthesia threshold) and enrolled if there was at least a 50% reduction on the visual analog scale (VAS) for pain. Subjects were randomized into two groups and treated with four two-week periods of conventional, subthreshold HD, and sham stimulation in a randomized crossover design. Four of 15 patients responded to subthreshold HD stimulation. Mean VAS during conventional, subthreshold HD, and sham stimulation was 5.32 ± 0.63, 2.29 ± 0.41, and 6.31 ± 1.22, respectively. There was a significant difference in pain scores during the blinded crossover comparison of subthreshold HD vs. sham stimulation (p < 0.05, Student's t-test). Post hoc analysis revealed that subjects reported significantly greater attention to pain during conventional stimulation than during subthreshold HD stimulation (p < 0.05, Student's t-test). All subjects reported a positive impression of change for subthreshold HD stimulation compared with conventional stimulation, and there was a trend toward a greater likelihood of response to subthreshold HD stimulation than to sham stimulation (p = 0.07, Fisher's exact test). At the end of the trial, all subjects elected to continue subthreshold HD stimulation rather than conventional stimulation. Paresthesias are not necessary for pain relief using commercially available SCS devices, and may actually increase attention to pain. Subthreshold HD SCS represents a viable alternative to conventional stimulation among patients who are confirmed to have a clinical response to it. © 2015 International Neuromodulation Society.
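The group comparisons in this record rely on two-sample t-tests on VAS scores. A minimal sketch of the test statistic (shown here in Welch's unequal-variance form, a variant of Student's t; the VAS values below are hypothetical, not the trial's data):

```python
import math

def welch_t(a, b):
    # Welch's two-sample t statistic: difference of means divided by the
    # standard error, without assuming equal group variances.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((v - ma) ** 2 for v in a) / (na - 1)  # sample variance of a
    vb = sum((v - mb) ** 2 for v in b) / (nb - 1)  # sample variance of b
    return (ma - mb) / math.sqrt(va / na + vb / nb)

vas_sham = [5.1, 5.6, 4.9, 5.8, 5.2]  # hypothetical VAS pain scores
vas_hd = [2.0, 2.5, 2.3, 2.6, 2.1]    # hypothetical VAS pain scores
t = welch_t(vas_sham, vas_hd)
```

A large positive t here reflects higher pain under sham than under subthreshold HD stimulation; the p-value would then come from the t distribution with the Welch-Satterthwaite degrees of freedom.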

  17. Trend analysis of body weight parameters, mortality, and incidence of spontaneous tumors in Tg.rasH2 mice.

    PubMed

    Paranjpe, Madhav G; Denton, Melissa D; Vidmar, Tom; Elbekai, Reem H

    2014-01-01

    Carcinogenicity studies have been performed as conventional 2-year rodent studies for at least 3 decades, whereas short-term carcinogenicity studies in transgenic mice, such as Tg.rasH2, have only been performed over the last decade. In the 2-year conventional rodent studies, interlinked problems that complicate the interpretation of findings, such as increasing trends in initial body weights, increased body weight gains, a high incidence of spontaneous tumors, and low survival, have been well documented. However, these end points have not been evaluated in short-term carcinogenicity studies involving Tg.rasH2 mice. In this article, we present a retrospective analysis of data obtained from control groups in 26-week carcinogenicity studies conducted in Tg.rasH2 mice since 2004. Our analysis showed statistically significant decreasing trends in initial body weights of both sexes. Although terminal body weights did not show any significant trends, there was a statistically significant increasing trend in body weight gains, more so in males than in females, which correlated with increasing trends in food consumption. There were no statistically significant alterations in mortality trends. In addition, the incidence of all common spontaneous tumors remained fairly constant, with no statistically significant differences in trends. © The Author(s) 2014.

  18. Application of dynamic programming to evaluate the slope stability of a vertical extension to a balefill.

    PubMed

    Kremen, Arie; Tsompanakis, Yiannis

    2010-04-01

    The slope stability of a proposed vertical extension of a balefill was investigated in the present study, in an attempt to determine a geotechnically conservative design, compliant with New Jersey Department of Environmental Protection regulations, that maximizes the utilization of unclaimed disposal capacity. Conventional geotechnical analytical methods are generally limited to well-defined failure modes, which may not occur in landfills or balefills due to the presence of preferential slip surfaces. In addition, these models assume an a priori stress distribution to solve essentially indeterminate problems. In this work, a different approach has been applied, which avoids several of the drawbacks of conventional methods. Specifically, the analysis was performed in a two-stage process: (a) calculation of the stress distribution, and (b) application of an optimization technique to identify the most probable failure surface. The stress analysis was performed using a finite element formulation, and the failure surface was located by a dynamic programming optimization method. A sensitivity analysis was performed to evaluate the effect of the various waste strength parameters of the underlying mathematical model on the results, namely the factor of safety of the landfill. Although this study focuses on the stability investigation of an expanded balefill, the methodology presented can easily be applied to general geotechnical investigations.

  19. GEMPAK5 user's guide, version 5.0

    NASA Technical Reports Server (NTRS)

    Desjardins, Mary L.; Brill, Keith F.; Schotz, Steven S.

    1991-01-01

    GEMPAK is a general meteorological software package used to analyze and display conventional meteorological data as well as satellite derived parameters. The User's Guide describes the GEMPAK5 programs and input parameters and details the algorithms used for the meteorological computations.

  20. Report on the study of the tax and rate treatment of renewable energy projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, S.W.; Hill, L.J.; Perlack, R.D.

    1993-12-01

    This study was conducted in response to the requirements of Section 1205 of the Energy Policy Act of 1992 (EPACT), which states: The Secretary (of Energy), in conjunction with State regulatory commissions, shall undertake a study to determine if conventional taxation and ratemaking procedures result in economic barriers to or incentives for renewable energy power plants compared to conventional power plants. The purpose of the study, therefore, is not to compare the cost-effectiveness of different types of renewable and conventional electric generating plants. Rather, it is to determine the relative impact of conventional ratemaking and taxation procedures on the selection of renewable power plants compared to conventional ones. To make this determination, we quantify the technical and financial parameters of renewable and conventional electric generating technologies, and hold them fixed throughout the study. Then, we vary taxation and ratemaking procedures to determine their effects on the financial criteria that investor-owned electric utilities (IOUs) and nonutility electricity generators (NUGs) use to make technology-adoption decisions. In the planning process of a typical utility, the opposite is usually the case. That is, utilities typically hold ratemaking and taxation procedures constant and look for the least-cost mix of resources, varying the values of engineering and financial parameters of generating plants in the process.

  1. A comparison of 3D poly(ε-caprolactone) tissue engineering scaffolds produced with conventional and additive manufacturing techniques by means of quantitative analysis of SR μ-CT images

    NASA Astrophysics Data System (ADS)

    Brun, F.; Intranuovo, F.; Mohammadi, S.; Domingos, M.; Favia, P.; Tromba, G.

    2013-07-01

    The technique used to produce a 3D tissue engineering (TE) scaffold is of fundamental importance in order to guarantee its proper morphological characteristics. An accurate assessment of the resulting structural properties is therefore crucial in order to evaluate the effectiveness of the produced scaffold. Synchrotron radiation (SR) computed microtomography (μ-CT) combined with further image analysis seems to be one of the most effective techniques to this aim. However, a quantitative assessment of the morphological parameters directly from the reconstructed images is a nontrivial task. This study considers two different poly(ε-caprolactone) (PCL) scaffolds fabricated with a conventional technique (Solvent Casting Particulate Leaching, SCPL) and an additive manufacturing (AM) technique (BioCell Printing), respectively. With the first technique it is possible to produce scaffolds with random, non-regular, rounded pore geometry. The AM technique, instead, produces scaffolds with square-shaped interconnected pores of regular dimension. Therefore, the final morphology of the AM scaffolds can be predicted, and the resulting model can be used for the validation of the applied imaging and image analysis protocols. We report here an SR μ-CT image analysis approach that effectively and accurately reveals the differences in the pore- and throat-size distributions, as well as the connectivity, of both AM and SCPL scaffolds.
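The most basic morphological parameter extracted from a segmented μ-CT volume is porosity, the void-voxel fraction. A toy sketch on a hypothetical binary segmentation (the authors' actual pipeline, which also measures pore/throat sizes and connectivity, is far richer):

```python
def porosity(binary_volume):
    # Fraction of void voxels (truthy values) in a segmented 3D volume.
    void = total = 0
    for slab in binary_volume:
        for row in slab:
            for voxel in row:
                total += 1
                void += 1 if voxel else 0
    return void / total

# Tiny hypothetical 2x2x2 segmentation: 1 = pore (void), 0 = material.
volume = [[[1, 0], [0, 0]], [[1, 1], [0, 0]]]
p = porosity(volume)  # 3 void voxels out of 8
```

Pore- and throat-size distributions would then typically be obtained from morphological operations (e.g., successive openings) on the same binary volume.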

  2. Evaluation of photoshop based image analysis in cytologic diagnosis of pleural fluid in comparison with conventional modalities.

    PubMed

    Jafarian, Amir Hossein; Tasbandi, Aida; Mohamadian Roshan, Nema

    2018-04-19

    The aim of this study was to investigate digital image analysis of pleural effusion cytology samples and compare the results with conventional modalities. In this cross-sectional study, 53 pleural fluid cytology smears from the Qaem hospital pathology department in Mashhad, Iran were investigated. Prior to digital analysis, all specimens were evaluated by two pathologists and categorized into three groups: benign, suspicious, and malignant. Digital images of the cytology slides were captured with an Olympus microscope and an Olympus DP3 digital camera. Appropriate images (n = 130) were imported separately into Adobe Photoshop CS5, and parameters including area and perimeter, circularity, Gray Value mean, integrated density, and nucleus-to-cytoplasm area ratio were analyzed. Gray Value mean, nucleus-to-cytoplasm area ratio, and circularity showed the best sensitivity and specificity rates as well as significant differences between all groups. Nucleus area and perimeter also differed significantly between the suspicious and malignant groups and the benign group; however, there was no such difference between the suspicious and malignant groups. We conclude that digital image analysis is a welcome addition to research on pleural fluid smears, as it provides quantitative data for comparison and reduces interobserver variation, which could help pathologists reach a more accurate diagnosis. © 2018 Wiley Periodicals, Inc.
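Two of the shape features analyzed here have standard definitions that are easy to sketch: circularity (4πA/P², equal to 1 for a perfect circle) and the nucleus-to-cytoplasm area ratio. The values below are hypothetical; in the study these quantities were measured in Photoshop:

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P^2: equals 1.0 for a perfect circle, < 1 for other shapes.
    return 4.0 * math.pi * area / perimeter ** 2

def nc_ratio(nucleus_area, cytoplasm_area):
    # Nucleus-to-cytoplasm area ratio, a classic cytologic malignancy feature.
    return nucleus_area / cytoplasm_area

r = 10.0
circle = circularity(math.pi * r ** 2, 2.0 * math.pi * r)  # circle -> 1.0
square = circularity(1.0, 4.0)                             # unit square -> pi/4
ratio = nc_ratio(30.0, 120.0)                              # hypothetical areas
```

Lower circularity and higher N/C ratio both tend to flag nuclear irregularity, which is consistent with these features discriminating the malignant group in the study.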

  3. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 2; Constant Stress Rate Experiments

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress rate and preload testing at ambient and elevated temperatures. The data fit to the relation of strength versus the log of the stress rate was very reasonable for most of the materials. Also, the preloading technique was determined equally applicable to the case of slow-crack-growth (SCG) parameter n greater than 30 for both the power-law and exponential formulations. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.
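In the conventional power-law formulation mentioned above, constant stress rate (dynamic fatigue) data obey log(strength) = log(stress rate)/(n+1) + const, so the SCG parameter n falls out of a log-log regression without requiring the inert strength. A sketch with synthetic data constructed to have n = 19 (not the paper's glass or ceramic measurements):

```python
import math

def fit_scg_n(stress_rates, strengths):
    # Power-law SCG under constant stress rate:
    #   log10(strength) = log10(stress_rate) / (n + 1) + const,
    # so n = 1/slope - 1 from an ordinary least-squares line in log-log space.
    xs = [math.log10(r) for r in stress_rates]
    ys = [math.log10(s) for s in strengths]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 / slope - 1.0

# Synthetic strengths generated with slope 1/20, i.e. n = 19 exactly.
rates = [0.1, 1.0, 10.0, 100.0]                               # MPa/s, hypothetical
strengths = [10.0 ** (math.log10(r) / 20.0 + 2.0) for r in rates]
n_est = fit_scg_n(rates, strengths)
```

The exponential crack-velocity formulation discussed in the record does not admit this simple regression, which is exactly the limitation the abstract identifies: there the inert strength must be known a priori.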

  4. Dynamics of modulated beams in spectral domain

    DOE PAGES

    Yampolsky, Nikolai A.

    2017-07-16

    General formalism for describing dynamics of modulated beams along linear beamlines is developed. We describe modulated beams with spectral distribution function which represents Fourier transform of the conventional beam distribution function in the 6-dimensional phase space. The introduced spectral distribution function is localized in some region of the spectral domain for nearly monochromatic modulations. It can be characterized with a small number of typical parameters such as the lowest order moments of the spectral distribution. We study evolution of the modulated beams in linear beamlines and find that characteristic spectral parameters transform linearly. The developed approach significantly simplifies analysis of various schemes proposed for seeding X-ray free electron lasers. We use this approach to study several recently proposed schemes and find the bandwidth of the output bunching in each case.

  5. Applicability of pedometry and accelerometry in the calculation of energy expenditure during walking and Nordic walking among women in relation to their exercise heart rate.

    PubMed

    Polechoński, Jacek; Mynarski, Władysław; Nawrocka, Agnieszka

    2015-11-01

    [Purpose] The objective of this study was to evaluate the usefulness of pedometry and accelerometry in the measurement of energy expenditure during Nordic walking and conventional walking as diagnostic parameters. [Subjects and Methods] The study included 20 female students (age, 24 ± 2.3 years). The study used three types of measuring devices: a heart rate monitor (Polar S610i), a Caltrac accelerometer, and a pedometer (Yamax SW-800). A walking pace of 110 steps/min was set using a metronome. [Results] The students who walked with poles covered the 1,000 m distance 36.3 sec faster and with 65.5 fewer steps than in conventional walking. Correlation analysis revealed a moderate interrelationship between the results obtained with the pedometer and those obtained with the accelerometer during Nordic walking (r = 0.55) and a high correlation during conventional walking (r = 0.85). [Conclusion] A pedometer and a Caltrac accelerometer should not be used as interchangeable measurement instruments when comparing energy expenditure in Nordic walking.
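The correlation analysis reported here is ordinary Pearson correlation. A minimal sketch with hypothetical step counts (not the study's measurements):

```python
def pearson_r(x, y):
    # Pearson product-moment correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

steps_pedometer = [1180, 1210, 1195, 1220, 1188]      # hypothetical step counts
steps_accelerometer = [1175, 1205, 1185, 1230, 1190]  # hypothetical step counts
r = pearson_r(steps_pedometer, steps_accelerometer)
```

An r of 0.55 versus 0.85, as in the abstract, is the kind of gap that motivates the conclusion: agreement that is adequate for conventional walking may still be too loose for Nordic walking.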

  6. Applicability of pedometry and accelerometry in the calculation of energy expenditure during walking and Nordic walking among women in relation to their exercise heart rate

    PubMed Central

    Polechoński, Jacek; Mynarski, Władysław; Nawrocka, Agnieszka

    2015-01-01

    [Purpose] The objective of this study was to evaluate the usefulness of pedometry and accelerometry in the measurement of energy expenditure during Nordic walking and conventional walking as diagnostic parameters. [Subjects and Methods] The study included 20 female students (age, 24 ± 2.3 years). The study used three types of measuring devices: a heart rate monitor (Polar S610i), a Caltrac accelerometer, and a pedometer (Yamax SW-800). A walking pace of 110 steps/min was set using a metronome. [Results] The students who walked with poles covered the 1,000 m distance 36.3 sec faster and with 65.5 fewer steps than in conventional walking. Correlation analysis revealed a moderate interrelationship between the results obtained with the pedometer and those obtained with the accelerometer during Nordic walking (r = 0.55) and a high correlation during conventional walking (r = 0.85). [Conclusion] A pedometer and a Caltrac accelerometer should not be used as interchangeable measurement instruments when comparing energy expenditure in Nordic walking. PMID:26696730

  7. Characterization of Bacterial Communities in Venous Insufficiency Wounds by Use of Conventional Culture and Molecular Diagnostic Methods

    PubMed Central

    Tuttle, Marie S.; Mostow, Eliot; Mukherjee, Pranab; Hu, Fen Z.; Melton-Kreft, Rachael; Ehrlich, Garth D.; Dowd, Scot E.; Ghannoum, Mahmoud A.

    2011-01-01

    Microbial infections delay wound healing, but the effect of the composition of the wound microbiome on healing parameters is unknown. To better understand bacterial communities in chronic wounds, we analyzed debridement samples from lower-extremity venous insufficiency ulcers using the following: conventional anaerobic and aerobic bacterial cultures; the Ibis T5000 universal biosensor (Abbott Molecular); and 16S 454 FLX titanium series pyrosequencing (Roche). Wound debridement samples were obtained from 10 patients monitored clinically for at least 6 months, at which point 5 of the 10 sampled wounds had healed. Pyrosequencing data revealed significantly higher bacterial abundance and diversity in wounds that had not healed at 6 months. Additionally, Actinomycetales was increased in wounds that had not healed, and Pseudomonadaceae was increased in wounds that had healed by the 6-month follow-up. Baseline wound surface area, duration, or analysis by Ibis or conventional culture did not reveal significant differences between wounds that healed after 6 months and those that did not. Thus, pyrosequencing identified distinctive baseline characteristics of wounds that did not heal by the 6-month follow-up, furthering our understanding of potentially unique microbiome characteristics of chronic wounds. PMID:21880958
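Diversity from sequencing counts like those discussed above is commonly summarized with the Shannon index, H' = -Σ p_i ln p_i; the specific metric used by the authors is not stated in this record, so the choice here is illustrative, and the OTU counts are hypothetical:

```python
import math

def shannon_diversity(counts):
    # Shannon index H' = -sum p_i * ln(p_i) over taxa with nonzero counts.
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

unhealed = [40, 25, 15, 10, 5, 3, 2]  # hypothetical OTU counts (more even)
healed = [85, 10, 3, 2]               # hypothetical OTU counts (one dominant taxon)
h_unhealed = shannon_diversity(unhealed)
h_healed = shannon_diversity(healed)
```

A more even community (higher H') corresponds to the "higher bacterial abundance and diversity" the abstract reports in wounds that had not healed.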

  8. Volatile Organic Compounds (VOCs) in Conventional and High Performance School Buildings in the U.S.

    PubMed

    Zhong, Lexuan; Su, Feng-Chiao; Batterman, Stuart

    2017-01-21

    Exposure to volatile organic compounds (VOCs) has been an indoor environmental quality (IEQ) concern in schools and other buildings for many years. Newer designs, construction practices and building materials for "green" buildings and the use of "environmentally friendly" products have the promise of lowering chemical exposure. This study examines VOCs and IEQ parameters in 144 classrooms in 37 conventional and high performance elementary schools in the U.S. with the objectives of providing a comprehensive analysis and updating the literature. Tested schools were built or renovated in the past 15 years, and included comparable numbers of conventional, Energy Star, and Leadership in Energy and Environmental Design (LEED)-certified buildings. Indoor and outdoor VOC samples were collected and analyzed by thermal desorption, gas chromatography and mass spectroscopy for 94 compounds. Aromatics, alkanes and terpenes were the major compound groups detected. Most VOCs had mean concentrations below 5 µg/m³, and most indoor/outdoor concentration ratios ranged from one to 10. For 16 VOCs, the within-school variance of concentrations exceeded that between schools and, overall, no major differences in VOC concentrations were found between conventional and high performance buildings. While VOC concentrations have declined from levels measured in earlier decades, opportunities remain to improve indoor air quality (IAQ) by limiting emissions from building-related sources and by increasing ventilation rates.

  9. Volatile Organic Compounds (VOCs) in Conventional and High Performance School Buildings in the U.S.

    PubMed Central

    Zhong, Lexuan; Su, Feng-Chiao; Batterman, Stuart

    2017-01-01

    Exposure to volatile organic compounds (VOCs) has been an indoor environmental quality (IEQ) concern in schools and other buildings for many years. Newer designs, construction practices and building materials for “green” buildings and the use of “environmentally friendly” products have the promise of lowering chemical exposure. This study examines VOCs and IEQ parameters in 144 classrooms in 37 conventional and high performance elementary schools in the U.S. with the objectives of providing a comprehensive analysis and updating the literature. Tested schools were built or renovated in the past 15 years, and included comparable numbers of conventional, Energy Star, and Leadership in Energy and Environmental Design (LEED)-certified buildings. Indoor and outdoor VOC samples were collected and analyzed by thermal desorption, gas chromatography and mass spectroscopy for 94 compounds. Aromatics, alkanes and terpenes were the major compound groups detected. Most VOCs had mean concentrations below 5 µg/m3, and most indoor/outdoor concentration ratios ranged from one to 10. For 16 VOCs, the within-school variance of concentrations exceeded that between schools and, overall, no major differences in VOC concentrations were found between conventional and high performance buildings. While VOC concentrations have declined from levels measured in earlier decades, opportunities remain to improve indoor air quality (IAQ) by limiting emissions from building-related sources and by increasing ventilation rates. PMID:28117727

  10. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using minimax approximation (MA) and propose a new optimal ISFD scheme based on the modified TE (MTE) with the MA method. The new ISFD scheme combines the advantage of the TE method, which guarantees great accuracy at small wavenumbers, with the property of the MA method, which keeps the numerical errors within a limited bound. Thus it achieves great accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function and using a Remez algorithm to minimize its maximum. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation and is more efficient than the conventional ISFD scheme for elastic modeling.
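The motivation for minimax-optimized coefficients can be illustrated with the spectral response of a staggered-grid first-derivative stencil: Taylor-derived coefficients are extremely accurate at small dimensionless wavenumbers kh, but their error grows rapidly as kh increases, which is exactly the regime a minimax (Remez-type) fit controls. A sketch using an explicit 4th-order stencil for simplicity (the paper's implicit scheme and MTE/MA construction are more involved):

```python
import math

def stag_fd_response(coeffs, beta):
    # Normalized spectral response of an explicit staggered-grid
    # first-derivative stencil at dimensionless wavenumber beta = k*h.
    # The exact derivative corresponds to a response equal to beta itself.
    return sum(2.0 * c * math.sin((m + 0.5) * beta) for m, c in enumerate(coeffs))

taylor4 = [9.0 / 8.0, -1.0 / 24.0]  # classical 4th-order Taylor coefficients
err_small = abs(stag_fd_response(taylor4, 0.1) - 0.1)  # tiny at small k*h
err_large = abs(stag_fd_response(taylor4, 2.0) - 2.0)  # grows at large k*h
```

Optimized coefficients trade a little of the small-kh accuracy for a bounded error over a much wider wavenumber band, which is the property the MTE-based scheme exploits.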

  11. Automated palpation for breast tissue discrimination based on viscoelastic biomechanical properties.

    PubMed

    Tsukune, Mariko; Kobayashi, Yo; Miyashita, Tomoyuki; Fujie, G Masakatsu

    2015-05-01

    Accurate, noninvasive methods are sought for breast tumor detection and diagnosis. In particular, a need for noninvasive techniques that measure both the nonlinear elastic and viscoelastic properties of breast tissue has been identified. For diagnostic purposes, it is important to select a nonlinear viscoelastic model with a small number of parameters that highly correlate with histological structure. However, the combination of conventional viscoelastic models with nonlinear elastic models requires a large number of parameters. A nonlinear viscoelastic model of breast tissue based on a simple equation with few parameters was developed and tested. The nonlinear viscoelastic properties of soft tissues in porcine breast were measured experimentally using fresh ex vivo samples. Robotic palpation was used for measurements employed in a finite element model. These measurements were used to calculate nonlinear viscoelastic parameters for fat, fibroglandular breast parenchyma and muscle. The ability of these parameters to distinguish the tissue types was evaluated in a two-step statistical analysis that included Holm's pairwise [Formula: see text] test. The discrimination error rate of a set of parameters was evaluated by the Mahalanobis distance. Ex vivo testing in porcine breast revealed significant differences in the nonlinear viscoelastic parameters among combinations of three tissue types. The discrimination error rate was low among all tested combinations of three tissue types. Although tissue discrimination was not achieved using only a single nonlinear viscoelastic parameter, a set of four nonlinear viscoelastic parameters were able to reliably and accurately discriminate fat, breast fibroglandular tissue and muscle.
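The discrimination error rate above was evaluated with the Mahalanobis distance, which scales feature differences by the inverse covariance. A minimal sketch for a two-parameter feature vector, with hypothetical tissue means and covariance (not the paper's fitted viscoelastic parameters):

```python
import math

def mahalanobis(x, mean, cov_inv):
    # sqrt((x - mean)^T C^-1 (x - mean)) for small, explicit matrices.
    d = [a - b for a, b in zip(x, mean)]
    q = sum(d[i] * sum(cov_inv[i][j] * d[j] for j in range(len(d)))
            for i in range(len(d)))
    return math.sqrt(q)

# Hypothetical 2-parameter viscoelastic feature space.
mean_fat = [1.0, 0.5]               # hypothetical class mean for fat
cov_inv = [[2.0, 0.0], [0.0, 4.0]]  # hypothetical inverse covariance (diagonal)
d_same = mahalanobis([1.0, 0.5], mean_fat, cov_inv)
d_far = mahalanobis([2.0, 1.0], mean_fat, cov_inv)
```

A sample would then be assigned to the tissue class whose mean lies at the smallest Mahalanobis distance, mirroring the multi-parameter discrimination the abstract describes.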

  12. Comparison of scoliosis measurements based on three-dimensional vertebra vectors and conventional two-dimensional measurements: advantages in evaluation of prognosis and surgical results.

    PubMed

    Illés, Tamás; Somoskeöy, Szabolcs

    2013-06-01

    A new concept of vertebra vectors based on spinal three-dimensional (3D) reconstructions of images from the EOS system, a new low-dose X-ray imaging device, was recently proposed to facilitate interpretation of EOS 3D data, especially with regard to horizontal plane images. This retrospective study was aimed at the evaluation of the spinal layout visualized by EOS 3D and vertebra vectors before and after surgical correction, the comparison of scoliotic spine measurement values based on 3D vertebra vectors with measurements using conventional two-dimensional (2D) methods, and an evaluation of horizontal plane vector parameters for their relationship with the magnitude of scoliotic deformity. 95 patients with adolescent idiopathic scoliosis operated on according to the Cotrel-Dubousset principle underwent EOS X-ray examinations pre- and postoperatively, followed by 3D reconstructions and generation of vertebra vectors in a calibrated coordinate system to calculate vector coordinates and parameters, as published earlier. Differences between values from conventional 2D Cobb methods and methods based on vertebra vectors were evaluated with a means-comparison t-test, and the relationship of corresponding parameters was analyzed by bivariate correlation. The relationship of horizontal plane vector parameters with the magnitude of scoliotic deformity and the results of surgical correction was analyzed by Pearson correlation and linear regression. In comparison with manual 2D methods, a very close relationship was detectable in vertebra vector-based curvature data for coronal curves (preop r 0.950, postop r 0.935) and thoracic kyphosis (preop r 0.893, postop r 0.896), while the small difference found in L1-L5 lordosis values (preop r 0.763, postop r 0.809) was shown to be strongly related to the magnitude of the corresponding L5 wedge. The correlation analysis revealed a strong correlation between the magnitude of scoliosis and the lateral translation of the apical vertebra in the horizontal plane, represented by the horizontal plane coordinates of the terminal and initial points of the apical vertebra vectors (r 0.701; r 0.667). A weaker correlation was detected between the axial rotation of the apical vertebrae and the magnitude of the frontal curves (r 0.459). Vertebra vectors provide a key opportunity to visualize spinal deformities in all three planes simultaneously. Measurement methods based on vertebra vectors proved just as accurate and reliable as conventional measurement methods for coronal and sagittal plane parameters. In addition, the horizontal plane projection of the curves can be studied using the same vertebra vectors. Based on the vertebra vector data, in the surgical treatment of spinal deformities the reduction of the lateral translation of the vertebrae appears to matter more to the result of the surgical correction than correction of the axial rotation.

  13. Reliability of conventional shade guides in teeth color determination.

    PubMed

    Todorović, Ana; Todorović, Aleksandar; Gostović, Aleksandra Spadijer; Lazić, Vojkan; Milicić, Biljana; Djurisić, Slobodan

    2013-10-01

    Color matching in prosthodontic therapy is a very important task because it influences the esthetic value of dental restorations. Visual shade matching is the most frequently applied method in clinical practice. Instrumental measurements provide objective, quantified data for color assessment of natural teeth and restorations. In instrumental shade analysis, the goal is to achieve the smallest possible deltaE value, indicating the most accurate shade match. The aim of this study was to evaluate the reliability of commercially available ceramic shade guides. A VITA Easyshade spectrophotometer (VITA, Germany) was used for instrumental color determination. Using this device, color samples of ten VITA Classical and ten VITA 3D-Master shade guides were analyzed. Each color sample from all shade guides was measured three times, and the basic parameters of color quality were examined: deltaL, deltaC, deltaH, deltaE, and deltaElc. Based on these parameters, the spectrophotometer rates the shade match as good, fair, or adjust. After 1,248 measurements of ceramic color samples, the frequencies of the ratings adjust, fair, and good differed in a statistically significant way between VITA Classical and VITA 3D-Master shade guides (p = 0.002). In VITA Classical shade guides, 27.1% of cases were scored as adjust, 66.3% as fair, and 6.7% as good. In VITA 3D-Master shade guides, 30.9% of cases were evaluated as adjust, 66.4% as fair, and 2.7% as good. Color samples from different shade guides produced by the same manufacturer show variability in basic color parameters, which once again demonstrates the imprecision and nonuniformity of the conventional method.
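In CIELAB, the component differences reported above combine into the overall color difference as deltaE = sqrt(deltaL² + deltaC² + deltaH²), equivalent to the familiar form in deltaL, delta-a, delta-b. A minimal sketch with hypothetical component values (not the study's measurements):

```python
import math

def delta_e(d_l, d_c, d_h):
    # CIELAB overall color difference from lightness, chroma and hue
    # components: deltaE = sqrt(dL^2 + dC^2 + dH^2).
    return math.sqrt(d_l ** 2 + d_c ** 2 + d_h ** 2)

de = delta_e(1.2, 0.8, 0.5)  # hypothetical component differences
```

Device thresholds on deltaE (and related quantities such as the deltaElc mentioned in the record) are what drive the good/fair/adjust ratings.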

  14. Continuous time limits of the utterance selection model

    NASA Astrophysics Data System (ADS)

    Michaud, Jérôme

    2017-02-01

    In this paper we derive alternative continuous time limits of the utterance selection model (USM) for language change [G. J. Baxter et al., Phys. Rev. E 73, 046118 (2006), 10.1103/PhysRevE.73.046118]. This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behavior of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks: the regular networks and the star-shaped networks. The analysis reveals and quantifies a finite-size effect of the dynamics. If we increase the size of the network by keeping all the other parameters constant, we transition from a state where conventions emerge to a state where no convention emerges. Furthermore, we show that the degree of a node acts as a time scale. For heterogeneous networks such as star-shaped networks, the time scale difference can become very large, leading to a noisier behavior of highly connected nodes.

  15. Do the conventional clinicopathologic parameters predict for response and survival in head and neck cancer patients undergoing neoadjuvant chemotherapy?

    PubMed

    Fonseca, E; Cruz, J J; Dueñas, A; Gómez, A; Sánchez, P; Martín, G; Nieto, A; Soria, P; Muñoz, A; Gómez, J L; Pardal, J L

    1996-01-01

    Neoadjuvant chemotherapy (NAC) for head and neck carcinoma is still an important treatment modality. The prognostic value of patient and tumor parameters has been extensively evaluated in several trials, yielding mixed results. We report the prognostic factors emerging from a group of patients undergoing neoadjuvant chemotherapy. From April 1986 to June 1992, 149 consecutive patients received cisplatin-5-fluorouracil-based neoadjuvant chemotherapy. After four courses of chemotherapy, patients underwent local-regional treatment with surgery, radiation or both. A variety of patient and tumor characteristics were evaluated as predictors of response to chemotherapy and survival. The complete response, partial response and no response rates to NAC were 52%, 33% and 15%, respectively. No parameters predicted response to chemotherapy. At a maximum follow-up of 87 months, overall survival was 39% and disease-free survival was 49%. Variables shown to be predictors of survival in univariate analyses were age, performance status, histology, site, T, N, stage, and response to chemotherapy. Using Cox regression analysis, only complete response to induction chemotherapy (P = 0.0006), performance status (P = 0.03), stage (P = 0.01), age (P = 0.03) and primary tumor site (P = 0.04) emerged as independent prognostic factors for survival. Complete response to chemotherapy was confirmed as the strongest prognostic factor influencing survival. However, conventional clinicopathologic factors did not predict response; hence, potential biologic and molecular prognostic factors for response must be sought. At present, much effort should be directed toward improving the complete response rate, which seems to be a prerequisite for prolonging survival.

  16. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for the study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inverse algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy-to-interpret and unique NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
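    As a rough illustration of the trade-off described above, forward stepwise selection with AIC can be sketched for a one-dimensional relaxation inversion. The candidate rate grid, the Gaussian form of AIC, and the greedy loop below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def aic(residual_ss, n_obs, k_params):
    # Gaussian AIC up to an additive constant: n * ln(RSS / n) + 2k
    return n_obs * np.log(max(residual_ss, 1e-300) / n_obs) + 2 * k_params

def forward_stepwise_inversion(t, y, candidate_rates):
    """Greedily add exponential components exp(-R*t) while AIC improves."""
    selected, best_aic = [], np.inf
    while True:
        trial = None
        for r in candidate_rates:
            if r in selected:
                continue
            rates = selected + [r]
            A = np.exp(-np.outer(t, rates))              # design matrix
            amps = np.linalg.lstsq(A, y, rcond=None)[0]  # non-negativity omitted for brevity
            rss = float(np.sum((A @ amps - y) ** 2))
            score = aic(rss, len(y), len(rates))
            if trial is None or score < trial[0]:
                trial = (score, r)
        if trial is None or trial[0] >= best_aic:
            break                 # parsimony: stop when AIC no longer drops
        best_aic = trial[0]
        selected.append(trial[1])
    return selected, best_aic
```

    In a real T1-T2 or diffusion-relaxation inversion the design matrix would be a Kronecker product of kernels and the amplitudes would be constrained non-negative; the greedy AIC loop itself is unchanged.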

  17. Investigation on the properties of omnidirectional photonic band gaps in two-dimensional plasma photonic crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hai-Feng (Nanjing Artillery Academy, Nanjing 211132; e-mail: hanlor@163.com); Liu, Shao-Bin

    2016-01-15

    The properties of omnidirectional photonic band gaps (OBGs) in two-dimensional plasma photonic crystals (2D PPCs) are theoretically investigated by the modified plane wave expansion method. In the simulation, we consider the off-plane incident wave vector. The configuration of 2D PPCs is the triangular lattices filled with the nonmagnetized plasma cylinders in the homogeneous and isotropic dielectric background. The calculated results show that the proposed 2D PPCs possess a flatbands region and the OBGs. Compared with the OBGs in the conventional 2D dielectric-air PCs, they can be obtained more easily and are enlarged in the 2D PPCs with a similar structure. The effects of configurational parameters of the PPCs on the OBGs are also studied. The simulated results demonstrate that the locations of OBGs can be tuned easily by manipulating those parameters except for changing plasma collision frequency. The achieved OBGs can be enlarged by optimizations. The OBGs of two novel configurations of PPCs with different cross sections are computed for a comparison. Both configurations have the advantage of obtaining larger OBGs compared with the conventional configuration, since the symmetry of 2D PPCs is broken by different sizes of periodically inserted plasma cylinders or connected by the embedded plasma cylinders with thin veins. The analysis of the results shows that the bandwidths of OBGs can be tuned by changing geometric and physical parameters of such two PPCs structures. The theoretical results may open a new scope for designing the omnidirectional reflectors or mirrors based on the 2D PPCs.

  18. Experimental Verification of the Rudder-Free Stability Theory for an Airplane Model Equipped with Rudders Having Negative Floating Tendency and Negligible Friction

    NASA Technical Reports Server (NTRS)

    McKinney, Marion O.; Maggin, Bernard

    1944-01-01

    An investigation has been made in the Langley free-flight tunnel to obtain an experimental verification of the theoretical rudder-free stability characteristics of an airplane model equipped with conventional rudders having negative floating tendencies and negligible friction. The model used in the tests was equipped with a conventional single vertical tail having rudder area 40 percent of the vertical tail area. The model was tested both in free flight and mounted on a strut that allowed freedom only in yaw. Tests were made with three different amounts of rudder aerodynamic balance and with various values of mass, moment of inertia, and center-of-gravity location of the rudder. Most of the stability derivatives required for the theoretical calculations were determined from forced and free-oscillation tests of the particular model tested. The theoretical analysis showed that the rudder-free motions of an airplane consist largely of two oscillatory modes - a long-period oscillation somewhat similar to the normal rudder-fixed oscillation and a short-period oscillation introduced only when the rudder is set free. It was found possible in the tests to create lateral instability of the rudder-free short-period mode by large values of rudder mass parameters even though the rudder-fixed condition was highly stable. The results of the tests and calculation indicated that for most present-day airplanes having rudders of negative floating tendency, the rudder-free stability characteristics may be examined by simply considering the dynamic lateral stability using the value of the directional-stability parameter Cn(sub p) for the rudder-free condition in the conventional controls-fixed lateral-stability equations. For very large airplanes having relatively high values of the rudder mass parameters with respect to the rudder aerodynamic parameters, however, analysis of the rudder-free stability should be made with the complete equations of motion. 
Good agreement between calculated and measured rudder-free stability characteristics was obtained by use of the general rudder-free stability theory, in which four degrees of lateral freedom are considered. When the assumption is made that the rolling motions alone, or the lateral and rolling motions, may be neglected in the calculations of rudder-free stability, it is possible to predict satisfactorily the characteristics of the long-period (Dutch roll type) rudder-free oscillation for airplanes only when the effective-dihedral angle is small. With these simplifying assumptions, however, satisfactory prediction of the short-period oscillation may be obtained for any dihedral. Further simplification of the theory based on the assumption that the rudder moment of inertia might be disregarded was found to be invalid because this assumption made it impossible to calculate the characteristics of the short-period oscillations.

  19. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded a significant relative reduction in standard deviation of 12%–29% and 32%–70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922

  20. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach.

    PubMed

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges

    2013-10-01

    Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded a significant relative reduction in standard deviation of 12%-29% and 32%-70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
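    The role of the preconditioner can be illustrated on a generic quadratic model problem. The sketch below uses a Jacobi-type diagonal preconditioner as a stand-in for the parameter-to-sensitivity ratio described in the abstract; it is not the authors' PET cost function:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD A x = b, with the inverse
    preconditioner supplied as a vector of diagonal entries."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter
```

    On an ill-conditioned diagonal test matrix, Jacobi preconditioning (M_inv_diag = 1/diag(A)) collapses the iteration count relative to plain CG (M_inv_diag all ones), mirroring the 10-versus-500 iteration contrast reported above.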

  1. Rapid analysis of charge variants of monoclonal antibodies using non-linear salt gradient in cation-exchange high performance liquid chromatography.

    PubMed

    Joshi, Varsha; Kumar, Vijesh; Rathore, Anurag S

    2015-08-07

    A method is proposed for rapid development of a short, analytical cation exchange high performance liquid chromatography method for analysis of charge heterogeneity in monoclonal antibody products. The parameters investigated and optimized include pH, shape of elution gradient and length of the column. It is found that the most important parameter for development of a shorter method is the choice of the shape of elution gradient. In this paper, we propose a step-by-step approach to develop a non-linear sigmoidal-shape gradient for analysis of charge heterogeneity for two different monoclonal antibody products. The use of this gradient not only decreases the run time of the method to 4 min, compared with the conventional method that takes more than 40 min, but also retains the resolution. Superiority of the phosphate gradient over sodium chloride gradient for elution of mAbs is also observed. The method has been successfully evaluated for specificity, sensitivity, linearity, limit of detection, and limit of quantification. Application of this method as a potential at-line process analytical technology tool has been suggested. Copyright © 2015 Elsevier B.V. All rights reserved.
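    A sigmoidal (logistic) elution profile of the kind described can be sketched as follows; the start/end %B, inflection time and steepness values are hypothetical placeholders, not the published method parameters:

```python
import numpy as np

def sigmoidal_gradient(t, b_start=5.0, b_end=60.0, t_mid=2.0, steepness=4.0):
    """%B (eluent strength) at time t (min) following a logistic ramp.
    b_start/b_end are the initial/final %B; t_mid is the inflection time."""
    return b_start + (b_end - b_start) / (1.0 + np.exp(-steepness * (t - t_mid)))
```

    Compared with a linear ramp over the same 4 min window, the logistic shape spends most of the run near the eluent strengths where the charge variants actually elute, which is what allows the run time to shrink without losing resolution.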

  2. Muscle contraction phenotypic analysis enabled by optogenetics reveals functional relationships of sarcomere components in Caenorhabditis elegans.

    PubMed

    Hwang, Hyundoo; Barnes, Dawn E; Matsunaga, Yohei; Benian, Guy M; Ono, Shoichiro; Lu, Hang

    2016-01-29

    The sarcomere, the fundamental unit of muscle contraction, is a highly-ordered complex of hundreds of proteins. Despite decades of genetics work, the functional relationships and the roles of those sarcomeric proteins in animal behaviors remain unclear. In this paper, we demonstrate that optogenetic activation of the motor neurons that induce muscle contraction can facilitate quantitative studies of muscle kinetics in C. elegans. To increase the throughput of the study, we trapped multiple worms in parallel in a microfluidic device and illuminated them to photoactivate channelrhodopsin-2, inducing contractions in body wall muscles. Using image processing, the change in body size was quantified over time. A total of five parameters including rate constants for contraction and relaxation were extracted from the optogenetic assay as descriptors of sarcomere functions. To potentially relate the genes encoding the sarcomeric proteins functionally, a hierarchical clustering analysis was conducted on the basis of those parameters. Because it assesses physiological output different from conventional assays, this method provides a complement to the phenotypic analysis of C. elegans muscle mutants currently performed in many labs; the clusters may provide new insights and drive new hypotheses for functional relationships among the many sarcomere components.
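    One such descriptor, a rate constant for contraction or relaxation, can be estimated from a normalized body-size trace by a log-linear fit, assuming a single-exponential approach to a plateau (an assumption made here for illustration; the paper's exact fitting procedure may differ):

```python
import numpy as np

def fit_rate_constant(t, trace, plateau):
    """Log-linear fit of an exponential approach to a plateau:
    trace(t) = plateau + (trace[0] - plateau) * exp(-k * t); returns k."""
    y = np.log(np.abs(trace - plateau))   # linear in t with slope -k
    slope, _intercept = np.polyfit(t, y, 1)
    return -slope
```

    The five parameters extracted per strain would then form the feature vectors fed to hierarchical clustering.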

  3. Muscle contraction phenotypic analysis enabled by optogenetics reveals functional relationships of sarcomere components in Caenorhabditis elegans

    NASA Astrophysics Data System (ADS)

    Hwang, Hyundoo; Barnes, Dawn E.; Matsunaga, Yohei; Benian, Guy M.; Ono, Shoichiro; Lu, Hang

    2016-01-01

    The sarcomere, the fundamental unit of muscle contraction, is a highly-ordered complex of hundreds of proteins. Despite decades of genetics work, the functional relationships and the roles of those sarcomeric proteins in animal behaviors remain unclear. In this paper, we demonstrate that optogenetic activation of the motor neurons that induce muscle contraction can facilitate quantitative studies of muscle kinetics in C. elegans. To increase the throughput of the study, we trapped multiple worms in parallel in a microfluidic device and illuminated them to photoactivate channelrhodopsin-2, inducing contractions in body wall muscles. Using image processing, the change in body size was quantified over time. A total of five parameters including rate constants for contraction and relaxation were extracted from the optogenetic assay as descriptors of sarcomere functions. To potentially relate the genes encoding the sarcomeric proteins functionally, a hierarchical clustering analysis was conducted on the basis of those parameters. Because it assesses physiological output different from conventional assays, this method provides a complement to the phenotypic analysis of C. elegans muscle mutants currently performed in many labs; the clusters may provide new insights and drive new hypotheses for functional relationships among the many sarcomere components.

  4. Resolution of VTI anisotropy with elastic full-waveform inversion: theory and basic numerical examples

    NASA Astrophysics Data System (ADS)

    Podgornova, O.; Leaney, S.; Liang, L.

    2018-07-01

    Extracting medium properties from seismic data faces some limitations due to the finite frequency content of the data and restricted spatial positions of the sources and receivers. Some distributions of the medium properties have little impact on the data, or none at all. If these properties are used as the inversion parameters, then the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and non-sensitive properties are spatial distributions of some non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret and enhances results of the widely used radiation pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. Also, we establish ways to quantify a spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.

  5. Structural, dielectric and magnetic properties of ZnFe2O4-Na0.5Bi0.5TiO3 multiferroic composites

    NASA Astrophysics Data System (ADS)

    Bhasin, Tanvi; Agarwal, Ashish; Sanghi, Sujata; Yadav, Manisha; Tuteja, Muskaan; Singh, Jogender; Rani, Sonia

    2018-04-01

    Multiferroic xNa0.5Bi0.5TiO3-(1-x)ZnFe2O4 (x=0.10, 0.20) composites were prepared by conventional solid state reaction method. Rietveld analysis of XRD data shows that samples exhibit both cubic (Fd-3m) and rhombohedral (R3c) crystal structure. Structural parameters and unit cell volume of samples vary with composition. The dielectric constant and dielectric loss (tanδ) display dispersion at low frequency due to space charge polarization and inhomogeneity in the composites. Magnetic analysis depicts the antiferromagnetic behavior of composites and magnetization is enhanced with the introduction of ferrite (ZnFe2O4) phase.

  6. Gradient pattern analysis applied to galaxy morphology

    NASA Astrophysics Data System (ADS)

    Rosa, R. R.; de Carvalho, R. R.; Sautter, R. A.; Barchi, P. H.; Stalder, D. H.; Moura, T. C.; Rembold, S. B.; Morell, D. R. F.; Ferreira, N. C.

    2018-06-01

    Gradient pattern analysis (GPA) is a well-established technique for measuring gradient bilateral asymmetries of a square numerical lattice. This paper introduces an improved version of GPA designed for galaxy morphometry. We show the performance of the new method on a selected sample of 54 896 objects from the SDSS-DR7 in common with Galaxy Zoo 1 catalogue. The results suggest that the second gradient moment, G2, has the potential to dramatically improve over more conventional morphometric parameters. It separates early- from late-type galaxies better (˜ 90 per cent) than the CAS system (C˜ 79 per cent, A˜ 50 per cent, S˜ 43 per cent) and a benchmark test shows that it is applicable to hundreds of thousands of galaxies using typical processing systems.
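    The flavor of a gradient-asymmetry measure can be conveyed with a crude proxy: the norm of the vector sum of all lattice gradients divided by the sum of their norms (near 0 for a symmetric field, 1 for a uniform ramp). This is a simplification for illustration only, not the published G2 definition:

```python
import numpy as np

def asymmetry_ratio(img):
    """|sum of gradient vectors| / (sum of gradient norms) over the lattice.
    Symmetric fields cancel pairwise and score near 0; a uniform ramp scores 1."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    total = np.hypot(gx, gy).sum()
    if total == 0.0:
        return 0.0
    return float(np.hypot(gx.sum(), gy.sum()) / total)
```

    The published G2 additionally removes symmetric vector pairs before measuring the residual field, which is what makes it sensitive to the subtle bilateral asymmetries that separate early- from late-type galaxies.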

  7. Concentration analysis of breast tissue phantoms with terahertz spectroscopy

    PubMed Central

    Truong, Bao C. Q.; Fitzgerald, Anthony J.; Fan, Shuting; Wallace, Vincent P.

    2018-01-01

    Terahertz imaging has been previously shown to be capable of distinguishing normal breast tissue from its cancerous form, indicating its applicability to breast conserving surgery. The heterogeneous composition of breast tissue is among the main challenges to progressing this promising research towards a practical application. In this paper, two concentration analysis methods are proposed for analyzing phantoms mimicking breast tissue. The dielectric properties and the double Debye parameters were used to determine the phantom composition. The first method is wholly based on the conventional effective medium theory while the second one combines this theoretical model with empirical polynomial models. Through assessing the accuracy of these methods, their potential for application to quantifying breast tissue pathology was confirmed. PMID:29541525
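    The double Debye model mentioned above parameterizes the complex permittivity as eps(ω) = eps_inf + Δeps1/(1 + iωτ1) + Δeps2/(1 + iωτ2). A minimal sketch follows; the parameter values used below are illustrative, not fitted phantom values:

```python
import numpy as np

def double_debye(freq_hz, eps_inf, d_eps1, tau1, d_eps2, tau2):
    """Complex relative permittivity from the double Debye model:
    eps(w) = eps_inf + d_eps1 / (1 + i*w*tau1) + d_eps2 / (1 + i*w*tau2),
    with w = 2*pi*f and tau1, tau2 the slow/fast relaxation times (s)."""
    w = 2.0 * np.pi * freq_hz
    return eps_inf + d_eps1 / (1 + 1j * w * tau1) + d_eps2 / (1 + 1j * w * tau2)
```

    Fitting these five parameters to measured terahertz spectra, and then mixing the constituent permittivities with an effective medium rule, is the route by which the phantom composition is recovered.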

  8. Gradient Pattern Analysis Applied to Galaxy Morphology

    NASA Astrophysics Data System (ADS)

    Rosa, R. R.; de Carvalho, R. R.; Sautter, R. A.; Barchi, P. H.; Stalder, D. H.; Moura, T. C.; Rembold, S. B.; Morell, D. R. F.; Ferreira, N. C.

    2018-04-01

    Gradient pattern analysis (GPA) is a well-established technique for measuring gradient bilateral asymmetries of a square numerical lattice. This paper introduces an improved version of GPA designed for galaxy morphometry. We show the performance of the new method on a selected sample of 54,896 objects from the SDSS-DR7 in common with Galaxy Zoo 1 catalog. The results suggest that the second gradient moment, G2, has the potential to dramatically improve over more conventional morphometric parameters. It separates early from late type galaxies better (˜90%) than the CAS system (C ˜ 79%, A ˜ 50%, S ˜ 43%) and a benchmark test shows that it is applicable to hundreds of thousands of galaxies using typical processing systems.

  9. Using the technique of computed tomography for nondestructive analysis of pharmaceutical dosage forms

    NASA Astrophysics Data System (ADS)

    de Oliveira, José Martins, Jr.; Mangini, F. Salvador; Carvalho Vila, Marta Maria Duarte; Vinícius Chaud, Marco

    2013-05-01

    This work presents an alternative, non-conventional technique for evaluating physico-chemical properties of pharmaceutical dosage forms: we used computed tomography (CT) as a nondestructive technique to visualize internal structures of pharmaceutical dosage forms and to conduct static and dynamical studies. The studies were conducted involving static and dynamic situations through the use of tomographic images generated by the scanner at the University of Sorocaba - Uniso. We have shown that through the use of tomographic images it is possible to conduct studies of porosity and density, analysis of morphological parameters, and studies of dissolution. Our results are in agreement with the literature, showing that CT is a powerful tool for use in the pharmaceutical sciences.

  10. Multivariate Meta-Analysis of Heterogeneous Studies Using Only Summary Statistics: Efficiency and Robustness

    PubMed Central

    Liu, Dungang; Liu, Regina; Xie, Minge

    2014-01-01

    Meta-analysis has been widely used to synthesize evidence from multiple studies for common hypotheses or parameters of interest. However, it has not yet been fully developed for incorporating heterogeneous studies, which arise often in applications due to different study designs, populations or outcomes. For heterogeneous studies, the parameter of interest may not be estimable for certain studies, and in such a case, these studies are typically excluded from conventional meta-analysis. The exclusion of part of the studies can lead to a non-negligible loss of information. This paper introduces a meta-analysis for heterogeneous studies by combining the confidence density functions derived from the summary statistics of individual studies, hence referred to as the CD approach. It includes all the studies in the analysis and makes use of all information, direct as well as indirect. Under a general likelihood inference framework, this new approach is shown to have several desirable properties, including: i) it is asymptotically as efficient as the maximum likelihood approach using individual participant data (IPD) from all studies; ii) unlike the IPD analysis, it suffices to use summary statistics to carry out the CD approach, and individual-level data are not required; and iii) it is robust against misspecification of the working covariance structure of the parameter estimates. Besides its own theoretical significance, the last property also substantially broadens the applicability of the CD approach. All the properties of the CD approach are further confirmed by data simulated from a randomized clinical trials setting as well as by real data on aircraft landing performance. Overall, one obtains a unifying approach for combining summary statistics, subsuming many of the existing meta-analysis methods as special cases. PMID:26190875
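    For normally distributed summary statistics, combining per-study confidence densities reduces to inverse-variance weighting. The sketch below shows this minimal special case of the CD approach, not its general form:

```python
import math

def combine_normal_cds(estimates, std_errors):
    """Combine per-study normal confidence densities N(est_i, se_i^2).
    For normal CDs the combined CD is again normal, with inverse-variance
    weighted mean and pooled standard error; returns (estimate, se)."""
    weights = [1.0 / se ** 2 for se in std_errors]
    wsum = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / wsum
    return combined, math.sqrt(1.0 / wsum)
```

    The general CD approach replaces each normal density with whatever confidence density the study's own summary statistics support, which is what lets studies with non-estimable parameters still contribute indirect information.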

  11. A motion-constraint logic for moving-base simulators based on variable filter parameters

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.

    1974-01-01

    A motion-constraint logic for moving-base simulators has been developed that is a modification to the linear second-order filters generally employed in conventional constraints. In the modified constraint logic, the filter parameters are not constant but vary with the instantaneous motion-base position to increase the constraint as the system approaches the positional limits. With the modified constraint logic, accelerations larger than originally expected are limited while conventional linear filters would result in automatic shutdown of the motion base. In addition, the modified washout logic has frequency-response characteristics that are an improvement over conventional linear filters with braking for low-frequency pilot inputs. During simulated landing approaches of an externally blown flap short take-off and landing (STOL) transport using decoupled longitudinal controls, the pilots were unable to detect much difference between the modified constraint logic and the logic based on linear filters with braking.
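    The idea of filter parameters that vary with instantaneous motion-base position can be sketched with a second-order filter whose natural frequency stiffens near the travel limit. The stiffening law and all numeric values below are hypothetical, for illustration only:

```python
def variable_washout_step(x, v, accel_cmd, dt, x_limit, wn0=1.0, zeta=0.7, gain=4.0):
    """One explicit-Euler step of a second-order washout filter whose natural
    frequency wn grows with instantaneous platform position x, increasing the
    restoring force as the base approaches its travel limit x_limit."""
    wn = wn0 * (1.0 + gain * (x / x_limit) ** 2)   # position-dependent parameter
    a = accel_cmd - 2.0 * zeta * wn * v - wn ** 2 * x
    return x + v * dt, v + a * dt
```

    Driving this with a sustained 2.0 m/s^2 command and a 1 m limit, the platform settles near 0.5 m instead of running into the stop, whereas a fixed filter with wn0 = 1.0 would settle at 2.0 m, beyond the limit, forcing a shutdown.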

  12. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119

  13. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
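    The conventional MRF reconstruction that the ML framework approximates is per-voxel dictionary matching: pick the dictionary atom with the largest normalized inner product against the voxel's time course. A minimal sketch follows; the dictionary contents and parameter values in the test are placeholders:

```python
import numpy as np

def dictionary_match(signals, dictionary, params):
    """Per-voxel MRF dictionary matching: signals is (n_voxels, n_frames),
    dictionary is (n_atoms, n_frames) of simulated fingerprints, params is
    (n_atoms, n_params) of the tissue parameters that generated each atom.
    Returns the matched parameters and a proton-density-like scale."""
    D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    corr = np.abs(signals @ D.conj().T)            # normalized inner products
    idx = corr.argmax(axis=1)                      # best-matching atom per voxel
    scale = (signals * D[idx].conj()).sum(axis=1)
    return params[idx], scale
```

    In the ML framework this matching step reappears as the first iteration of the proposed algorithm when initialized with a gridding reconstruction, which is the equivalence the abstract describes.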

  14. Determination techniques of Archie’s parameters: a, m and n in heterogeneous reservoirs

    NASA Astrophysics Data System (ADS)

    Mohamad, A. M.; Hamada, G. M.

    2017-12-01

    The determination of water saturation in a heterogeneous reservoir is becoming more challenging, as Archie's equation is only suitable for clean homogeneous formations and Archie's parameters are highly dependent on the properties of the rock. This study focuses on the measurement of Archie's parameters in carbonate and sandstone core samples from Malaysian heterogeneous carbonate and sandstone reservoirs. Three techniques for the determination of Archie's parameters a, m and n were implemented: the conventional technique, core Archie parameter estimation (CAPE) and the three-dimensional regression technique (3D). Using the results obtained by the three different techniques, water saturation graphs were produced to observe the differences in Archie's parameters and their impact on water saturation values. The differences in water saturation values primarily reflect the uncertainty level of Archie's parameters, mainly in carbonate and sandstone rock samples. It is obvious that the accuracy of Archie's parameters has a profound impact on the calculated water saturation values in carbonate and sandstone reservoirs, because regions of high stress reduce electrical conduction as a result of the raised electrical heterogeneity of the heterogeneous carbonate core samples. Due to the unrealistic assumptions involved in the conventional method, it is better to use either the CAPE or 3D method to accurately determine Archie's parameters in heterogeneous as well as homogeneous reservoirs.
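    For reference, Archie's equation with its parameters a, m and n reads Sw = ((a * Rw) / (phi^m * Rt))^(1/n). A small sketch shows how sensitive the computed water saturation is to m; the default values are the conventional clean-sandstone choices, and the inputs in the usage are illustrative:

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).
    rw: formation water resistivity, rt: true formation resistivity,
    phi: porosity (fraction); a, m, n are Archie's parameters."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)
```

    For example, with Rw = 0.05 ohm-m, Rt = 10 ohm-m and phi = 0.2, raising the cementation exponent m from 2.0 to 2.2 increases the computed Sw, which is exactly the kind of parameter-driven spread in water saturation that the three determination techniques are compared on.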

  15. 13A. Integrative Cancer Care: The Life Over Cancer Model

    PubMed Central

    Block, Keith; Block, Penny; Shoham, Jacob

    2013-01-01

    Focus Areas: Integrative Algorithms of Care. Integrative cancer treatment fully blends conventional cancer treatment with integrative therapies such as diet, supplements, exercise and biobehavioral approaches. The Life Over Cancer model comprises three spheres of intervention: improving lifestyle, improving biochemical environment (terrain), and improving tolerance of conventional treatment. These levels are applied within the context of a life-affirming approach to cancer patients and treatment. Clinical staff involved include MDs, psychosocial, physical therapy, and dietetic professionals, all located in a single private clinic, the Block Center for Integrative Cancer Treatment. This session will describe the rationale and operation of the clinical model. An outpatient chemotherapy unit is incorporated in the clinic. Chronomodulated chemotherapy is used for selected chemotherapy drugs. Physical care includes massage, exercise and other therapies as directed by the center's physical therapy staff. Notably, cancer patients who are physically active have lower mortality and recurrence risks. Behavioral approaches are being shown increasingly to affect physiological parameters and expression of genes potentially related to cancer. Thus, biobehavioral approaches such as relaxed abdominal breathing and comfort space imagery are taught to patients by a staff including a clinical psychologist and other practitioners. Nutrition is also emerging as a relevant area of intervention in cancer, with recent guidelines from the American Cancer Society. A team of registered dietitians counsels patients individually and conducts cooking classes. Finally, the Life Over Cancer model includes analysis of multiple biological parameters relevant to cancer and general health, with standard laboratory tests. Inflammation, glycemic variables, immune functioning and other variables are regularly monitored.
Dietary changes and, where necessary, supplements are suggested when laboratory results are abnormal. Interactions of supplements with cancer treatment drugs are monitored. The session will highlight the roles played by both conventional and integrative therapies in the treatment of cancer patients.

  16. Evaluation of a Cloud Resolving Model Using TRMM Observations for Multiscale Modeling Applications

    NASA Technical Reports Server (NTRS)

    Posselt, Derek J.; L'Ecuyer, Tristan; Tao, Wei-Kuo; Hou, Arthur Y.; Stephens, Graeme L.

    2007-01-01

    The climate change simulation community is moving toward use of global cloud resolving models (CRMs); however, current computational resources are not sufficient to run global CRMs over the hundreds of years necessary to produce climate change estimates. As an intermediate step between conventional general circulation models (GCMs) and global CRMs, many climate analysis centers are embedding a CRM in each grid cell of a conventional GCM. These Multiscale Modeling Frameworks (MMFs) represent a theoretical advance over the use of conventional GCM cloud and convection parameterizations, but have been shown to exhibit an overproduction of precipitation in the tropics during the northern hemisphere summer. In this study, simulations of clouds, precipitation, and radiation over the South China Sea using the CRM component of the NASA Goddard MMF are evaluated using retrievals derived from the instruments aboard the Tropical Rainfall Measuring Mission (TRMM) satellite platform for a 46-day period spanning 5 May - 20 June 1998. The NASA Goddard Cumulus Ensemble (GCE) model is forced with observed large-scale forcing derived from soundings taken during the intensive observing period of the South China Sea Monsoon Experiment. It is found that the GCE configuration used in the NASA Goddard MMF responds too vigorously to the imposed large-scale forcing, accumulating too much moisture and producing too much cloud cover during convective phases, and overdrying the atmosphere and suppressing clouds during monsoon break periods. Sensitivity experiments reveal that changes to ice cloud microphysical parameters have a relatively large effect on simulated clouds, precipitation, and radiation, while changes to grid spacing and domain length have little effect on simulation results.
The results motivate a more detailed and quantitative exploration of the sources and magnitude of the uncertainty associated with specified cloud microphysical parameters in the CRM components of MMFs.

  17. Feasibility of Obtaining Quantitative 3-Dimensional Information Using Conventional Endoscope: A Pilot Study

    PubMed Central

    Hyun, Jong Jin; Keum, Bora; Seo, Yeon Seok; Kim, Yong Sik; Jeen, Yoon Tae; Lee, Hong Sik; Um, Soon Ho; Kim, Chang Duck; Ryu, Ho Sang; Lim, Jong-Wook; Woo, Dong-Gi; Kim, Young-Joong; Lim, Myo-Taeg

    2012-01-01

    Background/Aims Three-dimensional (3D) imaging is gaining popularity and has been partly adopted in laparoscopic surgery or robotic surgery but has not been applied to gastrointestinal endoscopy. As a first step, we conducted an experiment to evaluate whether images obtained by conventional gastrointestinal endoscopy could be used to acquire quantitative 3D information. Methods Two endoscopes (GIF-H260) were used in a Borrmann type I tumor model made of clay. The endoscopes were calibrated by correcting the barrel distortion and perspective distortion. Obtained images were converted to gray-level images, and the characteristics of the images were obtained by edge detection. Finally, 3D parameters were measured by using epipolar geometry, two-view geometry, and the pinhole camera model. Results The focal length (f) of the endoscope at 30 mm was 258.49 pixels. The two endoscopes were fixed at a predetermined distance of 12 mm (d12). After matching and calculating the disparity (v2-v1), which was 106 pixels, the calculated length between the camera and object (L) was 29.26 mm. The height of the object projected onto the image (h) was then applied to the pinhole camera model, and the resulting height and width of the object (H) were 38.21 mm and 41.72 mm, respectively. Measurements were conducted from 2 different locations. The measurement errors ranged from 2.98% to 7.00% with the current Borrmann type I tumor model. Conclusions It was feasible to obtain the parameters necessary for 3D analysis and to apply the data to epipolar geometry with a conventional gastrointestinal endoscope to calculate the size of an object. PMID:22977798
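
    The reported distance follows directly from the two-view geometry and pinhole relations named above. A minimal sketch, using only the values quoted in the record (f = 258.49 px, d12 = 12 mm, disparity = 106 px); the helper names are hypothetical, not from the study:

```python
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    # Two-view (epipolar) geometry: L = f * d12 / (v2 - v1)
    return f_px * baseline_mm / disparity_px

def object_size_mm(h_px, distance_mm, f_px):
    # Pinhole camera model: H / L = h / f  =>  H = h * L / f
    return h_px * distance_mm / f_px

# Values from the record: f = 258.49 px, d12 = 12 mm, disparity = 106 px
L = depth_from_disparity(258.49, 12.0, 106.0)
print(round(L, 2))  # 29.26 mm, matching the reported camera-object distance
```

    Once L is known, the same pinhole relation converts a measured pixel height h into the physical height H of the tumor model.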

  18. Improved Determination of the Myelin Water Fraction in Human Brain using Magnetic Resonance Imaging through Bayesian Analysis of mcDESPOT

    PubMed Central

    Bouhrara, Mustapha; Spencer, Richard G.

    2015-01-01

    Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model consisting of MWF and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination with conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and thus the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF parameter, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude.
Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrated the markedly improved accuracy and precision in the estimation of MWF using these Bayesian methods as compared to the stochastic region contraction (SRC) implementation of NLLS. PMID:26499810
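
    The marginalization strategy described above can be illustrated on a toy problem. The sketch below marginalizes a noise nuisance parameter out of a gridded posterior for a two-pool water fraction; the two-exponential signal model, decay constants, and all numerical values are illustrative assumptions, not the mcDESPOT model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-pool "signal": a fast (myelin-like) and a slow water pool
t = np.linspace(0.0, 200e-3, 32)
true_mwf = 0.15
signal = true_mwf * np.exp(-t / 20e-3) + (1 - true_mwf) * np.exp(-t / 80e-3)
data = signal + rng.normal(0.0, 0.01, t.size)

mwf_grid = np.linspace(0.01, 0.5, 200)
sigma_grid = np.linspace(0.002, 0.05, 100)

# p(MWF | data) ~ integral of p(data | MWF, sigma) over sigma (flat priors)
post = np.zeros(mwf_grid.size)
for i, f in enumerate(mwf_grid):
    model = f * np.exp(-t / 20e-3) + (1 - f) * np.exp(-t / 80e-3)
    sse = np.sum((data - model) ** 2)
    # Gaussian likelihood for each candidate noise level, summed over sigma:
    like = sigma_grid ** (-t.size) * np.exp(-sse / (2.0 * sigma_grid ** 2))
    post[i] = like.sum()
post /= post.sum()

print(mwf_grid[np.argmax(post)])  # posterior mode, near the true value 0.15
```

    Treating the noise level as unknown and summing it out, rather than fixing it, is the essential move of the second and third approaches described in the record.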

  19. New evaluation parameter for wearable thermoelectric generators

    NASA Astrophysics Data System (ADS)

    Wijethunge, Dimuthu; Kim, Woochul

    2018-04-01

    Wearable devices constitute a key application area for thermoelectric devices. However, owing to the new constraints of wearable applications, several conventional device optimization techniques are not appropriate, and material evaluation parameters such as the figure of merit (zT) and power factor (PF) tend to be inadequate. We illustrated the incompleteness of zT and PF by performing simulations with different thermoelectric materials; the results indicate a weak correlation between device performance and zT or PF. In this study, we propose a new evaluation parameter, zTwearable, which is better suited to wearable applications than the conventional zT. Owing to size restrictions, gap-filler-based device optimization is extremely critical in wearable devices. For cases in which gap fillers are used, expressions for power, effective thermal conductivity (keff), and optimum load electrical ratio (mopt) are derived. According to the new parameter, the thermal conductivity of the material becomes much more critical. The proposed evaluation parameter zTwearable is useful for selecting an appropriate thermoelectric material among various candidates before the actual design process begins.
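
    For context, the conventional metrics the record argues are inadequate are simple closed-form quantities. A minimal sketch with illustrative Bi2Te3-like property values (assumptions, not from the record; zTwearable itself is not reproduced here):

```python
def power_factor(seebeck_v_per_k, sigma_s_per_m):
    # PF = S^2 * sigma
    return seebeck_v_per_k ** 2 * sigma_s_per_m

def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
    # Conventional zT = S^2 * sigma * T / kappa
    return power_factor(seebeck_v_per_k, sigma_s_per_m) * temp_k / kappa_w_per_mk

# Illustrative values: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 300 K
print(round(figure_of_merit(200e-6, 1e5, 1.5, 300.0), 2))  # 0.8
```

    Note that kappa enters zT only as a divisor; the record's point is that in gap-filler-limited wearable geometries its weight relative to S and sigma changes.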

  20. Effect of fiber addition on slow crack growth of a dental porcelain.

    PubMed

    de Araújo, Maico Dutra; Miranda, Ranulfo Benedito de Paula; Fredericci, Catia; Yoshimura, Humberto Naoyuki; Cesar, Paulo Francisco

    2015-04-01

    To evaluate the effect of the processing method (conventional sintering, S, and heat-pressing, HP) and the addition of potassium titanate fibers, PTF, on the microstructure, mechanical properties (flexural strength, σf, and Weibull parameters, m and σ5%), slow crack growth parameter n (stress corrosion susceptibility coefficient), and optical properties (translucency parameter, TP, and opalescence index, OI) of a feldspathic dental porcelain. Disks (n = 240, Ø12 × 1 mm) of porcelain (Vintage-Halo, Shofu) were produced using the S and HP methods with and without addition of 10 wt% (conventional sintering) or 5 wt% (heat-pressing) of PTF. For the S method, the porcelain was sintered in a conventional furnace. In the HP technique, refractory molds were produced by the lost wax technique. The porcelain slurry was dry-pressed (3 t/30 s) to form a cylinder 12 mm in diameter and 20 mm in height, which was heat-pressed for 5 min at 3.5 bar into the mold. Specimens were tested for biaxial flexural strength in artificial saliva at 37°C. Weibull analysis was used to determine m and σ5%. Slow crack growth (SCG) parameters were determined by the dynamic fatigue test, with specimens tested in biaxial flexure at five stress rates (10⁻², 10⁻¹, 10⁰, 10¹, and 10² MPa/s; n=10), immersed in artificial saliva at 37°C. Parameter n was calculated and statistically analyzed according to ASTM F394-78. Optical properties were determined in a spectrophotometer in the diffuse reflectance mode. The highest n value was obtained by the combination of heat-pressing with fiber addition (37.1), and this value was significantly higher than those obtained by both sintered groups (26.2 for the control group and 27.7 for the group sintered with fiber). Although heat-pressing alone also resulted in higher n values than the sintered groups, there were no significant differences among them.
Fiber addition had no effect on mechanical strength, but it resulted in decreased TP values and increased OI values for both processing methods. Heat-pressing alone was able to reduce the porosity level of the porcelain. Addition of PTF combined with heat-pressing can reduce strength degradation of a dental porcelain compared to sintered materials with or without fibers. Heat-pressing (HP) alone should be considered a good alternative for clinical cases where high translucency is required. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. A comparative study of production performance and animal health practices in organic and conventional dairy systems.

    PubMed

    Silva, Jenevaldo B; Fagundes, Gisele M; Soares, João P G; Fonseca, Adivaldo H; Muir, James P

    2014-10-01

    Health and production management strategies influence the environmental impacts of dairies. The objective of this paper was to measure risk factors for health and production parameters on six organic and conventional bovine, caprine, and ovine dairy herds in southeastern Brazil over six consecutive years (2006-2011). The organic operations had lower milk production per animal (P ≤ 0.05), lower calf mortality (P ≤ 0.05), less incidence of mastitis (P ≤ 0.05), lower rates of spontaneous abortion (P ≤ 0.05), and reduced ectoparasite loads (P ≤ 0.05) compared to conventional herds and flocks. Organic herds, however, had greater prevalence of internal parasitism (P ≤ 0.05) than conventional herds. In all management systems, calves, kids, and lambs had higher oocyst counts than adults; however, calves in the organic group showed lower prevalence of coccidiosis. In addition, parasites of animals in the organic system exhibited lower resistance to anthelmintics. Herd genetic potential, nutritive value of forage, feed intake, and pasture parasite loads, however, may have influenced productive and health parameters. Thus, although conventional herds showed greater milk production and less disease prevalence, future research might quantify the potential implications of these unreported factors.

  2. Identifying product order with restricted Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Rao, Wen-Jia; Li, Zhenyu; Zhu, Qiong; Luo, Mingxing; Wan, Xin

    2018-03-01

    Unsupervised machine learning via a restricted Boltzmann machine is a useful tool in distinguishing an ordered phase from a disordered phase. Here we study its application on the two-dimensional Ashkin-Teller model, which features a partially ordered product phase. We train the neural network with spin configuration data generated by Monte Carlo simulations and show that distinct features of the product phase can be learned from nonergodic samples resulting from symmetry breaking. Careful analysis of the weight matrices inspires us to define a nontrivial machine-learning-motivated quantity of the product form, which resembles the conventional product order parameter.
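
    The general workflow (train an RBM on spin configurations, then inspect the learned weight matrix) can be sketched with a toy binary RBM trained by one-step contrastive divergence. The synthetic "spin" data and all hyperparameters below are illustrative assumptions, not the authors' network or Ashkin-Teller data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: "ordered" samples (all spins aligned) vs "disordered" (random spins)
ordered = np.ones((200, 16))
disordered = rng.choice([0.0, 1.0], size=(200, 16))
data = np.vstack([ordered, disordered])

# Minimal RBM: 16 visible units, 4 hidden units, trained with CD-1
n_vis, n_hid = 16, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
lr = 0.05

for epoch in range(50):
    v0 = data
    ph0 = sigmoid(v0 @ W + b_h)                       # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + b_v)                     # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b_h)
    # CD-1 gradient: data correlations minus reconstruction correlations
    W += lr * ((v0.T @ ph0) - (pv1.T @ ph1)) / len(data)
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

# Weight-matrix analysis: hidden units that latch onto the ordered samples
# develop columns of W with large, uniform-sign couplings to all spins
print(np.abs(W).max())
```

    In the record's setting, the analogous analysis of the trained weight matrix is what motivates the product-form order parameter.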

  3. A noninvasive method of examination of the hemostasis system.

    PubMed

    Kuznik, B I; Fine, I W; Kaminsky, A V

    2011-09-01

    We propose a noninvasive method of in vivo examination of the hemostasis system based on speckle pattern analysis of coherent light scattering from the skin. We compared the results of measuring basic blood coagulation parameters by conventional invasive and noninvasive methods. A strong correlation was found between the results of measurement of soluble fibrin monomer complexes, international normalized ratio (INR), prothrombin index, and protein C content. The noninvasive method of examination of the hemostatic system enables rough evaluation of the intensity of intravascular coagulation and correction of the dose of indirect anticoagulants to maintain desired values of INR or prothrombin index.

  4. Analysis and design of fiber-coupled high-power laser diode array

    NASA Astrophysics Data System (ADS)

    Zhou, Chongxi; Liu, Yinhui; Xie, Weimin; Du, Chunlei

    2003-11-01

    Based on the beam parameter product (BPP) of the laser beam, it is concluded that a single conventional optical system cannot couple a high-power laser diode array (LDA) into a fiber. According to the parameters of the coupling fiber, a method to couple LDA beams into a single multi-mode fiber, comprising beam collimation, shaping, focusing, and coupling, is presented. The divergence angles after collimation are calculated and analyzed, and the shape equation of the collimating micro-lens array is derived. The focusing lens is designed. A fiber-coupled LDA with a core diameter of 800 μm and a numerical aperture of 0.37 is obtained.
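
    The BPP argument can be made concrete: a beam can only be coupled when its beam parameter product does not exceed the fiber's acceptance. The fiber values below are from the record; the raw diode-array dimensions are illustrative assumptions:

```python
import math

# Fiber acceptance BPP (mm * rad): core radius times acceptance half-angle
core_radius_mm = 0.4          # 800 um core diameter (from the record)
numerical_aperture = 0.37     # from the record
fiber_bpp = core_radius_mm * math.asin(numerical_aperture)

# Hypothetical raw LDA slow axis: 10 mm emitting width, 10 deg full divergence
lda_bpp = (10.0 / 2.0) * math.radians(10.0 / 2.0)

# The raw-beam BPP exceeds the fiber acceptance, so a single conventional lens
# cannot do the coupling; collimation and shaping must first reduce the BPP
print(fiber_bpp < lda_bpp)  # True
```

    Since the BPP is conserved by ideal imaging optics, only beam shaping (rearranging the slow-axis emitters) can bring the array within the fiber's acceptance.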

  5. Analysis of the GPS Observations of the Site Survey at Sheshan 25-m Radio Telescope in August 2008

    NASA Technical Reports Server (NTRS)

    Liu, L.; Cheng, Z. Y.; Li, J. L.

    2010-01-01

    The processing of the GPS observations of the site survey at the Sheshan 25-m radio telescope in August 2008 is reported. Because each session in this survey lasted only about six hours, the subdaily high-frequency variations in the station coordinates could not be reasonably smoothed; in addition, serious cycle slips in the observations meant that a large volume of data would have been rejected during the software's automatic adjustment of slips. The ordinary solution settings of GAMIT were therefore adjusted by loosening the constraints on the a priori coordinates to 10 m, adopting the "quick" mode in the solution iteration, and combining manual Cview operation with GAMIT's automatic fixing of cycle slips. The resulting coordinates of the local control polygon in ITRF2005 are then compared with conventional geodetic results. Due to large rotations and translations between the two sets of coordinates (geocentric versus quasi-topocentric), the seven transformation parameters cannot be solved for directly. Various trial solutions show that, with a partial pre-removal of the large parameters, high-precision transformation parameters can be obtained with post-fit residuals at the millimeter level. This analysis is necessary to prepare the follow-on site and transformation survey of the VLBI and SLR telescopes at Sheshan.

  6. Investigation of skin structures based on infrared wave parameter indirect microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Liu, Xuefeng; Xiong, Jichuan; Zhou, Lijuan

    2017-02-01

    Detailed imaging and analysis of skin structures are becoming increasingly important in modern healthcare and clinical diagnosis. Nanometer-resolution imaging techniques such as SEM and AFM can damage the sample and cannot measure the whole skin structure from the surface through the epidermis and dermis to the subcutaneous tissue. Conventional optical microscopy has the highest imaging efficiency, flexibility in on-site applications, and lowest cost in manufacturing and use, but its image resolution is too low to be accepted for biomedical analysis. Infrared parameter indirect microscopic imaging (PIMI) uses an infrared laser as the light source because of its high transmission in skin. The polarization of the optical wave through the skin sample was modulated while the variation of the optical field was observed at the imaging plane. The intensity variation curve of each pixel was fitted to extract the near-field polarization parameters to form indirect images. During the through-skin light modulation and image retrieval process, the curve fitting removes the blurring scattering from neighboring pixels and keeps only the field variations related to local skin structures. By using infrared PIMI, we can break the diffraction limit and bring wide-field optical image resolution below 200 nm, while taking advantage of the high transmission of infrared waves in skin structures.

  7. Automated region selection for analysis of dynamic cardiac SPECT data

    NASA Astrophysics Data System (ADS)

    Di Bella, E. V. R.; Gullberg, G. T.; Barclay, A. B.; Eisner, R. L.

    1997-06-01

    Dynamic cardiac SPECT using Tc-99m labeled teboroxime can provide kinetic parameters (washin, washout) indicative of myocardial blood flow. A time-consuming and subjective step of the data analysis is drawing regions of interest to delineate blood pool and myocardial tissue regions. The time-activity curves of the regions are then used to estimate local kinetic parameters. In this work, the appropriate regions are found automatically, in a manner similar to that used for calculating maximum count circumferential profiles in conventional static cardiac studies. The drawbacks to applying standard static circumferential profile methods are the high noise level and high liver uptake common in dynamic teboroxime studies. Searching along each ray for maxima to locate the myocardium does not typically provide useful information. Here we propose an iterative scheme in which constraints are imposed on the radii searched along each ray. The constraints are based on the shape of the time-activity curves of the circumferential profile members and on an assumption that the short axis slices are approximately circular. The constraints eliminate outliers and help to reduce the effects of noise and liver activity. Kinetic parameter estimates from the automatically generated regions were comparable to estimates from manually selected regions in dynamic canine teboroxime studies.

  8. Three Dimensional CFD Analysis of the GTX Combustor

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.; Bond, R. B.; Edwards, J. R.

    2002-01-01

    The annular combustor geometry of a combined-cycle engine has been analyzed with three-dimensional computational fluid dynamics. Both subsonic combustion and supersonic combustion flowfields have been simulated. The subsonic combustion analysis was executed in conjunction with a direct-connect test rig. Two cold-flow and one hot-flow results are presented. The simulations compare favorably with the test data for the two cold-flow calculations; the hot-flow data were not yet available. The hot-flow simulation indicates that the conventional ejector-ramjet cycle would not provide adequate mixing at the conditions tested. The supersonic combustion ramjet flowfield was simulated with a frozen chemistry model. A five-parameter test matrix was specified according to statistical design-of-experiments theory. Twenty-seven separate simulations were used to assemble surrogate models for combustor mixing efficiency and total pressure recovery. Scramjet injector design parameters (injector angle, location, and fuel split) as well as mission variables (total fuel massflow and freestream Mach number) were included in the analysis. A promising injector design has been identified that provides good mixing characteristics with low total pressure losses. The surrogate models can be used to develop performance maps of different injector designs. Several complex three-way variable interactions appear within the dataset that are not adequately resolved with the current statistical analysis.

  9. Effects of sterilization treatments on the analysis of TOC in water samples.

    PubMed

    Shi, Yiming; Xu, Lingfeng; Gong, Dongqin; Lu, Jun

    2010-01-01

    Decomposition experiments conducted with and without microbial processes are commonly used to study the effects of environmental microorganisms on the degradation of organic pollutants. However, the effects of biological pretreatment (sterilization) on organic matter often have a negative impact on such experiments. Based on the principle of water total organic carbon (TOC) analysis, the effects of physical sterilization treatments on the determination of TOC and other water quality parameters were investigated. The results revealed that two conventional physical sterilization treatments, autoclaving and ⁶⁰Co gamma-radiation sterilization, led to the direct decomposition of some organic pollutants, resulting in remarkable errors in the analysis of TOC in water samples. Furthermore, the extent of the errors varied with the intensity and duration of the sterilization treatments. Accordingly, a novel sterilization method for water samples, 0.45 μm microfiltration coupled with ultraviolet radiation (MCUR), was developed in the present study. The results indicated that the MCUR method was capable of exerting a high bactericidal effect on the water sample while significantly decreasing the negative impact on the analysis of TOC and other water quality parameters. Before and after sterilization treatments, the relative errors of TOC determination could be controlled to lower than 3% for water samples with different categories and concentrations of organic pollutants by using MCUR.

  10. Optimizing pressurized liquid extraction of microbial lipids using the response surface method.

    PubMed

    Cescut, J; Severac, E; Molina-Jouve, C; Uribelarrea, J-L

    2011-01-21

    Response surface methodology (RSM) was used to determine the optimum extraction parameters for maximum lipid extraction yield from yeast. Total lipids were extracted from an oleaginous yeast (Rhodotorula glutinis) using pressurized liquid extraction (PLE). The effects of extraction parameters on lipid extraction yield were studied by employing a second-order central composite design. The optimal condition was three cycles of 15 min at 100°C with a ratio of 144 g of hydromatrix per 100 g of dry cell weight. Different analysis methods were used to compare the optimized PLE method with two conventional methods (the Soxhlet method and a modification of the Bligh and Dyer method) under efficiency, selectivity, and reproducibility criteria, using gravimetric analysis, GC with flame ionization detection, high-performance liquid chromatography with evaporative light scattering detection (HPLC-ELSD), and thin-layer chromatographic analysis. For each sample, the lipid extraction yield with optimized PLE was higher than those obtained with the reference methods (recoveries of 78% for Soxhlet and 85% for Bligh and Dyer relative to the PLE method). Moreover, the use of PLE led to major advantages such as a tenfold reduction in analysis time and a 70% reduction in solvent quantity compared with traditional extraction methods. Copyright © 2010 Elsevier B.V. All rights reserved.
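
    A second-order central composite design analysis amounts to fitting a quadratic response surface and locating its stationary point. A minimal sketch on synthetic data (the two coded factors and the "true" surface are assumptions for illustration, not the study's extraction data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "extraction yield" over two coded factors in [-1, 1]; the true
# surface has its optimum at (0.3, -0.2)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = 80 - 5 * (x1 - 0.3) ** 2 - 3 * (x2 + 0.2) ** 2 + rng.normal(0, 0.1, 30)

# Second-order model matrix: 1, x1, x2, x1^2, x2^2, x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted quadratic = estimated optimum condition
b = beta[1:3]
B = np.array([[beta[3], beta[5] / 2], [beta[5] / 2, beta[4]]])
opt = -0.5 * np.linalg.solve(B, b)
print(opt)  # close to the true optimum (0.3, -0.2)
```

    In practice the coded optimum is then mapped back to physical units (temperature, time, hydromatrix ratio) to give conditions such as those reported above.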

  11. Effects of a new piezoelectric device on periosteal microcirculation after subperiosteal preparation.

    PubMed

    Stoetzer, Marcus; Felgenträger, Dörthe; Kampmann, Andreas; Schumann, Paul; Rücker, Martin; Gellrich, Nils-Claudius; von See, Constantin

    2014-07-01

    Subperiosteal preparation using a periosteal elevator leads to disturbances of local periosteal microcirculation. Soft-tissue damage can usually be considerably reduced using piezoelectric technology. For this reason, we investigated the effects of a novel piezoelectric device on local periosteal microcirculation and compared this approach with the conventional preparation of the periosteum using a periosteal elevator. A total of 20 Lewis rats were randomly assigned to one of two groups. Subperiosteal preparation was performed using either a piezoelectric device or a periosteal elevator. Intravital microscopy was performed immediately after the procedure as well as three and eight days postoperatively. Statistical analysis of microcirculatory parameters was performed offline using analysis of variance (ANOVA) on ranks (p<0.05). At all time points investigated, intravital microscopy demonstrated significantly higher levels of periosteal perfusion in the group of rats that underwent piezosurgery than in the group of rats that underwent treatment with a periosteal elevator. The use of a piezoelectric device for subperiosteal preparation is associated with better periosteal microcirculation than the use of a conventional periosteal elevator. As a result, piezoelectric devices can be expected to have a positive effect on bone metabolism. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Stress analysis of implant-bone fixation at different fracture angle

    NASA Astrophysics Data System (ADS)

    Izzawati, B.; Daud, R.; Afendi, M.; Majid, MS Abdul; Zain, N. A. M.; Bajuri, Y.

    2017-10-01

    Internal fixation is a mechanism intended to maintain and protect the reduction of a fracture. Understanding fixation stability is necessary to determine the parameters that influence mechanical stability and the risk of implant failure. A static structural analysis of a bone fracture fixation was developed to simulate and analyse the biomechanics of a diaphyseal shaft fracture with a compression plate and conventional screws. This study aims to determine the critical area at which the implant fractures, based on different implant materials and fracture angles (i.e., 0°, 30°, and 45°). Several factors were shown to influence implant stability after surgery. The stainless steel (S.S) and titanium (Ti) screws experienced the highest stress at a 30° fracture angle. The fracture angle had a more significant effect on the conventional screws than on the compression plate. The stress was significantly higher in the S.S material than in the Ti material and was concentrated on the 4th screw across the whole range of fracture angles. It was also noted that the screws closest to the areas of intense stress concentration on the compression plate experienced increasing amounts of stress. The highest stress was observed at the screw thread-head junction.

  13. Thermal Desorption Analysis of Effective Specific Soil Surface Area

    NASA Astrophysics Data System (ADS)

    Smagin, A. V.; Bashina, A. S.; Klyueva, V. V.; Kubareva, A. V.

    2017-12-01

    A new method of assessing the effective specific surface area based on the successive thermal desorption of water vapor at different temperature stages of sample drying is analyzed in comparison with the conventional static adsorption method using a representative set of soil samples of different genesis and degree of dispersion. The theory of the method uses the fundamental relationship between the thermodynamic water potential (Ψ) and the absolute temperature of drying (T): Ψ = Q - aT, where Q is the specific heat of vaporization, and a is a physically based parameter related to the initial temperature and relative humidity of the air in the external thermodynamic reservoir (laboratory). From gravimetric data on the mass fraction of water (W) and the Ψ value, Polyanyi potential curves (W(Ψ)) for the studied samples are plotted. Water sorption isotherms are then calculated, from which the capacity of the monolayer and the target effective specific surface area are determined using the BET theory. Comparative analysis shows that the new method agrees well with the conventional estimation of the degree of dispersion by the BET and Kutilek methods in a wide range of specific surface area values between 10 and 250 m²/g.
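
    The two computational steps of the method (converting drying temperature to water potential via Ψ = Q - aT, then a specific surface area from the BET monolayer capacity) can be sketched as follows; Q, a, and the monolayer capacity are illustrative assumptions, not values from the record:

```python
Q = 2450.0  # specific heat of vaporization of water, J/g (approximate)
a = 6.5     # empirical parameter from lab temperature/humidity (assumed)

def water_potential(temp_k):
    # Psi = Q - a*T: thermodynamic water potential at drying temperature T
    return Q - a * temp_k

def bet_surface_area(v_m):
    # Effective specific surface area (m^2/g) from the BET monolayer capacity
    # v_m (g water per g soil); each adsorbed water molecule is assumed to
    # cover about 10.8 square angstroms (a common convention)
    n_avogadro = 6.022e23   # molecules per mol
    m_water = 18.015        # g per mol
    sigma_w = 10.8e-20      # m^2 per molecule
    return v_m / m_water * n_avogadro * sigma_w

# Higher drying temperatures probe more tightly bound water (lower potential)
print(water_potential(333.0))
# A monolayer capacity of 0.03 g/g corresponds to roughly 108 m^2/g, inside
# the 10-250 m^2/g range reported above
print(round(bet_surface_area(0.03), 1))
```

    The gravimetric water contents measured at each temperature stage supply the (W, Ψ) pairs from which the monolayer capacity is fitted.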

  14. Solar energy system economic evaluation for Solaron Akron, Akron, Ohio

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The economic analysis of the solar energy system installed at Akron, Ohio, is developed for this and four other sites typical of a wide range of environmental and economic conditions. The analysis is based on the technical and economic models in the f-chart design procedure, with inputs based on the characteristics of the installed system. The evaluation reports the following parameters: present worth of system cost over a projected twenty-year life, life-cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables is also investigated. Results show that only in Albuquerque, New Mexico, where insolation is 1828 Btu/sq ft/day and the conventional energy cost is high, is this solar energy system marginally profitable.
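
    The life-cycle figures listed above (present worth, life-cycle savings, year of payback) reduce to simple discounted-cash-flow arithmetic. A minimal sketch with illustrative numbers (the capital cost, annual saving, and discount rate are assumptions, not f-chart outputs):

```python
def present_worth(annual_saving, discount_rate, years=20):
    # Present worth of a uniform annual fuel-cost saving over the system life
    return sum(annual_saving / (1 + discount_rate) ** t for t in range(1, years + 1))

def payback_year(capital_cost, annual_saving, discount_rate, horizon=40):
    # First year in which cumulative discounted savings cover the investment
    cumulative = 0.0
    for t in range(1, horizon + 1):
        cumulative += annual_saving / (1 + discount_rate) ** t
        if cumulative >= capital_cost:
            return t
    return None  # no payback within the horizon

# Illustrative system: $5000 installed cost, $600/yr saving, 5% discount rate
print(round(present_worth(600, 0.05), 2))
print(payback_year(5000, 600, 0.05))  # 12
```

    Life-cycle savings is then the present worth of the savings minus the system cost; a system is marginally profitable when that difference is near zero, as reported for Albuquerque.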

  15. Extending simulation modeling to activity-based costing for clinical procedures.

    PubMed

    Glick, N D; Blackmore, C C; Zelman, W N

    2000-04-01

    A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.
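
    The idea of driving an activity-based-costing analysis with Monte Carlo simulation can be sketched as follows; the activity names, per-minute costs, and duration distributions are hypothetical, not the study's model:

```python
import random

random.seed(1)

# Hypothetical per-minute activity costs for an ED imaging pathway
COST_PER_MIN = {"triage": 1.2, "imaging": 4.0, "physician": 3.5}

def simulate_patient():
    # Durations drawn from assumed distributions to capture the process
    # variability that static ABC spreadsheets struggle to represent
    durations = {
        "triage": random.uniform(5, 15),             # minutes
        "imaging": random.triangular(10, 40, 20),    # minutes, mode 20
        "physician": random.expovariate(1 / 12),     # mean 12 minutes
    }
    return sum(COST_PER_MIN[a] * t for a, t in durations.items())

costs = [simulate_patient() for _ in range(10_000)]
print(round(sum(costs) / len(costs), 2))  # mean activity-based cost per patient
```

    Unlike a single-point ABC estimate, the simulated cost distribution also yields percentiles and variances, which is the extra leverage for reimbursement negotiation noted above.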

  16. Wound Healing Potential of Intermittent Negative Pressure under Limited Access Dressing in Burn Patients: Biochemical and Histopathological Study

    PubMed Central

    Muguregowda, Honnegowda Thittamaranahalli; Kumar, Pramod; Govindarama, Padmanabha Udupa Echalasara

    2018-01-01

    BACKGROUND Malondialdehyde (MDA) is an oxidant that damages membranes, DNA, proteins, and lipids at the cellular level. Antioxidants minimize the effects of oxidants and thus help in the formation of healthy granulation tissue with higher levels of hydroxyproline and total protein. This study compared the effect of limited access dressing (LAD) with conventional closed dressing biochemically and histopathologically. METHODS Seventy-two burn patients aged 12-65 years with a mean wound size of 14 cm² were divided into a LAD group (n=37) and a conventional dressing group (n=35). Various biochemical parameters were measured in granulation tissue, and histopathological analysis of the granulation tissue was also performed. RESULTS The LAD group showed a significant increase in hydroxyproline, total protein, GSH, and GPx and a decrease in MDA levels compared to the conventional dressing group. A significant negative correlation between GSH and MDA was noted in the LAD group but not in the conventional dressing group; likewise, a significant negative correlation between GPx and MDA was observed in the LAD group but not in the conventional dressing group. Histologically, after 10 days the LAD group showed fewer inflammatory cells, increased and well-organized extracellular matrix deposition, and more angiogenesis, with significant differences between the groups. CONCLUSION Our study showed a significant reduction in the oxidative stress biomarker MDA and increases in hydroxyproline, total protein, antioxidants, ECM deposition, and the number of blood vessels, together with decreases in inflammatory cells and necrotic tissue, in the LAD group, indicating better healing of burn wounds. PMID:29651393

  17. Free Maillard Reaction Products in Milk Reflect Nutritional Intake of Glycated Proteins and Can Be Used to Distinguish "Organic" and "Conventionally" Produced Milk.

    PubMed

    Schwarzenbolz, Uwe; Hofmann, Thomas; Sparmann, Nina; Henle, Thomas

    2016-06-22

Using LC-MS/MS and isotopically labeled standard substances, quantitation of free Maillard reaction products (MRPs), namely, N(ε)-(carboxymethyl)lysine (CML), 5-(hydroxymethyl)-1H-pyrrole-2-carbaldehyde (pyrraline, PYR), N(δ)-(5-hydro-5-methyl-4-imidazolon-2-yl)-ornithine (MG-H), and N(ε)-fructosyllysine (FL), in bovine milk was achieved. Considerable variations in the amounts of the individual MRPs were found, most likely as a consequence of the nutritional uptake of glycated proteins. When comparing commercial milk samples labeled as originating from "organic" or "conventional" farming, respectively, significant differences in the content of free PYR (organic milk, 20-300 pmol/mL; conventional milk, 400-1000 pmol/mL) were observed. An analysis of feed samples indicated that rapeseed and sugar beet are the main sources of MRPs in conventional farming. Furthermore, milk of different dairy animals (cow, buffalo, donkey, goat, ewe, mare, camel) as well as, for the first time, human milk was analyzed for free MRPs. The distribution of their concentrations, with FL and PYR the most abundant in human milk and with high individual variability, also points to a nutritional influence. As the components of concentrated feed do not belong to the natural food sources of ruminants and equidae, free MRPs in milk might serve as indicators of adequate animal feeding in near-natural farming and can be suitable parameters to distinguish between "organic" and "conventional" milk production methods.
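
As a toy illustration of the last point, a threshold on free PYR alone would already separate the two reported concentration ranges. The cutoff below is a hypothetical midpoint chosen for illustration, not a validated decision rule:

```python
# Hypothetical classifier based on the free-pyrraline (PYR) ranges reported
# above (organic: 20-300 pmol/mL; conventional: 400-1000 pmol/mL).
def classify_milk_by_pyr(pyr_pmol_per_ml, cutoff=350.0):
    """Label a milk sample from its free PYR concentration.

    The 350 pmol/mL cutoff is an illustrative midpoint between the two
    reported ranges, not a validated decision threshold.
    """
    return "organic" if pyr_pmol_per_ml < cutoff else "conventional"

print(classify_milk_by_pyr(120.0))  # value within the reported organic range
print(classify_milk_by_pyr(650.0))  # value within the reported conventional range
```

In practice the study's discrimination would rest on the full MRP profile and calibrated LC-MS/MS quantitation, not a single cutoff.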

  18. ENHANCING THE STABILITY OF POROUS CATALYSTS WITH SUPERCRITICAL REACTION MEDIA. (R826034)

    EPA Science Inventory

    Adsorption/desorption and pore-transport are key parameters influencing the activity and product selectivity in porous catalysts. With conventional reaction media (gas or liquid phase), one of these parameters is generally favorable while the other is not. For instance, while ...

  19. Characteristics of Extra Narrow Gap Weld of HSLA Steel Welded by Single-Seam per Layer Pulse Current GMA Weld Deposition

    NASA Astrophysics Data System (ADS)

    Agrawal, B. P.; Ghosh, P. K.

    2017-03-01

Butt weld joints were produced by the pulse current gas metal arc welding process, employing the technique of centrally laid multi-pass, single-seam-per-layer weld deposition in an extra narrow groove of thick HSLA steel plates. The weld joints were prepared using different combinations of pulse parameters. The pulse current gas metal arc welding parameters were selected by considering the summarized influence of the simultaneously interacting pulse parameters, defined by a dimensionless hypothetical factor ϕ. The effect of the various pulse parameters on the weld characteristics has been studied. A weld joint was also prepared using the commonly employed multi-pass, multi-seam-per-layer weld deposition in a conventional groove. The extra narrow gap weld joints were found markedly superior to the weld joint prepared by multi-pass, multi-seam-per-layer deposition in a conventional groove with respect to metallurgical characteristics and mechanical properties.

  20. Geoelectrical inference of mass transfer parameters using temporal moments

    USGS Publications Warehouse

    Day-Lewis, Frederick D.; Singha, Kamini

    2008-01-01

    We present an approach to infer mass transfer parameters based on (1) an analytical model that relates the temporal moments of mobile and bulk concentration and (2) a bicontinuum modification to Archie's law. Whereas conventional geochemical measurements preferentially sample from the mobile domain, electrical resistivity tomography (ERT) is sensitive to bulk electrical conductivity and, thus, electrolytic solute in both the mobile and immobile domains. We demonstrate the new approach, in which temporal moments of collocated mobile domain conductivity (i.e., conventional sampling) and ERT‐estimated bulk conductivity are used to calculate heterogeneous mass transfer rate and immobile porosity fractions in a series of numerical column experiments.
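
The temporal moments at the core of the approach are simple integrals of a concentration history; a minimal sketch with synthetic data follows (the mass-transfer inversion itself is model- and site-specific and is not reproduced here):

```python
# Zeroth and first temporal moments of a breakthrough curve C(t), and the
# mean arrival time m1/m0, via trapezoidal integration. The data are a
# synthetic, symmetric pulse, not measurements from the study.
def temporal_moments(times, conc):
    def trapz(y, x):
        return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
                   for i in range(len(x) - 1))
    m0 = trapz(conc, times)
    m1 = trapz([t * c for t, c in zip(times, conc)], times)
    return m0, m1, m1 / m0

times = [0.0, 1.0, 2.0, 3.0, 4.0]
conc = [0.0, 1.0, 2.0, 1.0, 0.0]   # symmetric pulse
m0, m1, tbar = temporal_moments(times, conc)
print(tbar)  # symmetric pulse -> mean arrival at t = 2.0
```

Comparing such moments for collocated mobile-domain and bulk (ERT-estimated) concentrations is what carries the mass-transfer information in the authors' analytical model.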

  1. The case for regime-based water quality standards

    USGS Publications Warehouse

    Poole, Geoffrey C.; Dunham, J.B.; Keenan, D.M.; Sauter, S.T.; McCullough, D.A.; Mebane, Christopher; Lockwood, Jeffrey C.; Essig, Don A.; Hicks, Mark P.; Sturdevant, Debra J.; Materna, E.J.; Spalding, M.; Risley, John; Deppman, Marianne

    2004-01-01

Conventional water quality standards have been successful in reducing the concentration of toxic substances in US waters. However, conventional standards are based on simple thresholds and are therefore poorly structured to address human-caused imbalances in dynamic, natural water quality parameters, such as nutrients, sediment, and temperature. A more applicable type of water quality standard, a "regime standard", would describe desirable distributions of conditions over space and time within a stream network. By mandating the protection and restoration of the aquatic ecosystem dynamics that are required to support beneficial uses in streams, well-designed regime standards would facilitate more effective strategies for management of natural water quality parameters.

  2. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
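
A minimal Monte Carlo sketch of the comparison described above, substituting a parametric Poisson-gamma empirical Bayes estimator with a moment-matched prior for the paper's smooth (nonparametric) estimator; all numbers are synthetic:

```python
import math, random

# Monte Carlo comparison of MLE (lambda_hat = x) versus a parametric
# empirical Bayes posterior mean for Poisson intensities. The gamma prior
# (shape a, rate b) is moment-matched from the marginal counts. This is an
# illustration of the shrinkage idea, not the paper's smooth EB estimator.
def sample_poisson(lam, rng):
    # Knuth's multiplication method (adequate for small rates)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
true_lams = [rng.gammavariate(2.0, 1.0) for _ in range(2000)]  # shape 2, scale 1
xs = [sample_poisson(lam, rng) for lam in true_lams]

m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
b = m / max(v - m, 1e-9)   # gamma rate via method of moments
a = m * b                  # gamma shape

mse_mle = sum((x - lam) ** 2 for x, lam in zip(xs, true_lams)) / len(xs)
mse_eb = sum(((a + x) / (b + 1.0) - lam) ** 2
             for x, lam in zip(xs, true_lams)) / len(xs)
print(mse_eb < mse_mle)  # shrinkage reduces mean-squared error
```

The reduction in mean-squared error mirrors the abstract's conclusion, though the paper's smooth EB procedure does not assume a gamma prior family.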

  3. Depth of focus enhancement of a modified imaging quasi-fractal zone plate.

    PubMed

    Zhang, Qinqin; Wang, Jingang; Wang, Mingwei; Bu, Jing; Zhu, Siwei; Gao, Bruce Z; Yuan, Xiaocong

    2012-10-01

    We propose a new parameter w for optimization of foci distribution of conventional fractal zone plates (FZPs) with a greater depth of focus (DOF) in imaging. Numerical simulations of DOF distribution on axis directions indicate that the values of DOF can be extended by a factor of 1.5 or more by a modified quasi-FZP. In experiments, we employ a simple object-lens-image-plane arrangement to pick up images at various positions within the DOF of a conventional FZP and a quasi-FZP, respectively. Experimental results show that the parameter w improves foci distribution of FZPs in good agreement with theoretical predictions.

  4. Depth of focus enhancement of a modified imaging quasi-fractal zone plate

    PubMed Central

    Zhang, Qinqin; Wang, Jingang; Wang, Mingwei; Bu, Jing; Zhu, Siwei; Gao, Bruce Z.; Yuan, Xiaocong

    2013-01-01

    We propose a new parameter w for optimization of foci distribution of conventional fractal zone plates (FZPs) with a greater depth of focus (DOF) in imaging. Numerical simulations of DOF distribution on axis directions indicate that the values of DOF can be extended by a factor of 1.5 or more by a modified quasi-FZP. In experiments, we employ a simple object–lens–image-plane arrangement to pick up images at various positions within the DOF of a conventional FZP and a quasi-FZP, respectively. Experimental results show that the parameter w improves foci distribution of FZPs in good agreement with theoretical predictions. PMID:24285908

  5. Normal tissue complication probability modelling of tissue fibrosis following breast radiotherapy

    NASA Astrophysics Data System (ADS)

    Alexander, M. A. R.; Brooks, W. A.; Blake, S. W.

    2007-04-01

Cosmetic late effects of radiotherapy such as tissue fibrosis are increasingly regarded as being of importance. It is generally considered that the complication probability of a radiotherapy plan is dependent on the dose uniformity, and can be reduced by using better compensation to remove dose hotspots. This work aimed to model the effects of improved dose homogeneity on complication probability. The Lyman and relative seriality NTCP models were fitted to clinical fibrosis data for the breast collated from the literature. Breast outlines were obtained from a commercially available Rando phantom using the Osiris system. Multislice breast treatment plans were produced using a variety of compensation methods. Dose-volume histograms (DVHs) obtained for each treatment plan were reduced to simple numerical parameters using the equivalent uniform dose and effective volume DVH reduction methods. These parameters were input into the models to obtain complication probability predictions. The fitted model parameters were consistent with a parallel tissue architecture. Conventional clinical plans generally showed reducing complication probabilities with increasing compensation sophistication. Extremely homogeneous plans representing idealized IMRT treatments showed increased complication probabilities compared to conventional planning methods, as a result of increased dose to areas receiving sub-prescription doses using conventional techniques.
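
The Lyman-model pipeline described above (DVH reduced to an equivalent uniform dose, then a probit dose-response) can be sketched as follows; the DVH bins, TD50, m and n values are illustrative placeholders, not the fitted parameters from the study:

```python
import math

# Illustrative Lyman NTCP calculation: reduce a DVH to an equivalent
# uniform dose (EUD), then apply the probit (normal CDF) dose-response.
# All numerical values are placeholders for illustration only.
def eud(dvh, n):
    """dvh: list of (fractional_volume, dose_Gy); n: volume-effect parameter."""
    return sum(v * d ** (1.0 / n) for v, d in dvh) ** n

def lyman_ntcp(eud_gy, td50, m):
    t = (eud_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

dvh = [(0.6, 50.0), (0.3, 55.0), (0.1, 60.0)]  # toy differential-DVH bins
e = eud(dvh, n=1.0)   # n = 1: EUD reduces to the mean dose (parallel-like)
print(round(e, 2))    # 0.6*50 + 0.3*55 + 0.1*60 = 52.5
print(0.0 < lyman_ntcp(e, td50=55.0, m=0.2) < 0.5)  # EUD below TD50 -> NTCP < 50%
```

With n near 1 (a parallel-like architecture, as the fitted parameters suggest), the EUD approaches the mean dose, which is why very homogeneous plans that raise dose in previously cold regions can raise the predicted NTCP.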

  6. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

This paper presents a novel holistic calibration method for a stereo deflectometry system to improve its measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and incomplete distortion elimination. The proposed calibration method compensates for system distortion using an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The peak-to-valley (PV) measurement error for a flat mirror can be reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.

  7. Energy System and Thermoeconomic Analysis of Combined Heat and Power High Temperature Proton Exchange Membrane Fuel Cell Systems for Light Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colella, Whitney G.; Pilli, Siva Prasad

    2015-06-01

The United States (U.S.) Department of Energy (DOE)'s Pacific Northwest National Laboratory (PNNL) is spearheading a program with industry to deploy and independently monitor five kilowatt-electric (kWe) combined heat and power (CHP) fuel cell systems (FCSs) in light commercial buildings. This publication discusses results from PNNL's research efforts to independently evaluate manufacturer-stated engineering, economic, and environmental performance of these CHP FCSs at installation sites. The analysis was done by developing parameters for economic comparison of CHP installations. Key thermodynamic terms are first defined, followed by an economic analysis using both a standard accounting approach and a management accounting approach. Key economic and environmental performance parameters are evaluated, including (1) the average per unit cost of the CHP FCSs per unit of power; (2) the average per unit cost of the CHP FCSs per unit of energy; (3) the change in greenhouse gas (GHG) and air pollution emissions with a switch from conventional power plants and furnaces to CHP FCSs; (4) the change in GHG mitigation costs from the switch; and (5) the change in human health costs related to air pollution. From the power perspective, the average per unit cost per unit of electrical power is estimated to span a range from $15,000 to $19,000 per kilowatt-electric (kWe) (depending on site-specific changes in installation, fuel, and other costs), while the average per unit cost of electrical and heat recovery power varies between $7,000 and $9,000/kW. From the energy perspective, the average per unit cost per unit of electrical energy ranges from $0.38 to $0.46/kilowatt-hour-electric (kWhe), while the average per unit cost per unit of electrical and heat recovery energy varies from $0.18 to $0.23/kWh. These values are calculated from engineering and economic performance data provided by the manufacturer (not independently measured data). The GHG emissions were estimated to decrease by one-third by shifting from a conventional energy system to a CHP FCS system. The GHG mitigation costs were also proportional to the changes in the GHG gas emissions. Human health costs were estimated to decrease significantly with a switch from a conventional system to a CHP FCS system.
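
The per-unit cost figures above are ratios of cost to capacity (power basis) or to delivered output (energy basis). A sketch with invented inputs; the actual installation, fuel, and maintenance costs are not given in the abstract:

```python
# Per-unit cost arithmetic for a CHP system: cost per kW of capacity
# (power basis) and cost per kWh of lifetime output (energy basis).
# The $85,000 total, 10-year life and 90% capacity factor are hypothetical.
def cost_per_kw(total_cost_usd, capacity_kw):
    return total_cost_usd / capacity_kw

def cost_per_kwh(total_cost_usd, capacity_kw, capacity_factor, years):
    hours = years * 8760.0 * capacity_factor  # operating hours over life
    return total_cost_usd / (capacity_kw * hours)

# Hypothetical 5 kWe CHP FCS costing $85,000 over a 10-year life at 90% CF:
print(cost_per_kw(85_000, 5.0))                      # 17000.0 $/kWe
print(round(cost_per_kwh(85_000, 5.0, 0.9, 10), 2))  # 0.22 $/kWhe
```

Note that the energy-basis figure depends strongly on the assumed capacity factor and lifetime, which is why the power-basis and energy-basis rankings in the study can differ.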

  8. A finite element analysis of a 3D auxetic textile structure for composite reinforcement

    NASA Astrophysics Data System (ADS)

    Ge, Zhaoyang; Hu, Hong; Liu, Yanping

    2013-08-01

    This paper reports the finite element analysis of an innovative 3D auxetic textile structure consisting of three yarn systems (weft, warp and stitch yarns). Different from conventional 3D textile structures, the proposed structure exhibits an auxetic behaviour under compression and can be used as a reinforcement to manufacture auxetic composites. The geometry of the structure is first described. Then a 3D finite element model is established using ANSYS software and validated by the experimental results. The deformation process of the structure at different compression strains is demonstrated, and the validated finite element model is finally used to simulate the auxetic behaviour of the structure with different structural parameters and yarn properties. The results show that the auxetic behaviour of the proposed structure increases with increasing compression strain, and all the structural parameters and yarn properties have significant effects on the auxetic behaviour of the structure. It is expected that the study could provide a better understanding of 3D auxetic textile structures and could promote their application in auxetic composites.

  9. TUNNEL LINING DESIGN METHOD BY FRAME STRUCTURE ANALYSIS USING GROUND REACTION CURVE

    NASA Astrophysics Data System (ADS)

    Sugimoto, Mitsutaka; Sramoon, Aphichat; Okazaki, Mari

Both NATM and the shield tunnelling method can be applied to the Diluvial and Neogene deposits on which the mega cities of Japan are located. Since the lining design methods for the two tunnelling methods differ considerably, a unified concept for tunnel lining design is desirable. Therefore, in this research, a frame structure analysis model for tunnel lining design using the ground reaction curve was developed, which can take into account the earth pressure due to excavated-surface displacement to the active side, including the effect of ground self-stabilization, and the excavated-surface displacement before lining installation. Based on the developed model, a parameter study was carried out taking the coefficient of subgrade reaction and the grouting rate as parameters, and the measured earth pressure acting on the lining at the site was compared with that calculated by the developed model and by the conventional model. As a result, it was confirmed that the developed model can represent the earth pressure acting on the lining, the lining displacement, and the lining sectional forces for ground ranging from soft to stiff.

  10. Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.

    PubMed

    Troshin, Peter V; Procter, James B; Barton, Geoffrey J

    2011-07-15

JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients full access to each application's parameters and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.

  11. Ultrasound Assessment of Human Meniscus.

    PubMed

    Viren, Tuomas; Honkanen, Juuso T; Danso, Elvis K; Rieppo, Lassi; Korhonen, Rami K; Töyräs, Juha

    2017-09-01

The aim of the present study was to evaluate the applicability of ultrasound imaging to quantitative assessment of human meniscus in vitro. Meniscus samples (n = 26) were harvested from 13 knee joints of non-arthritic human cadavers. Subsequently, three locations (anterior, center and posterior) from each meniscus were imaged with two ultrasound transducers (frequencies 9 and 40 MHz), and quantitative ultrasound parameters were determined. Furthermore, partial-least-squares regression analysis was applied to the ultrasound signal to determine the relations between ultrasound scattering and meniscus integrity. Significant correlations between measured and predicted meniscus composition and mechanical properties were obtained (R2 = 0.38-0.69, p < 0.05). The relationship between conventional ultrasound parameters and integrity of the meniscus was weaker. To conclude, ultrasound imaging exhibited potential for evaluation of meniscus integrity. The higher ultrasound frequency combined with multivariate analysis of ultrasound backscattering was found to be the most sensitive for evaluation of meniscus integrity. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  12. Quantitative fluorescence angiography for neurosurgical interventions.

    PubMed

    Weichelt, Claudia; Duscha, Philipp; Steinmeier, Ralf; Meyer, Tobias; Kuß, Julia; Cimalla, Peter; Kirsch, Matthias; Sobottka, Stephan B; Koch, Edmund; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

Present methods for quantitative measurement of cerebral perfusion during neurosurgical operations require additional technology for measurement, data acquisition, and processing. This study used conventional fluorescence video angiography--an established method to visualize blood flow in brain vessels--enhanced by a quantifying perfusion software tool. For these purposes, the fluorescence dye indocyanine green is given intravenously, and after activation by a near-infrared light source the fluorescence signal is recorded. Video data are analyzed by software algorithms to allow quantification of the blood flow. Additionally, perfusion is measured intraoperatively by a reference system. Furthermore, comparative reference measurements using a flow phantom were performed to verify the quantitative blood flow results of the software and to validate the software algorithm. Analysis of intraoperative video data provides characteristic biological parameters. These parameters were implemented in the special flow phantom for experimental validation of the developed software algorithms. Furthermore, various factors that influence the determination of perfusion parameters were analyzed by means of mathematical simulation. Comparing patient measurement, phantom experiment, and computer simulation under certain conditions (variable frame rate, vessel diameter, etc.), the results of the software algorithms are within the range of parameter accuracy of the reference methods. Therefore, the software algorithm for calculating cortical perfusion parameters from video data presents a helpful intraoperative tool without complex additional measurement technology.

  13. Parametric study of waste chicken fat catalytic chemical vapour deposition for controlled synthesis of vertically aligned carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Suriani, A. B.; Dalila, A. R.; Mohamed, A.; Rosmi, M. S.; Mamat, M. H.; Malek, M. F.; Ahmad, M. K.; Hashim, N.; Isa, I. M.; Soga, T.; Tanemura, M.

    2016-12-01

High-quality vertically aligned carbon nanotubes (VACNTs) were synthesised from a ferrocene-chicken oil mixture utilising a thermal chemical vapour deposition (TCVD) method. Reaction parameters including vaporisation temperature, catalyst concentration and synthesis time were examined for the first time to investigate their influence on the growth of VACNTs. Analysis via field emission scanning electron microscopy and micro-Raman spectroscopy revealed that the growth rate, diameter and crystallinity of the VACNTs depend on the varied synthesis parameters. A vaporisation temperature of 570°C, catalyst concentration of 5.33 wt% and synthesis time of 60 min were considered the optimum parameters for the production of VACNTs from waste chicken fat. These parameters produce VACNTs with small diameters in the range of 15-30 nm and good quality (ID/IG ratio of 0.39 and purity of 76%), comparable to those synthesised using conventional carbon precursors. The low turn-on and threshold fields of the VACNTs synthesised using the optimum parameters indicate that VACNTs synthesised from waste chicken fat are good candidates for field electron emitters. The results of this study can therefore be used to optimise the growth and production of VACNTs from waste chicken fat on a large scale for field emission applications.

  14. A comparison between lesions found during meat inspection of finishing pigs raised under organic/free-range conditions and conventional, indoor conditions.

    PubMed

    Alban, Lis; Petersen, Jesper Valentin; Busch, Marie Erika

    2015-01-01

It is often argued that pigs raised under less intensive production conditions - such as organic or free-range - have a higher level of animal welfare compared with conventionally raised pigs. To look into this, an analysis of data from a large Danish abattoir slaughtering organic, free-range, and conventionally raised finishing pigs was undertaken. First, the requirements for each of the three types of production systems were investigated. Next, meat inspection data from a period of 1 year were collected. These covered 201,160 organic/free-range pigs and 1,173,213 conventionally raised pigs. The prevalence of each individual type of lesion was calculated, followed by a statistical comparison between the prevalences in organic/free-range and conventional pigs. Because of the large number of data, the P-value for significance was lowered to P = 0.001, and only biological associations reflecting odds ratios above 1.2 or below 0.8 were considered to be of significance. The majority of the lesion types were recorded infrequently (<4%). Only chronic pleuritis was a common finding. A total of 13 lesion types were more frequent among organic/free-range pigs than among conventional pigs - among others old fractures, tail lesions and osteomyelitis. Four lesion types were equally frequent in the two groups: chronic pneumonia, chronic pleuritis, fresh fracture, and abscess in head/ear. Four lesion types were recorded less frequently among organic/free-range pigs compared with conventionally raised pigs. These included abscess in leg/toe, hernia and scar/hock lesion. Possible associations between the individual lesion types and the production systems - including the requirements for each system - are discussed. The results emphasize the importance of using direct animal-based parameters when evaluating animal welfare in different types of production systems. Moreover, individual solutions to the health problems observed in a herd should be found, e.g. in collaboration with the veterinary practitioner and other advisors.
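
The comparison criterion described above combines an odds ratio from a 2x2 table of lesion counts with the biological-relevance bounds of 0.8 and 1.2. A sketch with invented counts (not the study's data):

```python
# Odds ratio from a 2x2 table (lesion present/absent in two groups),
# flagged as biologically relevant only when it falls outside 0.8-1.2,
# matching the abstract's criterion. Counts are invented for illustration.
def odds_ratio(cases_a, total_a, cases_b, total_b):
    return (cases_a / (total_a - cases_a)) / (cases_b / (total_b - cases_b))

def biologically_relevant(or_value, low=0.8, high=1.2):
    return or_value > high or or_value < low

o = odds_ratio(300, 10_000, 150, 10_000)  # hypothetical lesion counts
print(round(o, 2))   # ~2.03: lesion about twice as likely in group A
print(biologically_relevant(o))
```

In the study this relevance filter was applied together with a stringent P = 0.001 significance threshold, since the very large sample sizes make even trivial differences statistically significant.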

  15. Estimability of geodetic parameters from space VLBI observables

    NASA Technical Reports Server (NTRS)

    Adam, Jozsef

    1990-01-01

The feasibility of space very long baseline interferometry (VLBI) observables for geodesy and geodynamics is investigated. A brief review of space VLBI systems from the point of view of potential geodetic application is given. A selected notational convention is used to jointly treat the VLBI observables of the different baseline types within a combined ground/space VLBI network. The basic equations of the space VLBI observables appropriate for covariance analysis are derived. The corresponding equations for the ground-to-ground baseline VLBI observables are also given for comparison. The simplified expressions of the mathematical models for both space VLBI observables (time delay and delay rate) include the ground station coordinates, the satellite orbital elements, the earth rotation parameters, the radio source coordinates, and clock parameters. The observation equations with these parameters were examined to determine which of them are separable or non-separable. Singularity problems arising from the coordinate system definition and from critical configurations are studied. Linear dependencies between partials are analytically derived. The mathematical models for ground-space baseline VLBI observables were tested with simulated data in a series of numerical experiments. Singularity due to datum defect is confirmed.
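
The estimability question above reduces to whether the design matrix of the linearised observation equations has full column rank: a linear dependency between partial-derivative columns makes the corresponding parameters non-separable. A minimal rank check on a toy matrix (not real VLBI partials):

```python
# Column-rank check by Gauss-Jordan elimination, used here to detect a
# deliberate linear dependency between parameter columns. The matrix is a
# toy example, not actual space VLBI partial derivatives.
def matrix_rank(rows, tol=1e-10):
    m = [row[:] for row in rows]
    rank, ncols = 0, len(m[0])
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(m)) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        piv = m[rank][col]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / piv
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Third column = first column + second column: a built-in dependency,
# so the three parameters are not jointly estimable from these rows.
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0],
     [1.0, 1.0, 2.0],
     [2.0, 1.0, 3.0]]
print(matrix_rank(A))  # 2 < 3 columns -> rank defect of 1
```

A rank defect of this kind is exactly what a datum defect produces in the covariance analysis: some combination of station, orbit, and source parameters must be fixed or constrained before the rest become estimable.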

  16. A new real-time guidance strategy for aerodynamic ascent flight

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takayuki; Kawaguchi, Jun'ichiro

    2007-12-01

Reusable launch vehicles are conceived to constitute the future space transportation system. If these vehicles use air-breathing propulsion and lift, taking off horizontally, the optimal steering exhibits completely different behavior from that in conventional rocket flight. In this paper, a new guidance strategy is proposed. The method derives from the optimality condition for steering, and an analysis concludes that the steering function takes a form comprised of linear and logarithmic terms, which include only four parameters. Parameter optimization of this method shows that the resulting terminal horizontal velocity is almost the same as that obtained by direct numerical optimization, which supports the parameterized linear-logarithmic steering law. It is also shown that a simple linear relation exists between the terminal states and the parameters to be corrected; this relation allows the parameters to be determined in real time so as to satisfy the terminal boundary conditions. The paper presents guidance results for practical application cases. The results show that the guidance performs well and satisfies the specified terminal boundary conditions. The strategy guarantees a robust solution in real time without any optimization process and is found to be quite practical.
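
The abstract states only that the steering law combines linear and logarithmic terms with four parameters; the concrete form below (theta = a + b*t + c*log(1 + d*t)) is a hypothetical illustration of such a parameterisation, not the paper's exact expression:

```python
import math

# Hypothetical four-parameter linear-logarithmic steering profile.
# The functional form and all parameter values are invented to illustrate
# how few parameters such a law needs, not taken from the paper.
def steering_angle(t, a, b, c, d):
    return a + b * t + c * math.log(1.0 + d * t)

params = dict(a=0.5, b=-0.002, c=0.1, d=0.05)   # invented values
print(round(steering_angle(0.0, **params), 3))  # at t=0 the log term vanishes -> a
```

With only four free parameters, a linear map from terminal-state errors to parameter corrections (as the abstract describes) is enough to retarget the trajectory in real time without re-running an optimizer.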

  17. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties

    PubMed Central

    Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties in SLM of metals, owing to tungsten's intrinsic properties. The key factors of SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% of theoretical density) were produced by SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which ensures good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those of tungsten produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding opens new potential applications of refractory metals in additive manufacturing. PMID:29707073

  18. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties.

    PubMed

    Tan, Chaolin; Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties in SLM of metals, owing to tungsten's intrinsic properties. The key factors of SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% of theoretical density) were produced by SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which ensures good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those of tungsten produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding opens new potential applications of refractory metals in additive manufacturing.
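
The 98.50% figure quoted above is simply the as-built density expressed relative to tungsten's theoretical density (about 19.3 g/cm3):

```python
# Relative (percent) density of an as-built part versus the theoretical
# density of the bulk material; 19.3 g/cm3 is the commonly quoted
# theoretical density of tungsten.
def relative_density(measured_g_cm3, theoretical_g_cm3=19.3):
    return 100.0 * measured_g_cm3 / theoretical_g_cm3

print(round(relative_density(19.01), 2))  # 19.01 / 19.3 -> ~98.5 %
```

Relative density is the standard densification metric for powder-bed processes, since residual porosity (here about 1.5%) controls both strength and crack initiation.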

  19. Winter risk estimations through infrared cameras and principal component analysis

    NASA Astrophysics Data System (ADS)

    Marchetti, M.; Dumoulin, J.; Ibos, L.

    2012-04-01

    Thermal mapping has been implemented since the late eighties to measure road pavement temperature along with some other atmospheric parameters to establish a winter risk describing the susceptibility of road network to ice occurrence. Measurements are done using a vehicle circulating on the road network in various road weather conditions. When the dew point temperature drops below road surface temperature a risk of ice occurs and therefore a loss of grip risk for circulating vehicles. To avoid too much influence of the sun, and to see the thermal behavior of the pavement enhanced, thermal mapping is usually done before dawn during winter time. That is when the energy accumulated by the road during daytime is mainly dissipated (by radiation, by conduction and by convection) and before the road structure starts a new cycle. This analysis is mainly done when a new road network is built, or when some major pavement changes are made, or when modifications in the road surroundings took place that might affect the thermal heat balance. This helps road managers to install sensors to monitor road status on specific locations identified as dangerous, or simply to install specific road signs. Measurements are anyhow time-consuming. Indeed, a whole road network can hardly be analysed at once, and has to be partitioned in stretches that could be done in the open time window to avoid temperature artefacts due to a rising sun. The LRPC Nancy has been using a thermal mapping vehicle with now two infrared cameras. Road events were collected by the operator to help the analysis of the network thermal response. A conventional radiometer with appropriate performances was used as a reference. The objective of the work was to compare results from the radiometer and the cameras. All the atmospheric parameters measured by the different sensors such as air temperature and relative humidity were used as input parameters for the infrared camera when recording thermal images. 
Road thermal heterogeneities that are usually missed by a conventional radiometer were clearly identified. In the case presented here, the two lanes of the road could be properly distinguished. The approach opens promising perspectives for increasing the measurement rate. Furthermore, to cope with the climatic constraints of winter measurements and to build a dynamic winter risk index, a multivariate data analysis approach was implemented. Principal component analysis was performed and enabled the construction of a dynamic thermal signature, with good agreement between statistical results and field measurements.
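
    As a rough illustration of the multivariate step, the principal component analysis described above can be sketched with a numpy-only SVD. The run count, segment count and data below are hypothetical stand-ins, not the LRPC Nancy measurements.

```python
import numpy as np

def principal_components(profiles, n_components=2):
    """PCA via SVD. Rows are survey runs, columns are road-position samples."""
    X = profiles - profiles.mean(axis=0)              # center each position
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # run coordinates
    components = Vt[:n_components]                    # spatial signatures
    explained = s**2 / np.sum(s**2)                   # variance ratios
    return scores, components, explained

# Hypothetical data: 6 pre-dawn runs over 100 road segments whose common
# spatial pattern is scaled differently from night to night.
rng = np.random.default_rng(0)
pattern = np.sin(np.linspace(0, 4 * np.pi, 100))      # stand-in thermal signature
amps = 1.0 + 0.5 * rng.standard_normal(6)
runs = np.outer(amps, pattern) + 0.1 * rng.standard_normal((6, 100))
scores, comps, evr = principal_components(runs)
```

    With runs dominated by one shared spatial pattern, the first component carries most of the variance and serves as the "dynamic thermal signature" of the stretch.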

  20. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of both time and frequency, and is considered a powerful tool for handling arbitrary non-stationary time series via instantaneous frequency and instantaneous amplitude. It thus offers a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated from the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of traditional Fourier methods, and the resulting parameter estimates minimise the bias caused by the non-stationary characteristics of the MT data.
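
    The instantaneous-spectrum idea can be illustrated with the Hilbert step of the HHT. A full implementation would first decompose the signal into intrinsic mode functions by empirical mode decomposition, which is omitted here; the test tone and sampling rate are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(x, fs):
    """Instantaneous amplitude and frequency from the analytic signal.
    Applied here to a single mono-component signal for illustration."""
    z = hilbert(x)                              # analytic signal
    amplitude = np.abs(z)
    phase = np.unwrap(np.angle(z))
    freq = np.diff(phase) / (2 * np.pi) * fs    # Hz, length N-1
    return amplitude, freq

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)                  # 50 Hz test tone
amp, freq = instantaneous_attributes(x, fs)
```

    Away from the record edges, the instantaneous frequency of the tone stays near 50 Hz and the amplitude near 1, which is what makes such attributes usable for tracking non-stationary content over time.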

  1. Evaluation of bond strength of resin cements using different general-purpose statistical software packages for two-parameter Weibull statistics.

    PubMed

    Roos, Malgorzata; Stawarczyk, Bogna

    2012-07-01

    This study evaluated and compared Weibull parameters of resin bond strength values using six different general-purpose statistical software packages for the two-parameter Weibull distribution. Two hundred human teeth were randomly divided into 4 groups (n=50), prepared and bonded on dentin according to the manufacturers' instructions using the following resin cements: (i) Variolink (VAN, conventional resin cement), (ii) Panavia21 (PAN, conventional resin cement), (iii) RelyX Unicem (RXU, self-adhesive resin cement) and (iv) G-Cem (GCM, self-adhesive resin cement). Subsequently, all specimens were stored in water for 24 h at 37°C. Shear bond strength was measured and the data were analyzed using the Anderson-Darling goodness-of-fit test (MINITAB 16) and two-parameter Weibull statistics with the following statistical software packages: Excel 2011, SPSS 19, MINITAB 16, R 2.12.1, SAS 9.1.3 and STATA 11.2 (p≤0.05). Additionally, the three-parameter Weibull distribution was fitted using MINITAB 16. Two-parameter Weibull estimates calculated with MINITAB and STATA can be compared using an omnibus test and 95% CI. In SAS, only 95% CI were directly obtained from the output. R provided no estimates of 95% CI. In both SAS and R, a global comparison of the characteristic bond strength among groups is provided by means of Weibull regression. Excel and SPSS provided no default information about 95% CI and no significance test for the comparison of Weibull parameters among the groups. In summary, the conventional resin cement VAN showed the highest Weibull modulus and characteristic bond strength. There are discrepancies in the Weibull statistics depending on the software package and the estimation method, and the information content of the default output provided by the packages differs to a very high extent. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
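
    The two-parameter Weibull fit itself is straightforward to reproduce. A minimal sketch with scipy (not one of the six packages evaluated above), using a simulated bond-strength sample rather than the study's data:

```python
import numpy as np
from scipy.stats import weibull_min

def fit_weibull_2p(strengths):
    """Maximum-likelihood fit of the two-parameter Weibull distribution.
    Returns (m, s0): the Weibull modulus (shape) and characteristic
    strength (scale). The location is fixed at zero, as in the
    two-parameter form used for bond-strength data."""
    m, _loc, s0 = weibull_min.fit(strengths, floc=0)
    return m, s0

# Hypothetical bond-strength sample (MPa) drawn from a known distribution.
rng = np.random.default_rng(1)
data = weibull_min.rvs(c=8.0, scale=20.0, size=500, random_state=rng)
m_hat, s0_hat = fit_weibull_2p(data)
```

    For confidence intervals and omnibus group comparisons, one would still need the bootstrap or the package-specific procedures discussed in the abstract.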

  2. Finite element modelling of creep crack growth in 316 stainless and 9Cr-1Mo steels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnaswamy, P.; Brust, F.W.

    1994-09-01

    The failure behavior of steels under sustained and cyclic loads has been addressed. The constitutive behavior of the two steels has been represented by the conventional strain-hardening law and the Murakami-Ohno model for reversed and cyclic loads. These laws have been implemented in the research finite element code FVP, and post-processors for FVP that calculate various path-independent integral fracture parameters have been written. Compact tension C(T) specimens have been tested under sustained and cyclic loads, with both the load-point displacement and crack growth monitored during the tests. FE models with extremely refined meshes for the C(T) specimens were prepared and the experiments simulated numerically. Results from this analysis focus on the differences between the various constitutive models as well as the fracture parameters in characterizing the creep crack growth of the two steels.

  3. IOTA: integration optimization, triage and analysis tool for the processing of XFEL diffraction images.

    PubMed

    Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T

    2016-06-01

    Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
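
    The per-image grid-search idea can be sketched generically. The parameter names and the quality function below are stand-ins for illustration, not IOTA's actual spot-finding interface.

```python
import itertools

def grid_search(param_grid, score_fn):
    """Exhaustive grid search: return the parameter combination with the
    highest score. score_fn stands in for an indexing/integration quality
    metric evaluated on one diffraction image."""
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical spot-finding parameters: minimum spot area and signal threshold.
grid = {"min_spot_area": [1, 2, 3], "signal_threshold": [5, 10, 20]}
# Stand-in quality function peaking at area=2, threshold=10.
quality = lambda p: -(p["min_spot_area"] - 2) ** 2 - ((p["signal_threshold"] - 10) / 5) ** 2
best, score = grid_search(grid, quality)
```

    In practice the search would be run independently for each image, since the optimal parameters vary from shot to shot.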

  4. Improving the accuracy of burn-surface estimation.

    PubMed

    Nichter, L S; Williams, J; Bryant, C A; Edlich, R F

    1985-09-01

    A user-friendly computer-assisted method of calculating total body surface area burned (TBSAB) has been developed. This method is more accurate, faster, and subject to less error than conventional methods. For comparison, the ability of 30 physicians to estimate TBSAB was tested. Parameters studied included the effect of prior burn care experience, the influence of burn size, the ability to accurately sketch the size of burns on standard burn charts, and the ability to estimate percent TBSAB from the sketches. Although physicians at all levels of training could accurately sketch TBSAB, significant burn-size over-estimation (p < 0.01) and large, potentially consequential interrater variability were noted. A computerized system offers many direct benefits: it requires minimal user experience and supports wound-trend analysis, permanent record storage, calculation of fluid and caloric requirements and hemodynamic parameters, and meaningful comparison of different treatment protocols.

  5. Winner-take-all in a phase oscillator system with adaptation.

    PubMed

    Burylko, Oleksandr; Kazanovich, Yakov; Borisyuk, Roman

    2018-01-11

    We consider a system of generalized phase oscillators with a central element and radial connections. In contrast to conventional phase oscillators of the Kuramoto type, the dynamic variables in our system include not only the phase of each oscillator but also the natural frequency of the central oscillator, and the connection strengths from the peripheral oscillators to the central oscillator. With appropriate parameter values the system demonstrates winner-take-all behavior in terms of the competition between peripheral oscillators for the synchronization with the central oscillator. Conditions for the winner-take-all regime are derived for stationary and non-stationary types of system dynamics. Bifurcation analysis of the transition from stationary to non-stationary winner-take-all dynamics is presented. A new bifurcation type called a Saddle Node on Invariant Torus (SNIT) bifurcation was observed and is described in detail. Computer simulations of the system allow an optimal choice of parameters for winner-take-all implementation.
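
    The ingredients described above (peripheral phase oscillators plus a central oscillator whose natural frequency and incoming coupling strengths are themselves dynamic variables) can be sketched with a simple Euler integration. The equations below are a generic stand-in assumed for illustration, not the paper's exact model.

```python
import numpy as np

def simulate_wta(omega_p, omega_c=1.0, a=0.5, eps=0.01, alpha=0.1,
                 dt=0.01, steps=20000, seed=2):
    """Star network of phase oscillators: the central phase theta_c, its
    natural frequency omega_c, and the couplings k from peripherals to the
    centre all evolve. Couplings strengthen for in-phase inputs, so one
    peripheral can win the competition for synchronization."""
    rng = np.random.default_rng(seed)
    n = len(omega_p)
    theta_c = 0.0
    theta_p = rng.uniform(0, 2 * np.pi, n)
    k = np.full(n, 0.1)                        # couplings toward the centre
    for _ in range(steps):
        d = theta_p - theta_c
        theta_c += dt * (omega_c + np.sum(k * np.sin(d)))
        theta_p += dt * (omega_p - a * np.sin(d))
        omega_c += dt * eps * np.sum(k * np.sin(d))   # frequency adaptation
        k += dt * alpha * (np.cos(d) - k)             # Hebbian-like update
    return k

# Peripheral oscillator 0 shares the central natural frequency and should win.
k_final = simulate_wta(np.array([1.0, 2.0, 3.0]))
```

    The peripheral whose natural frequency matches the (adapting) central frequency phase-locks, its coupling grows toward 1, and the detuned oscillators' couplings stay small: a winner-take-all outcome.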

  6. Spectroscopic analysis and efficient diode-pumped 2.0 μm emission in Ho3+/Tm3+ codoped fluoride glass

    NASA Astrophysics Data System (ADS)

    Tian, Ying; Jing, Xufeng; Xu, Shiqing

    2013-11-01

    Intense 2.0 μm emission has been obtained in Ho3+/Tm3+ codoped ZBLAY glass pumped by a common laser diode. Three intensity parameters and radiative properties have been determined from the absorption spectrum based on the Judd-Ofelt theory. The 2 μm emission characteristics and the energy transfer mechanism upon excitation by a conventional 800 nm laser diode are investigated. Efficient Tm3+ to Ho3+ energy transfer processes have been observed in the present glass and investigated using steady-state and time-resolved optical spectroscopy measurements. The energy transfer microscopic parameters have been calculated with the Inokuti-Hirayama and Förster-Dexter models. The high quantum efficiency of 2 μm emission (80.35%) and the large energy transfer coefficient from Tm3+ to Ho3+ indicate that this Ho3+/Tm3+ codoped ZBLAY glass is a promising material for 2.0 μm lasers.

  7. Conditional parametric models for storm sewer runoff

    NASA Astrophysics Data System (ADS)

    Jonsdottir, H.; Nielsen, H. Aa; Madsen, H.; Eliasson, J.; Palsson, O. P.; Nielsen, M. K.

    2007-05-01

    The method of conditional parametric modeling is introduced for flow prediction in a sewage system. It is a well-known fact that in hydrological modeling the response (runoff) to input (precipitation) varies depending on soil moisture and several other factors; consequently, nonlinear input-output models are needed. The model formulation described in this paper is similar to traditional linear models like the finite impulse response (FIR) and autoregressive exogenous (ARX) models, except that the parameters vary as a function of some external variables. The parameter variation is modeled by local lines, using kernels for local linear regression. As such, the method might be referred to as a nearest neighbor method. The results achieved in this study were compared to results from the conventional linear methods, FIR and ARX. The increase in the coefficient of determination is substantial. Furthermore, the new approach conserves the mass balance better. Hence this new approach looks promising for various hydrological models and analyses.
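
    The core idea (FIR-type coefficients that vary with an external variable, estimated by kernel-weighted least squares) can be sketched as follows. For brevity this uses a local-constant fit rather than the paper's local lines, and all variable names and data are hypothetical.

```python
import numpy as np

def conditional_fir(u, x, y, x0, bandwidth):
    """Kernel-weighted least squares: estimate FIR-type coefficients that
    vary with an external variable x (e.g. a soil-moisture proxy).
    u: (n, p) regressors (lagged precipitation), y: (n,) runoff,
    x: (n,) external variable, x0: point where coefficients are wanted."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # Gaussian kernel weights
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * u, sw * y, rcond=None)
    return beta

# Synthetic check: the single FIR coefficient grows linearly with x.
rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(0, 1, n)
u = rng.standard_normal((n, 1))
y = (1.0 + x) * u[:, 0] + 0.05 * rng.standard_normal(n)
b_low = conditional_fir(u, x, y, x0=0.1, bandwidth=0.1)
b_high = conditional_fir(u, x, y, x0=0.9, bandwidth=0.1)
```

    Evaluating the local fit at a query point weighted by its nearest observations is what makes this a "nearest neighbor"-flavored method.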

  8. Recommended procedures for measuring aircraft noise and associated parameters

    NASA Technical Reports Server (NTRS)

    Marsh, A. H.

    1977-01-01

    Procedures are recommended for obtaining experimental values of aircraft flyover noise levels (and associated parameters). Specific recommendations are made for test criteria, instrumentation performance requirements, data-acquisition procedures, and test operations. The recommendations are based on state-of-the-art measurement capabilities available in 1976 and are consistent with the measurement objectives of the NASA Aircraft Noise Prediction Program. The recommendations are applicable to measurements of the noise produced by an airplane flying subsonically over (or past) microphones located near the surface of the ground. Aircraft types covered by the recommendations are fixed-wing airplanes powered by turbojet or turbofan engines and using conventional aerodynamic means for takeoff and landing. Various assumptions with respect to subsequent data processing and analysis were made (and are described) and the recommended measurement procedures are compatible with the assumptions. Some areas where additional research is needed relative to aircraft flyover noise measurement techniques are also discussed.

  9. Seasonal station variations in the Vienna VLBI terrestrial reference frame VieTRF16a

    NASA Astrophysics Data System (ADS)

    Krásná, Hana; Böhm, Johannes; Madzak, Matthias

    2017-04-01

    The special analysis center of the International Very Long Baseline Interferometry (VLBI) Service for Geodesy and Astrometry (IVS) at TU Wien (VIE) routinely analyses the VLBI measurements and estimates its own Terrestrial Reference Frame (TRF) solutions. We present our latest solution VieTRF16a (1979.0 - 2016.5) computed with the software VieVS version 3.0. Several recent updates of the software have been applied, e.g., the estimation of annual and semi-annual station variations as global parameters. The VieTRF16a is determined in the form of the conventional model (station position and its linear velocity) simultaneously with the celestial reference frame and Earth orientation parameters. In this work, we concentrate on the seasonal station variations in the residual time series and compare our TRF with the three combined TRF solutions ITRF2014, DTRF2014 and JTRF2014.
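
    Estimating annual and semi-annual station variations amounts to a harmonic least-squares fit to a position time series. The sketch below uses a common parameterization (mean, linear trend, and two harmonics) on synthetic data; it does not reproduce the VieVS internals.

```python
import numpy as np

def fit_seasonal(t_years, pos):
    """Least-squares estimate of mean, trend, and annual + semi-annual
    harmonics from one component of a station position time series."""
    w = 2 * np.pi
    A = np.column_stack([
        np.ones_like(t_years), t_years,
        np.cos(w * t_years), np.sin(w * t_years),          # annual
        np.cos(2 * w * t_years), np.sin(2 * w * t_years),  # semi-annual
    ])
    coef, *_ = np.linalg.lstsq(A, pos, rcond=None)
    annual_amp = np.hypot(coef[2], coef[3])
    semiannual_amp = np.hypot(coef[4], coef[5])
    return coef, annual_amp, semiannual_amp

# Synthetic "up" component (mm) over the VieTRF16a time span.
rng = np.random.default_rng(4)
t = np.linspace(1979.0, 2016.5, 1500)
up = (2.0 + 0.3 * (t - t[0]) + 5.0 * np.sin(2 * np.pi * t + 0.4)
      + 1.0 * np.cos(4 * np.pi * t) + 0.5 * rng.standard_normal(t.size))
coef, a1, a2 = fit_seasonal(t, up)
```

    In a global VLBI solution these harmonic coefficients would be estimated per station alongside positions, velocities and Earth orientation parameters rather than in a separate post-fit.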

  10. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
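
    The EnKF analysis step underlying the method can be sketched in a few lines. This is a generic stochastic (perturbed-observation) update on a toy scalar parameter, not the SEOD design loop itself; the prior, observation and sizes are hypothetical.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_ens, n_state); obs: (n_obs,); obs_operator: (n_obs, n_state)."""
    H = obs_operator
    X = ensemble
    A = X - X.mean(axis=0)                 # state anomalies
    HX = X @ H.T
    HA = HX - HX.mean(axis=0)              # predicted-observation anomalies
    n_ens = X.shape[0]
    P_hh = HA.T @ HA / (n_ens - 1) + obs_var * np.eye(len(obs))
    P_xh = A.T @ HA / (n_ens - 1)
    K = P_xh @ np.linalg.inv(P_hh)         # Kalman gain
    perturbed = obs + np.sqrt(obs_var) * rng.standard_normal((n_ens, len(obs)))
    return X + (perturbed - HX) @ K.T

rng = np.random.default_rng(5)
prior = rng.normal(0.0, 2.0, size=(200, 1))   # scalar parameter, prior N(0, 4)
H = np.array([[1.0]])
posterior = enkf_update(prior, np.array([3.0]), H, obs_var=1.0, rng=rng)
```

    For this linear-Gaussian toy case the exact posterior mean is 3·4/(4+1) = 2.4, so the ensemble update should pull the members toward that value while shrinking their spread; the SEOD method then asks which observation, by an information metric, would shrink it most.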

  11. Modulation indices for volumetric modulated arc therapy.

    PubMed

    Park, Jong Min; Park, So-Yeon; Kim, Hyoungnyoun; Kim, Jin Ho; Carlson, Joel; Ye, Sung-Joon

    2014-12-07

    The aim of this study is to present a modulation index (MI) for volumetric modulated arc therapy (VMAT) based on comprehensive speed and acceleration analysis of modulating parameters such as multi-leaf collimator (MLC) movements, gantry rotation and dose-rate. The performance of the presented MI (MIt) was evaluated with correlation analyses against the pre-treatment quality assurance (QA) results, the differences in modulating parameters between VMAT plans and dynamic log files, and the differences in dose-volumetric parameters between VMAT plans and plans reconstructed from dynamic log files. For comparison, the same correlation analyses were performed for the previously suggested modulation complexity score (MCSv), leaf travel modulation complexity score (LTMCS) and the MI of Li and Xing (MI Li&Xing). P values were obtained from two-tailed unpaired tests. The Spearman's rho (r(s)) values of MIt, MCSv, LTMCS and MI Li&Xing against the local gamma passing rate with 2%/2 mm criterion were -0.658 (p < 0.001), 0.186 (p = 0.251), 0.312 (p = 0.05) and -0.455 (p = 0.003), respectively. The r(s) values against the modulating-parameter (MLC position) differences were 0.917, -0.635, -0.857 and 0.795, respectively (p < 0.001). For dose-volumetric parameters, MIt showed stronger statistically significant correlations than the conventional MIs. The MIt showed good performance for the evaluation of the modulation degree of VMAT plans.

  12. Assessment of municipal solid waste settlement models based on field-scale data analysis.

    PubMed

    Bareither, Christopher A; Kwak, Seungbok

    2015-08-01

    An evaluation of municipal solid waste (MSW) settlement model performance and applicability was conducted based on analysis of two field-scale datasets: (1) Yolo and (2) Deer Track Bioreactor Experiment (DTBE). Twelve MSW settlement models were considered that cover a range of compression behaviors (i.e., immediate compression, mechanical creep, and biocompression) and a range of total (2-22) and optimized (2-7) model parameters. A multi-layer immediate settlement analysis developed for Yolo provides a framework to estimate initial waste thickness and waste thickness at the end of immediate compression. Model application to the Yolo test cells (conventional and bioreactor landfills) via least squares optimization yielded high coefficients of determination for all settlement models (R(2)>0.83). However, empirical models (i.e., power creep, logarithmic, and hyperbolic models) are not recommended for use in MSW settlement modeling due to potentially non-representative long-term MSW behavior, limited physical significance of model parameters, and the settlement data required for model parameterization. Settlement models that combine mechanical creep and biocompression into a single mathematical function constrain time-dependent settlement to a single process with finite magnitude, which limits model applicability. Overall, all evaluated models that couple multiple compression processes (immediate, creep, and biocompression) provided accurate representations of both the Yolo and DTBE datasets. The model presented in Gourc et al. (2010) included the lowest number of total and optimized model parameters and yielded high statistical performance for all model applications (R(2)⩾0.97). Copyright © 2015 Elsevier Ltd. All rights reserved.
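
    The least-squares parameterization step is the same for all twelve models. As an illustration, the sketch below fits the empirical power creep form (one of the models the study actually advises against for MSW) to a hypothetical settlement record; the times, strains and parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_creep(t, m, n):
    """Empirical power creep law: compressive strain (settlement divided by
    initial thickness) as a power of elapsed time in days."""
    return m * np.power(t, n)

# Hypothetical settlement record: elapsed times (days) and measured strain,
# generated from m=0.02, n=0.35 with small perturbations.
t = np.array([10, 30, 90, 180, 365, 730, 1460], dtype=float)
strain = 0.02 * t**0.35 + np.array([1, -1, 2, -2, 1, -1, 2]) * 1e-4
(m_hat, n_hat), _ = curve_fit(power_creep, t, strain, p0=(0.01, 0.5))
```

    A high R² from such a fit is exactly the study's caution: empirical forms can track the record well while saying little about the underlying creep and biocompression processes.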

  13. Photoacoustic-fluorescence in vitro flow cytometry for quantification of absorption, scattering and fluorescence properties of the cells

    NASA Astrophysics Data System (ADS)

    Nedosekin, D. A.; Sarimollaoglu, M.; Foster, S.; Galanzha, E. I.; Zharov, V. P.

    2013-03-01

    Fluorescence flow cytometry is a well-established analytical tool that provides quantification of multiple biological parameters of cells at the molecular level, including their functional states, morphology, composition, proliferation, and protein expression. However, only the fluorescence and scattering parameters of the cells or labels are available for detection: cell pigmentation and the presence of non-fluorescent dyes or nanoparticles cannot be reliably quantified. Herewith, we present a novel photoacoustic (PA) flow cytometry design for simple integration of absorbance measurements into the schematics of conventional in vitro flow cytometers. The integrated system allows simultaneous measurement of light absorbance, scattering and multicolor fluorescence from single cells in flow at rates up to 2 m/s. We compared various combinations of excitation laser sources for multicolor detection, including simultaneous excitation of PA and fluorescence using a single 500 kHz pulsed nanosecond laser. The multichannel detection scheme allows simultaneous detection of up to 8 labels, including 4 fluorescent tags and 4 PA colors. The in vitro PA-fluorescence flow cytometer was used for studies of nanoparticle uptake and for the analysis of cell line pigmentation, including genetically encoded melanin expression in a breast cancer cell line. We demonstrate that this system can be used for direct nanotoxicity studies with simultaneous quantification of nanoparticle content and assessment of cell viability using conventional fluorescent apoptosis assays.

  14. Comparison of four specific dynamic office chairs with a conventional office chair: impact upon muscle activation, physical activity and posture.

    PubMed

    Ellegast, Rolf P; Kraft, Kathrin; Groenesteijn, Liesbeth; Krause, Frank; Berger, Helmut; Vink, Peter

    2012-03-01

    Prolonged and static sitting postures provoke physical inactivity at VDU workplaces and are therefore discussed as risk factors for the musculoskeletal system. Manufacturers have designed specific dynamic office chairs featuring structural elements which promote dynamic sitting and therefore physical activity. The aim of the present study was to evaluate the effects of four specific dynamic chairs on erector spinae and trapezius EMG, postures/joint angles and physical activity intensity (PAI) compared to those of a conventional standard office chair. All chairs were fitted with sensors for measurement of the chair parameters (backrest inclination, forward and sideward seat pan inclination), and tested in the laboratory by 10 subjects performing 7 standardized office tasks and by another 12 subjects in the field during their normal office work. Muscle activation revealed no significant differences between the specific dynamic chairs and the reference chair. Analysis of postures/joint angles and PAI revealed only a few differences between the chairs, whereas the tasks performed strongly affected the measured muscle activation, postures and kinematics. The characteristic dynamic elements of each specific chair yielded significant differences in the measured chair parameters, but these characteristics did not appear to affect the sitting dynamics of the subjects performing their office tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. Braze Process Optimization Involving Conventional Metal/Ceramic Brazing with 50Au-50Cu Alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MALIZIA JR.,LOUIS A.; MEREDITH,KEITH W.; APPEL,DANIEL B.

    1999-12-15

    Numerous process variables can influence the robustness of conventional metal/ceramic brazing processes. Experience with brazing of hermetic vacuum components has identified the following parameters as influencing the outcome of hydrogen furnace brazed Kovar{trademark} to metallized alumina braze joints: (a) Mo-Mn metallization thickness, sinter fire temperature and porosity, (b) Ni plate purity, thickness, and sinter firing conditions, (c) peak process temperature and time above liquidus, and (d) braze alloy washer thickness. ASTM F19 tensile buttons are being used to investigate the above parameters. The F19 geometry permits determination of both joint hermeticity and tensile strength. This presentation will focus on important lessons learned from the tensile button study: (A) The position of the Kovar{trademark} interlayer can influence the joint tensile strength achieved; namely, off-center interlayers can lead to residual stress development in the ceramic and degrade tensile strength values. Finite element analysis has been used to demonstrate the expected magnitude of strength degradation as a function of misalignment. (B) Time above liquidus (TAL) and peak temperature can influence the strength and alloying level of the resulting braze joint. Excessive TAL or peak temperatures can lead to overbraze conditions in which all of the Ni plate is dissolved. (C) Metallize sinter fire processes can influence the morphology and strength obtained from the braze joints.

  16. Three-dimensional cathodoluminescence characterization of a semipolar GaInN based LED sample

    NASA Astrophysics Data System (ADS)

    Hocker, Matthias; Maier, Pascal; Tischer, Ingo; Meisch, Tobias; Caliebe, Marian; Scholz, Ferdinand; Mundszinger, Manuel; Kaiser, Ute; Thonke, Klaus

    2017-02-01

    A semipolar GaInN based light-emitting diode (LED) sample is investigated by three-dimensionally resolved cathodoluminescence (CL) mapping. As in conventional depth-resolved CL spectroscopy (DRCLS), the spatial resolution perpendicular to the sample surface is obtained by calibrating the CL data with Monte Carlo simulations (MCSs) of the primary electron beam scattering. In addition to conventional MCSs, we take into account semiconductor-specific processes such as exciton diffusion and the influence of the band gap energy. With this method, the structure of the LED sample under investigation can be analyzed without additional sample preparation such as cleaving of cross sections. The measurement yields the thickness of the p-type GaN layer, the vertical position of the quantum wells, and a defect analysis of the underlying n-type GaN, including the determination of the free charge carrier density. The layer arrangement reconstructed from the DRCLS data is in good agreement with the nominal parameters defined by the growth conditions.

  17. Relation between ultrasonic properties, rheology and baking quality for bread doughs of widely differing formulation.

    PubMed

    Peressini, Donatella; Braunstein, Dobrila; Page, John H; Strybulevych, Anatoliy; Lagazio, Corrado; Scanlon, Martin G

    2017-06-01

    The objective was to evaluate whether an ultrasonic reflectance technique has predictive capacity for breadmaking performance of doughs made under a wide range of formulation conditions. Two flours of contrasting dough strength augmented with different levels of ingredients (inulin, oil, emulsifier or salt) were used to produce different bread doughs with a wide range of properties. Breadmaking performance was evaluated by conventional large-strain rheological tests on the dough and by assessment of loaf quality. The ultrasound tests were performed with a broadband reflectance technique in the frequency range of 0.3-6 MHz. Principal component analysis showed that ultrasonic attenuation and phase velocity at frequencies between 0.3 and 3 MHz are good predictors for rheological and bread scoring characteristics. Ultrasonic parameters had predictive capacity for breadmaking performance for a wide range of dough formulations. Lower frequency attenuation coefficients correlated well with conventional quality indices of both the dough and the bread. © 2016 Society of Chemical Industry.

  18. Finite Element Analysis of a Copper Single Crystal Shape Memory Alloy-Based Endodontic Instruments

    NASA Astrophysics Data System (ADS)

    Vincent, Marin; Thiebaud, Frédéric; Bel Haj Khalifa, Saifeddine; Engels-Deutsch, Marc; Ben Zineb, Tarak

    2015-10-01

    The aim of the present paper is the development of endodontic Cu-based single crystal Shape Memory Alloy (SMA) instruments in order to eliminate the antimicrobial and mechanical deficiencies observed with conventional Nickel-Titanium (NiTi) SMA files. A thermomechanical constitutive law, already developed and implemented in a finite element code by our research group, is adopted for the simulation of the single crystal SMA behavior. The corresponding material parameters were identified from experimental results for a tensile test at room temperature. A computer-aided design geometry has been produced and used for a finite element structural analysis of the endodontic Cu-based single crystal SMA files. The files are meshed with tetrahedral continuum elements to improve the computation time and the accuracy of results. The geometric parameters tested in this study are the length of the active blade, the rod length, the pitch, the taper, the tip diameter, and the rod diameter. For each set of adopted parameters, a finite element model is built and tested in combined bending-torsion loading in accordance with the ISO 3630-1 norm. The numerical analysis based on the finite element procedure allowed proposing an optimal geometry for Cu-based single crystal SMA endodontic files. The same analysis was carried out for classical NiTi SMA files and a comparison was made between the two kinds of files. It showed that Cu-based single crystal SMA files are less stiff than the NiTi files. The Cu-based endodontic files could thus be used to improve root canal treatments. However, the finite element analysis brought out the need for further investigation based on experiments.

  19. Ultrasound texture analysis: Association with lymph node metastasis of papillary thyroid microcarcinoma.

    PubMed

    Kim, Soo-Yeon; Lee, Eunjung; Nam, Se Jin; Kim, Eun-Kyung; Moon, Hee Jung; Yoon, Jung Hyun; Han, Kyung Hwa; Kwak, Jin Young

    2017-01-01

    This retrospective study aimed to evaluate whether ultrasound texture analysis is useful to predict lymph node metastasis in patients with papillary thyroid microcarcinoma (PTMC). This study was approved by the Institutional Review Board, and the need to obtain informed consent was waived. Between May and July 2013, 361 patients (mean age, 43.8 ± 11.3 years; range, 16-72 years) who underwent staging ultrasound (US) and subsequent thyroidectomy for conventional PTMC ≤ 10 mm were included. Each PTMC was manually segmented and its histogram parameters (Mean, Standard deviation, Skewness, Kurtosis, and Entropy) were extracted with Matlab software. The mean values of the histogram parameters and the clinical and US features were compared according to lymph node metastasis using the independent t-test and Chi-square test. Multivariate logistic regression analysis was performed to identify the independent factors associated with lymph node metastasis. Tumors with lymph node metastasis (n = 117) had significantly higher entropy compared to those without lymph node metastasis (n = 244) (mean ± standard deviation, 6.268±0.407 vs. 6.171±0.405; P = .035). No additional histogram parameters showed differences in mean values according to lymph node metastasis. Entropy was not independently associated with lymph node metastasis on multivariate logistic regression analysis (Odds ratio, 0.977 [95% confidence interval (CI), 0.482-1.980]; P = .949). Younger age (Odds ratio, 0.962 [95% CI, 0.940-0.984]; P = .001) and lymph node metastasis on US (Odds ratio, 7.325 [95% CI, 3.573-15.020]; P < .001) were independently associated with lymph node metastasis. Texture analysis was not useful in predicting lymph node metastasis in patients with PTMC.
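
    Of the first-order texture parameters listed above, histogram entropy is the least self-explanatory; a minimal sketch of its computation on a grayscale region follows. The pixel data, region sizes and 256-level binning are illustrative assumptions (the study's Matlab segmentation and binning are not reproduced).

```python
import numpy as np

def histogram_entropy(pixels, bins=256):
    """Shannon entropy (bits) of a grayscale histogram: a homogeneous
    region scores 0, a maximally heterogeneous one approaches log2(bins)."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0·log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

flat = np.full(1000, 128.0)           # homogeneous stand-in region
rng = np.random.default_rng(6)
speckled = rng.uniform(0, 256, 1000)  # heterogeneous stand-in region
```

    Higher entropy thus encodes greater gray-level heterogeneity, which is why it was the one parameter differing between metastatic and non-metastatic tumors on univariate analysis.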

  20. Cluster analysis of quantitative parametric maps from DCE-MRI: application in evaluating heterogeneity of tumor response to antiangiogenic treatment.

    PubMed

    Longo, Dario Livio; Dastrù, Walter; Consolino, Lorena; Espak, Miklos; Arigoni, Maddalena; Cavallo, Federica; Aime, Silvio

    2015-07-01

    The objective of this study was to compare a clustering approach with conventional analysis methods for assessing changes in pharmacokinetic parameters obtained from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) during antiangiogenic treatment in a breast cancer model. BALB/c mice bearing established transplantable her2+ tumors were treated with a DNA-based antiangiogenic vaccine or with an empty plasmid (untreated group). DCE-MRI was carried out at 1 T by administering a dose of 0.05 mmol/kg of Gadocoletic acid trisodium salt, a Gd-based blood pool contrast agent (CA). Changes in the pharmacokinetic estimates (K(trans) and vp) over a nine-day interval were compared between treated and untreated groups in a voxel-by-voxel analysis. The tumor response to therapy was assessed by a clustering approach and compared with conventional summary statistics, with sub-region analysis and with histogram analysis. Both the K(trans) and vp estimates following blood-pool CA injection showed marked and spatially heterogeneous changes with antiangiogenic treatment. Averaged values for the whole tumor region, as well as those from the rim/core sub-region analysis, were unable to detect the antiangiogenic response. Histogram analysis resulted in significant changes only in the vp estimates (p<0.05). The proposed clustering approach depicted marked changes in both the K(trans) and vp estimates, with significant spatial heterogeneity in the vp maps in response to treatment (p<0.05), provided that the DCE-MRI data are properly clustered into three or four sub-regions. This study demonstrated the value of cluster analysis applied to pharmacokinetic DCE-MRI parametric maps for assessing tumor response to antiangiogenic therapy. Copyright © 2015 Elsevier Inc. All rights reserved.
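
    Clustering voxelwise parametric maps can be sketched with a plain k-means on (K(trans), vp) feature vectors; the abstract does not specify the clustering algorithm used, so this is a generic stand-in on synthetic voxel populations with invented parameter values.

```python
import numpy as np

def kmeans(features, k=2, iters=20):
    """Plain k-means on voxelwise feature vectors (columns such as Ktrans
    and vp). Centers are initialized from points spread along the first
    feature for determinism."""
    order = np.argsort(features[:, 0])
    idx = np.linspace(0, len(order) - 1, k).astype(int)
    centers = features[order[idx]].copy()
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# Two synthetic voxel populations: low- and high-perfusion (Ktrans, vp).
rng = np.random.default_rng(8)
low = rng.normal([0.05, 0.02], 0.005, size=(200, 2))
high = rng.normal([0.25, 0.10], 0.005, size=(200, 2))
labels, centers = kmeans(np.vstack([low, high]), k=2)
```

    Comparing cluster occupancy and per-cluster parameter changes between timepoints, rather than whole-tumor averages, is what lets the spatially heterogeneous response survive the analysis.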

  1. A comparative analysis of conventional cytopreparatory and liquid based cytological techniques (Sure Path) in evaluation of serous effusion fluids.

    PubMed

    Dadhich, Hrishikesh; Toi, Pampa Ch; Siddaraju, Neelaiah; Sevvanthi, Kalidas

    2016-11-01

    Clinically, detection of malignant cells in serous body fluids is critical, as their presence implies upstaging of the disease. Cytology of body cavity fluids serves as an important tool when other diagnostic tests cannot be performed. In most laboratories, effusion fluid samples are currently analysed chiefly by the conventional cytopreparatory (CCP) technique. Although there are several studies comparing liquid-based cytology (LBC) with the CCP technique in the field of cervicovaginal cytology, the literature on such comparisons with respect to serous body fluid examination is sparse. One hundred samples of serous body fluids were processed by both CCP and LBC techniques. Slides prepared by these techniques were studied using six parameters. A comparative analysis of the advantages and disadvantages of the techniques in the detection of malignant cells was carried out with appropriate statistical tests. The samples comprised 52 pleural, 44 peritoneal and four pericardial fluids. No statistically significant difference was noted with respect to cellularity (P = 0.22), cell distribution (P = 0.39) and diagnosis of malignancy (P = 0.20). As for the remaining parameters, LBC provided a statistically significantly clearer smear background (P < 0.0001) and shorter screening time (P < 0.0001), while the CCP technique provided significantly better staining quality (P = 0.01) and sharper cytomorphologic features (P = 0.05). Although a reduced screening time and clearer smear background are the two major advantages of LBC, the CCP technique provides better staining quality and sharper cytomorphologic features, which are more critical from the point of view of cytologic interpretation. Diagn. Cytopathol. 2016;44:874-879. © 2016 Wiley Periodicals, Inc.

  2. Influence of denture adhesives on occlusion and disocclusion times.

    PubMed

    Abdelnabi, Mohamed Hussein; Swelem, Amal Ali; Al-Dharrab, Ayman A

    2016-03-01

    The effectiveness of adhesives in enhancing several functional aspects of complete denture performance has been well established. The direct influence of adhesives on occlusal contact simultaneity has not yet been investigated. The purpose of this crossover clinical trial was to evaluate quantitatively the influence of adhesives on occlusal balance by recording timed occlusal contacts, namely occlusion time (OT) and disocclusion time during right (DT-right) and left (DT-left) excursions, using computerized occlusal analysis. Assessments were carried out while participants (n=49) wore their dentures first without and then with adhesives. Computerized occlusal analysis using the T-Scan III system was conducted to perform baseline computer-guided occlusal adjustment for conventionally fabricated dentures. Retention and stability assessment using the modified Kapur index and recording of OT, DT-right, and DT-left values using the T-Scan III were subsequently carried out for all dentures, first without adhesives and then after application of adhesive. All T-Scan procedures were carried out by the same clinician. The Wilcoxon signed-rank test was used to analyze the Kapur index scores and occlusal parameters (α=.05). Stability and retention of conventional dentures ranged initially from good to very good. However, adhesive application resulted in significant improvement (P<.001) in stability and retention and a significant decrease in duration of all occlusal parameters (OT [P=.003], DT-right [P=.003], and DT-left [P=.008]). Adhesives significantly decreased OT and DT durations in initially well-fitting complete dentures with fairly well balanced occlusion, and further enhanced denture stability and occlusal contact simultaneity. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  3. Validation of a New Small-Volume Sodium Citrate Collection Tube for Coagulation Testing in Critically Ill Patients with Coagulopathy.

    PubMed

    Adam, Elisabeth H; Zacharowski, Kai; Hintereder, Gudrun; Raimann, Florian; Meybohm, Patrick

    2018-06-01

    Blood loss due to phlebotomy leads to hospital-acquired anemia and more frequent blood transfusions that may be associated with increased risk of morbidity and mortality in critically ill patients. Multiple blood conservation strategies have been proposed in the context of patient blood management to minimize blood loss. Here, we evaluated a new small-volume sodium citrate collection tube for coagulation testing in critically ill patients. In 46 critically ill adult patients admitted to an interdisciplinary intensive care unit, we prospectively compared small-volume (1.8 mL) sodium citrate tubes with conventional (3 mL) sodium citrate tubes. The main inclusion criterion was proven coagulopathy (Quick < 60% and/or aPTT > 40 seconds) due to anticoagulation therapy or perioperative coagulopathy. In total, 92 coagulation analyses were obtained. Linear correlation analysis detected a positive relationship for 7 coagulation parameters (Prothrombin Time, r = 0.987; INR, r = 0.985; activated Partial Thromboplastin Time, r = 0.967; Thrombin Clotting Time, r = 0.969; Fibrinogen, r = 0.986; Antithrombin, r = 0.988; D-Dimer, r = 0.969). Bland-Altman analyses revealed an absolute mean of differences of almost zero. Ninety-five percent of the data were within two standard deviations of the mean difference, suggesting interchangeability. As systematic deviations between measured parameters of the two tubes were very unlikely, test results of small-volume (1.8 mL) sodium citrate tubes were equal to those of conventional (3 mL) sodium citrate tubes and can be considered interchangeable. Small-volume sodium citrate tubes reduced unnecessary diagnostic-related blood loss by about 40% and, therefore, should be the new standard of care for routine coagulation analysis in critically ill patients.
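
    The Bland-Altman comparison described above (bias and 95% limits of agreement for paired measurements) can be sketched in a few lines; the paired aPTT values below are hypothetical, not the study's data:

```python
import statistics

def bland_altman(a, b):
    """Return the mean difference (bias) and 95% limits of agreement
    for paired measurements of the same quantity by two methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired aPTT results (seconds): conventional 3 mL vs small-volume 1.8 mL tubes
conventional = [42.1, 55.3, 47.8, 61.0, 39.5, 44.2, 58.7, 50.1]
small_volume = [41.8, 55.9, 47.2, 61.4, 39.9, 43.8, 58.2, 50.5]

bias, (loa_low, loa_high) = bland_altman(conventional, small_volume)
```

    A bias near zero with narrow limits of agreement is what the abstract reports as supporting interchangeability of the two tube types.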

  4. Evaluation of Left Ventricular Diastolic Dysfunction with Early Systolic Dysfunction Using Two-Dimensional Speckle Tracking Echocardiography in Canine Heart Failure Model.

    PubMed

    Wu, Wei-Chun; Ma, Hong; Xie, Rong-Ai; Gao, Li-Jian; Tang, Yue; Wang, Hao

    2016-04-01

    This study evaluated the role of two-dimensional speckle tracking echocardiography (2DSTE) for predicting left ventricular (LV) diastolic dysfunction in pacing-induced canine heart failure. Pacing systems were implanted in 8 adult mongrel dogs, and continuous rapid right ventricular pacing (RVP, 240 beats/min) was maintained for 2 weeks. The measurements obtained from 2DSTE included global strain rate during early diastole (SRe) and during late diastole (SRa) in the longitudinal (L-SRe, L-SRa), circumferential (C-SRe, C-SRa), and radial (R-SRe, R-SRa) directions. Changes in heart morphology were observed by light microscopy and transmission electron microscopy at 2 weeks. The onset of LV diastolic dysfunction with early systolic dysfunction occurred 3 days after RVP initiation. Most of the strain rate imaging indices were altered at 1 or 3 days after RVP onset and continued to worsen until heart failure developed. Light and transmission electron microscopy showed myocardial vacuolar degeneration and mitochondrial swelling in the left ventricle at 2 weeks after RVP onset. Pearson's correlation analysis revealed that parameters of conventional echocardiography and 2DSTE showed moderate correlation with LV pressure parameters, including E/Esep' (r = 0.58, P < 0.01), L-SRe (r = -0.58, P < 0.01), E/L-SRe (r = 0.65, P < 0.01), and R-SRe (r = 0.53, P < 0.01). ROC curve analysis showed that these indices of conventional echocardiography and strain rate imaging could effectively predict LV diastolic dysfunction (area under the curve: E/Esep' 0.78; L-SRe 0.84; E/L-SRe 0.80; R-SRe 0.80). 2DSTE was a sensitive and accurate technique that could be used for predicting LV diastolic dysfunction in a canine heart failure model. © 2015, Wiley Periodicals, Inc.

  5. Stochastic approach to data analysis in fluorescence correlation spectroscopy.

    PubMed

    Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo

    2006-09-21

    Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fitted using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts result. For known fit models and with user experience about the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems or where automated data analysis is a prerequisite, there is a need for a procedure that treats FCS data fitting as a black box and generates reliable, accurate fit parameters for the chosen model. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and, at the same time, computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with triplet state. The statistical study and the goodness-of-fit criterion for PGSL are also presented, and the robustness of PGSL for parameter estimation from noisy experimental data is verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as initial guesses to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
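
    PGSL itself is a specific algorithm not reproduced here; the sketch below only illustrates the general hybrid idea the abstract describes — guess-free global sampling over wide parameter bounds followed by local refinement — using a simplified one-component FCS autocorrelation model rather than the paper's two-component-with-triplet model. All bounds, sample counts, and data are illustrative assumptions:

```python
import math
import random

def g_model(tau, N, tau_d):
    """Simplified one-component FCS autocorrelation (2D diffusion)."""
    return 1.0 / (N * (1.0 + tau / tau_d))

def mse(params, taus, data):
    """Mean-squared error of the model against the measured curve."""
    N, tau_d = params
    return sum((g_model(t, N, tau_d) - g) ** 2 for t, g in zip(taus, data)) / len(taus)

def global_then_local(taus, data, n_samples=2000, seed=0):
    """Global random sampling over log-spaced bounds (no initial guess needed),
    then a crude multiplicative hill-climbing refinement of the best sample."""
    rng = random.Random(seed)
    # global stage: log-uniform sampling of (N, tau_d) over wide bounds
    best = min(
        ((10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-6, 0)) for _ in range(n_samples)),
        key=lambda p: mse(p, taus, data),
    )
    # local stage: shrinking random multiplicative perturbations around the best sample
    step = 0.5
    while step > 1e-4:
        trial = tuple(p * math.exp(rng.uniform(-step, step)) for p in best)
        if mse(trial, taus, data) < mse(best, taus, data):
            best = trial
        else:
            step *= 0.95
    return best

# Synthetic noiseless curve: N = 5 molecules, tau_d = 1e-3 s
taus = [10 ** (-6 + 0.1 * i) for i in range(60)]
data = [g_model(t, 5.0, 1e-3) for t in taus]
N_fit, tau_d_fit = global_then_local(taus, data)
```

    In the hybrid scheme the abstract describes, the output of the global stage would instead be handed to a Marquardt-Levenberg routine for the local refinement.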

  6. Two-dimensional speckle tracking echocardiography prognostic parameters in patients after acute myocardial infarction.

    PubMed

    Haberka, Maciej; Liszka, Jerzy; Kozyra, Andrzej; Finik, Maciej; Gąsior, Zbigniew

    2015-03-01

    The aim of the study was to evaluate left ventricle (LV) function with speckle tracking echocardiography (STE) and to assess its relation to prognosis in patients after acute myocardial infarction (AMI). Sixty-three patients (F/M = 16/47; 62.33 ± 11.85 years old) with AMI (NSTEMI/STEMI 24/39) and successful percutaneous coronary intervention (PCI) with stent implantation (thrombolysis in myocardial infarction, TIMI, 3 flow) were enrolled in this study. All patients underwent two-dimensional conventional echocardiography and STE 3 days (baseline) and 30 days after PCI. All patients were followed up for cardiovascular clinical endpoints, major adverse cardiovascular endpoint (MACE), and functional status (Canadian Cardiovascular Society and New York Heart Association). During the follow-up (31.9 ± 5.1 months), there were 3 cardiovascular deaths, 15 patients had AMI, 2 patients had cerebral infarction, and 24 patients reached the MACE. Baseline LV torsion (P = 0.035), but none of the other strain parameters, was associated with the time to first unplanned cardiovascular hospitalization. Univariate analysis showed that baseline longitudinal two-chamber and four-chamber strain (sLa2-0 and sLa4-0) and the same parameters obtained 30 days after the AMI, together with transverse four-chamber strain (sLa2-30, sLa4-30, and sTa4-30), were significantly associated with the combined endpoint (MACE). The strongest association in the univariate analysis was found for the baseline sLa2. However, in multivariable analysis only left ventricular remodeling (LVR; 27% of patients) was significantly associated with MACE, and strain parameters were not associated with the combined endpoint. The assessment of LV function with STE may improve cardiovascular risk prediction in postmyocardial infarction patients. © 2014, Wiley Periodicals, Inc.

  7. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy.

    PubMed

    Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L

    2017-05-07

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information on the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy, esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived, and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC, range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information on the esophageal wall dose distribution is of no added value, and it is sufficient to consider only MED as a predictive dosimetric parameter.
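
    The AUC figures used to compare the models above can be computed directly from predicted NTCP values and observed toxicity outcomes via the rank-based (Mann-Whitney) identity; the scores and events below are hypothetical, not the study's data:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case (toxicity) scores higher than a randomly chosen
    negative case; ties count as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical NTCP model outputs (predicted probability of Grade >= 2 AET) and outcomes
ntcp  = [0.10, 0.25, 0.40, 0.55, 0.60, 0.70, 0.80, 0.90]
event = [0,    0,    0,    1,    0,    1,    1,    1]
model_auc = auc(ntcp, event)
```

    An AUC of 0.5 means no discrimination and 1.0 perfect discrimination; the abstract's comparison rests on the new models' AUC range (0.79-0.84) not exceeding the established model's 0.84.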

  8. Does the Fuhrman or World Health Organization/International Society of Urological Pathology Grading System Apply to the Xp11.2 Translocation Renal Cell Carcinoma?: A 10-Year Single-Center Study.

    PubMed

    Liu, Ning; Gan, Weidong; Qu, Feng; Wang, Zhen; Zhuang, Wenyuan; Agizamhan, Sezim; Xu, Linfeng; Yin, Juanjuan; Guo, Hongqian; Li, Dongmei

    2018-04-01

    The Fuhrman and World Health Organization/International Society of Urological Pathology (WHO/ISUP) grading systems are widely used to predict survival for patients with conventional renal cell carcinoma. To determine the validity of the nuclear grading systems (both the Fuhrman and the WHO/ISUP) and the individual components of the Fuhrman grading system in predicting the prognosis of Xp11.2 translocation renal cell carcinoma (Xp11.2 tRCC), we identified and followed up 47 patients with Xp11.2 tRCC in our center from January 2007 to June 2017. Fuhrman and WHO/ISUP grades were reassigned by two pathologists. Nuclear size and shape were determined for each case based on the greatest degree of nuclear pleomorphism using image analysis software. Univariate and multivariate analyses were performed to evaluate the capacity of the grading systems and nuclear parameters to predict overall survival and progression-free survival. On univariate Cox regression analysis, the parameters of nuclear size were associated significantly with overall survival and progression-free survival, whereas the grading systems and the parameters of nuclear shape failed to reach a significant correlation. On multivariate analysis, however, none of the parameters was independently associated with survival. Our findings indicate that neither the Fuhrman nor the WHO/ISUP grading system is applicable to Xp11.2 tRCC. The assessment of nuclear size may instead provide novel outcome predictors for patients with Xp11.2 tRCC. Copyright © 2018 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.

  9. An automated form of video image analysis applied to classification of movement disorders.

    PubMed

    Chang, R; Guan, L; Burne, J A

    Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. The conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment. This restricts the utility of video in routine clinical practice. We have begun development of intelligent software that aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images, without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles, and to derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance, we videotaped patients with dopa-responsive Parkinsonism and age-matched normals during several gait cycles, yielding 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normals and patients with an accuracy above 90%. This result demonstrates the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis.

  10. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy

    NASA Astrophysics Data System (ADS)

    Dankers, Frank; Wijsman, Robin; Troost, Esther G. C.; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L.

    2017-05-01

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information on the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy, esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived, and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC, range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information on the esophageal wall dose distribution is of no added value, and it is sufficient to consider only MED as a predictive dosimetric parameter.

  11. The structure and dynamics of tornado-like vortices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nolan, D.S.; Farrell, B.F.

    The structure and dynamics of axisymmetric tornado-like vortices are explored with a numerical model of axisymmetric incompressible flow based on recently developed numerical methods. The model is first shown to compare favorably with previous results and is then used to study the effects of varying the major parameters controlling the vortex: the strength of the convective forcing, the strength of the rotational forcing, and the magnitude of the model eddy viscosity. Dimensional analysis of the model problem indicates that the results must depend on only two dimensionless parameters. The natural choices for these two parameters are a convective Reynolds number (based on the velocity scale associated with the convective forcing) and a parameter analogous to the swirl ratio in laboratory models. However, by examining sets of simulations with different model parameters it is found that a dimensionless parameter known as the vortex Reynolds number, which is the ratio of the far-field circulation to the eddy viscosity, is more effective than the conventional swirl ratio for predicting the structure of the vortex. The parameter space defined by the choices for model parameters is further explored with large sets of numerical simulations. For much of this parameter space it is confirmed that the vortex structure and time-dependent behavior depend strongly on the vortex Reynolds number and only weakly on the convective Reynolds number. The authors also find that for higher convective Reynolds numbers, the maximum possible wind speed increases, and the rotational forcing necessary to achieve that wind speed decreases. Physical reasoning is used to explain this behavior, and implications for tornado dynamics are discussed.
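
    The vortex Reynolds number highlighted above is a simple ratio of far-field circulation to eddy viscosity; a minimal sketch with hypothetical values (the length-scale form of the convective Reynolds number here is an illustrative assumption, since the abstract only says it is based on the convective-forcing velocity scale):

```python
def vortex_reynolds(circulation, eddy_viscosity):
    """Vortex Reynolds number: far-field circulation (m^2/s) over eddy viscosity (m^2/s).
    The study finds this to be the better predictor of vortex structure."""
    return circulation / eddy_viscosity

def convective_reynolds(velocity_scale, length_scale, eddy_viscosity):
    """Convective Reynolds number from the forcing velocity scale (illustrative form)."""
    return velocity_scale * length_scale / eddy_viscosity

# Hypothetical values: far-field circulation 1.0e4 m^2/s, eddy viscosity 40 m^2/s
Re_v = vortex_reynolds(1.0e4, 40.0)
```

    Both quantities are dimensionless, consistent with the dimensional analysis in the abstract reducing the problem to two governing parameters.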

  12. Surface Modification of Micro-Alloyed High-Strength Low-Alloy Steel by Controlled TIG Arcing Process

    NASA Astrophysics Data System (ADS)

    Ghosh, P. K.; Kumar, Ravindra

    2015-02-01

    Surface modification of micro-alloyed HSLA steel plate has been carried out by autogenous conventional and pulsed-current tungsten inert gas arcing (TIGA) processes at different welding parameters while the energy input was kept constant. At a given energy input, the influence of pulse parameters on the characteristics of surface modification has been studied for both single- and multi-run procedures. The role of the pulse parameters has been studied by considering their summarized influence, defined by a factor Φ. Variation in Φ and pulse frequency has been found to significantly affect the thermal behavior of fusion and, accordingly, the width and penetration of the modified region along with its microstructure, hardness, and wear characteristics. It is found that pulsed TIGA is relatively more advantageous than the conventional TIGA process, as it leads to higher hardness, improved wear resistance, and better control over surface characteristics.

  13. Considering the influence of stimulation parameters on the effect of conventional and high-definition transcranial direct current stimulation.

    PubMed

    To, Wing Ting; Hart, John; De Ridder, Dirk; Vanneste, Sven

    2016-01-01

    Recently, techniques to non-invasively modulate specific brain areas have gained popularity in the form of transcranial direct current stimulation (tDCS) and high-definition transcranial direct current stimulation (HD-tDCS). These non-invasive techniques have already shown promising outcomes in various studies with healthy subjects as well as patient populations. Despite the widespread dissemination of tDCS, there remain significant unknowns about the influence of a diverse number of tDCS parameters (e.g., polarity, electrode size and position, and duration of stimulation) in inducing neurophysiological and behavioral effects. This article explores both techniques, starting with the history of tDCS, the differences between conventional tDCS and HD-tDCS, the underlying physiological mechanisms, the (in)direct effects, the applications of tDCS with varying parameters, the efficacy, the safety issues, and the opportunities for future research.

  14. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  15. Determination of the protonation state of the Asp dyad: conventional molecular dynamics versus thermodynamic integration.

    PubMed

    Huang, Jinfeng; Zhu, Yali; Sun, Bin; Yao, Yuan; Liu, Junjun

    2016-03-01

    The protonation state of the Asp dyad is important, as it can reveal enzymatic mechanisms, and this information can be used in the development of drugs against proteins such as memapsin 2 (BACE-1), HIV-1 protease, and renin. Conventional molecular dynamics (MD) simulations have been used successfully to determine the preferred protonation state of the Asp dyad. In the present work, we demonstrate that the results obtained from conventional MD simulations can be greatly influenced by the particular force field applied or the values used for control parameters. In principle, free-energy changes between possible protonation states can be used to determine the protonation state. We show that protonation state prediction by the thermodynamic integration (TI) method is insensitive to the force field version or to the cutoff for calculating nonbonded interactions (a control parameter). In the present study, the protonation state of the Asp dyad predicted by TI calculations was the same regardless of the force field and cutoff value applied. Contrary to the intuition that conventional MD is more efficient, our results clearly show that the TI method is actually more efficient and more reliable for determining the protonation state of the Asp dyad.
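
    The TI estimate underlying this comparison reduces to a numerical integral of the ensemble average ⟨∂U/∂λ⟩ over the coupling parameter λ connecting the two protonation states. A minimal trapezoidal sketch, with hypothetical window averages standing in for the simulation output:

```python
def thermodynamic_integration(lambdas, dU_dlambda):
    """Free-energy difference by trapezoidal integration of <dU/dlambda>
    over the coupling parameter lambda."""
    dG = 0.0
    for i in range(len(lambdas) - 1):
        # trapezoid rule over each lambda window
        dG += 0.5 * (dU_dlambda[i] + dU_dlambda[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dG

# Hypothetical ensemble averages of dU/dlambda (kcal/mol) at 5 lambda windows
lambdas = [0.0, 0.25, 0.5, 0.75, 1.0]
dU = [-12.0, -8.0, -5.0, -3.0, -2.0]
dG = thermodynamic_integration(lambdas, dU)
```

    The protonation state with the lower free energy is preferred; the abstract's point is that this ΔG-based prediction is far less sensitive to force-field version and nonbonded cutoff than inspecting conventional MD trajectories directly.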

  16. Present and foreseeable future of metabolomics in forensic analysis.

    PubMed

    Castillo-Peinado, L S; Luque de Castro, M D

    2016-06-21

    Critical publications in recent years on the precarious state of forensic science worldwide have prompted major steps toward the improvement of this science. One of these steps, a deeper involvement of metabolomics in the new era of forensic analysis, deserves to be discussed from different angles. Thus, the characteristics of metabolomics that make it a useful tool in forensic analysis, the aspects in which this omics is so far implicit but not mentioned in forensic analyses, and how typical forensic parameters such as the post-mortem interval or fingerprints benefit from metabolomics are critically discussed in this review. The ways in which the metabolomics-forensics combination succeeds with either conventional or less frequently used samples are highlighted here. Finally, the pillars that should support future developments involving metabolomics and forensic analysis, and the research required for a fruitful in-depth involvement of metabolomics in forensic analysis, are critically discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. In vivo quantification of demyelination and recovery using compartment-specific diffusion MRI metrics validated by electron microscopy.

    PubMed

    Jelescu, Ileana O; Zurek, Magdalena; Winters, Kerryanne V; Veraart, Jelle; Rajaratnam, Anjali; Kim, Nathanael S; Babb, James S; Shepherd, Timothy M; Novikov, Dmitry S; Kim, Sungheon G; Fieremans, Els

    2016-05-15

    There is a need for accurate quantitative non-invasive biomarkers to monitor myelin pathology in vivo and distinguish myelin changes from other pathological features including inflammation and axonal loss. Conventional MRI metrics such as T2, magnetization transfer ratio and radial diffusivity have proven sensitivity but not specificity. In highly coherent white matter bundles, compartment-specific white matter tract integrity (WMTI) metrics can be directly derived from the diffusion and kurtosis tensors: axonal water fraction, intra-axonal diffusivity, and extra-axonal radial and axial diffusivities. We evaluate the potential of WMTI to quantify demyelination by monitoring the effects of both acute (6 weeks) and chronic (12 weeks) cuprizone intoxication and subsequent recovery in the mouse corpus callosum, and compare its performance with that of conventional metrics (T2, magnetization transfer, and DTI parameters). The changes observed in vivo correlated with those obtained from quantitative electron microscopy image analysis. A 6-week intoxication produced a significant decrease in axonal water fraction (p<0.001), with only mild changes in extra-axonal radial diffusivity, consistent with patchy demyelination, while a 12-week intoxication caused a more marked decrease in extra-axonal radial diffusivity (p=0.0135), consistent with more severe demyelination and clearance of the extra-axonal space. Results thus revealed increased specificity of the axonal water fraction and extra-axonal radial diffusivity parameters to different degrees and patterns of demyelination. The specificities of these parameters were corroborated by their respective correlations with microstructural features: the axonal water fraction correlated significantly with the electron microscopy derived total axonal water fraction (ρ=0.66; p=0.0014) but not with the g-ratio, while the extra-axonal radial diffusivity correlated with the g-ratio (ρ=0.48; p=0.0342) but not with the electron microscopy derived axonal water fraction. These parameters represent promising candidates as clinically feasible biomarkers of demyelination and remyelination in the white matter. Copyright © 2016 Elsevier Inc. All rights reserved.
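
    The ρ values quoted above are Spearman rank correlations. A dependency-free sketch, with hypothetical paired values (not the study's data) standing in for the diffusivity and electron-microscopy measurements:

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean of their rank range."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical paired values: extra-axonal radial diffusivity vs EM-derived g-ratio
radial_diff = [0.45, 0.52, 0.60, 0.48, 0.70, 0.65]
g_ratio     = [0.60, 0.63, 0.68, 0.62, 0.74, 0.70]
rho = spearman(radial_diff, g_ratio)
```

    Because it depends only on ranks, ρ captures any monotone relationship, which suits noisy microstructural measurements better than assuming linearity.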

  18. High-power and short-duration ablation for pulmonary vein isolation: Safety, efficacy, and long-term durability.

    PubMed

    Barkagan, Michael; Contreras-Valdes, Fernando M; Leshem, Eran; Buxton, Alfred E; Nakagawa, Hiroshi; Anter, Elad

    2018-05-30

    PV reconnection is often the result of catheter instability and tissue edema. High-power short-duration (HP-SD) ablation strategies have been shown to improve atrial linear continuity in acute pre-clinical models. This study compares the safety, efficacy and long-term durability of HP-SD ablation with conventional ablation. In 6 swine, 2 ablation lines were created anterior and posterior to the crista terminalis, in the smooth and trabeculated right atrium, respectively; and the right superior PV was isolated. In 3 swine, ablation was performed using conventional parameters (THERMOCOOL-SMARTTOUCH ® SF; 30W/30 sec) and in 3 other swine using HP-SD parameters (QDOT-MICRO™, 90W/4 sec). After 30 days, linear integrity was examined by voltage mapping and pacing, and the heart and surrounding tissues were examined by histopathology. Acute line integrity was achieved with both ablation strategies; however, HP-SD ablation required 80% less RF time compared with conventional ablation (P≤0.01 for all lines). Chronic line integrity was higher with HP-SD ablation: all 3 posterior lines were continuous and transmural compared to only 1 line created by conventional ablation. In the trabeculated tissue, HP-SD ablation lesions were wider and of similar depth, with 1 of 3 lines being continuous compared to 0 of 3 using conventional ablation. Chronic PVI without stenosis was evident in both groups. There were no steam-pops. Pleural markings were present with both strategies, but parenchymal lung injury was only evident with conventional ablation. HP-SD ablation strategy results in improved linear continuity, shorter ablation time, and a safety profile comparable to conventional ablation. This article is protected by copyright. All rights reserved.
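
    The two settings quoted in this record imply very different per-lesion dose profiles, which a back-of-envelope calculation makes concrete. Note these are per-lesion figures only; the 80% figure in the abstract refers to total RF time per line, and lesion counts per line are not reported here:

```python
# Per-lesion RF time and energy for the two settings quoted in the abstract:
# conventional 30 W / 30 s vs. high-power short-duration 90 W / 4 s.
conventional = {"power_W": 30, "duration_s": 30}
hp_sd = {"power_W": 90, "duration_s": 4}

def energy_joules(setting):
    # Delivered energy = power x duration (ignoring irrigation, contact, etc.)
    return setting["power_W"] * setting["duration_s"]

print(energy_joules(conventional))  # 900 J per lesion
print(energy_joules(hp_sd))         # 360 J per lesion

saving = 1 - hp_sd["duration_s"] / conventional["duration_s"]
print(f"{saving:.0%} less RF time per lesion")  # 87% less
```

    So each HP-SD lesion delivers well under half the energy of a conventional lesion in roughly a seventh of the time, which is consistent with the shallower-but-wider lesion geometry reported above.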

  19. Terahertz Spectroscopy for Proximal Soil Sensing: An Approach to Particle Size Analysis

    PubMed Central

    Dworak, Volker; Mahns, Benjamin; Selbeck, Jörn; Weltzien, Cornelia

    2017-01-01

    Spatially resolved soil parameters are some of the most important pieces of information for precision agriculture. These parameters, especially the particle size distribution (texture), are costly to measure by conventional laboratory methods, and thus, in situ assessment has become the focus of a new discipline called proximal soil sensing. Terahertz (THz) radiation is a promising method for nondestructive in situ measurements. The THz frequency range from 258 gigahertz (GHz) to 350 GHz provides a good compromise between soil penetration and the interaction of the electromagnetic waves with soil compounds. In particular, soil physical parameters influence THz measurements. This paper presents investigations of the spectral transmission signals from samples of different particle size fractions relevant for soil characterization. The sample thickness ranged from 5 to 17 mm. The transmission of THz waves was affected by the main mineral particle fractions: sand, silt, and clay. The resulting signal changed systematically with particle size for particles larger than half the wavelength. It can be concluded that THz spectroscopic measurements provide information about soil texture and penetrate samples with thicknesses in the cm range. PMID:29048392
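
    The half-wavelength threshold mentioned in this record can be made concrete: over the 258-350 GHz band the free-space wavelength runs from about 1.16 mm down to about 0.86 mm, so lambda/2 falls between roughly 0.43 and 0.58 mm, which overlaps the sand fraction of common soil texture classifications. A minimal sketch of the calculation:

```python
# Free-space wavelength and half-wavelength scattering threshold for the
# THz band used in the study (258-350 GHz).
C = 299_792_458.0  # speed of light in vacuum, m/s

for f_ghz in (258, 350):
    lam_mm = C / (f_ghz * 1e9) * 1e3  # wavelength in millimetres
    print(f"{f_ghz} GHz: wavelength {lam_mm:.3f} mm, lambda/2 {lam_mm / 2:.3f} mm")
```

    In-soil wavelengths will be somewhat shorter than these free-space values because the effective refractive index of the sample exceeds 1, so the thresholds here are an upper bound.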

  20. Analysis of atmospheric pollutant metals by laser ablation inductively coupled plasma mass spectrometry with a radial line-scan dried-droplet approach

    NASA Astrophysics Data System (ADS)

    Tang, Xiaoxing; Qian, Yuan; Guo, Yanchuan; Wei, Nannan; Li, Yulan; Yao, Jian; Wang, Guanghua; Ma, Jifei; Liu, Wei

    2017-12-01

    A novel method has been developed for analyzing atmospheric pollutant metals (Be, Mn, Fe, Co, Ni, Cu, Zn, Se, Sr, Cd, and Pb) by laser ablation inductively coupled plasma mass spectrometry. In this method, solid standards are prepared by depositing droplets of aqueous standard solutions on the surface of a membrane filter of the same type as that used for collecting atmospheric pollutant metals. Laser parameters were optimized, and ablation behaviors of the filter discs were studied. Radial line scans across the filter disc proved a representative ablation strategy, avoiding errors from the inhomogeneous filter standards and the marginal effect of the filter disc. Pt, as the internal standard, greatly improved the correlation coefficient of the calibration curve. The developed method provides low detection limits, from 0.01 ng m⁻³ for Be and Co to 1.92 ng m⁻³ for Fe. It was successfully applied for the determination of atmospheric pollutant metals collected in Lhasa, China. The analytical results showed good agreement with those obtained by conventional liquid analysis. In contrast to the conventional acid digestion procedure, the novel method not only greatly simplifies sample preparation and shortens the analysis time but also provides a possible means for studying the spatial distribution of metals across atmospheric filter samples.
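
    The internal-standard calibration described in this record can be sketched as follows. All counts and masses below are made up for illustration, and the helper names are hypothetical, but the structure follows the method: the analyte signal is normalized to the Pt internal-standard signal, and the ratio is fit linearly against the deposited standard mass:

```python
import numpy as np

# Illustrative dried-droplet calibration: raw analyte counts are normalized
# to the Pt internal-standard counts, then fit linearly against the mass of
# standard deposited on each filter. All numbers are invented for the sketch.
mass_ng = np.array([0.0, 1.0, 2.0, 5.0, 10.0])            # deposited analyte mass
analyte_counts = np.array([120., 1150., 2230., 5480., 11050.])
pt_counts = np.array([1000., 980., 1020., 990., 1010.])   # internal standard

ratio = analyte_counts / pt_counts
slope, intercept = np.polyfit(mass_ng, ratio, 1)  # linear calibration

def quantify(sample_analyte, sample_pt):
    """Invert the calibration curve for an unknown sample."""
    return (sample_analyte / sample_pt - intercept) / slope
```

    As a sanity check, feeding a calibration point back through quantify (e.g. quantify(5480., 990.)) recovers its deposited mass of about 5 ng; the normalization step is what suppresses the drift and ablation-yield variation that the Pt internal standard is there to correct.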
