Sample records for specific linear accurate

  1. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm), which was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
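    The baseline linear scaling that the study improves upon can be illustrated in a few lines: fit an affine transform from template bone landmarks to subject landmarks, then push the template muscle points through it. This is an editor's sketch with hypothetical coordinates, not the authors' pipeline (which replaces this step with SSM-based non-linear morphing).

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform so that dst ≈ src @ A.T + t."""
    X = np.hstack([src, np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return coef[:3].T, coef[3]            # A (3x3), t (3,)

def apply_affine(pts, A, t):
    return pts @ A.T + t

# Hypothetical template bone landmarks (non-coplanar, so the fit is unique)
template_landmarks = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                               [0., 0., 1.], [1., 1., 1.]])
# Subject landmarks: template stretched 10% along one axis and shifted
subject_landmarks = template_landmarks * np.array([1.0, 1.0, 1.1]) + 0.5

A, t = fit_affine(template_landmarks, subject_landmarks)
muscle_points = np.array([[0.2, 0.3, 0.5]])   # hypothetical muscle via point
scaled = apply_affine(muscle_points, A, t)    # carried into subject space
```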

  2. Volume changes in unrestrained structural lightweight concrete.

    DOT National Transportation Integrated Search

    1964-08-01

    In this study a comparator-type measuring system was developed to accurately determine volume change characteristics of one structural lightweight concrete. The specific properties studied were the coefficient of linear thermal expansion and unrestra...

  3. Application of Nearly Linear Solvers to Electric Power System Computation

    NASA Astrophysics Data System (ADS)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
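    The comparison described above, an iterative solver with a preconditioner against LU factorization on a symmetric, diagonally dominant system, can be sketched with off-the-shelf tools. The paper's chain/low-stretch-spanning-tree preconditioner is not implemented here; an incomplete-LU stand-in illustrates the mechanics.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg, spilu, splu

# A symmetric, strictly diagonally dominant tridiagonal system
n = 500
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_lu = splu(A).solve(b)                 # direct LU baseline (power-flow standard)

ilu = spilu(A, drop_tol=1e-5)           # stand-in preconditioner, not the
M = LinearOperator(A.shape, ilu.solve)  # paper's low-stretch-tree chain
x_cg, info = cg(A, b, M=M)              # info == 0 on convergence
```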

  4. One-dimensional wave bottom boundary layer model comparison: specific eddy viscosity and turbulence closure models

    USGS Publications Warehouse

    Puleo, J.A.; Mouraenko, O.; Hanes, D.M.

    2004-01-01

    Six one-dimensional-vertical wave bottom boundary layer models are analyzed based on different methods for estimating the turbulent eddy viscosity: laminar, linear, parabolic, k one-equation turbulence closure, k−ε two-equation turbulence closure, and k−ω two-equation turbulence closure. Resultant velocity profiles, bed shear stresses, and turbulent kinetic energy are compared to laboratory data of oscillatory flow over smooth and rough beds. Bed shear stress estimates for the smooth bed case were most closely predicted by the k−ω model. Normalized errors between model predictions and measurements of velocity profiles over the entire computational domain, collected at 15° intervals for one-half of a wave cycle, show that overall the linear model was most accurate. The least accurate were the laminar and k−ε models. Normalized errors between model predictions and turbulence kinetic energy profiles showed that the k−ω model was most accurate. Based on these findings, when the smallest overall velocity profile prediction error is required, the processing requirements and error analysis suggest that the linear eddy viscosity model is adequate. However, if accurate estimates of bed shear stress and TKE are required then, of the models tested, the k−ω model should be used.
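    The simplest closure in this comparison, the linear eddy viscosity model, can be written down directly. In this sketch (with illustrative values), ν_t = κu*z paired with the log-law velocity profile yields a modeled shear stress equal to the constant u*², as expected in the constant-stress layer.

```python
import numpy as np

kappa, u_star, z0 = 0.41, 0.02, 1e-4     # von Karman const, friction velocity (m/s), roughness (m)
z = np.linspace(1e-3, 0.1, 50)           # heights above the bed (m)

nu_t = kappa * u_star * z                # linear eddy viscosity profile
dudz = u_star / (kappa * z)              # shear of the log-law profile u = (u*/kappa) ln(z/z0)
stress = nu_t * dudz                     # kinematic stress: collapses to u_star**2 at every z
```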

  5. An evaluation of methods for estimating decadal stream loads

    NASA Astrophysics Data System (ADS)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
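    As a sketch of one of the better-performing methods named above, Beale's bias-corrected ratio estimator can be written in a few lines. The flow and concentration data here are hypothetical, and this is not the USGS implementation.

```python
import numpy as np

def beale_load(conc_sampled, q_sampled, q_all):
    """Beale's bias-corrected ratio estimate of total load over a flow record."""
    y = conc_sampled * q_sampled                 # loads on the n sampled days
    x = q_sampled
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    s_xy = ((x - xbar) * (y - ybar)).sum() / (n - 1)
    s_xx = ((x - xbar) ** 2).sum() / (n - 1)
    ratio = (ybar / xbar) * (1 + s_xy / (n * xbar * ybar)) \
                          / (1 + s_xx / (n * xbar ** 2))
    return ratio * q_all.sum()                   # scale up by the full flow record

q_all = np.linspace(0.5, 20.0, 365)              # hypothetical daily flows (one year)
q_sampled = q_all[::30]                          # roughly monthly sampling
conc_sampled = 2.5 * np.ones_like(q_sampled)     # constant concentration case
est = beale_load(conc_sampled, q_sampled, q_all)
```

    With constant concentration the bias-correction factor cancels exactly, so the estimate reduces to concentration times total flow, a useful check on the algebra.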

  6. An evaluation of methods for estimating decadal stream loads

    USGS Publications Warehouse

    Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-01-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.

  7. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. 
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to be extracted from single-axis accelerometer data.
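    The core inversion can be sketched as a 6×6 linear solve. This editor's sketch drops the centripetal term entirely (the paper instead linearizes it with a finite difference), and the sensor layout is hypothetical.

```python
import numpy as np

# Each single-axis accelerometer at position r_i with sensing direction u_i
# reads s_i = u_i . (a + alpha x r_i) once the centripetal term is omitted.
# Using u_i.(alpha x r_i) = alpha.(r_i x u_i), six sensors give a 6x6 system
# in the six unknowns [a, alpha].
def extract(r, u, s):
    M = np.hstack([u, np.cross(r, u)])     # 6x6 coefficient matrix
    sol = np.linalg.solve(M, s)
    return sol[:3], sol[3:]                # linear accel a, angular accel alpha

# Hypothetical, well-conditioned arrangement of six single-axis sensors
r = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 0],
              [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)   # positions (m)
u = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [0, 1, 0], [0, 0, 1], [1, 0, 0]], float)   # sensing directions

a_true = np.array([2.0, -1.0, 0.5])
alpha_true = np.array([10.0, 5.0, -3.0])
s = u @ a_true + np.cross(r, u) @ alpha_true   # synthetic noiseless readings
a, alpha = extract(r, u, s)
```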

  8. Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations

    PubMed Central

    Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra

    2014-01-01

    An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052
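    A much cruder estimate than either the proposed linear approach or the nonlinear reference, but useful as a first-order sanity check on computed AAA wall stress, is the thin-walled Laplace-law stress σ = Pr/(2t). The inputs below are hypothetical.

```python
MMHG_TO_PA = 133.322

def laplace_wall_stress(pressure_mmhg, radius_m, thickness_m):
    """Thin-walled sphere estimate sigma = P*r/(2*t), in Pa."""
    p = pressure_mmhg * MMHG_TO_PA
    return p * radius_m / (2.0 * thickness_m)

# Hypothetical: 5.5 cm diameter aneurysm, 1.9 mm wall, 120 mmHg systolic
sigma = laplace_wall_stress(120.0, 0.0275, 0.0019)   # roughly 0.12 MPa
```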

  9. A globally well-posed finite element algorithm for aerodynamics applications

    NASA Technical Reports Server (NTRS)

    Iannelli, G. S.; Baker, A. J.

    1991-01-01

    A finite element CFD algorithm is developed for Euler and Navier-Stokes aerodynamic applications. For the linear basis, the resultant approximation is at least second-order-accurate in time and space for synergistic use of three procedures: (1) a Taylor weak statement, which provides for derivation of companion conservation law systems with embedded dispersion-error control mechanisms; (2) a stiffly stable second-order-accurate implicit Rosenbrock-Runge-Kutta temporal algorithm; and (3) a matrix tensor product factorization that permits efficient numerical linear algebra handling of the terminal large-matrix statement. Thorough analyses are presented regarding well-posed boundary conditions for inviscid and viscous flow specifications. Numerical solutions are generated and compared for critical evaluation of quasi-one- and two-dimensional Euler and Navier-Stokes benchmark test problems.

  10. Development and Validation of Different Ultraviolet-Spectrophotometric Methods for the Estimation of Besifloxacin in Different Simulated Body Fluids.

    PubMed

    Singh, C L; Singh, A; Kumar, S; Kumar, M; Sharma, P K; Majumdar, D K

    2015-01-01

    In the present study, a simple, accurate, precise, economical and specific UV-spectrophotometric method for the estimation of besifloxacin in bulk and in different pharmaceutical formulations has been developed. The drug shows an absorbance maximum (λmax) at 289 nm in distilled water, simulated tears and phosphate buffer saline. The developed methods were linear over the range of 3-30 μg/ml of drug, with correlation coefficients (r²) of 0.9992, 0.9989 and 0.9984 with respect to distilled water, simulated tears and phosphate buffer saline, respectively. Reproducibility, expressed as %RSD, was found to be less than 2%. The limit of detection in the different media was found to be 0.62, 0.72 and 0.88 μg/ml, respectively. The limit of quantification was found to be 1.88, 2.10 and 2.60 μg/ml, respectively. The proposed method was validated statistically according to International Conference on Harmonization guidelines with respect to specificity, linearity, range, accuracy, precision and robustness. The proposed methods were found to be accurate and highly specific for the estimation of besifloxacin in different pharmaceutical formulations.
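    LOD/LOQ figures of this kind typically come from the calibration line via the ICH relations LOD = 3.3σ/S and LOQ = 10σ/S, where S is the slope and σ the residual standard deviation of the regression. A sketch with hypothetical absorbances:

```python
import numpy as np

conc = np.array([3.0, 6.0, 12.0, 18.0, 24.0, 30.0])          # ug/ml (hypothetical)
absorb = np.array([0.101, 0.198, 0.402, 0.597, 0.805, 0.996])  # hypothetical readings

slope, intercept = np.polyfit(conc, absorb, 1)   # linear calibration fit
resid = absorb - (slope * conc + intercept)
sigma = resid.std(ddof=2)                        # residual SD, n-2 degrees of freedom

lod = 3.3 * sigma / slope                        # ICH limit of detection
loq = 10.0 * sigma / slope                       # ICH limit of quantification
```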

  11. Development and application of a local linearization algorithm for the integration of quaternion rate equations in real-time flight simulation problems

    NASA Technical Reports Server (NTRS)

    Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.

    1973-01-01

    High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
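    The idea behind local linearization, holding the angular rate constant over a step so the quaternion update has a closed form, can be sketched as follows. The rates and step sizes are illustrative, and this is not the authors' algorithm as implemented.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def step(q, omega, dt):
    """Exact update for body rate omega held constant over the step."""
    angle = np.linalg.norm(omega) * dt
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    return quat_mult(q, dq)

q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, np.pi])      # rad/s: half a revolution per second
for _ in range(100):                     # integrate 1 s in 100 steps
    q = step(q, omega, 0.01)
```

    Unlike a naive Euler step, this update keeps the quaternion exactly on the unit sphere, which is the stability property the abstract highlights.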

  12. Gestational dating by metabolic profile at birth: a California cohort study.

    PubMed

    Jelliffe-Pawlowski, Laura L; Norton, Mary E; Baer, Rebecca J; Santos, Nicole; Rutherford, George W

    2016-04-01

    Accurate gestational dating is a critical component of obstetric and newborn care. In the absence of early ultrasound, many clinicians rely on less accurate measures, such as last menstrual period or symphysis-fundal height during pregnancy, or Dubowitz scoring or the Ballard (or New Ballard) method at birth. These measures often underestimate or overestimate gestational age and can lead to misclassification of babies as born preterm, which has both short- and long-term clinical care and public health implications. We sought to evaluate whether metabolic markers in newborns measured as part of routine screening for treatable inborn errors of metabolism can be used to develop a population-level metabolic gestational dating algorithm that is robust despite intrauterine growth restriction and can be used when fetal ultrasound dating is not available. We focused specifically on the ability of these markers to differentiate preterm births (PTBs) (<37 weeks) from term births and to assign a specific gestational age in the PTB group. We evaluated a cohort of 729,503 singleton newborns with a California birth in 2005 through 2011 who had routine newborn metabolic screening and fetal ultrasound dating at 11-20 weeks' gestation. Using training and testing subsets (divided in a ratio of 3:1), we evaluated the association among PTB, target newborn characteristics, acylcarnitines, amino acids, thyroid-stimulating hormone, 17-hydroxyprogesterone, and galactose-1-phosphate-uridyl-transferase. We used multivariate backward stepwise regression to test for associations and linear discriminant analysis to create a linear function for PTB and to assign a specific week of gestation. We used sensitivity, specificity, and positive predictive value to evaluate the performance of linear functions. Along with birthweight and infant age at test, we included 35 of the 51 metabolic markers measured in the final multivariate model comparing PTBs and term births. 
Using the linear function derived from the linear discriminant analysis, we were able to sort PTBs and term births accurately with sensitivities and specificities of ≥95% in both the training and testing subsets. Assignment of a specific week of gestation in those identified as PTBs resulted in the correct assignment of week ±2 weeks in 89.8% of all newborns in the training and 91.7% of those in the testing subset. When PTB rates were modeled using the metabolic dating algorithm compared to fetal ultrasound, PTB rates were 7.15% vs 6.11% in the training subset and 7.31% vs 6.25% in the testing subset. When considered in combination with birthweight and hours of age at test, metabolic profile evaluated within 8 days of birth appears to be a useful measure of PTB and, among those born preterm, of specific week of gestation ±2 weeks. Dating by metabolic profile may be useful in instances where there is no fetal ultrasound due to lack of availability or late entry into care. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
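    The classifier at the heart of this study is a linear discriminant. A two-feature toy version with made-up data (the real model uses over 35 markers on 729,503 newborns) shows the mechanics:

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher linear discriminant with a midpoint decision rule."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = (len(X0) - 1) * np.cov(X0.T) + (len(X1) - 1) * np.cov(X1.T)
    w = np.linalg.solve(Sw, m1 - m0)        # discriminant direction
    threshold = w @ (m0 + m1) / 2           # midpoint between class means
    return w, threshold

# Made-up, well-separated two-class data (e.g. two metabolic markers)
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1],
              [4, 4], [5, 4], [4, 5], [5, 5]], float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

w, th = fit_lda(X, y)
pred = (X @ w > th).astype(int)             # project onto w and threshold
```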

  13. Gestational dating by metabolic profile at birth: a California cohort study

    PubMed Central

    Jelliffe-Pawlowski, Laura L.; Norton, Mary E.; Baer, Rebecca J.; Santos, Nicole; Rutherford, George W.

    2016-01-01

    Background Accurate gestational dating is a critical component of obstetric and newborn care. In the absence of early ultrasound, many clinicians rely on less accurate measures, such as last menstrual period or symphysis-fundal height during pregnancy, or Dubowitz scoring or the Ballard (or New Ballard) method at birth. These measures often underestimate or overestimate gestational age and can lead to misclassification of babies as born preterm, which has both short- and long-term clinical care and public health implications. Objective We sought to evaluate whether metabolic markers in newborns measured as part of routine screening for treatable inborn errors of metabolism can be used to develop a population-level metabolic gestational dating algorithm that is robust despite intrauterine growth restriction and can be used when fetal ultrasound dating is not available. We focused specifically on the ability of these markers to differentiate preterm births (PTBs) (<37 weeks) from term births and to assign a specific gestational age in the PTB group. Study Design We evaluated a cohort of 729,503 singleton newborns with a California birth in 2005 through 2011 who had routine newborn metabolic screening and fetal ultrasound dating at 11–20 weeks’ gestation. Using training and testing subsets (divided in a ratio of 3:1), we evaluated the association among PTB, target newborn characteristics, acylcarnitines, amino acids, thyroid-stimulating hormone, 17-hydroxyprogesterone, and galactose-1-phosphate-uridyl-transferase. We used multivariate backward stepwise regression to test for associations and linear discriminant analysis to create a linear function for PTB and to assign a specific week of gestation. We used sensitivity, specificity, and positive predictive value to evaluate the performance of linear functions. 
Results Along with birthweight and infant age at test, we included 35 of the 51 metabolic markers measured in the final multivariate model comparing PTBs and term births. Using the linear function derived from the linear discriminant analysis, we were able to sort PTBs and term births accurately with sensitivities and specificities of ≥95% in both the training and testing subsets. Assignment of a specific week of gestation in those identified as PTBs resulted in the correct assignment of week ±2 weeks in 89.8% of all newborns in the training and 91.7% of those in the testing subset. When PTB rates were modeled using the metabolic dating algorithm compared to fetal ultrasound, PTB rates were 7.15% vs 6.11% in the training subset and 7.31% vs 6.25% in the testing subset. Conclusion When considered in combination with birthweight and hours of age at test, metabolic profile evaluated within 8 days of birth appears to be a useful measure of PTB and, among those born preterm, of specific week of gestation ±2 weeks. Dating by metabolic profile may be useful in instances where there is no fetal ultrasound due to lack of availability or late entry into care. PMID:26688490

  14. Subject-specific bone attenuation correction for brain PET/MR: can ZTE-MRI substitute CT scan accurately?

    PubMed

    Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude

    2017-09-21

    In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram-normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established using a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) for air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE-derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue-only AC map and, finally, the CT-derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm, corresponding to the PET scanner's intrinsic resolution. As expected, TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR-derived AC methods, reducing the quantification error between the MRAC-corrected PET image and the reference CTAC-corrected PET image.
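    The continuous bone mapping described above can be sketched as follows. The slope, intercept, and attenuation coefficients are illustrative stand-ins, not the values fitted from the 16-patient CT-MR database, and the HU-to-μ conversion shown is a simple water-equivalent scaling rather than the authors' calibration.

```python
import numpy as np

MU_AIR, MU_SOFT, MU_WATER = 0.0, 0.0975, 0.0975   # cm^-1 at 511 keV (nominal)

def ac_map(zte_norm, bone, air, slope=-2000.0, intercept=2000.0):
    """Piecewise AC map: fixed LACs for air/soft tissue, continuous mu in bone."""
    mu = np.full(zte_norm.shape, MU_SOFT)
    mu[air] = MU_AIR
    hu = slope * zte_norm[bone] + intercept       # linear ZTE -> HU mapping
    mu[bone] = MU_WATER * (1.0 + hu / 1000.0)     # illustrative HU -> mu scaling
    return mu

zte = np.array([0.9, 0.5, 0.2, 0.7])              # normalized ZTE intensities
bone = np.array([False, True, True, False])       # segmentation masks
air = np.array([False, False, False, True])
mu = ac_map(zte, bone, air)
```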

  15. Subject-specific bone attenuation correction for brain PET/MR: can ZTE-MRI substitute CT scan accurately?

    NASA Astrophysics Data System (ADS)

    Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude

    2017-10-01

    In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram-normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established using a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) for air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE-derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue-only AC map and, finally, the CT-derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm, corresponding to the PET scanner's intrinsic resolution. As expected, TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR-derived AC methods, reducing the quantification error between the MRAC-corrected PET image and the reference CTAC-corrected PET image.

  16. On the Development of Parameterized Linear Analytical Longitudinal Airship Models

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.

    2008-01-01

    In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.
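    Assembling a longitudinal linear model from dimensional stability derivatives looks like the following, with states [u, w, q, θ]. The derivative values are textbook airplane numbers used purely as illustrative stand-ins, not airship data; the point of the paper is that the same structure applies once airship-specific geometric and aerodynamic terms are substituted.

```python
import numpy as np

# Illustrative dimensional stability derivatives (textbook airplane example,
# ft/s units); NOT airship values.
Xu, Xw = -0.045, 0.036
Zu, Zw, U0 = -0.369, -2.02, 176.0     # U0: trim airspeed
Mu, Mw, Mq = 0.0019, -0.0396, -2.948
g = 32.2

# Longitudinal state matrix for x = [u, w, q, theta], x_dot = A x
A = np.array([[Xu,  Xw,  0.0, -g ],
              [Zu,  Zw,  U0,  0.0],
              [Mu,  Mw,  Mq,  0.0],
              [0.0, 0.0, 1.0, 0.0]])

eigs = np.linalg.eigvals(A)           # short-period and phugoid mode pairs
```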

  17. Linear signal noise summer accurately determines and controls S/N ratio

    NASA Technical Reports Server (NTRS)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
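    A digital analogue of the summer's function, scaling a noise record so that the mixture hits a prescribed S/N ratio exactly, can be sketched as follows; the waveforms are synthetic stand-ins for the instrument's analog inputs.

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale noise so the signal/noise power ratio equals snr_db, then sum."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_sig / (p_noise * 10.0 ** (snr_db / 10.0)))
    return signal + gain * noise, gain

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t)
noise = np.random.default_rng(0).standard_normal(t.size)
mixed, gain = mix_at_snr(signal, noise, snr_db=10.0)

# Achieved ratio, recomputed from the mixed components
achieved = 10 * np.log10(np.mean(signal ** 2) / np.mean((gain * noise) ** 2))
```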

  18. Cutting force measurement of electrical jigsaw by strain gauges

    NASA Astrophysics Data System (ADS)

    Kazup, L.; Varadine Szarka, A.

    2016-11-01

    This paper describes a measuring method based on strain gauges for accurate determination of an electric jigsaw's cutting force. The goal of the measurement is to provide an overall picture of the forces generated in the jigsaw's gearbox during a cutting period, as these forces primarily determine the lifetime of the tool. This analysis is part of a research and development project aiming to develop a special linear magnetic brake for automatic lifetime testing of electric jigsaws and similar handheld tools. Accurate specification of the cutting force makes it possible to define realistic test cycles during the automatic lifetime test. The well-characterised cutting force and the possibility of automation bring a new level of accuracy and precision to lifetime testing of handheld tools with alternating movement.

  19. UV Spectrophotometric Method for Estimation of Polypeptide-K in Bulk and Tablet Dosage Forms

    NASA Astrophysics Data System (ADS)

    Kaur, P.; Singh, S. Kumar; Gulati, M.; Vaidya, Y.

    2016-01-01

    An analytical method for estimation of polypeptide-k using UV spectrophotometry has been developed and validated for bulk as well as tablet dosage form. The developed method was validated for linearity, precision, accuracy, specificity, robustness, detection, and quantitation limits. The method has shown good linearity over the range from 100.0 to 300.0 μg/ml with a correlation coefficient of 0.9943. The percentage recovery of 99.88% showed that the method was highly accurate. The precision demonstrated relative standard deviation of less than 2.0%. The LOD and LOQ of the method were found to be 4.4 and 13.33, respectively. The study established that the proposed method is reliable, specific, reproducible, and cost-effective for the determination of polypeptide-k.

  20. Generic Airplane Model Concept and Four Specific Models Developed for Use in Piloted Simulation Studies

    NASA Technical Reports Server (NTRS)

    Hoffler, Keith D.; Fears, Scott P.; Carzoo, Susan W.

    1997-01-01

    A generic airplane model concept was developed to allow configurations with various agility, performance, handling qualities, and pilot vehicle interface to be generated rapidly for piloted simulation studies. The simple concept allows stick shaping and various stick command types or modes to drive an airplane with both linear and nonlinear components. Output from the stick shaping goes to linear models or a series of linear models that can represent an entire flight envelope. The generic model also has provisions for control power limitations, a nonlinear feature. Therefore, departures from controlled flight are possible. Note that only loss of control is modeled, the generic airplane does not accurately model post departure phenomenon. The model concept is presented herein, along with four example airplanes. Agility was varied across the four example airplanes without altering specific excess energy or significantly altering handling qualities. A new feedback scheme to provide angle-of-attack cueing to the pilot, while using a pitch rate command system, was implemented and tested.

  1. A novel stability-indicating UPLC method development and validation for the determination of seven impurities in various diclofenac pharmaceutical dosage forms.

    PubMed

    Azougagh, M; Elkarbane, M; Bakhous, K; Issmaili, S; Skalli, A; Iben Moussad, S; Benaji, B

    2016-09-01

    An innovative, simple, fast, precise and accurate ultra-high performance liquid chromatography (UPLC) method was developed for the determination of diclofenac (Dic) along with its impurities, including the new dimer impurity, in various pharmaceutical dosage forms. An Acquity HSS T3 (C18, 100×2.1mm, 1.8μm) column in gradient mode was used with a mobile phase comprising phosphoric acid (pH 2.3) and methanol. The flow rate and the injection volume were set at 0.35ml·min(-1) and 1μl, respectively, and UV detection was carried out at 254nm using a photodiode array detector. Dic was subjected to stress conditions of acid, base, hydrolytic, thermal, oxidative and photolytic degradation. The newly developed method was successfully validated in accordance with the International Conference on Harmonization (ICH) guidelines with respect to specificity, limit of detection, limit of quantitation, precision, linearity, accuracy and robustness. The degradation products were well resolved from the main peak and its seven impurities, proving the specificity of the method. The method showed good linearity with consistent recoveries for Dic content and its impurities. The relative standard deviation obtained for the repeatability and intermediate precision experiments was less than 3%, and the LOQ was less than 0.5μg·ml(-1) for all compounds. The newly proposed method was found to be accurate, precise, specific, linear and robust. In addition, the method was successfully applied to the assay determination of Dic and its impurities in several pharmaceutical dosage forms. Copyright © 2016 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.

  2. Development of a kernel function for clinical data.

    PubMed

    Daemen, Anneleen; De Moor, Bart

    2009-01-01

    For most diseases and examinations, clinical data such as age, gender and medical history guides clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for these data. This "clinical kernel function" more accurately represents similarities between patients. Three data sets were studied, and significantly better performance was obtained with a Least Squares Support Vector Machine based on the clinical kernel function than with the linear kernel function.
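
    A hedged sketch of the idea of such a kernel (the exact definition in the paper may differ): each continuous variable contributes a similarity of (range − |x − z|)/range, each categorical variable contributes 1 for a match and 0 otherwise, and the kernel averages over variables:

```python
import numpy as np

# Illustrative "clinical kernel" in the spirit described in the abstract.
# Details (weighting, handling of ordinal variables) may differ from the paper.

def clinical_kernel(x, z, ranges, categorical):
    sims = []
    for xi, zi, rng, is_cat in zip(x, z, ranges, categorical):
        if is_cat:
            sims.append(1.0 if xi == zi else 0.0)   # exact match for categories
        else:
            sims.append((rng - abs(xi - zi)) / rng) # closeness relative to range
    return float(np.mean(sims))

# Example patients: age (continuous, observed range 0-100) and gender (categorical).
k_same = clinical_kernel([60, 1], [60, 1], ranges=[100, None], categorical=[False, True])
k_diff = clinical_kernel([60, 1], [40, 0], ranges=[100, None], categorical=[False, True])
```

    Unlike a linear kernel, the similarity stays bounded in [0, 1] and is insensitive to the absolute scale of each clinical variable.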

  3. Prediction of forming limit in hydro-mechanical deep drawing of steel sheets using ductile fracture criterion

    NASA Astrophysics Data System (ADS)

    Oh, S.-T.; Chang, H.-J.; Oh, K. H.; Han, H. N.

    2006-04-01

    It has been observed that the forming limit curves at fracture (FLCF) of steel sheets with a relatively high ductility limit have linear shapes, similar to those of a bulk forming process. In contrast, the FLCFs of sheets with a relatively low ductility limit have rather complex shapes, approaching the forming limit curve at necking (FLCN) towards the equi-biaxial strain paths. In this study, the FLCFs of steel sheets were measured and compared with the fracture strains predicted from specific ductile fracture criteria, including a criterion suggested by the authors, which can accurately describe FLCFs with both linear and complex shapes. To predict the forming limit for hydro-mechanical deep drawing of steel sheets, the ductile fracture criteria were integrated into a finite element simulation. The simulation results based on the criterion suggested by the authors accurately predicted the experimental fracture limits of steel sheets for the hydro-mechanical deep drawing process.

  4. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    A formalism is developed to quantify the sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
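
    The two estimators can be contrasted in a few lines. This is a sketch of the general Gauss-Markov idea behind such optimal averaging, not the paper's exact formalism; the correlation model, variances, and sample times are all assumed:

```python
import numpy as np

# Estimate a window average of a signal s(t) from irregular noisy samples
# y_i = s(t_i) + e_i. Composite average: plain mean of in-window samples.
# Optimal (Gauss-Markov) estimate: weights w = C^-1 c, where C is the sample
# covariance matrix and c_i the covariance of sample i with the window average.

rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0.0, 10.0, 15))      # irregular sample times
sig2, noise2, tau = 1.0, 0.1, 3.0                # signal var, noise var, corr scale
corr = lambda dt: np.exp(-(dt / tau) ** 2)       # assumed Gaussian correlation

# Covariance between samples (signal plus measurement noise).
C = sig2 * corr(np.abs(t_obs[:, None] - t_obs[None, :])) + noise2 * np.eye(len(t_obs))

# Covariance of each sample with the average over [0, 10] via quadrature.
t_grid = np.linspace(0.0, 10.0, 501)
c = sig2 * corr(np.abs(t_obs[:, None] - t_grid[None, :])).mean(axis=1)

w = np.linalg.solve(C, c)                        # optimal weights

y = rng.standard_normal(len(t_obs))              # placeholder observations
composite = y.mean()                             # composite average
optimal = w @ y                                  # optimal linear estimate
```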

  5. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with that of the linear AR model. Unlike previous studies that typically specify threshold models using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with an external threshold variable specification produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
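
    The linear AR benchmark in such comparisons can be sketched with ordinary least squares; this is illustrative only and far simpler than the models compared in the study:

```python
import numpy as np

def fit_ar(x, p):
    """Fit an AR(p) model x_t = c + sum_k phi_k * x_{t-k} by ordinary least squares."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return beta[0], beta[1:]

def forecast_ar(x, intercept, coefs):
    """One-step-ahead forecast from the last p observations."""
    p = len(coefs)
    return intercept + coefs @ x[-1:-p - 1:-1]

# Noiseless AR(1) series x_t = 0.5 + 0.8*x_{t-1}, so OLS recovers c and phi exactly.
x = [0.0]
for _ in range(30):
    x.append(0.5 + 0.8 * x[-1])
x = np.array(x)

c, phi = fit_ar(x, 1)
pred = forecast_ar(x, c, phi)    # one-step out-of-sample forecast
```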

  6. A Fast Method to Calculate the Spatial Impulse Response for 1-D Linear Ultrasonic Phased Array Transducers

    PubMed Central

    Zou, Cheng; Sun, Zhenguo; Cai, Dong; Muhammad, Salman; Zhang, Wenzeng; Chen, Qiang

    2016-01-01

    A method is developed to accurately and efficiently determine the spatial impulse response at specifically discretized observation points in the radiated field of 1-D linear ultrasonic phased array transducers. In contrast, previously adopted solutions only optimize the calculation procedure for a single rectangular transducer and require approximations or nonlinear calculations. In this research, an algorithm that follows an alternative approach to expedite the calculation of the spatial impulse response of a rectangular linear array is presented. The key assumption of this algorithm is that the transducer apertures are identical and distributed with a uniform pitch along a line on an infinite rigid baffle. Two points in the observation field that have the same position relative to two transducer apertures share the same spatial impulse response contributed by the corresponding transducer. The observation field is discretized specifically to satisfy this relationship. The analytical expressions of the proposed algorithm, based on this specific selection of observation points, are derived to remove redundant calculations. To evaluate the proposed methodology, simulation results obtained from the proposed method and the classical summation method are compared. The outcomes demonstrate that the proposed strategy speeds up the calculation procedure, with a speed-up ratio that depends on the number of discrete points and the number of array elements. This development will be valuable in the development of advanced and faster linear ultrasonic phased array systems. PMID:27834799
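
    The symmetry being exploited can be demonstrated with a toy response function (a stand-in for the actual spatial impulse response): when field points share the array pitch, the response of element j at point i depends only on the offset i − j, so one table of n_pts + n_elem − 1 values replaces n_pts × n_elem evaluations:

```python
import numpy as np

n_elem, n_pts = 8, 16

def response(offset):
    """Placeholder for h(point, element); depends only on the pitch offset."""
    return 1.0 / (1.0 + offset ** 2)

# Naive approach: one evaluation per (point, element) pair.
naive = np.array([[response(i - j) for j in range(n_elem)] for i in range(n_pts)])

# Shared approach: one evaluation per distinct offset, then table lookups.
offsets = np.arange(-(n_elem - 1), n_pts)        # all possible values of i - j
table = {d: response(d) for d in offsets}
shared = np.array([[table[i - j] for j in range(n_elem)] for i in range(n_pts)])
```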

  7. A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes.

    PubMed

    Vogl, Gregory W; Weiss, Brian A; Donmez, M Alkan

    2015-01-01

    A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a 'sensor box' to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality.

  8. A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes

    PubMed Central

    Vogl, Gregory W.; Weiss, Brian A.; Donmez, M. Alkan

    2017-01-01

    A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a ‘sensor box’ to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality. PMID:28691039

  9. Development and Validation of an HPLC Method for Karanjin in Pongamia pinnata linn. Leaves.

    PubMed

    Katekhaye, S; Kale, M S; Laddha, K S

    2012-01-01

    A rapid, simple and specific reversed-phase HPLC method has been developed for analysis of karanjin in Pongamia pinnata Linn. leaves. HPLC analysis was performed on a C(18) column using an 85:13.5:1.5 (v/v) mixture of methanol, water and acetic acid as isocratic mobile phase at a flow rate of 1 ml/min. UV detection was at 300 nm. The method was validated for accuracy, precision, linearity, and specificity. Validation revealed the method is specific, accurate, precise, reliable and reproducible. Good linear correlation coefficients (r(2)>0.997) were obtained for calibration plots in the ranges tested. Limit of detection was 4.35 μg and limit of quantification was 16.56 μg. Intra- and inter-day RSD of retention times and peak areas were less than 1.24%, and recovery was between 95.05 and 101.05%. The established HPLC method is appropriate, enabling efficient quantitative analysis of karanjin in Pongamia pinnata leaves.

  10. Development and Validation of an HPLC Method for Karanjin in Pongamia pinnata linn. Leaves

    PubMed Central

    Katekhaye, S; Kale, M. S.; Laddha, K. S.

    2012-01-01

    A rapid, simple and specific reversed-phase HPLC method has been developed for analysis of karanjin in Pongamia pinnata Linn. leaves. HPLC analysis was performed on a C18 column using an 85:13.5:1.5 (v/v) mixture of methanol, water and acetic acid as isocratic mobile phase at a flow rate of 1 ml/min. UV detection was at 300 nm. The method was validated for accuracy, precision, linearity, and specificity. Validation revealed the method is specific, accurate, precise, reliable and reproducible. Good linear correlation coefficients (r2>0.997) were obtained for calibration plots in the ranges tested. Limit of detection was 4.35 μg and limit of quantification was 16.56 μg. Intra- and inter-day RSD of retention times and peak areas were less than 1.24%, and recovery was between 95.05 and 101.05%. The established HPLC method is appropriate, enabling efficient quantitative analysis of karanjin in Pongamia pinnata leaves. PMID:23204626

  11. Validation of a Thin-Layer Chromatography for the Determination of Hydrocortisone Acetate and Lidocaine in a Pharmaceutical Preparation

    PubMed Central

    Dołowy, Małgorzata; Kulpińska-Kucia, Katarzyna; Pyka, Alina

    2014-01-01

    A new specific, precise, accurate, and robust TLC-densitometry has been developed for the simultaneous determination of hydrocortisone acetate and lidocaine hydrochloride in combined pharmaceutical formulation. The chromatographic analysis was carried out using a mobile phase consisting of chloroform + acetone + ammonia (25%) in volume composition 8 : 2 : 0.1 and silica gel 60F254 plates. Densitometric detection was performed in UV at wavelengths 200 nm and 250 nm, respectively, for lidocaine hydrochloride and hydrocortisone acetate. The validation of the proposed method was performed in terms of specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), precision, accuracy, and robustness. The applied TLC procedure is linear in hydrocortisone acetate concentration range of 3.75 ÷ 12.50 μg·spot−1, and from 1.00 ÷ 2.50 μg·spot−1 for lidocaine hydrochloride. The developed method was found to be accurate (the value of the coefficient of variation CV [%] is less than 3%), precise (CV [%] is less than 2%), specific, and robust. LOQ of hydrocortisone acetate is 0.198 μg·spot−1 and LOD is 0.066 μg·spot−1. LOQ and LOD values for lidocaine hydrochloride are 0.270 and 0.090 μg·spot−1, respectively. The assay value of both bioactive substances is consistent with the limits recommended by Pharmacopoeia. PMID:24526880

  12. Validation of a thin-layer chromatography for the determination of hydrocortisone acetate and lidocaine in a pharmaceutical preparation.

    PubMed

    Dołowy, Małgorzata; Kulpińska-Kucia, Katarzyna; Pyka, Alina

    2014-01-01

    A new specific, precise, accurate, and robust TLC-densitometry has been developed for the simultaneous determination of hydrocortisone acetate and lidocaine hydrochloride in combined pharmaceutical formulation. The chromatographic analysis was carried out using a mobile phase consisting of chloroform+acetone+ammonia (25%) in volume composition 8:2:0.1 and silica gel 60F254 plates. Densitometric detection was performed in UV at wavelengths 200 nm and 250 nm, respectively, for lidocaine hydrochloride and hydrocortisone acetate. The validation of the proposed method was performed in terms of specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), precision, accuracy, and robustness. The applied TLC procedure is linear in hydrocortisone acetate concentration range of 3.75÷12.50  μg·spot(-1), and from 1.00÷2.50  μg·spot(-1) for lidocaine hydrochloride. The developed method was found to be accurate (the value of the coefficient of variation CV [%] is less than 3%), precise (CV [%] is less than 2%), specific, and robust. LOQ of hydrocortisone acetate is 0.198  μg·spot(-1) and LOD is 0.066  μg·spot(-1). LOQ and LOD values for lidocaine hydrochloride are 0.270 and 0.090  μg·spot(-1), respectively. The assay value of both bioactive substances is consistent with the limits recommended by Pharmacopoeia.

  13. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE PAGES

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...

    2015-11-12

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  14. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  15. Accurate artificial boundary conditions for the semi-discretized linear Schrödinger and heat equations on rectangular domains

    NASA Astrophysics Data System (ADS)

    Ji, Songsong; Yang, Yibo; Pang, Gang; Antoine, Xavier

    2018-01-01

    The aim of this paper is to design some accurate artificial boundary conditions for the semi-discretized linear Schrödinger and heat equations in rectangular domains. The Laplace transform in time and discrete Fourier transform in space are applied to get Green's functions of the semi-discretized equations in unbounded domains with single-source. An algorithm is given to compute these Green's functions accurately through some recurrence relations. Furthermore, the finite-difference method is used to discretize the reduced problem with accurate boundary conditions. Numerical simulations are presented to illustrate the accuracy of our method in the case of the linear Schrödinger and heat equations. It is shown that the reflection at the corners is correctly eliminated.

  16. A scientific and statistical analysis of accelerated aging for pharmaceuticals. Part 1: accuracy of fitting methods.

    PubMed

    Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L

    2014-10-01

    Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
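
    The isoconversion step can be sketched numerically: at each accelerated temperature find the time t_iso at which the degradant reaches the specification limit, fit ln(t_iso) against 1/T (Arrhenius), and extrapolate to ambient temperature. The rate parameters below are synthetic and exactly Arrhenius, so the extrapolation is exact; real data would scatter around the fit:

```python
import numpy as np

spec_limit = 0.5        # % degradant specification limit
Ea_over_R = 12000.0     # activation energy / gas constant, kelvin (assumed)
lnA = 30.0              # pre-exponential factor, rate in %/day (assumed)

def rate(T_kelvin):
    """Zero-order degradation rate from the Arrhenius equation."""
    return np.exp(lnA - Ea_over_R / T_kelvin)

temps = np.array([323.15, 333.15, 343.15])   # 50, 60, 70 degrees C
t_iso = spec_limit / rate(temps)             # time to reach the spec limit

# Linear Arrhenius fit: ln(t_iso) = a + b / T, then extrapolate to 25 C.
b, a = np.polyfit(1.0 / temps, np.log(t_iso), 1)
shelf_life_25C = np.exp(a + b / 298.15)      # estimated shelf life in days
```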

  17. A stopping criterion for the iterative solution of partial differential equations

    NASA Astrophysics Data System (ADS)

    Rao, Kaustubh; Malan, Paul; Perot, J. Blair

    2018-01-01

    A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
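
    For a linearly converging iteration the idea reduces to a few lines: estimate the contraction factor ρ from the ratio of successive solution changes, then approximate the error as Δ·ρ/(1 − ρ). This toy version illustrates the principle only; it is not the paper's estimator:

```python
# Error estimation from prior solution changes for a linearly converging
# fixed-point iteration. Exact for a perfectly linear contraction.

def error_estimate(changes):
    """Estimate the current error from the last two iterate changes."""
    rho = changes[-1] / changes[-2]          # estimated contraction factor
    return changes[-1] * rho / (1.0 - rho)

# Fixed-point iteration x <- 0.5*x + 1, with true solution x* = 2.
x, changes = 0.0, []
for _ in range(8):
    x_new = 0.5 * x + 1.0
    changes.append(abs(x_new - x))
    x = x_new

est = error_estimate(changes)
true_err = abs(x - 2.0)
```

    Because the toy iteration contracts at exactly ρ = 0.5, the estimate matches the true error; for noisy or stagnating changes a more robust fallback (as the abstract describes) is needed.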

  18. Group refractive index quantification using a Fourier domain short coherence Sagnac interferometer.

    PubMed

    Montonen, Risto; Kassamakov, Ivan; Lehmann, Peter; Österberg, Kenneth; Hæggström, Edward

    2018-02-15

    The group refractive index is important in the length calibration of Fourier domain interferometers by transparent transfer standards. We demonstrate accurate group refractive index quantification using a Fourier domain short coherence Sagnac interferometer. Because the length calibration function is justifiably linear, the calibration constants cancel out in the evaluation of the group refractive index, which is then obtained accurately from two uncalibrated lengths. Measurements of two standard thickness coverslips revealed group indices of 1.5426±0.0042 and 1.5434±0.0046, with accuracies quoted at the 95% confidence level. This agrees with the dispersion data of the coverslip manufacturer and therefore validates our method. Our method provides a sample-specific and accurate group refractive index quantification using the same Fourier domain interferometer that is to be calibrated for length. This significantly reduces the requirements on the calibration transfer standard.
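
    The cancellation argument is simple enough to state in code. The values below are illustrative, not the paper's measurements; the point is only that a common (unknown) linear calibration scale k drops out of the ratio of the two reported lengths:

```python
# If the interferometer reports lengths as L_reported = k * L_true with an
# unknown but common scale k, the group index n_g = L_optical / L_geometric
# is unaffected by k. Values are illustrative assumptions.

k = 1.037                  # unknown calibration scale, identical for both lengths
L_geom_true = 170.0e-6     # coverslip thickness in metres (assumed)
n_g_true = 1.5430          # group index to be recovered (assumed)

L_geom_reported = k * L_geom_true             # uncalibrated geometric length
L_opt_reported = k * (n_g_true * L_geom_true) # uncalibrated optical length

n_g = L_opt_reported / L_geom_reported        # calibration scale cancels
```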

  19. Rotational cavity optomechanics

    NASA Astrophysics Data System (ADS)

    Wetzel, Wyatt; Rodenburg, B.; Ek, B.; Jha, A. K.; Bhattacharya, M.

    2017-04-01

    We consider optomechanics based on the exchange of orbital angular momentum between light and matter. Specifically, we consider a nanoparticle levitated in an optical ring trap in a cavity. The motion of this particle is probed by an angular lattice created by two co-propagating beams carrying equal but opposite angular momenta. First, we consider the case where the lattice is weak, so the nanoparticle can execute complete rotations about the cavity axis. We establish analytically the existence of a linear regime where accurate Doppler velocimetry can be performed on the nanoparticle, and also describe numerically the dynamics in the nonlinear regime where the velocimetry is no longer accurate. Second, we consider the case where the lattice is strong and the nanoparticle executes torsional motion about the cavity axis. We find the presence of an external torque introduces an instability, but can also be used to tune continuously the linear optomechanical coupling, whose strength can be measured by homodyning the cavity output field. This research was supported by the National Science Foundation (NSF) (1454931), the Office of Naval Research (N00014-14-1-0803), and the Research Corporation for Science Advancement (20966).

  20. A hybrid-stress finite element approach for stress and vibration analysis in linear anisotropic elasticity

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Fly, Gerald W.; Mahadevan, L.

    1987-01-01

    A hybrid stress finite element method is developed for accurate stress and vibration analysis of problems in linear anisotropic elasticity. A modified form of the Hellinger-Reissner principle is formulated for dynamic analysis, and an algorithm for the determination of the anisotropic elastic and compliance constants from experimental data is developed. These schemes were implemented in a finite element program for static and dynamic analysis of linear anisotropic two-dimensional elasticity problems. Specific numerical examples are considered to verify the accuracy of the hybrid stress approach and compare it with that of the standard displacement method, especially for highly anisotropic materials. It is shown that the hybrid stress approach gives much better results than the displacement method. Preliminary work on extensions of this method to three-dimensional elasticity is discussed, and the stress shape functions necessary for this extension are included.

  1. Predicting Cortisol Exposure from Paediatric Hydrocortisone Formulation Using a Semi-Mechanistic Pharmacokinetic Model Established in Healthy Adults.

    PubMed

    Melin, Johanna; Parra-Guillen, Zinnia P; Hartung, Niklas; Huisinga, Wilhelm; Ross, Richard J; Whitaker, Martin J; Kloft, Charlotte

    2018-04-01

    Optimisation of hydrocortisone replacement therapy in children is challenging as there is currently no licensed formulation and dose in Europe for children under 6 years of age. In addition, hydrocortisone has non-linear pharmacokinetics caused by saturable plasma protein binding. A paediatric hydrocortisone formulation, Infacort®, oral hydrocortisone granules with taste masking, has therefore been developed. The objective of this study was to establish a population pharmacokinetic model based on studies in healthy adult volunteers to predict hydrocortisone exposure in paediatric patients with adrenal insufficiency. Cortisol and binding protein concentrations were evaluated in the absence and presence of dexamethasone in healthy volunteers (n = 30). Dexamethasone was used to suppress endogenous cortisol concentrations prior to and after single doses of 0.5, 2, 5 and 10 mg of Infacort® or 20 mg of Infacort®/hydrocortisone tablet/hydrocortisone intravenously. A plasma protein binding model was established using unbound and total cortisol concentrations, and sequentially integrated into the pharmacokinetic model. Both specific (non-linear) and non-specific (linear) protein binding were included in the cortisol binding model. A two-compartment disposition model with saturable absorption and a constant endogenous cortisol baseline (Baselinecort = 15.5 nmol/L) described the data accurately. The predicted cortisol exposure for a given dose varied considerably within a small body weight range in individuals weighing <20 kg. Our semi-mechanistic population pharmacokinetic model for hydrocortisone captures the complex pharmacokinetics of hydrocortisone in a simplified but comprehensive framework. The predicted cortisol exposure indicated the importance of defining an accurate hydrocortisone dose to mimic physiological concentrations for neonates and infants weighing <20 kg. EudraCT numbers: 2013-000260-28, 2013-000259-42.
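
    The binding structure described, specific saturable binding plus linear non-specific binding, can be sketched as follows; the parameter values and the numerical inversion are illustrative assumptions, not the paper's estimates or method:

```python
# Total cortisol = free + saturably bound + linearly bound:
#   C_total = C_f + Bmax*C_f/(Kd + C_f) + NS*C_f
# Free concentration is recovered from total by bisection (monotone relation).
# Parameter values are illustrative only.

def total_from_free(cf, bmax=500.0, kd=30.0, ns=0.4):
    return cf + bmax * cf / (kd + cf) + ns * cf

def free_from_total(ct, bmax=500.0, kd=30.0, ns=0.4):
    lo, hi = 0.0, ct                       # free cannot exceed total
    for _ in range(100):                   # plain bisection
        mid = 0.5 * (lo + hi)
        if total_from_free(mid, bmax, kd, ns) < ct:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

cf = free_from_total(300.0)                # free cortisol for 300 nmol/L total
```

    The saturable term is what makes the overall pharmacokinetics non-linear: at high concentrations the bound fraction plateaus and the free fraction rises.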

  2. Linear Modeling and Evaluation of Controls on Flow Response in Western Post-Fire Watersheds

    NASA Astrophysics Data System (ADS)

    Saxe, S.; Hogue, T. S.; Hay, L.

    2015-12-01

    This research investigates the impact of wildfires on watershed flow regimes throughout the western United States, specifically focusing on evaluation of fire events within specified subregions and determination of the impact of climate and geophysical variables on post-fire flow response. Fire events were collected through federal and state-level databases and streamflow data were collected from U.S. Geological Survey stream gages. 263 watersheds were identified with at least 10 years of continuous pre-fire daily streamflow records and 5 years of continuous post-fire daily flow records. For each watershed, percent changes in runoff ratio (RO), annual seven-day low flows (7Q2), and annual seven-day high flows (7Q10) were calculated from pre- to post-fire. Numerous independent variables were identified for each watershed and fire event, including topographic, land cover, climate, burn severity, and soils data. The national watersheds were divided into five regions through K-clustering, and a lasso linear regression model, applying the leave-one-out calibration method, was calculated for each region. Nash-Sutcliffe Efficiency (NSE) was used to determine the accuracy of the resulting models. The regions encompassing the United States along and west of the Rocky Mountains, excluding the coastal watersheds, produced the most accurate linear models. The Pacific coast region models produced poor and inconsistent results, indicating that the regions need to be further subdivided. Presently, the RO and high-flow (7Q10) response variables appear to be more easily modeled than the low flows (7Q2). Results of linear regression modeling showed varying importance of watershed and fire event variables, with conflicting correlations between land cover types and soil types by region. The addition of further independent variables and restriction of current variables based on correlation indicators are ongoing and should allow for more accurate linear regression modeling.
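
    The evaluation loop, leave-one-out fitting scored by Nash-Sutcliffe Efficiency, can be sketched as below. The study used lasso regression; plain least squares is substituted here to keep the sketch dependency-free, and the data are synthetic:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect, 0 matches the mean of obs."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def loo_predictions(X, y):
    """Leave-one-out predictions from an intercept + linear model (OLS)."""
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        beta, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = beta[0] + X[i] @ beta[1:]
    return preds

# Synthetic watersheds: three hypothetical predictors, one flow-response variable.
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(40)

score = nse(y, loo_predictions(X, y))
```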

  3. Calibration of the optical torque wrench.

    PubMed

    Pedaci, Francesco; Huang, Zhuangxiong; van Oene, Maarten; Dekker, Nynke H

    2012-02-13

    The optical torque wrench is a laser trapping technique that expands the capability of standard optical tweezers to torque manipulation and measurement, using the laser linear polarization to orient tailored microscopic birefringent particles. The ability to measure torque of the order of kBT (∼4 pN nm) is especially important in the study of biophysical systems at the molecular and cellular level. Quantitative torque measurements rely on an accurate calibration of the instrument. Here we describe and implement a set of calibration approaches for the optical torque wrench, including methods that have direct analogs in linear optical tweezers as well as introducing others that are specifically developed for the angular variables. We compare the different methods, analyze their differences, and make recommendations regarding their implementations.

  4. Component-specific modeling

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1985-01-01

A series of interdisciplinary modeling and analysis techniques that were specialized to address three specific hot section components are presented. These techniques will incorporate data as well as theoretical methods from many diverse areas, including cycle and performance analysis, heat transfer analysis, linear and nonlinear stress analysis, and mission analysis. Building on the proven techniques already available in these fields, the new methods developed will be integrated into computer codes to provide an accurate and unified approach to analyzing combustor burner liners, hollow air cooled turbine blades, and air cooled turbine vanes. For these components, the methods developed will predict temperature, deformation, stress and strain histories throughout a complete flight mission.

  5. Effect of educational preparation on the accuracy of linear growth measurement in pediatric primary care practices: results of a multicenter nursing study.

    PubMed

    Hench, Karen D; Shults, Justine; Benyi, Terri; Clow, Cheryl; Delaune, Joanne; Gilluly, Kathy; Johnson, Lydia; Johnson, Maryann; Rossiter, Katherine; McKnight-Menci, Heather; Shorkey, Doris; Waite, Fran; Weber, Colleen; Lipman, Terri H

    2005-04-01

Consistently monitoring a child's linear growth is one of the least invasive, most sensitive tools to identify normal physiologic functioning and a healthy lifestyle. However, studies, mostly from the United Kingdom, indicate that children are frequently measured incorrectly. Inaccurate linear measurements may result in some children having undetected growth disorders while others with normal growth are referred for costly, unwarranted specialty evaluations. This study presents the secondary analysis of a primary study that used a randomized controlled design to demonstrate that a didactic educational intervention resulted in significantly more children being measured accurately within eight pediatric practices. The secondary analysis explored the influence of the measurer's educational level on the outcome of accurate linear measurement. Results indicated that RNs were twice as likely as non-RNs to measure children accurately.

  6. Estimating health state utility values for comorbid health conditions using SF-6D data.

    PubMed

    Ara, Roberta; Brazier, John

    2011-01-01

When health state utility values for comorbid health conditions are not available, data from cohorts with single conditions are used to estimate scores. The methods used can produce very different results and there is currently no consensus on which is the most appropriate approach. The objective of the current study was to compare the accuracy of five different methods within the same dataset. Data collected during five Welsh Health Surveys were subgrouped by health status. Mean short-form 6 dimension (SF-6D) scores for cohorts with a specific health condition were used to estimate mean SF-6D scores for cohorts with comorbid conditions using the additive, multiplicative, and minimum methods, the adjusted decrement estimator (ADE), and a linear regression model. The mean SF-6D for subgroups with comorbid health conditions ranged from 0.4648 to 0.6068. The linear model produced the most accurate scores for the comorbid health conditions, with 88% of values accurate to within the minimum important difference for the SF-6D. The additive and minimum methods underestimated and overestimated the actual SF-6D scores, respectively. The multiplicative and ADE methods both underestimated the majority of scores. However, both methods performed better when estimating scores smaller than 0.50. Although the range in actual health state utility values (HSUVs) was relatively small, our data covered the lower end of the index, and the majority of previous research has involved actual HSUVs at the upper end of possible ranges. Although the linear model gave the most accurate results in our data, additional research is required to validate our findings. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
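
    The three simplest estimators compared above can be sketched as follows. Exact formulations vary in the literature, so this sketch assumes decrements are taken from a full-health baseline utility of 1.0; the ADE and regression methods are omitted:

    ```python
    def additive(u1, u2):
        # Sum the utility decrements from full health (baseline 1.0)
        return 1.0 - ((1.0 - u1) + (1.0 - u2))

    def multiplicative(u1, u2):
        # Treat each condition as a proportional reduction in utility
        return u1 * u2

    def minimum(u1, u2):
        # Assume the worse condition dominates
        return min(u1, u2)

    u1, u2 = 0.8, 0.7   # hypothetical single-condition utilities
    print(additive(u1, u2), multiplicative(u1, u2), minimum(u1, u2))
    # additive gives the largest decrement, minimum the smallest
    ```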

  7. Semi-nonparametric VaR forecasts for hedge funds during the recent crisis

    NASA Astrophysics Data System (ADS)

    Del Brio, Esther B.; Mora-Valencia, Andrés; Perote, Javier

    2014-05-01

The need to provide accurate value-at-risk (VaR) forecasting measures has triggered an important literature in econophysics. Although accurate VaR models and methodologies are particularly in demand among hedge fund managers, few articles are specifically devoted to implementing new techniques for forecasting the VaR of hedge fund returns. This article advances these issues by comparing the performance of risk measures based on parametric distributions (the normal, Student’s t and skewed-t), semi-nonparametric (SNP) methodologies based on Gram-Charlier (GC) series, and the extreme value theory (EVT) approach. Our results show that the normal-, Student’s t- and skewed-t-based methodologies fail to forecast hedge fund VaR, whilst the SNP and EVT approaches succeed in forecasting it accurately. We extend these results to the multivariate framework by providing an explicit formula for the GC copula and its density that encompasses the Gaussian copula and accounts for non-linear dependences. We show that the VaR obtained by the meta GC accurately captures portfolio risk and outperforms regulatory VaR estimates obtained through the meta Gaussian and Student’s t distributions.
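
    The Gaussian benchmark that the abstract finds inadequate for hedge-fund returns is straightforward to compute; a minimal sketch using only the Python standard library:

    ```python
    from statistics import NormalDist, mean, stdev

    def gaussian_var(returns, alpha=0.99):
        """Parametric (normal) value-at-risk: the loss threshold exceeded
        with probability 1 - alpha under a fitted Gaussian."""
        mu, sigma = mean(returns), stdev(returns)
        z = NormalDist().inv_cdf(1.0 - alpha)   # left-tail quantile, ≈ -2.326 at 99%
        return -(mu + sigma * z)                # reported as a positive loss

    # Symmetric toy returns with zero mean: 99% VaR ≈ 2.33 standard deviations
    r = [-0.02, -0.01, 0.0, 0.01, 0.02]
    print(gaussian_var(r, 0.99))
    ```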

  8. High-resolution vertical profiles of groundwater electrical conductivity (EC) and chloride from direct-push EC logs

    NASA Astrophysics Data System (ADS)

    Bourke, Sarah A.; Hermann, Kristian J.; Hendry, M. Jim

    2017-11-01

Elevated groundwater salinity associated with produced water, leaching from landfills or secondary salinity can degrade arable soils and potable water resources. Direct-push electrical conductivity (EC) profiling enables rapid, relatively inexpensive, high-resolution in-situ measurements of subsurface salinity, without requiring core collection or installation of groundwater wells. However, because the direct-push tool measures the bulk EC of both solid and liquid phases (ECa), incorporation of ECa data into regional or historical groundwater data sets requires the prediction of pore water EC (ECw) or chloride (Cl-) concentrations from measured ECa. Statistical linear regression and physically based models for predicting ECw and Cl- from ECa profiles were tested on a brine plume in central Saskatchewan, Canada. A linear relationship between ECa/ECw and porosity was more accurate for predicting ECw and Cl- concentrations than a power-law relationship (Archie's Law). Despite clay contents of up to 96%, the addition of terms to account for electrical conductance in the solid phase did not improve model predictions. In the absence of porosity data, statistical linear regression models adequately predicted ECw and Cl- concentrations from direct-push ECa profiles (ECw = 5.48 ECa + 0.78, R² = 0.87; Cl- = 1,978 ECa - 1,398, R² = 0.73). These statistical models can be used to predict ECw in the absence of lithologic data and will be particularly useful for initial site assessments. The more accurate linear physically based model can be used to predict ECw and Cl- as porosity data become available and the site-specific ECw-Cl- relationship is determined.
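
    The quoted statistical models are plain linear fits, so applying them is trivial; a sketch using the coefficients reported above (these fits are site-specific to the Saskatchewan brine plume and its measurement units, so they should not be assumed transferable):

    ```python
    def predict_ecw(eca):
        """Pore-water EC from bulk EC (site-specific fit, R² = 0.87)."""
        return 5.48 * eca + 0.78

    def predict_cl(eca):
        """Chloride concentration from bulk EC (site-specific fit, R² = 0.73)."""
        return 1978.0 * eca - 1398.0

    print(predict_ecw(1.0))  # ≈ 6.26
    print(predict_cl(1.0))   # ≈ 580.0
    ```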

  9. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

    A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form, and it is then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way, and it is of significant faster convergence rate.

  10. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    NASA Astrophysics Data System (ADS)

    Yang, Zili

    2017-07-01

Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most of the existing methods for full heart segmentation treat the heart as a whole part and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to test this method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.

  11. The Fermi-Pasta-Ulam Problem and Its Underlying Integrable Dynamics: An Approach Through Lyapunov Exponents

    NASA Astrophysics Data System (ADS)

    Benettin, G.; Pasquali, S.; Ponno, A.

    2018-05-01

FPU models, in dimension one, are perturbations either of the linear model or of the Toda model; perturbations of the linear model include the usual β-model, perturbations of Toda include the usual α+β model. In this paper we explore and compare two families, or hierarchies, of FPU models, closer and closer to either the linear or the Toda model, by computing numerically, for each model, the maximal Lyapunov exponent χ. More precisely, we consider statistically typical trajectories and study the asymptotics of χ for large N (the number of particles) and small ε (the specific energy E/N), and find, for all models, asymptotic power laws χ ≃ Cε^a, with C and a depending on the model. The asymptotics turns out to be, in general, rather slow, and producing accurate results requires a great computational effort. We also revisit and extend the analytic computation of χ introduced by Casetti, Livi and Pettini, originally formulated for the β-model. With great evidence the theory extends successfully to all models of the linear hierarchy, but not to models close to Toda.
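
    Extracting C and a from a power law χ ≈ C ε^a is typically done by linear regression in log-log space; a minimal sketch on synthetic, noise-free data (not the authors' analysis):

    ```python
    import math

    def fit_power_law(eps, chi):
        """Fit chi = C * eps**a by least squares on log(chi) = log(C) + a*log(eps)."""
        x = [math.log(e) for e in eps]
        y = [math.log(c) for c in chi]
        n = len(x)
        xbar, ybar = sum(x) / n, sum(y) / n
        a = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar) ** 2 for xi in x)
        C = math.exp(ybar - a * xbar)
        return C, a

    # Recover the parameters exactly from clean synthetic data
    eps = [1e-4, 1e-3, 1e-2, 1e-1]
    C, a = fit_power_law(eps, [2.0 * e ** 0.25 for e in eps])
    print(C, a)  # ≈ 2.0, 0.25
    ```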

  12. Robust control of the DC-DC boost converter based on the uncertainty and disturbance estimator

    NASA Astrophysics Data System (ADS)

    Oucheriah, Said

    2017-11-01

    In this paper, a robust non-linear controller based on the uncertainty and disturbance estimator (UDE) scheme is successfully developed and implemented for the output voltage regulation of the DC-DC boost converter. System uncertainties, external disturbances and unknown non-linear dynamics are lumped as a signal that is accurately estimated using a low-pass filter and their effects are cancelled by the controller. This methodology forms the basis of the UDE-based controller. A simple procedure is also developed that systematically determines the parameters of the controller to meet certain specifications. Using simulation, the effectiveness of the proposed controller is compared against the sliding-mode control (SMC). Experimental tests also show that the proposed controller is robust to system uncertainties, large input and load perturbations.

  13. Simultaneous Determination of Ofloxacin and Flavoxate Hydrochloride by Absorption Ratio and Second Derivative UV Spectrophotometry

    PubMed Central

    Attimarad, Mahesh

    2010-01-01

The objective of this study was to develop simple, precise, accurate and sensitive UV spectrophotometric methods for the simultaneous determination of ofloxacin (OFX) and flavoxate HCl (FLX) in pharmaceutical formulations. The first method is based on the absorption ratio method, forming the Q-absorbance equation at 289 nm (λmax of OFX) and 322.4 nm (isoabsorptive point). The linearity range was found to be 1 to 30 μg/ml for FLX and OFX. In the second method, second-derivative absorbances at 311.4 nm for OFX (zero crossing for FLX) and at 246.2 nm for FLX (zero crossing for OFX) were used for the determination of the drugs, and the linearity range was found to be 2 to 30 μg/ml for OFX and 2 to 75 μg/ml for FLX. The accuracy and precision of the methods were determined and validated statistically. Both methods showed good reproducibility and recovery with % RSD less than 1.5%. Both methods were found to be rapid, specific, precise and accurate and can be successfully applied for the routine analysis of OFX and FLX in combined dosage form. PMID:24826003

  14. Sex-specific lean body mass predictive equations are accurate in the obese paediatric population

    PubMed Central

    Jackson, Lanier B.; Henshaw, Melissa H.; Carter, Janet; Chowdhury, Shahryar M.

    2015-01-01

    Background The clinical assessment of lean body mass (LBM) is challenging in obese children. A sex-specific predictive equation for LBM derived from anthropometric data was recently validated in children. Aim The purpose of this study was to independently validate these predictive equations in the obese paediatric population. Subjects and methods Obese subjects aged 4–21 were analysed retrospectively. Predicted LBM (LBMp) was calculated using equations previously developed in children. Measured LBM (LBMm) was derived from dual-energy x-ray absorptiometry. Agreement was expressed as [(LBMm-LBMp)/LBMm] with 95% limits of agreement. Results Of 310 enrolled patients, 195 (63%) were females. The mean age was 11.8 ± 3.4 years and mean BMI Z-score was 2.3 ± 0.4. The average difference between LBMm and LBMp was −0.6% (−17.0%, 15.8%). Pearson’s correlation revealed a strong linear relationship between LBMm and LBMp (r=0.97, p<0.01). Conclusion This study validates the use of these clinically-derived sex-specific LBM predictive equations in the obese paediatric population. Future studies should use these equations to improve the ability to accurately classify LBM in obese children. PMID:26287383

  15. Recent work on material interface reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosso, S.J.; Swartz, B.K.

    1997-12-31

For the last 15 years, many Eulerian codes have relied on a series of piecewise linear interface reconstruction algorithms developed by David Youngs. In a typical Youngs' method, the material interfaces were reconstructed based upon nearby cell values of volume fractions of each material. The interfaces were locally represented by linear segments in two dimensions and by pieces of planes in three dimensions. The first step in such reconstruction was to locally approximate an interface normal. In Youngs' 3D method, a local gradient of a cell-volume-fraction function was estimated and taken to be the local interface normal. A linear interface was moved perpendicular to the now known normal until the mass behind it matched the material volume fraction for the cell in question. But for distorted or nonorthogonal meshes, the gradient normal estimate didn't accurately match that of linear material interfaces. Moreover, curved material interfaces were also poorly represented. The authors will present some recent work in the computation of more accurate interface normals, without necessarily increasing stencil size. Their estimate of the normal is made using an iterative process that, given mass fractions for nearby cells of known but arbitrary variable density, converges in 3 or 4 passes in practice (and quadratically, like Newton's method, in principle). The method reproduces a linear interface in both orthogonal and nonorthogonal meshes. The local linear approximation is generally 2nd-order accurate, with a 1st-order accurate normal for curved interfaces in both two and three dimensional polyhedral meshes. Recent work demonstrating the interface reconstruction for curved surfaces will be discussed.
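
    The gradient-based normal estimate at the core of Youngs' method is easy to sketch for the benign case of a uniform orthogonal 2D grid (the setting where, as noted above, it works well). This is an illustrative sketch, not the iterative method the authors describe:

    ```python
    import math

    def youngs_normal(vof, i, j, h=1.0):
        """Estimate the interface normal in cell (i, j) from a 2D array of
        cell volume fractions `vof`, using central differences on a uniform
        grid of spacing h. The outward normal (filled side toward empty side)
        is the negated, normalised gradient of the volume fraction."""
        gx = (vof[i + 1][j] - vof[i - 1][j]) / (2.0 * h)
        gy = (vof[i][j + 1] - vof[i][j - 1]) / (2.0 * h)
        norm = math.hypot(gx, gy)
        return (-gx / norm, -gy / norm)

    # Planar interface normal to the first grid axis: fractions drop from 1 to 0
    vof = [[1.0] * 5, [1.0] * 5, [0.5] * 5, [0.0] * 5, [0.0] * 5]
    print(youngs_normal(vof, 2, 2))  # unit normal pointing toward the empty side
    ```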

  16. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    NASA Astrophysics Data System (ADS)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow the error in the computed exchange field to be reduced by increasing the mesh density for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.

  17. Single shot trajectory design for region-specific imaging using linear and nonlinear magnetic encoding fields.

    PubMed

    Layton, Kelvin J; Gallichan, Daniel; Testud, Frederik; Cocosco, Chris A; Welz, Anna M; Barmet, Christoph; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim

    2013-09-01

    It has recently been demonstrated that nonlinear encoding fields result in a spatially varying resolution. This work develops an automated procedure to design single-shot trajectories that create a local resolution improvement in a region of interest. The technique is based on the design of optimized local k-space trajectories and can be applied to arbitrary hardware configurations that employ any number of linear and nonlinear encoding fields. The trajectories designed in this work are tested with the currently available hardware setup consisting of three standard linear gradients and two quadrupolar encoding fields generated from a custom-built gradient insert. A field camera is used to measure the actual encoding trajectories up to third-order terms, enabling accurate reconstructions of these demanding single-shot trajectories, although the eddy current and concomitant field terms of the gradient insert have not been completely characterized. The local resolution improvement is demonstrated in phantom and in vivo experiments. Copyright © 2012 Wiley Periodicals, Inc.

  18. Sedimentation of knotted polymers

    NASA Astrophysics Data System (ADS)

    Piili, J.; Marenduzzo, D.; Kaski, K.; Linna, R. P.

    2013-01-01

We investigate the sedimentation of knotted polymers by means of stochastic rotation dynamics, a molecular dynamics algorithm that takes hydrodynamics fully into account. We show that the sedimentation coefficient s, related to the terminal velocity of the knotted polymers, increases linearly with the average crossing number nc of the corresponding ideal knot. This provides direct computational confirmation of this relation, postulated on the basis of sedimentation experiments by Rybenkov et al. [J. Mol. Biol. 267, 299 (1997), doi:10.1006/jmbi.1996.0876]. Such a relation was previously shown to hold with simulations for knot electrophoresis. We also show that there is an accurate linear dependence of s on the inverse of the radius of gyration, Rg^-1, more specifically with the inverse of the Rg component that is perpendicular to the direction along which the polymer sediments. When the polymer sediments in a slab, the walls affect the results appreciably. However, Rg^-1 remains to a good precision linearly dependent on nc. Therefore, Rg^-1 is a good measure of a knot's complexity.

  19. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time length, and computer memory storage. The Volterra integral allowed the implementation of higher order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
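
    The Prony-series representation that the Zienkiewicz method requires is just a sum of decaying exponentials; a minimal sketch of evaluating a relaxation modulus (the coefficients here are hypothetical, chosen only for illustration):

    ```python
    import math

    def prony_modulus(t, e_inf, terms):
        """Relaxation modulus E(t) = E_inf + sum_i E_i * exp(-t / tau_i),
        with `terms` a list of (E_i, tau_i) pairs."""
        return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in terms)

    terms = [(2.0, 1.0), (1.0, 10.0)]       # hypothetical moduli / relaxation times
    print(prony_modulus(0.0, 1.0, terms))   # → 4.0 (instantaneous modulus)
    # E(t) decays monotonically toward the long-time modulus E_inf = 1.0
    ```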

  20. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  1. Estimating normative limits of Heidelberg Retina Tomograph optic disc rim area with quantile regression.

    PubMed

    Artes, Paul H; Crabb, David P

    2010-01-01

To investigate why the specificity of the Moorfields Regression Analysis (MRA) of the Heidelberg Retina Tomograph (HRT) varies with disc size, and to derive accurate normative limits for neuroretinal rim area to address this problem. Two datasets from healthy subjects (Manchester, UK, n = 88; Halifax, Nova Scotia, Canada, n = 75) were used to investigate the physiological relationship between the optic disc and neuroretinal rim area. Normative limits for rim area were derived by quantile regression (QR) and compared with those of the MRA (derived by linear regression). Logistic regression analyses were performed to quantify the association between disc size and positive classifications with the MRA, as well as with the QR-derived normative limits. In both datasets, the specificity of the MRA depended on optic disc size. The odds of observing a borderline or outside-normal-limits classification increased by approximately 10% for each 0.1 mm² increase in disc area (P < 0.1). The lower specificity of the MRA with large optic discs could be explained by the failure of linear regression to model the extremes of the rim area distribution (observations far from the mean). In comparison, the normative limits predicted by QR were larger for smaller discs (less specific, more sensitive), and smaller for larger discs, such that false-positive rates became independent of optic disc size. Normative limits derived by quantile regression appear to remove the size-dependence of specificity with the MRA. Because quantile regression does not rely on the restrictive assumptions of standard linear regression, it may be a more appropriate method for establishing normative limits in other clinical applications where the underlying distributions are nonnormal or have nonconstant variance.
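
    Quantile regression minimises the pinball (check) loss rather than squared error, which is why it tracks the tails of a distribution instead of its mean. A minimal intercept-only sketch (not the study's model) showing that the minimiser of the pinball loss is an empirical quantile:

    ```python
    def pinball_loss(q, data, tau):
        """Total check/pinball loss of a constant predictor q at quantile tau."""
        return sum(tau * (y - q) if y >= q else (1 - tau) * (q - y) for y in data)

    def fit_quantile(data, tau):
        """Intercept-only quantile 'regression': pick the data point that
        minimises the pinball loss (the minimiser is an empirical quantile)."""
        return min(data, key=lambda q: pinball_loss(q, data, tau))

    data = [1.0, 2.0, 3.0, 4.0, 100.0]   # heavy right tail
    print(fit_quantile(data, 0.5))       # → 3.0 (the median, unmoved by the outlier)
    ```

    The least-squares mean of the same data is 22.0, dragged far into the tail by the outlier, which mirrors why linear regression misfits the extremes of the rim-area distribution.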

  2. Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals

    NASA Astrophysics Data System (ADS)

    Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling

    2018-04-01

An efficient numerical algorithm is the key to accurate evaluation of the density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method which was limited to specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to any Bravais lattice and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute the DOS of various gyroid photonic crystals with topological degeneracies.

  3. Highly Accurate Quartic Force Fields, Vibrational Frequencies, and Spectroscopic Constants for Cyclic and Linear C3H3(+)

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Taylor, Peter R.; Lee, Timothy J.

    2011-01-01

High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C3H3(+) molecular cation, referred to as c-C3H3(+) and l-C3H3(+). Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants using both vibrational 2nd-order perturbation theory and variational methods to solve the nuclear Schroedinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C3H3(+), obtaining 27.9 kcal/mol at 0 K, which should be the most reliable available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C3H3(+) and l-C3H3(+) are the most reliable available for the free gas-phase species and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations.

  4. Inferring phenomenological models of Markov processes from data

    NASA Astrophysics Data System (ADS)

    Rivera, Catalina; Nemenman, Ilya

Microscopically accurate modeling of stochastic dynamics of biochemical networks is hard due to the extremely high dimensionality of the state space of such networks. Here we propose an algorithm for inference of phenomenological, coarse-grained models of Markov processes describing the network dynamics directly from data, without the intermediate step of microscopically accurate modeling. The approach relies on the linear nature of the Chemical Master Equation and uses Bayesian Model Selection for identification of parsimonious models that fit the data. When applied to synthetic data from the Kinetic Proofreading process (KPR), a common mechanism used by cells for increasing the specificity of molecular assembly, the algorithm successfully uncovers the known coarse-grained description of the process. This phenomenological description had been noticed previously, but here it is derived in an automated manner by the algorithm. James S. McDonnell Foundation Grant No. 220020321.

  5. Patient-specific non-linear finite element modelling for predicting soft organ deformation in real-time: application to non-rigid neuroimage registration.

    PubMed

    Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol

    2010-12-01

    Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Numerical solution of non-linear dual-phase-lag bioheat transfer equation within skin tissues.

    PubMed

    Kumar, Dinesh; Kumar, P; Rai, K N

    2017-11-01

This paper deals with numerical modeling and simulation of heat transfer in skin tissues using a non-linear dual-phase-lag (DPL) bioheat transfer model under a periodic heat flux boundary condition. The blood perfusion is assumed temperature-dependent, which results in a non-linear DPL bioheat transfer model in order to predict more accurate results. A numerical method of lines, based on finite difference and Runge-Kutta (4,5) schemes, is used to solve the present non-linear problem. For a specific case, the exact solution has been obtained and compared with the present numerical scheme, and we found that they are in good agreement. A comparison based on a model selection criterion (AIC) has been made among non-linear DPL models when the variation of blood perfusion rate with temperature is of constant, linear and exponential type with the experimental data, and it has been found that the non-linear DPL model with exponential variation of blood perfusion rate is closest to the experimental data. In addition, it is found that due to the absence of phase-lag phenomena in the Pennes bioheat transfer model, it achieves steady state more quickly and always predicts a higher temperature than the thermal and DPL non-linear models. The effect of the coefficient of blood perfusion rate, dimensionless heating frequency and Kirchhoff number on the dimensionless temperature distribution has also been analyzed. The whole analysis is presented in dimensionless form. Copyright © 2017 Elsevier Inc. All rights reserved.
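
    The method of lines named above reduces a PDE to a system of ODEs in time. A minimal sketch for the plain linear heat equation (no perfusion, lag, or flux boundary terms, so far simpler than the DPL model), using central differences in space and classical RK4 in time, checked against the exact decaying sine mode:

    ```python
    import math

    def heat_mol(n=41, t_end=0.1, dt=1e-4):
        """Method of lines for u_t = u_xx on [0, 1] with u(0) = u(1) = 0:
        central differences in space, classical RK4 in time."""
        h = 1.0 / (n - 1)
        u = [math.sin(math.pi * i * h) for i in range(n)]   # initial condition

        def rhs(v):
            d = [0.0] * n                                   # Dirichlet ends stay 0
            for i in range(1, n - 1):
                d[i] = (v[i - 1] - 2 * v[i] + v[i + 1]) / h**2
            return d

        t = 0.0
        while t < t_end - 1e-12:
            k1 = rhs(u)
            k2 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k1)])
            k3 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k2)])
            k4 = rhs([ui + dt * ki for ui, ki in zip(u, k3)])
            u = [ui + dt / 6 * (a + 2 * b + 2 * c + d)
                 for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]
            t += dt
        return u

    # Exact solution is exp(-pi^2 t) * sin(pi x); compare at the midpoint x = 0.5
    u = heat_mol()
    exact = math.exp(-math.pi**2 * 0.1)
    print(u[20], exact)   # close agreement
    ```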

  7. Single and multidimensional measurements underestimate neuroblastoma response to therapy.

    PubMed

    Trout, Andrew T; Towbin, Alexander J; Klingbeil, Lindsey; Weiss, Brian D; von Allmen, Daniel

    2017-01-01

    Changes in three-dimensional (3D) measurements of neuroblastoma are used to assess response. Linear measurements may not accurately characterize tumor size due to the infiltrative character of these tumors. The purpose of this study was to assess the accuracy of one-dimensional (1D), two-dimensional (2D), and 3D measurements in characterizing neuroblastoma response compared to a reference standard of tumor volume. We retrospectively reviewed imaging for 34 patients with stage 3 or 4 neuroblastoma. Blinded readers contoured or made linear measurements of tumors. Correlation coefficients were used to compare linear measurements to volumetric and 3D measurements. Bland-Altman analyses were used to assess bias between measurements. Sensitivity and specificity for patient events and survival were calculated for each measurement technique. Mean patient age was 2.9 ± 3.0 years (range 0-15 years). There was strong correlation between volumetric and 1D (r = 0.78, P < 0.0001), 2D (r = 0.86, P < 0.0001), and 3D (r = 0.88, P < 0.0001) measurements. Mean bias between volumetric measurements and 1D, 2D, and 3D measurements was 37.1% (95% limits: 6.2-67.9%), 16.1% (95% limits: -11.7-43.8%), and 7.7% (95% limits: -19.7-35.1%), respectively. 1D and 2D measurements undercategorized response versus volumetric change in 88.2% (30/34) and 29.4% (10/34) of cases. 3D measurements incorrectly characterized response in 16.7% (4/24) of cases versus volumetric change. 3D measurements were highly sensitive for patient events and survival, but all measurement techniques had poor specificity. 3D measurements most accurately quantify neuroblastoma size response versus volumetric change in patients with stage 3 and 4 neuroblastoma. 1D and 2D measurements underrepresent tumor response. © 2016 Wiley Periodicals, Inc.
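    The under-representation of volumetric response by linear measurements can be illustrated with a simple ellipsoid model (hypothetical axis lengths, not the study's data): a 20% reduction in each axis corresponds to a roughly 49% reduction in volume.

```python
# Why linear measurements under-represent volumetric response: for an
# ellipsoidal tumor, shrinking each axis by 20% shrinks the volume by
# 1 - 0.8^3 = 48.8%. Axis lengths are hypothetical.
import numpy as np

axes_before = np.array([40.0, 30.0, 20.0])   # axis lengths (mm)
axes_after = 0.8 * axes_before               # 20% shrinkage per axis

def volume(axes):
    # ellipsoid volume from full axis lengths
    return 4.0 / 3.0 * np.pi * np.prod(axes / 2.0)

change_1d = 1.0 - axes_after[0] / axes_before[0]               # 1D response
change_3d = 1.0 - np.prod(axes_after) / np.prod(axes_before)   # 3D product
change_vol = 1.0 - volume(axes_after) / volume(axes_before)    # volume
```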

  8. Dislocation dynamics in hexagonal close-packed crystals

    DOE PAGES

    Aubry, S.; Rhee, M.; Hommes, G.; ...

    2016-04-14

    Extensions of the dislocation dynamics methodology necessary to enable accurate simulations of crystal plasticity in hexagonal close-packed (HCP) metals are presented. They concern the introduction of dislocation motion in HCP crystals through linear and non-linear mobility laws, as well as the treatment of composite dislocation physics. Formation, stability and dissociation of and other dislocations with large Burgers vectors, defined as composite dislocations, are examined, and a new topological operation is proposed to enable their dissociation. Furthermore, the results of our simulations suggest that composite dislocations are omnipresent and may play important roles both in specific dislocation mechanisms and in bulk crystal plasticity in HCP materials. While fully microscopic, our bulk DD simulations provide a wealth of data that can be used to develop and parameterize constitutive models of crystal plasticity at the mesoscale.

  9. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods, based on two different trial functions but using the same simple linear test function, were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. The two methods were tested on various patch test problems, and both passed successfully. The methods were then applied to various beam vibration problems and to problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is particularly attractive, as it is simple, accurate, and robust.

  10. A Comparison of Classical Force-Fields for Molecular Dynamics Simulations of Lubricants

    PubMed Central

    Ewen, James P.; Gattinoni, Chiara; Thakkar, Foram M.; Morgan, Neal; Spikes, Hugh A.; Dini, Daniele

    2016-01-01

    For the successful development and application of lubricants, a full understanding of their complex nanoscale behavior under a wide range of external conditions is required, but this is difficult to obtain experimentally. Nonequilibrium molecular dynamics (NEMD) simulations can be used to yield unique insights into the atomic-scale structure and friction of lubricants and additives; however, the accuracy of the results depends on the chosen force-field. In this study, we demonstrate that the use of an accurate, all-atom force-field is critical in order to: (i) accurately predict important properties of long-chain, linear molecules; and (ii) reproduce experimental friction behavior of multi-component tribological systems. In particular, we focus on n-hexadecane, an important model lubricant with a wide range of industrial applications. Moreover, simulating conditions common in tribological systems, i.e., high temperatures and pressures (HTHP), allows the limits of the selected force-fields to be tested. In the first section, a large number of united-atom and all-atom force-fields are benchmarked in terms of their density and viscosity prediction accuracy for n-hexadecane using equilibrium molecular dynamics (EMD) simulations at ambient and HTHP conditions. Whilst united-atom force-fields accurately reproduce experimental density, the viscosity is significantly under-predicted compared to all-atom force-fields and experiments. Moreover, some all-atom force-fields yield elevated melting points, leading to significant overestimation of both the density and viscosity. In the second section, the most accurate united-atom and all-atom force-fields are compared in confined NEMD simulations which probe the structure and friction of stearic acid adsorbed on iron oxide and separated by a thin layer of n-hexadecane. 
The united-atom force-field provides an accurate representation of the structure of the confined stearic acid film; however, friction coefficients are consistently under-predicted and the friction-coverage and friction-velocity behavior deviates from that observed using all-atom force-fields and experimentally. This has important implications regarding force-field selection for NEMD simulations of systems containing long-chain, linear molecules; specifically, it is recommended that accurate all-atom potentials, such as L-OPLS-AA, are employed. PMID:28773773

  11. A Comparison of Classical Force-Fields for Molecular Dynamics Simulations of Lubricants.

    PubMed

    Ewen, James P; Gattinoni, Chiara; Thakkar, Foram M; Morgan, Neal; Spikes, Hugh A; Dini, Daniele

    2016-08-02

    For the successful development and application of lubricants, a full understanding of their complex nanoscale behavior under a wide range of external conditions is required, but this is difficult to obtain experimentally. Nonequilibrium molecular dynamics (NEMD) simulations can be used to yield unique insights into the atomic-scale structure and friction of lubricants and additives; however, the accuracy of the results depends on the chosen force-field. In this study, we demonstrate that the use of an accurate, all-atom force-field is critical in order to: (i) accurately predict important properties of long-chain, linear molecules; and (ii) reproduce experimental friction behavior of multi-component tribological systems. In particular, we focus on n-hexadecane, an important model lubricant with a wide range of industrial applications. Moreover, simulating conditions common in tribological systems, i.e., high temperatures and pressures (HTHP), allows the limits of the selected force-fields to be tested. In the first section, a large number of united-atom and all-atom force-fields are benchmarked in terms of their density and viscosity prediction accuracy for n-hexadecane using equilibrium molecular dynamics (EMD) simulations at ambient and HTHP conditions. Whilst united-atom force-fields accurately reproduce experimental density, the viscosity is significantly under-predicted compared to all-atom force-fields and experiments. Moreover, some all-atom force-fields yield elevated melting points, leading to significant overestimation of both the density and viscosity. In the second section, the most accurate united-atom and all-atom force-fields are compared in confined NEMD simulations which probe the structure and friction of stearic acid adsorbed on iron oxide and separated by a thin layer of n-hexadecane. 
The united-atom force-field provides an accurate representation of the structure of the confined stearic acid film; however, friction coefficients are consistently under-predicted and the friction-coverage and friction-velocity behavior deviates from that observed using all-atom force-fields and experimentally. This has important implications regarding force-field selection for NEMD simulations of systems containing long-chain, linear molecules; specifically, it is recommended that accurate all-atom potentials, such as L-OPLS-AA, are employed.

  12. Microbiological assay for the determination of meropenem in pharmaceutical dosage form.

    PubMed

    Mendez, Andreas S L; Weisheimer, Vanessa; Oppe, Tércio P; Steppe, Martin; Schapoval, Elfrides E S

    2005-04-01

    Meropenem is a highly active carbapenem antibiotic used in the treatment of a wide range of serious infections. The present work reports a microbiological assay, applying the cylinder-plate method, for the determination of meropenem in powder for injection. The validation of the method yielded good results and included linearity, precision, accuracy and specificity. The assay is based on the inhibitory effect of meropenem upon the strain of Micrococcus luteus ATCC 9341 used as the test microorganism. The results of the assay were treated statistically by analysis of variance (ANOVA) and were found to be linear (r=0.9999) in the range of 1.5-6.0 microg ml(-1), precise (intra-assay: R.S.D.=0.29; inter-assay: R.S.D.=0.94) and accurate. A preliminary stability study of meropenem was performed to show that the microbiological assay is specific for the determination of meropenem in the presence of its degradation products. The degraded samples were also analysed by the HPLC method. The proposed method allows the quantitation of meropenem in pharmaceutical dosage form and can be used for drug analysis in routine quality control.
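    The linearity component of such a validation can be sketched as a regression of the measured response on concentration; the inhibition-zone diameters below are fabricated for illustration and are not the paper's data.

```python
# Assay-linearity sketch: regress inhibition-zone diameter on meropenem
# concentration and compute the correlation coefficient r.
# All response values are fabricated for illustration.
import numpy as np

conc = np.array([1.5, 3.0, 4.5, 6.0])          # concentration (microg/mL)
zone_mm = np.array([18.1, 21.9, 26.2, 30.0])   # zone diameters (mm), fabricated

r = np.corrcoef(conc, zone_mm)[0, 1]           # linearity check
slope, intercept = np.polyfit(conc, zone_mm, 1)
```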

  13. A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena

    NASA Technical Reports Server (NTRS)

    Zingg, David W.

    1996-01-01

    This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
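    The grid-points-per-wavelength criterion can be illustrated by comparing the modified wavenumbers of standard second- and fourth-order central-difference operators, a common way to quantify the phase error of the spatial schemes reviewed above; this minimal sketch is not taken from the paper.

```python
# Phase-error sketch: compare the modified wavenumber k* dx of standard
# second- and fourth-order central differences against the exact k dx at
# a given number of points per wavelength (PPW). Illustrative only.
import numpy as np

def modified_wavenumber_2nd(kdx):
    # d/dx via (u[i+1] - u[i-1]) / (2 dx)  ->  k* dx = sin(k dx)
    return np.sin(kdx)

def modified_wavenumber_4th(kdx):
    # standard 5-point fourth-order central difference
    return (4.0 / 3.0) * np.sin(kdx) - (1.0 / 6.0) * np.sin(2.0 * kdx)

ppw = 10.0                      # points per wavelength
kdx = 2.0 * np.pi / ppw         # exact non-dimensional wavenumber
err2 = abs(modified_wavenumber_2nd(kdx) - kdx) / kdx
err4 = abs(modified_wavenumber_4th(kdx) - kdx) / kdx
```

At 10 PPW the fourth-order operator's relative phase error is an order of magnitude smaller than the second-order one, which is why higher-order schemes need far fewer points per wavelength for long-distance propagation.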

  14. Towards Dynamic Contrast Specific Ultrasound Tomography

    NASA Astrophysics Data System (ADS)

    Demi, Libertario; van Sloun, Ruud J. G.; Wijkstra, Hessel; Mischi, Massimo

    2016-10-01

    We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast.

  15. Towards Dynamic Contrast Specific Ultrasound Tomography.

    PubMed

    Demi, Libertario; Van Sloun, Ruud J G; Wijkstra, Hessel; Mischi, Massimo

    2016-10-05

    We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast.

  16. Towards Dynamic Contrast Specific Ultrasound Tomography

    PubMed Central

    Demi, Libertario; Van Sloun, Ruud J. G.; Wijkstra, Hessel; Mischi, Massimo

    2016-01-01

    We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast. PMID:27703251

  17. Accurate Solution of Multi-Region Continuum Biomolecule Electrostatic Problems Using the Linearized Poisson-Boltzmann Equation with Curved Boundary Elements

    PubMed Central

    Altman, Michael D.; Bardhan, Jaydeep P.; White, Jacob K.; Tidor, Bruce

    2009-01-01

    We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson–Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Finally, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as non-rigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. 
In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry, such as charge optimization or component analysis, can be computed to high accuracy using the presented BEM approach, in compute times comparable to traditional finite-difference methods. PMID:18567005
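    The iterative-solution strategy described above (preconditioned GMRES instead of a direct factorisation) can be sketched on a small synthetic system; the FFTSVD matrix compression used in the paper is not reproduced here, and the matrix below is an arbitrary diagonally dominant stand-in for a BEM matrix.

```python
# Preconditioned-GMRES sketch: solve A x = b iteratively with a simple
# Jacobi (diagonal) preconditioner, as one would for a dense BEM system.
# The matrix is a synthetic diagonally dominant stand-in, not a real
# boundary-element matrix.
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n)) * 0.1 + np.eye(n) * 5.0  # well-conditioned
b = rng.standard_normal(n)

# Jacobi preconditioner: divide by the diagonal of A
diag = np.diag(A)
M = LinearOperator((n, n), matvec=lambda v: v / diag)

x, info = gmres(A, b, M=M)          # info == 0 on convergence
residual = np.linalg.norm(A @ x - b)
```

In the paper's setting, the dense matrix-vector product `A @ v` inside GMRES is what FFTSVD accelerates; the preconditioner and iteration structure are unchanged.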

  18. Maternal Weight Gain as a Predictor of Litter Size in Swiss Webster, C57BL/6J, and BALB/cJ mice.

    PubMed

    Finlay, James B; Liu, Xueli; Ermel, Richard W; Adamson, Trinka W

    2015-11-01

    An important task facing both researchers and animal core facilities is producing sufficient mice for a given project. The inherent biologic variability of mouse reproduction and litter size further challenges effective research planning. A lack of precision in project planning contributes to the high cost of animal research, overproduction (and thus waste) of animals, and inappropriate allocation of facility resources. To examine the extent to which daily prepartum maternal weight gain predicts litter size in 2 commonly used mouse strains (BALB/cJ and C57BL/6J) and one mouse stock (Swiss Webster), we weighed ≥ 25 pregnant dams of each strain or stock daily from the morning on which a vaginal plug (day 0) was present. On the morning when dams delivered their pups, we recorded the weight of the dam, the weight of the litter, and the number of pups. Litter sizes ranged from 1 to 7 pups for BALB/cJ, 2 to 13 for Swiss Webster, and 5 to 11 for C57BL/6J mice. Linear regression models (based on weight change from day 0) demonstrated that maternal weight gain at day 9 (BALB/cJ), day 11 (Swiss Webster), or day 14 (C57BL/6J) was a significant predictor of litter size. When tested prospectively, the linear regression model for each strain or stock was found to be accurate. These data indicate that the number of pups that will be born can be estimated accurately by using maternal weight gain at strain-specific or stock-specific time points.
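    The paper's modelling step can be sketched as an ordinary least-squares fit of litter size on maternal weight gain at a fixed gestational day; the weight-gain and litter-size values below are fabricated for illustration and are not the study's data.

```python
# Linear-regression sketch: fit litter size vs maternal weight gain at a
# fixed gestational day, then predict for a new dam. All data values are
# fabricated stand-ins, not the study's measurements.
import numpy as np

gain_g = np.array([4.1, 6.5, 8.0, 9.8, 12.3, 14.9])  # weight gain (g)
pups = np.array([3, 5, 6, 8, 10, 12])                # observed litter sizes

slope, intercept = np.polyfit(gain_g, pups, 1)       # ordinary least squares

def predict_litter(gain):
    # predicted litter size for a dam with the given weight gain
    return slope * gain + intercept

pred = predict_litter(10.0)
```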

  19. Linear Look-Ahead in Conjunctive Cells: An Entorhinal Mechanism for Vector-Based Navigation

    PubMed Central

    Kubie, John L.; Fenton, André A.

    2012-01-01

    The crisp organization of the “firing bumps” of entorhinal grid cells and conjunctive cells leads to the notion that the entorhinal cortex may compute linear navigation routes. Specifically, we propose a process, termed “linear look-ahead,” by which a stationary animal could compute a series of locations in the direction it is facing. We speculate that this computation could be achieved through learned patterns of connection strengths among entorhinal neurons. This paper has three sections. First, we describe the minimal grid cell properties that will be built into our network. Specifically, the network relies on “rigid modules” of neurons, where all members have identical grid scale and orientation, but differ in spatial phase. Additionally, these neurons must be densely interconnected with synapses that are modifiable early in the animal’s life. Second, we investigate whether plasticity during short bouts of locomotion could induce patterns of connections amongst grid cells or conjunctive cells. Finally, we run a simulation to test whether the learned connection patterns can exhibit linear look-ahead. Our results are straightforward. A simulated 30-min walk produces weak strengthening of synapses between grid cells that does not support linear look-ahead. Similar training in a conjunctive cell module produces a small subset of very strong connections between cells. These strong pairs have three properties: the pre- and post-synaptic cells have similar heading directions; the cell pairs have neighboring grid bumps; and the spatial offset of the firing bumps of the cell pair is in the direction of the common heading preference. Such a module can produce strong and accurate linear look-ahead starting in any location and extending in any direction. We speculate that this process may: (1) compute linear paths to goals; (2) update grid cell firing during navigation; and (3) stabilize the rigid modules of grid cells and conjunctive cells. PMID:22557948

  20. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    PubMed

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
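    One plausible reading of the power-fit general linear model is a relationship V_day = a·V0^b fitted by linear regression in log-log space; this is a hedged sketch with fabricated volumes, not the authors' exact formulation.

```python
# Hedged sketch of a power-fit model between daily and initial tumor
# volumes, V_day = a * V0**b, fitted as a straight line in log-log space.
# This is one plausible reading of the abstract, with fabricated volumes.
import numpy as np

v0 = np.array([10.0, 20.0, 35.0, 50.0, 80.0])    # initial volumes (cm^3)
v_day = np.array([7.2, 13.5, 22.0, 30.1, 45.0])  # volumes on a later day

# log(V_day) = b * log(V0) + log(a)  ->  linear least squares
b, log_a = np.polyfit(np.log(v0), np.log(v_day), 1)
a = np.exp(log_a)

def predict(v_init):
    # predicted volume on that day for a new initial volume
    return a * v_init ** b
```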

  1. Simple Expressions for the Design of Linear Tapers in Overmoded Corrugated Waveguides

    DOE PAGES

    Schaub, S. C.; Shapiro, M. A.; Temkin, R. J.

    2015-08-16

    In this paper, simple analytical formulae are presented for the design of linear tapers with very low mode conversion loss in overmoded corrugated waveguides. For tapers from waveguide radius a2 to a1, with a1 < a2, low mode conversion loss requires a taper length that is large compared with a1a2/λ. Here, λ is the wavelength of radiation. The fractional loss of the HE11 mode in an optimized taper is 0.0293(a2 - a1)^4/(a1^2 a2^2). These formulae are accurate when a2 ≲ 2a1. Slightly more complex formulae, accurate for a2 ≤ 4a1, are also presented in this paper. The loss in an overmoded corrugated linear taper is less than 1% when a2 ≤ 2.12a1 and less than 0.1% when a2 ≤ 1.53a1. The present analytic results have been benchmarked against a rigorous mode matching code and have been found to be very accurate. The results for linear tapers are compared with the analogous expressions for parabolic tapers. Finally, parabolic tapers may provide lower loss, but linear tapers with moderate values of a2/a1 may be attractive because of their simplicity of fabrication.
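    Reading the fractional-loss expression as loss = 0.0293(a2 − a1)^4/(a1^2·a2^2) (an interpretation of the record's garbled typesetting, consistent with the quoted 1% and 0.1% thresholds), it can be evaluated directly:

```python
# Evaluate the optimized-taper fractional-loss expression, read from the
# record as 0.0293*(a2 - a1)**4 / (a1**2 * a2**2). This reading is an
# assumption; radii are illustrative and in consistent units.
def taper_loss(a1, a2):
    return 0.0293 * (a2 - a1) ** 4 / (a1 ** 2 * a2 ** 2)

loss_mild = taper_loss(1.0, 1.5)     # a2 = 1.5 a1: well under 0.1%
loss_double = taper_loss(1.0, 2.0)   # a2 = 2 a1: under 1%
```

With this reading, loss reaches 0.1% near a2 = 1.53 a1 and 1% near a2 = 2.12 a1, matching the thresholds quoted in the abstract.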

  2. Morphological Awareness and Children's Writing: Accuracy, Error, and Invention

    PubMed Central

    McCutchen, Deborah; Stull, Sara

    2014-01-01

    This study examined the relationship between children's morphological awareness and their ability to produce accurate morphological derivations in writing. Fifth-grade U.S. students (n = 175) completed two writing tasks that invited or required morphological manipulation of words. We examined both accuracy and error, specifically errors in spelling and errors of the sort we termed morphological inventions, which entailed inappropriate, novel pairings of stems and suffixes. Regressions were used to determine the relationship between morphological awareness, morphological accuracy, and spelling accuracy, as well as between morphological awareness and morphological inventions. Linear regressions revealed that morphological awareness uniquely predicted children's generation of accurate morphological derivations, regardless of whether or not accurate spelling was required. A logistic regression indicated that morphological awareness was also uniquely predictive of morphological invention, with higher morphological awareness increasing the probability of morphological invention. These findings suggest that morphological knowledge may not only assist children with spelling during writing, but may also assist with word production via generative experimentation with morphological rules during sentence generation. Implications are discussed for the development of children's morphological knowledge and relationships with writing. PMID:25663748

  3. Toward structure prediction of cyclic peptides.

    PubMed

    Yu, Hongtao; Lin, Yu-Shan

    2015-02-14

    Cyclic peptides are a promising class of molecules that can be used to target specific protein-protein interactions. A computational method to accurately predict their structures would substantially advance the development of cyclic peptides as modulators of protein-protein interactions. Here, we develop a computational method that integrates bias-exchange metadynamics simulations, a Boltzmann reweighting scheme, dihedral principal component analysis and a modified density peak-based cluster analysis to provide a converged structural description for cyclic peptides. Using this method, we evaluate the performance of a number of popular protein force fields on a model cyclic peptide. All the tested force fields seem to over-stabilize the α-helix and PPII/β regions in the Ramachandran plot, commonly populated by linear peptides and proteins. Our findings suggest that re-parameterization of a force field that well describes the full Ramachandran plot is necessary to accurately model cyclic peptides.
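    The dihedral principal component analysis step mentioned above is commonly implemented by mapping each dihedral angle to its (cos, sin) pair before PCA; this sketch uses random angles as stand-ins for simulation data and is not the authors' code.

```python
# Dihedral PCA (dPCA) sketch: map each dihedral angle to (cos, sin) to
# remove 2*pi periodicity, centre the features, and extract principal
# components via SVD. Random angles stand in for simulation frames.
import numpy as np

rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, size=(200, 6))  # 200 frames, 6 dihedrals

features = np.hstack([np.cos(angles), np.sin(angles)])   # periodicity-free
centered = features - features.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)       # variance fraction per component
pc1_projection = centered @ vt[0]     # projection of frames onto PC1
```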

  4. Identifying Depressed Older Adults in Primary Care: A Secondary Analysis of a Multisite Randomized Controlled Trial

    PubMed Central

    Voils, Corrine I.; Olsen, Maren K.; Williams, John W.; for the IMPACT Study Investigators

    2008-01-01

    Objective: To determine whether a subset of depressive symptoms could be identified to facilitate diagnosis of depression in older adults in primary care. Method: Secondary analysis was conducted on 898 participants aged 60 years or older with major depressive disorder and/or dysthymic disorder (according to DSM-IV criteria) who participated in the Improving Mood–Promoting Access to Collaborative Treatment (IMPACT) study, a multisite, randomized trial of collaborative care for depression (recruitment from July 1999 to August 2001). Linear regression was used to identify a core subset of depressive symptoms associated with decreased social, physical, and mental functioning. The sensitivity and specificity, adjusting for selection bias, were evaluated for these symptoms. The sensitivity and specificity of a second subset of 4 depressive symptoms previously validated in a midlife sample was also evaluated. Results: Psychomotor changes, fatigue, and suicidal ideation were associated with decreased functioning and served as the core set of symptoms. Adjusting for selection bias, the sensitivity of these 3 symptoms was 0.012 and specificity 0.994. The sensitivity of the 4 symptoms previously validated in a midlife sample was 0.019 and specificity was 0.997. Conclusion: We identified 3 depression symptoms that were highly specific for major depressive disorder in older adults. However, these symptoms and a previously identified subset were too insensitive for accurate diagnosis. Therefore, we recommend a full assessment of DSM-IV depression criteria for accurate diagnosis. PMID:18311416
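    The sensitivity and specificity reported above come from a 2x2 comparison of the symptom-subset screen against the reference diagnosis; this sketch uses fabricated case labels, not the study's data.

```python
# Sensitivity/specificity sketch for a symptom-subset screen against a
# reference diagnosis. The labels below are fabricated for illustration.
import numpy as np

truth = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # reference diagnosis (1 = MDD)
screen = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # symptom subset positive?

tp = np.sum((truth == 1) & (screen == 1))    # true positives
tn = np.sum((truth == 0) & (screen == 0))    # true negatives
fp = np.sum((truth == 0) & (screen == 1))    # false positives
fn = np.sum((truth == 1) & (screen == 0))    # false negatives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```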

  5. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution across the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On the AMD Radeon HD 6970 architecture, the model estimates with an error rate of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
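    The sampling-based linear model can be sketched as a least-squares fit of runtime against workload size from a few sampled runs, extrapolated to the full kernel; the timings below are synthetic stand-ins for real GPU measurements.

```python
# Sampling-based linear performance model sketch: time a kernel on a few
# small workload sizes, fit runtime = a*n + b, and extrapolate to the full
# problem size. Timings are synthetic stand-ins for GPU measurements.
import numpy as np

n_samples = np.array([1_000, 2_000, 4_000, 8_000])   # sampled work-items
runtime_ms = np.array([0.52, 1.01, 2.05, 4.02])      # "measured" runtimes

a, b = np.polyfit(n_samples, runtime_ms, 1)          # linear fit

def estimate_runtime(n):
    # extrapolated runtime (ms) for a workload of n work-items
    return a * n + b

full_estimate = estimate_runtime(1_000_000)
```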

  6. Development and validation of a subject-specific finite element model of the functional spinal unit to predict vertebral strength.

    PubMed

    Lee, Chu-Hee; Landham, Priyan R; Eastell, Richard; Adams, Michael A; Dolan, Patricia; Yang, Lang

    2017-09-01

    Finite element models of an isolated vertebral body cannot accurately predict the compressive strength of the spinal column because, in life, compressive load is variably distributed across the vertebral body and neural arch. The purpose of this study was to develop and validate a patient-specific finite element model of a functional spinal unit, and then use the model to predict vertebral strength from medical images. A total of 16 cadaveric functional spinal units were scanned and then tested mechanically in bending and compression to generate a vertebral wedge fracture. Before testing, an image processing and finite element analysis framework (SpineVox-Pro), developed previously in MATLAB using ANSYS APDL, was used to generate a subject-specific finite element model with eight-node hexahedral elements. Transversely isotropic linear-elastic material properties were assigned to vertebrae, and simple homogeneous linear-elastic properties were assigned to the intervertebral disc. Forward bending loading conditions were applied to simulate manual handling. Results showed that vertebral strengths measured by experiment were positively correlated with strengths predicted by the functional spinal unit finite element model with von Mises or Drucker-Prager failure criteria (R² = 0.80-0.87), with areal bone mineral density measured by dual-energy X-ray absorptiometry (R² = 0.54) and with volumetric bone mineral density from quantitative computed tomography (R² = 0.79). Large-displacement non-linear analyses on all specimens did not improve predictions. We conclude that subject-specific finite element models of a functional spinal unit have the potential to estimate vertebral strength better than bone mineral density alone.
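    The validation statistic reported, the coefficient of determination between measured and model-predicted strengths, is easy to reproduce. The paired strengths below are invented stand-ins, not the study's 16-specimen data:

```python
import numpy as np

# Illustrative check of the kind of validation reported: correlate measured
# vertebral strengths with model-predicted strengths and report R^2.
measured  = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 2.5, 3.1])  # kN (invented)
predicted = np.array([2.3, 3.2, 1.9, 4.2, 2.7, 3.9, 2.4, 3.3])  # kN (invented)

r = np.corrcoef(measured, predicted)[0, 1]
print(f"R^2 = {r**2:.2f}")
```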

  7. Aptamer-Based Dual-Functional Probe for Rapid and Specific Counting and Imaging of MCF-7 Cells.

    PubMed

    Yang, Bin; Chen, Beibei; He, Man; Yin, Xiao; Xu, Chi; Hu, Bin

    2018-02-06

    Development of multimodal detection technologies for accurate diagnosis of cancer at early stages is in great demand. In this work, we report a novel approach using an aptamer-based dual-functional probe for rapid, sensitive, and specific counting and visualization of MCF-7 cells by inductively coupled plasma-mass spectrometry (ICP-MS) and fluorescence imaging. The probe consists of a recognition unit of aptamer to catch cancer cells specifically, a fluorescent dye (FAM) moiety for fluorescence resonance energy transfer (FRET)-based "off-on" fluorescence imaging as well as gold nanoparticles (Au NPs) tag for both ICP-MS quantification and fluorescence quenching. Due to the signal amplification effect and low spectral interference of Au NPs in ICP-MS, an excellent linearity and sensitivity were achieved. Accordingly, a limit of detection of 81 MCF-7 cells and a relative standard deviation of 5.6% (800 cells, n = 7) were obtained. The dynamic linear range was 2 × 10² to 1.2 × 10⁴ cells, and the recoveries in human whole blood were in the range of 98-110%. Overall, the established method provides quantitative and visualized information on MCF-7 cells with a simple and rapid process and paves the way for a promising strategy for biomedical research and clinical diagnostics.

  8. Need total sulfur content? Use chemiluminescence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubala, S.W.; Campbell, D.N.; DiSanzo, F.P.

    Regulations issued by the United States Environmental Protection Agency require petroleum refineries to reduce or control the amount of total sulfur present in their refined products. These legislative requirements have led many refineries to search for online instrumentation that can produce accurate and repeatable total sulfur measurements within allowed levels. Several analytical methods currently exist to measure total sulfur content. They include X-ray fluorescence (XRF), microcoulometry, lead acetate tape, and pyrofluorescence techniques. Sulfur-specific chemiluminescence detection (SSCD) has recently received much attention due to its linearity, selectivity, sensitivity, and equimolar response. However, its use has been largely confined to the area of gas chromatography. This article focuses on the special design considerations and analytical utility of an SSCD system developed to determine total sulfur content in gasoline. The system exhibits excellent linearity and selectivity, the ability to detect low minimum levels, and an equimolar response to various sulfur compounds. 2 figs., 2 tabs.

  9. Brain shift computation using a fully nonlinear biomechanical model.

    PubMed

    Wittek, Adam; Kikinis, Ron; Warfield, Simon K; Miller, Karol

    2005-01-01

    In the present study, a fully nonlinear (i.e. accounting for both geometric and material nonlinearities) patient-specific finite element brain model was applied to predict the deformation field within the brain during craniotomy-induced brain shift. Deformation of the brain surface was used as the displacement boundary condition. Application of the computed deformation field to align (i.e. register) the preoperative images with the intraoperative ones indicated that the model very accurately predicts the displacements of the centers of gravity of the lateral ventricles and tumor, even for very limited information about the brain surface deformation. These results are sufficient to suggest that nonlinear biomechanical models can be regarded as one possible way of complementing medical image processing techniques when conducting nonrigid registration. An important advantage of such models over linear ones is that they do not require the unrealistic assumptions that brain deformations are infinitesimally small and that the brain tissue stress-strain relationship is linear.

  10. Joint statistics of strongly correlated neurons via dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Deniz, Taşkın; Rotter, Stefan

    2017-06-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
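    The correlation-transfer effect described can be reproduced in a toy simulation: two leaky integrate-and-fire neurons driven by partly shared Gaussian white noise develop correlated output spike trains. All parameters below are illustrative choices, not values from the paper:

```python
import numpy as np

# Toy version of the setting studied: two leaky integrate-and-fire (LIF)
# neurons receiving partly shared white-noise input. The shared fraction c
# induces output spike-train correlations ("correlation transfer").
rng = np.random.default_rng(1)
dt, T = 0.1, 20_000.0            # time step and duration, ms
n = int(T / dt)
tau, v_th, v_reset = 20.0, 1.0, 0.0
mu, sigma, c = 0.045, 0.25, 0.8  # drift, noise strength, shared fraction

shared = rng.normal(size=n)
trains = []
for _ in range(2):
    private = rng.normal(size=n)
    noise = np.sqrt(c) * shared + np.sqrt(1 - c) * private
    v, s = 0.0, np.zeros(n, dtype=bool)
    for i in range(n):
        # Euler-Maruyama step of the LIF membrane equation.
        v += dt * (mu - v / tau) + sigma * np.sqrt(dt) * noise[i]
        if v >= v_th:
            v, s[i] = v_reset, True
    trains.append(s)

# Output correlation measured on spike counts in 5 ms bins.
bins = n // 50
counts = [s[:bins * 50].reshape(bins, 50).sum(axis=1) for s in trains]
rho = np.corrcoef(counts[0], counts[1])[0, 1]
print(f"rates: {trains[0].sum() / T * 1000:.1f}, "
      f"{trains[1].sum() / T * 1000:.1f} spikes/s; count corr = {rho:.2f}")
```

    A full treatment would compute the cross-correlation function over lags, as in the paper; the binned count correlation above is just the simplest summary of the same effect.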

  11. Feedback linearization for control of air breathing engines

    NASA Technical Reports Server (NTRS)

    Phillips, Stephen; Mattern, Duane

    1991-01-01

    The method of feedback linearization for control of the nonlinear nozzle and compressor components of an air breathing engine is presented. This method overcomes the need for a large number of scheduling variables and operating points to accurately model highly nonlinear plants. Feedback linearization also results in linear closed loop system performance simplifying subsequent control design. Feedback linearization is used for the nonlinear partial engine model and performance is verified through simulation.

  12. A Simple and Specific Stability- Indicating RP-HPLC Method for Routine Assay of Adefovir Dipivoxil in Bulk and Tablet Dosage Form.

    PubMed

    Darsazan, Bahar; Shafaati, Alireza; Mortazavi, Seyed Alireza; Zarghi, Afshin

    2017-01-01

    A simple and reliable stability-indicating RP-HPLC method was developed and validated for analysis of adefovir dipivoxil (ADV). The chromatographic separation was performed on a C18 column using a mixture of acetonitrile-citrate buffer (10 mM at pH 5.2) 36:64 (%v/v) as mobile phase, at a flow rate of 1.5 mL/min. Detection was carried out at 260 nm and a sharp peak was obtained for ADV at a retention time of 5.8 ± 0.01 min. No interferences were observed from its stress degradation products. The method was validated according to the international guidelines. Linear regression analysis of data for the calibration plot showed a linear relationship between peak area and concentration over the range of 0.5-16 μg/mL; the regression coefficient was 0.9999 and the linear regression equation was y = 24844x - 2941.3. The detection (LOD) and quantification (LOQ) limits were 0.12 and 0.35 μg/mL, respectively. The results proved the method was fast (analysis time less than 7 min), precise, reproducible, and accurate for analysis of ADV over a wide range of concentration. The proposed specific method was used for routine quantification of ADV in pharmaceutical bulk and a tablet dosage form.
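    For context, the ICH Q2 convention estimates LOD and LOQ from a calibration curve as 3.3σ/S and 10σ/S, where σ is the residual standard deviation and S the slope; whether the authors used exactly this approach is not stated in the abstract. A sketch with simulated peak areas around the reported regression line:

```python
import numpy as np

# Simulated calibration data over a range like the one reported
# (0.5-16 ug/mL); peak areas follow y = 24844x - 2941.3 plus small
# made-up noise. Not real chromatographic data.
conc = np.array([0.5, 1, 2, 4, 8, 16])
area = 24844 * conc - 2941.3 + np.array([120, -80, 60, -150, 90, -40])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)      # residual std. deviation of the fit

# ICH Q2-style estimates based on the calibration curve:
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"LOD ~ {lod:.3f} ug/mL, LOQ ~ {loq:.3f} ug/mL")
```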

  13. Validated modified Lycopodium spore method development for standardisation of ingredients of an ayurvedic powdered formulation Shatavaryadi churna.

    PubMed

    Kumar, Puspendra; Jha, Shivesh; Naved, Tanveer

    2013-01-01

    A validated modified lycopodium spore method has been developed for simple and rapid quantification of herbal powdered drugs. The lycopodium spore method was performed on ingredients of Shatavaryadi churna, an ayurvedic formulation used as an immunomodulator, galactagogue, aphrodisiac and rejuvenator. Diagnostic characters of each ingredient of Shatavaryadi churna were estimated individually. Microscopic determination, counting of identifying numbers, and measurement of the area, length and breadth of identifying characters were performed using a Leica DMLS-2 microscope. The method was validated for intraday precision, linearity, specificity, repeatability, accuracy and system suitability. The method is simple, precise, sensitive, and accurate, and can be used for routine standardisation of raw materials of herbal drugs. This method gives the ratio of individual ingredients in the powdered drug, so that any adulteration of the genuine drug with its adulterant can be detected. The method shows very good linearity values, between 0.988 and 0.999, for the number and the area of identifying characters. The percentage purity of a sample drug can be determined by using the linear equation of the standard genuine drug.

  14. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    PubMed

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.

  15. Simultaneous prediction of binding free energy and specificity for PDZ domain-peptide interactions

    NASA Astrophysics Data System (ADS)

    Crivelli, Joseph J.; Lemmon, Gordon; Kaufmann, Kristian W.; Meiler, Jens

    2013-12-01

    Interactions between protein domains and linear peptides underlie many biological processes. Among these interactions, the recognition of C-terminal peptides by PDZ domains is one of the most ubiquitous. In this work, we present a mathematical model for PDZ domain-peptide interactions capable of predicting both affinity and specificity of binding based on X-ray crystal structures and comparative modeling with Rosetta. We developed our mathematical model using a large phage display dataset describing binding specificity for a wild type PDZ domain and 91 single mutants, as well as binding affinity data for a wild type PDZ domain binding to 28 different peptides. Structural refinement was carried out through several Rosetta protocols, the most accurate of which included flexible peptide docking and several iterations of side chain repacking and backbone minimization. Our findings emphasize the importance of backbone flexibility and the energetic contributions of side chain-side chain hydrogen bonds in accurately predicting interactions. We also determined that predicting PDZ domain-peptide interactions became increasingly challenging as the length of the peptide increased in the N-terminal direction. In the training dataset, predicted binding energies correlated with those derived through calorimetry and specificity switches introduced through single mutations at interface positions were recapitulated. In independent tests, our best performing protocol was capable of predicting dissociation constants well within one order of magnitude of the experimental values and specificity profiles at the level of accuracy of previous studies. To our knowledge, this approach represents the first integrated protocol for predicting both affinity and specificity for PDZ domain-peptide interactions.

  16. Detection of human papillomaviruses by polymerase chain reaction and ligation reaction on universal microarray.

    PubMed

    Ritari, Jarmo; Hultman, Jenni; Fingerroos, Rita; Tarkkanen, Jussi; Pullat, Janne; Paulin, Lars; Kivi, Niina; Auvinen, Petri; Auvinen, Eeva

    2012-01-01

    Sensitive and specific detection of human papillomaviruses (HPV) in cervical samples is a useful tool for the early diagnosis of epithelial neoplasia and anogenital lesions. Recent studies support the feasibility of HPV DNA testing instead of cytology (Pap smear) as a primary test in population screening for cervical cancer. This is likely to be an option in the near future in many countries, and it would increase the efficiency of screening for cervical abnormalities. We present here a microarray test for the detection and typing of the 15 most important high-risk HPV types and two low-risk types. The method is based on type-specific multiplex PCR amplification of the L1 viral genomic region followed by a ligation detection reaction in which two specific ssDNA probes, one containing a fluorescent label and the other a flanking ZipCode sequence, are joined by enzymatic ligation in the presence of the correct HPV PCR product. Human beta-globin is amplified in the same reaction to control for sample quality and adequacy. The genotyping capacity of our approach was evaluated against the Linear Array test using cervical samples collected in transport medium. Altogether, 14 out of 15 valid samples (93%) gave concordant results between our test and Linear Array. One sample was HPV56 positive in our test and high-risk positive in Hybrid Capture 2 but remained negative in Linear Array. The preliminary results suggest that our test has accurate multiple HPV genotyping capability, with the additional advantages of a generic detection format and potential for high-throughput screening.

  17. A validated ultra high-pressure liquid chromatography method for separation of candesartan cilexetil impurities and its degradents in drug product

    PubMed Central

    Kumar, Namala Durga Atchuta; Babu, K. Sudhakar; Gosada, Ullas; Sharma, Nitish

    2012-01-01

    Introduction: A selective, specific, and sensitive "Ultra High-Pressure Liquid Chromatography" (UPLC) method was developed for determination of candesartan cilexetil impurities as well as its degradants in tablet formulation. Materials and Methods: The chromatographic separation was performed on a Waters Acquity UPLC system and a BEH Shield RP18 column using gradient elution of mobile phases A and B. 0.01 M phosphate buffer adjusted to pH 3.0 with orthophosphoric acid was used as mobile phase A, and 95% acetonitrile with 5% Milli-Q water was used as mobile phase B. Ultraviolet (UV) detection was performed at 254 nm and 210 nm, where (CDS-6), (CDS-5), (CDS-7), (Ethyl Candesartan), (Desethyl CCX), (N-Ethyl), (CCX-1), (1 N Ethyl Oxo CCX), (2 N Ethyl Oxo CCX), (2 N Ethyl) and any unknown impurity were monitored at 254 nm, and two process-related impurities, trityl alcohol and MTE impurity, were estimated at 210 nm. Candesartan cilexetil and impurities were chromatographed with a total run time of 20 min. Results: Calibration showed that the response of each impurity was a linear function of concentration over the range from the limit of quantification to 2 μg/mL (r² ≥ 0.999), and the method was validated over this range for precision, intermediate precision, accuracy, linearity, and specificity. For the precision study, the percentage relative standard deviation of each impurity was <15% (n=6). Conclusion: The method was found to be precise, accurate, linear, and specific. The proposed method was successfully employed for estimation of candesartan cilexetil impurities in pharmaceutical preparations. PMID:23781475

  18. TGIS, TIG, Program Development, Transportation & Public Facilities, State

    Science.gov Websites

    accessible, accurate, and controlled inventory of public roadway features and linear coordinates for the Roadway Data System (RDS) network (Alaska DOT&PF's Linear Reference System or LRS) to meet Federal and

  19. Angular scale expansion theory and the misperception of egocentric distance in locomotor space.

    PubMed

    Durgin, Frank H

    Perception is crucial for the control of action, but perception need not be scaled accurately to produce accurate actions. This paper reviews evidence for an elegant new theory of locomotor space perception that is based on the dense coding of angular declination so that action control may be guided by richer feedback. The theory accounts for why so much direct-estimation data suggests that egocentric distance is underestimated despite the fact that action measures have been interpreted as indicating accurate perception. Actions are calibrated to the perceived scale of space and thus action measures are typically unable to distinguish systematic (e.g., linearly scaled) misperception from accurate perception. Whereas subjective reports of the scaling of linear extent are difficult to evaluate in absolute terms, study of the scaling of perceived angles (which exist in a known scale, delimited by vertical and horizontal) provides new evidence regarding the perceptual scaling of locomotor space.

  20. Spatial Processes in Linear Ordering

    ERIC Educational Resources Information Center

    von Hecker, Ulrich; Klauer, Karl Christoph; Wolf, Lukas; Fazilat-Pour, Masoud

    2016-01-01

    Memory performance in linear order reasoning tasks (A > B, B > C, C > D, etc.) shows quicker, and more accurate responses to queries on wider (AD) than narrower (AB) pairs on a hypothetical linear mental model (A -- B -- C -- D). While indicative of an analogue representation, research so far did not provide positive evidence for spatial…

  1. A clinically applicable non-invasive method to quantitatively assess the visco-hyperelastic properties of human heel pad, implications for assessing the risk of mechanical trauma.

    PubMed

    Behforootan, Sara; Chatzistergos, Panagiotis E; Chockalingam, Nachiappan; Naemi, Roozbeh

    2017-04-01

    Pathological conditions such as diabetic foot and plantar heel pain are associated with changes in the mechanical properties of plantar soft tissue. However, the causes and implications of these changes are not yet fully understood. This is mainly because accurate assessment of the mechanical properties of plantar soft tissue in the clinic remains extremely challenging. The aim of this study was to develop a clinically viable non-invasive method of assessing the mechanical properties of the heel pad. Furthermore, the effect of the non-linear mechanical behaviour of the heel pad on its ability to uniformly distribute foot-ground contact loads is also investigated in light of the effect of overloading. An automated custom device for ultrasound indentation was developed along with custom algorithms for automated subject-specific modeling of the heel pad. Non-time-dependent and time-dependent material properties were inverse engineered from the results of quasi-static indentation and stress relaxation tests, respectively. The validity of the calculated coefficients was assessed for five healthy participants. The implications of altered mechanical properties for the heel pad's ability to uniformly distribute plantar loading were also investigated in a parametric analysis. The subject-specific heel pad models with coefficients calculated based on quasi-static indentation and stress relaxation were able to accurately simulate dynamic indentation. The average error in the predicted forces for maximum deformation was only 6.6±4.0%. When the inverse engineered coefficients were used to simulate the first instance of heel strike, the error in terms of peak plantar pressure was 27%. The parametric analysis indicated that the heel pad's ability to uniformly distribute plantar loads is influenced both by its overall deformability and by its stress-strain behaviour.
When overall deformability stays constant, changes in stress/strain behaviour leading to a more "linear" mechanical behaviour appear to improve the heel pad's ability to uniformly distribute plantar loading. The developed technique can accurately assess the visco-hyperelastic behaviour of heel pad. It was observed that specific change in stress-strain behaviour can enhance/weaken the heel pad's ability to uniformly distribute plantar loading that will increase/decrease the risk for overloading and trauma. Copyright © 2017 Elsevier Ltd. All rights reserved.
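    The inverse-engineering step can be caricatured without a finite element solver: fit the parameters of a simple non-linear force-displacement law to indentation data by least squares. The exponential law, the data and the coefficients below are all invented stand-ins for the study's subject-specific FE procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for the inverse engineering step: instead of running a full
# FE model, fit a simple non-linear law F(d) = a * (exp(b*d) - 1) to
# synthetic quasi-static indentation data.
depth = np.linspace(0, 8, 9)                    # indentation depth, mm
force = 0.5 * (np.exp(0.45 * depth) - 1)        # synthetic "measured" force, N
force += np.random.default_rng(0).normal(0, 0.05, force.size)

def model(d, a, b):
    return a * (np.exp(b * d) - 1)

(a_fit, b_fit), _ = curve_fit(model, depth, force, p0=(1.0, 0.3))
print(f"fitted coefficients: a={a_fit:.2f}, b={b_fit:.2f}")
```

    In the actual study the forward model is a subject-specific FE simulation rather than a closed-form expression, but the fitting loop has the same shape: propose coefficients, predict the indentation response, minimise the mismatch.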

  2. Analytical Validation of Accelerator Mass Spectrometry for Pharmaceutical Development: the Measurement of Carbon-14 Isotope Ratio.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keck, B D; Ognibene, T; Vogel, J S

    2010-02-05

    Accelerator mass spectrometry (AMS) is an isotope-based measurement technology that utilizes carbon-14 labeled compounds in the pharmaceutical development process to measure compounds at very low concentrations, empowers microdosing as an investigational tool, and extends the utility of ¹⁴C labeled compounds to dramatically lower levels. It is a form of isotope ratio mass spectrometry that can provide either measurements of total compound equivalents or, when coupled to separation technology such as chromatography, quantitation of specific compounds. The properties of AMS as a measurement technique are investigated here, and the parameters of method validation are shown. AMS, independent of any separation technique to which it may be coupled, is shown to be accurate, linear, precise, and robust. As the sensitivity and universality of AMS is constantly being explored and expanded, this work underpins many areas of pharmaceutical development including drug metabolism as well as absorption, distribution and excretion of pharmaceutical compounds as a fundamental step in drug development. The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of the ¹⁴C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the ¹⁴C label), stable across sample storage conditions for at least one year, and linear over 4 orders of magnitude, with an analytical range from one tenth Modern to at least 2000 Modern (instrument specific). Further, accuracy was excellent, between 1 and 3 percent, while precision, expressed as coefficient of variation, was between 1 and 6%, determined primarily by radiocarbon content and the time spent analyzing a sample.
Sensitivity, expressed as LOD and LLOQ, was 1 and 10 attomoles of carbon-14 (which can be expressed as compound equivalents); for a typical small molecule with 10% ¹⁴C incorporation this corresponds to 30 fg equivalents. AMS provides a sensitive, accurate and precise method of measuring drug compounds in biological matrices.

  3. [Detection of Plasmodium falciparum by using magnetic nanoparticles separation-based quantitative real-time PCR assay].

    PubMed

    Wang, Fei; Tian, Yin; Yang, Jing; Sun, Fu-Jun; Sun, Ning; Liu, Bi-Yong; Tian, Rui; Ge, Guang-Lu; Zou, Ming-qiang; Deng, Cong-liang; Liu, Yi

    2014-10-01

    To establish a magnetic nanoparticles separation-based quantitative real-time PCR (RT-PCR) assay for fast and accurate detection of Plasmodium falciparum, and to provide technical support for improving the control and prevention of imported malaria. According to the conserved sequences of the P. falciparum 18S rRNA gene, species-specific primers and a probe were designed and synthesized. The RT-PCR assay was established by constructing the plasmid standard, fitting the standard curve and using magnetic nanoparticles separation. The sensitivity and specificity of the assay were evaluated. The relationship between the threshold cycle (Ct) and the logarithm of initial template copies was linear over a range of 2.5 × 10¹ to 2.5 × 10⁸ copies/μl (R² = 0.999). Among 13 subjects screened at the entry frontier, a P. falciparum carrier with a low parasite load was detected using the assay, while none was detected with the conventional examinations (microscopic examination and rapid tests). This assay shows high sensitivity in the detection of P. falciparum, is rapid and accurate, and is especially useful in the diagnosis of P. falciparum infections with low parasitaemia at entry-exit frontier ports.
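    The reported standard curve (Ct linear in the logarithm of template copies) is the usual basis for absolute quantification. The sketch below fits such a curve, derives the amplification efficiency from its slope, and quantifies a hypothetical unknown; all Ct values are invented:

```python
import numpy as np

# Illustrative qPCR standard curve: Ct vs log10(initial template copies).
# Copy numbers span the linear range reported (2.5e1 to 2.5e8 copies/uL);
# Ct values are hypothetical, chosen to give a slope near the ideal
# -3.32 cycles per 10-fold dilution (~100% efficiency).
copies = np.array([2.5e1, 2.5e2, 2.5e3, 2.5e4, 2.5e5, 2.5e6, 2.5e7, 2.5e8])
ct     = np.array([33.1, 29.8, 26.5, 23.2, 19.9, 16.6, 13.3, 10.0])

slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0     # fraction; 1.0 == 100%

# Quantify a hypothetical unknown sample from its measured Ct.
ct_unknown = 21.5
copies_unknown = 10 ** ((ct_unknown - intercept) / slope)
print(f"slope={slope:.2f}, efficiency={efficiency:.0%}, "
      f"unknown ~ {copies_unknown:.2e} copies/uL")
```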

  4. Modeling the interactions between a prosthetic socket, polyurethane liners and the residual limb in transtibial amputees using non-linear finite element analysis.

    PubMed

    Simpson, G; Fisher, C; Wright, D K

    2001-01-01

    Continuing earlier studies into the relationship between the residual limb, liner and socket in transtibial amputees, we describe a geometrically accurate non-linear model simulating the donning of a liner and then a socket. The socket is rigid and rectified and the liner is a polyurethane gel type which is accurately described using non-linear (Mooney-Rivlin) material properties. The soft tissue of the residual limb is modelled as homogeneous, non-linear and hyperelastic, and the bone structure within the residual limb is taken as rigid. The work gives an indication of how the stress induced by the process of donning the rigid socket is redistributed by the liner. Ultimately we hope to understand how the liner design might be modified to reduce discomfort. The ANSYS finite element code, version 5.6, was used.

  5. Gait event detection using linear accelerometers or angular velocity transducers in able-bodied and spinal-cord injured individuals.

    PubMed

    Jasiewicz, Jan M; Allum, John H J; Middleton, James W; Barriskill, Andrew; Condie, Peter; Purcell, Brendan; Li, Raymond Che Tin

    2006-12-01

    We report on three different methods of gait event detection (toe-off and heel strike) using miniature linear accelerometers and angular velocity transducers, in comparison to standard pressure-sensitive foot switches. Detection was performed with normal and spinal-cord injured subjects. The detection of end contact (EC), normally toe-off, and initial contact (IC), normally heel strike, was based on either foot linear accelerations, foot sagittal angular velocity or shank sagittal angular velocity. The results showed that all three methods were as accurate as foot switches in estimating the times of IC and EC for normal gait patterns. In spinal-cord injured subjects, shank angular velocity was significantly less accurate (p<0.02). We conclude that detection based on foot linear accelerations or foot angular velocity can correctly identify the timing of IC and EC events in both normal and spinal-cord injured subjects.
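    As a purely illustrative counterpart, event candidates can be extracted from an angular velocity trace by threshold crossings. The synthetic signal, the threshold and the IC/EC labeling below are not the paper's algorithms, only a minimal sketch of the idea:

```python
import numpy as np

# Minimal sketch of gait event detection from a sagittal angular velocity
# trace (deg/s). The signal is a fake periodic waveform with one large
# positive swing per stride; thresholds and labels are illustrative.
fs = 100.0                        # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
omega = 300 * np.sin(2 * np.pi * 1.0 * t) ** 3

def detect_events(signal, threshold=50.0):
    """Return indices of downward crossings (candidate IC) and upward
    crossings (candidate EC) of the threshold."""
    above = signal > threshold
    ic = np.where(above[:-1] & ~above[1:])[0] + 1   # falling crossings
    ec = np.where(~above[:-1] & above[1:])[0] + 1   # rising crossings
    return ic, ec

ic_idx, ec_idx = detect_events(omega)
print("IC times (s):", t[ic_idx])
print("EC times (s):", t[ec_idx])
```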

  6. Simultaneous determination of related substances of telmisartan and hydrochlorothiazide in tablet dosage form by using reversed phase high performance liquid chromatographic method

    PubMed Central

    Mukhopadhyay, Sutirtho; Kadam, Kiran; Sawant, Laxman; Nachane, Dhanashree; Pandita, Nancy

    2011-01-01

    Objective: Telmisartan is a potent, long-lasting, nonpeptide antagonist of the angiotensin II type-1 (AT1) receptor that is indicated for the treatment of essential hypertension. Hydrochlorothiazide is a widely prescribed diuretic and it is indicated for the treatment of edema, control of essential hypertension and management of diabetes insipidus. In the current article a new, accurate, sensitive, precise, rapid, reversed phase high performance liquid chromatography (RP-HPLC) method was developed for determination of related substances of Telmisartan and Hydrochlorthiazide in tablet dosage form. Materials and Methods: Simultaneous determination of related substances was performed on Kromasil C18 analytical column (250 × 4.6 mm; 5μm pertical size) column at 40°C employing a gradient elution. Mobile phase consisting of solvent A (solution containing 2.0 g of potassium dihydrogen phosphate anhydrous and 1.04 g of Sodium 1- Hexane sulphonic acid monohydrate per liter of water, adjusted to pH 3.0 with orthophosphoric acid) and solvent B (mixture of Acetonitrile: Methanol in the ratio 80:20 v/v) was used at a flow rate of 1.0 ml min–1. UV detection was performed at 270 nm. Results: During method validation parameter such as precision, linearity, accuracy, specificity, limit of detection and quantification were evaluated, which remained within acceptable limits. Conclusions: HPLC analytical method is linear, accurate, precise, robust and specific, being able to separate the main drug from its degradation products. It may find application for the routine analysis of the related substances of both Telmisartan and Hydrochlorthiazide in this combination tablets. PMID:21966158

  7. Evaluation of a wireless activity monitoring system to quantify locomotor activity in horses in experimental settings.

    PubMed

    Fries, M; Montavon, S; Spadavecchia, C; Levionnois, O L

    2017-03-01

    Methods of evaluating locomotor activity can be useful in efforts to quantify behavioural activity in horses objectively. To evaluate whether an accelerometric device would be adequate to quantify locomotor activity and step frequency in horses, and to distinguish between different levels of activity and different gaits. Observational study in an experimental setting. Dual-mode (activity and step count) piezo-electric accelerometric devices were placed at each of 4 locations (head, withers, forelimb and hindlimb) in each of 6 horses performing different controlled activities including grazing, walking at different speeds, trotting and cantering. Both the activity count and step count were recorded and compared across the various activities. Statistical analyses included analysis of variance for repeated measures, receiver operating characteristic curves, Bland-Altman analysis and linear regression. The accelerometric device was able to quantify locomotor activity at each of the 4 locations investigated and to distinguish between gaits and speeds. The activity count recorded by the accelerometer placed on the hindlimb was the most accurate, displaying a clear discrimination between the different levels of activity and a linear correlation to speed. The accelerometer placed on the head was the only one to distinguish specifically grazing behaviour from standing. The accelerometer placed on the withers was unable to differentiate different gaits and activity levels. The step count function measured at the hindlimb was reliable but the count was doubled at the walk. The dual-mode accelerometric device was sufficiently accurate to quantify and compare locomotor activity in horses moving at different speeds and gaits. Positioning the device on the hindlimb allowed for the most accurate results. The step count function can be useful but must be manually corrected, especially at the walk. © 2016 EVJ Ltd.

  8. Single Droplet Combustion of Decane in Microgravity: Experiments and Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Dietrich, D. L.; Struk, P. M.; Ikegam, M.; Xu, G.

    2004-01-01

This paper presents experimental data on single droplet combustion of decane in microgravity and compares the results to a numerical model. The primary independent experimental variables are the ambient pressure, ambient oxygen mole fraction, droplet size (over a relatively small range) and ignition energy. The droplet history (D² history) is non-linear, with the burning rate constant increasing throughout the test. The average burning rate constant, consistent with classical theory, increased with increasing ambient oxygen mole fraction and was nearly independent of pressure, initial droplet size and ignition energy. The flame typically increased in size initially, and then decreased in size, in response to the shrinking droplet. The flame standoff increased linearly for the majority of the droplet lifetime. The flame surrounding the droplet extinguished at a finite droplet size at lower ambient pressures and an oxygen mole fraction of 0.15. The extinction droplet size increased with decreasing pressure. The model is transient and assumes spherical symmetry, constant thermo-physical properties (specific heat, thermal conductivity and species Lewis number) and single step chemistry. The model includes gas-phase radiative loss and a spherically symmetric, transient liquid phase. The model accurately predicts the droplet and flame histories of the experiments. Good agreement requires that the ignition in the experiment be reasonably approximated in the model and that the model accurately predict the pre-ignition vaporization of the droplet. The model does not accurately predict the dependence of extinction droplet diameter on pressure, a result of the simplified chemistry in the model. The transient flame behavior suggests the potential importance of fuel vapor accumulation.
The model results, however, show that the fractional mass consumption rate of fuel in the flame relative to fuel vaporized is close to 1.0 for all but the lowest ambient oxygen mole fractions.
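    The classical D² history referred to above can be made concrete: under the ideal d²-law, D²(t) = D₀² − Kt, so the burning rate constant K is the negative slope of a least-squares line through the D² data. The sketch below uses hypothetical droplet data; for the non-linear histories reported in the paper, the same fit recovers the average burning rate constant.

```python
import numpy as np

# Ideal d^2-law: D^2(t) = D0^2 - K*t, with K the burning rate constant.
# Synthetic droplet history (hypothetical values): D0 = 1.0 mm, K = 0.8 mm^2/s.
t = np.linspace(0.0, 1.0, 11)        # time since ignition, s
d_squared = 1.0**2 - 0.8 * t         # D^2 history, mm^2

# Recover K as the negative slope of a least-squares line through D^2(t).
slope, intercept = np.polyfit(t, d_squared, 1)
K = -slope
print(f"burning rate constant K = {K:.3f} mm^2/s")
```

    For experimental histories where K drifts during the burn, a fit over a sliding window would expose the time dependence the authors describe.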

  9. Dosimetric verification of radiation therapy including intensity modulated treatments, using an amorphous-silicon electronic portal imaging device

    NASA Astrophysics Data System (ADS)

    Chytyk-Praznik, Krista Joy

    Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine if a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager. A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID. A computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations. Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. 
The method was then applied to five prostate and six head-and-neck IMRT treatment courses (~1900 clinical images). Deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis work has been shown to be accurate, and it was demonstrated to be able to verify patients' delivered radiation treatments.

  10. Correlation of the turbo-MP RIA with ImmunoCAP FEIA for determination of food allergen-specific immunoglobulin E.

    PubMed

    Kontis, Kris J; Valcour, Andre; Patel, Ashok; Chen, Andy; Wang, Jan; Chow, Julia; Nayak, Narayan

    2006-01-01

It has been reported that in vitro measurement of food-specific IgE can be used to accurately predict food allergy and reduce the risk associated with double-blinded placebo-controlled food challenges (DBPCFC). Our objective was to assess the performance characteristics of the Hycor Turbo-MP quantitative radioimmunoassay for food-specific IgE and to determine this method's comparability to another assay, the Pharmacia ImmunoCAP fluorescence enzyme immunoassay (FEIA). The dynamic range of the Turbo-MP assay is 0.05 to 100 IU/ml, compared to 0.35 to 100 IU/ml for the FEIA. Performance characteristics of the Turbo-MP assay (i.e., reproducibility of the calibration curve, within-run precision, total precision, parallelism, and linearity) were determined using samples from the Hycor serum bank. The precision (CV) of IgE calibrator replicates was <10%. The total precision (CV) of the Turbo-MP assay ranged from 8.8% to 18.4% for specific IgE concentrations between 0.28 and 31.4 IU/ml. Testing of serial dilutions of sera with IgE specificities for egg white, cow's milk, codfish, wheat, peanut, and soybean showed that the assay is linear over the entire dynamic range. Serial dilution data (slopes of 1.01 to 1.10) showed parallelism to serial dilutions of the IgE calibrator (slope of 0.96). The Turbo-MP and FEIA methods were both used for quantitative assays of food-specific IgE in 457 serum samples obtained from a clinical reference laboratory. Comparison of specific IgE results by the Turbo-MP and FEIA methods for 6 major food allergens exhibited a slope of 0.99 (0.92 to 1.03) with a correlation coefficient of 0.81.

  11. A reactive, scalable, and transferable model for molecular energies from a neural network approach based on local information

    NASA Astrophysics Data System (ADS)

    Unke, Oliver T.; Meuwly, Markus

    2018-06-01

    Despite the ever-increasing computer power, accurate ab initio calculations for large systems (thousands to millions of atoms) remain infeasible. Instead, approximate empirical energy functions are used. Most current approaches are either transferable between different chemical systems, but not particularly accurate, or they are fine-tuned to a specific application. In this work, a data-driven method to construct a potential energy surface based on neural networks is presented. Since the total energy is decomposed into local atomic contributions, the evaluation is easily parallelizable and scales linearly with system size. With prediction errors below 0.5 kcal mol-1 for both unknown molecules and configurations, the method is accurate across chemical and configurational space, which is demonstrated by applying it to datasets from nonreactive and reactive molecular dynamics simulations and a diverse database of equilibrium structures. The possibility to use small molecules as reference data to predict larger structures is also explored. Since the descriptor only uses local information, high-level ab initio methods, which are computationally too expensive for large molecules, become feasible for generating the necessary reference data used to train the neural network.

  12. Hyoid bone development: An assessment of optimal CT scanner parameters and 3D volume rendering techniques

    PubMed Central

    Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.

    2015-01-01

    The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349

  13. Hyoid Bone Development: An Assessment Of Optimal CT Scanner Parameters and Three-Dimensional Volume Rendering Techniques.

    PubMed

    Cotter, Meghan M; Whyms, Brian J; Kelly, Michael P; Doherty, Benjamin M; Gentry, Lindell R; Bersu, Edward T; Vorperian, Houri K

    2015-08-01

    The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared with corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. © 2015 Wiley Periodicals, Inc.

  14. Accuracy and efficiency of published film dosimetry techniques using a flat-bed scanner and EBT3 film.

    PubMed

    Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T

    2018-03-01

Gafchromic EBT3 film is widely used for patient specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly-used non-linear, netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to the standard treatment doses (< 400 cGy), however none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
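    As a rough illustration of the conventional technique the record describes: netOD for the red channel is netOD = log10(PV_unexposed / PV_exposed), and a common non-linear calibration form is D = a·netOD + b·netOD^n. The sketch below uses entirely hypothetical pixel values and a fixed exponent n (in practice n is often fitted as well), so the two remaining coefficients reduce to an ordinary least-squares problem.

```python
import numpy as np

# Red-channel net optical density from scanner pixel values.
def net_od(pv_unexposed, pv_exposed):
    return np.log10(pv_unexposed / pv_exposed)

# Hypothetical calibration data: doses (cGy) and exposed pixel values for a
# film whose unexposed red-channel value is 40000 (16-bit TIFF scale).
pv0 = 40000.0
dose = np.array([0., 50., 100., 200., 400., 800., 1600., 3200.])
pv = np.array([40000., 36000., 33000., 29000., 24500., 19500., 14500., 10000.])

x = net_od(pv0, pv)
# D = a*netOD + b*netOD**n; with n fixed the coefficients are linear,
# so ordinary least squares suffices.
n = 2.5
A = np.column_stack([x, x**n])
(a, b), *_ = np.linalg.lstsq(A, dose, rcond=None)
predicted = A @ np.array([a, b])
rmse = np.sqrt(np.mean((predicted - dose) ** 2))
```

    A real protocol would also propagate scanner and fit uncertainties, which is exactly what the compared publications differ on.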

  15. Discovering Optimum Method to Extract Depth Information for Nearshore Coastal Waters from SENTINEL-2A - Case Study: Nayband Bay, Iran

    NASA Astrophysics Data System (ADS)

    Kabiri, K.

    2017-09-01

The capabilities of Sentinel-2A imagery to determine bathymetric information in shallow coastal waters were examined. In this regard, two Sentinel-2A images (acquired in February and March 2016 in calm weather and relatively low turbidity) were selected from Nayband Bay, located in the northern Persian Gulf. In addition, a precise and accurate bathymetric map for the study area was obtained and used both for calibrating the models and for validating the results. Traditional linear and ratio transform techniques, as well as a novel integrated method, were employed to determine depth values. All possible combinations of the three bands (Band 2: blue (458-523 nm), Band 3: green (543-578 nm), and Band 4: red (650-680 nm); spatial resolution: 10 m) were considered (11 options) for the traditional linear and ratio transform techniques, together with 10 model options for the integrated method. The accuracy of each model was assessed by comparing the determined bathymetric information with field-measured values. The correlation coefficients (R²) and root mean square errors (RMSE) at validation points were calculated for all models and for both satellite images. Compared with the linear transform method, the ratio transform method using a combination of all three bands yielded more accurate results (R² = 0.795 and RMSE = 1.889 m for March; R² = 0.777 and RMSE = 2.039 m for February). Although most of the integrated transform methods (specifically the one including all bands and band ratios) yielded the highest accuracy, the improvements were not significant; hence, the ratio transformation was selected as the optimum method.
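    A minimal sketch of the calibration/validation workflow described above, assuming a Stumpf-style band-ratio predictor that is approximately linear in depth. The ratio values below are synthesised rather than derived from real Sentinel-2A reflectances, so all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stumpf-style predictor ln(n*R_blue)/ln(n*R_green) is roughly linear in depth;
# here we synthesise it directly as ratio = 1 + 0.02*depth + noise.
true_depth = rng.uniform(1.0, 15.0, 200)                  # metres
ratio = 1.0 + 0.02 * true_depth + rng.normal(0, 0.01, 200)

# Calibrate depth = m1*ratio + m0 on half the points, validate on the rest.
train, test = slice(0, 100), slice(100, 200)
m1, m0 = np.polyfit(ratio[train], true_depth[train], 1)
pred = m1 * ratio[test] + m0

rmse = np.sqrt(np.mean((pred - true_depth[test]) ** 2))
ss_res = np.sum((true_depth[test] - pred) ** 2)
ss_tot = np.sum((true_depth[test] - true_depth[test].mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

    Comparing R² and RMSE on held-out field soundings, as done here, is the same model-selection criterion the paper applies across its 21 band-combination options.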

  16. Coherent tools for physics-based simulation and characterization of noise in semiconductor devices oriented to nonlinear microwave circuit CAD

    NASA Astrophysics Data System (ADS)

    Riah, Zoheir; Sommet, Raphael; Nallatamby, Jean C.; Prigent, Michel; Obregon, Juan

    2004-05-01

    We present in this paper a set of coherent tools for noise characterization and physics-based analysis of noise in semiconductor devices. This noise toolbox relies on a low frequency noise measurement setup with special high current capabilities thanks to an accurate and original calibration. It relies also on a simulation tool based on the drift diffusion equations and the linear perturbation theory, associated with the Green's function technique. This physics-based noise simulator has been implemented successfully in the Scilab environment and is specifically dedicated to HBTs. Some results are given and compared to those existing in the literature.

  17. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
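    The unconstrained core of this formulation can be sketched with NumPy alone: Euler's equation τ = Jω̇ + ω × (Jω) is linear in the six unique entries of J, so stacking samples gives an ordinary least-squares problem. The paper's LMI bounds additionally require a semidefinite-programming solver, which is omitted here; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def inertia_regressor(w):
    """M(w) such that J @ w = M(w) @ theta, theta = [Jxx,Jyy,Jzz,Jxy,Jxz,Jyz]."""
    wx, wy, wz = w
    return np.array([[wx, 0., 0., wy, wz, 0.],
                     [0., wy, 0., wx, 0., wz],
                     [0., 0., wz, 0., wx, wy]])

def skew(w):
    wx, wy, wz = w
    return np.array([[0., -wz, wy], [wz, 0., -wx], [-wy, wx, 0.]])

# Ground-truth inertia (hypothetical, kg m^2) and simulated test motion.
J_true = np.array([[10., 1., 0.5], [1., 12., 0.8], [0.5, 0.8, 8.]])
theta_true = np.array([10., 12., 8., 1., 0.5, 0.8])

rows, rhs = [], []
for _ in range(50):
    w = rng.normal(size=3)       # body angular rate sample
    wdot = rng.normal(size=3)    # angular acceleration sample
    tau = J_true @ wdot + np.cross(w, J_true @ w)   # Euler's equation
    # tau = [M(wdot) + skew(w) @ M(w)] @ theta  -- linear in theta.
    rows.append(inertia_regressor(wdot) + skew(w) @ inertia_regressor(w))
    rhs.append(tau)

A, b = np.vstack(rows), np.hstack(rhs)
theta_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

    With noisy torque data this unconstrained estimate can leave the cone of physically valid inertia matrices, which is precisely what the paper's LMI constraints prevent.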

  18. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for the intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models do account for this correlation; thus, they more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
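    A small Monte Carlo sketch (NumPy only, synthetic data) of the failure mode described above: when neurons are clustered within animals, pooling all neurons with a naive standard error grossly inflates the false-positive rate, while summarising per animal keeps it near nominal. A real analysis would fit a mixed effects model with dedicated statistical software; the approximate z-criterion here is for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sims, n_animals, n_neurons = 500, 5, 10
false_pos_naive = 0
false_pos_by_animal = 0
crit = 1.96  # approximate two-sided 5% criterion (a t critical value would be stricter)

for _ in range(n_sims):
    # Two groups with NO true difference: animal random effect sd = 1,
    # neuron-level noise sd = 0.5 -- strong intra-class correlation.
    a = rng.normal(0, 1, (2, n_animals))
    y = a[:, :, None] + rng.normal(0, 0.5, (2, n_animals, n_neurons))

    # Naive analysis: pool all neurons, ignoring clustering by animal.
    g0, g1 = y[0].ravel(), y[1].ravel()
    se_naive = np.sqrt(g0.var(ddof=1) / g0.size + g1.var(ddof=1) / g1.size)
    if abs(g0.mean() - g1.mean()) / se_naive > crit:
        false_pos_naive += 1

    # Cluster-aware analysis: one summary value per animal.
    m0, m1 = y[0].mean(axis=1), y[1].mean(axis=1)
    se_anim = np.sqrt(m0.var(ddof=1) / n_animals + m1.var(ddof=1) / n_animals)
    if abs(m0.mean() - m1.mean()) / se_anim > crit:
        false_pos_by_animal += 1

rate_naive = false_pos_naive / n_sims
rate_by_animal = false_pos_by_animal / n_sims
```

    With these settings the naive rate lands far above the nominal 5%, reproducing in miniature the downward-biased p-values the authors document.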

  19. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a moving object. We developed a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Building on the conventional binocular arrangement, the linear array CCD system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion with accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which makes it well suited to measuring the 3-D morphology of objects in motion.
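    The per-scan-line triangulation underlying any rectified binocular arrangement reduces to depth = focal_length × baseline / disparity. The values below are hypothetical and ignore the scan axis that a linear array system adds; they only show the depth-from-disparity relation.

```python
# Depth from binocular disparity in a rectified stereo pair: Z = f*B/d.
focal_px = 1200.0   # focal length expressed in pixels (hypothetical)
baseline = 0.25     # camera separation, m (hypothetical)

def depth(disparity_px):
    """Triangulated depth for a matched point pair with the given disparity."""
    return focal_px * baseline / disparity_px

z_near = depth(60.0)   # large disparity -> close object
z_far = depth(6.0)     # small disparity -> distant object
```

    The inverse relation also explains why matching accuracy matters most for distant points: a one-pixel disparity error at small disparity shifts the depth estimate far more than at large disparity.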

  20. Topology optimized and 3D printed polymer-bonded permanent magnets for a predefined external field

    NASA Astrophysics Data System (ADS)

    Huber, C.; Abert, C.; Bruckner, F.; Pfaff, C.; Kriwet, J.; Groenefeld, M.; Teliban, I.; Vogler, C.; Suess, D.

    2017-08-01

    Topology optimization offers great opportunities to design permanent magnetic systems that have specific external field characteristics. Additive manufacturing of polymer-bonded magnets with an end-user 3D printer can be used to manufacture permanent magnets with structures that had been difficult or impossible to manufacture previously. This work combines these two powerful methods to design and manufacture permanent magnetic systems with specific properties. The topology optimization framework is simple, fast, and accurate. It can also be used for the reverse engineering of permanent magnets in order to find the topology from field measurements. Furthermore, a magnetic system that generates a linear external field above the magnet is presented. With a volume constraint, the amount of magnetic material can be minimized without losing performance. Simulations and measurements of the printed systems show very good agreement.

  1. Simultaneous determination of some cholesterol-lowering drugs in their binary mixture by novel spectrophotometric methods

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam Mahmoud; Hegazy, Maha Abdel Monem

    2013-09-01

    Four simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of simvastatin (SM) and ezetimibe (EZ) namely; extended ratio subtraction (EXRSM), simultaneous ratio subtraction (SRSM), ratio difference (RDSM) and absorption factor (AFM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined, and the methods were validated and the specificity was assessed by analyzing synthetic mixtures containing the cited drugs. The four methods were applied for the determination of the cited drugs in tablets and the obtained results were statistically compared with each other and with those of a reported HPLC method. The comparison showed that there is no significant difference between the proposed methods and the reported method regarding both accuracy and precision.

  2. The simultaneous detection of free and total prostate antigen in serum samples with high sensitivity and specificity by using the dual-channel surface plasmon resonance.

    PubMed

    Jiang, Zhongxiu; Qin, Yun; Peng, Zhen; Chen, Shenghua; Chen, Shu; Deng, Chunyan; Xiang, Juan

    2014-12-15

Free/total prostate antigen (f/t-PSA) ratio in serum as a promising parameter has been used to improve the differentiation of benign and malignant prostate disease. In order to obtain an accurate and reliable f/t-PSA ratio, the simultaneous detection of f-PSA and t-PSA with high sensitivity and specificity is required. In this work, dual-channel surface plasmon resonance (SPR) has been employed to meet this requirement. In one channel, t-PSA was directly measured with a linear range from 1.0 to 20.0 ng/mL. In the other channel, due to the low concentration of f-PSA in serum, an asynchronous competitive inhibition immunoassay with f-PSA@Au nanoparticles (AuNPs) was developed. As expected, the detection sensitivity of f-PSA was greatly enhanced, and a linear correlation with a wider linear range from 0.010 to 0.40 ng/mL was also achieved. In addition, a simple method was explored for significantly reducing the non-specific adsorption of co-existing proteins. On this basis, the f/t-PSA ratios in serum samples from prostate cancer (PCa) or benign prostatic hyperplasia (BPH) patients were measured. It was found that there was a significant difference between the distributions of the f/t-PSA ratio in BPH patients (16.44±1.77%) and in PCa patients (24.53±4.97%). This work provides an effective method for distinguishing PCa from BPH, which lays a potential foundation for the early diagnosis of PCa. Copyright © 2014. Published by Elsevier B.V.

  3. Developmental models for estimating ecological responses to environmental variability: structural, parametric, and experimental issues.

    PubMed

    Moore, Julia L; Remais, Justin V

    2014-03-01

    Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though simple and easy to use, structural and parametric issues can influence the outputs of such models, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
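    The daily average method discussed above, which the review finds consistently accurate, is easy to state in code: the day's degree-day contribution is the daily mean temperature, optionally capped at an upper threshold, minus the base temperature, floored at zero. The thresholds and temperatures below are hypothetical; note that sine-based methods differ precisely when a threshold falls between the daily minimum and maximum.

```python
def degree_days_average(t_min, t_max, t_base, t_upper=None):
    """Daily average method: mean temperature (optionally capped at an upper
    threshold, a 'horizontal cutoff') minus the base temperature, floored at 0."""
    t_mean = (t_min + t_max) / 2.0
    if t_upper is not None:
        t_mean = min(t_mean, t_upper)
    return max(0.0, t_mean - t_base)

# Hypothetical week of daily min/max temperatures (deg C), base threshold 10 C.
week = [(4, 14), (6, 18), (9, 23), (12, 26), (8, 16), (2, 8), (5, 21)]
total = sum(degree_days_average(lo, hi, t_base=10.0) for lo, hi in week)
```

    Accumulating `total` against an organism's thermal constant then predicts emergence time, which is where the structural choices reviewed above (linear vs non-linear development, cutoff handling) change the answer.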

  4. Plant uptake of elements in soil and pore water: field observations versus model assumptions.

    PubMed

    Raguž, Veronika; Jarsjö, Jerker; Grolander, Sara; Lindborg, Regina; Avila, Rodolfo

    2013-09-15

    Contaminant concentrations in various edible plant parts transfer hazardous substances from polluted areas to animals and humans. Thus, the accurate prediction of plant uptake of elements is of significant importance. The processes involved contain many interacting factors and are, as such, complex. In contrast, the most common way to currently quantify element transfer from soils into plants is relatively simple, using an empirical soil-to-plant transfer factor (TF). This practice is based on theoretical assumptions that have been previously shown to not generally be valid. Using field data on concentrations of 61 basic elements in spring barley, soil and pore water at four agricultural sites in mid-eastern Sweden, we quantify element-specific TFs. Our aim is to investigate to which extent observed element-specific uptake is consistent with TF model assumptions and to which extent TF's can be used to predict observed differences in concentrations between different plant parts (root, stem and ear). Results show that for most elements, plant-ear concentrations are not linearly related to bulk soil concentrations, which is congruent with previous studies. This behaviour violates a basic TF model assumption of linearity. However, substantially better linear correlations are found when weighted average element concentrations in whole plants are used for TF estimation. The highest number of linearly-behaving elements was found when relating average plant concentrations to soil pore-water concentrations. In contrast to other elements, essential elements (micronutrients and macronutrients) exhibited relatively small differences in concentration between different plant parts. Generally, the TF model was shown to work reasonably well for micronutrients, whereas it did not for macronutrients. The results also suggest that plant uptake of elements from sources other than the soil compartment (e.g. from air) may be non-negligible. Copyright © 2013 Elsevier Ltd. 
All rights reserved.
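    The empirical transfer factor the authors evaluate is simply TF = C_plant / C_soil; their finding that whole-plant averages behave more linearly than single-part concentrations suggests weighting part-specific concentrations by biomass, as in this sketch with invented numbers.

```python
# Soil-to-plant transfer factor TF = C_plant / C_soil, using a
# biomass-weighted whole-plant concentration (all values hypothetical).
parts = {            # element concentration (mg/kg dry), dry biomass fraction
    "root": (5.0, 0.2),
    "stem": (2.0, 0.5),
    "ear":  (1.0, 0.3),
}
c_soil = 20.0        # element concentration in bulk soil, mg/kg

c_plant = sum(conc * frac for conc, frac in parts.values())
tf = c_plant / c_soil
```

    Replacing `c_soil` with a pore-water concentration gives the alternative TF the paper found to yield the most linearly-behaving elements.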

  5. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  6. Transient Vibration Prediction for Rotors on Ball Bearings Using Load-dependent Non-linear Bearing Stiffness

    NASA Technical Reports Server (NTRS)

    Fleming, David P.; Poplawski, J. V.

    2002-01-01

Rolling-element bearing forces vary nonlinearly with bearing deflection. Thus an accurate rotordynamic transient analysis requires bearing forces to be determined at each step of the transient solution. Analyses have been carried out to show the effect of accurate bearing transient forces (accounting for non-linear speed and load dependent bearing stiffness) as compared to conventional use of average rolling-element bearing stiffness. Bearing forces were calculated by COBRA-AHS (Computer Optimized Ball and Roller Bearing Analysis - Advanced High Speed) and supplied to the rotordynamics code ARDS (Analysis of Rotor Dynamic Systems) for accurate simulation of rotor transient behavior. COBRA-AHS is a fast-running 5 degree-of-freedom computer code able to calculate high speed rolling-element bearing load-displacement data for radial and angular contact ball bearings and also for cylindrical and tapered roller bearings. Results show that use of nonlinear bearing characteristics is essential for accurate prediction of rotordynamic behavior.
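    The load dependence the record describes can be illustrated with the Hertzian point-contact law F = K·δ^1.5 (a gross simplification of what COBRA-AHS computes): the tangent stiffness dF/dδ = 1.5·K·δ^0.5 grows with deflection, so a single averaged stiffness is wrong at both light and heavy load. The contact constant K below is hypothetical.

```python
# Hertzian point contact between ball and raceway: F = K * delta**1.5.
K = 1.0e9   # contact constant, N/m^1.5 (hypothetical)

def force(delta):
    return K * delta**1.5

def tangent_stiffness(delta):
    # dF/d(delta) = 1.5*K*sqrt(delta): stiffness rises with deflection/load.
    return 1.5 * K * delta**0.5

d_light, d_heavy = 5e-6, 50e-6   # deflections at light and heavy load, m
k_light = tangent_stiffness(d_light)
k_heavy = tangent_stiffness(d_heavy)
ratio = k_heavy / k_light        # grows as sqrt(d_heavy/d_light)
```

    A tenfold deflection increase raises the tangent stiffness by √10 ≈ 3.2×, which is why the transient solution must re-evaluate bearing forces at every time step rather than reuse an average stiffness.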

  7. Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong; Malik, Waqar; Jung, Yoon C.

    2016-01-01

Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural network models are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
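    Since linear regression was among the best performers, a minimal sketch of such a predictor is shown below. The features (departure queue length, taxi distance) and coefficients are invented stand-ins for the surveillance-derived variables in the paper, and RMSE is computed as in the paper's evaluation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Hypothetical predictors: departures queued at pushback and taxi distance (km).
queue = rng.integers(0, 15, n).astype(float)
dist = rng.uniform(1.0, 5.0, n)
# Synthetic taxi-out times (minutes): base + queue and distance effects + noise.
taxi_out = 4.0 + 1.2 * queue + 2.0 * dist + rng.normal(0, 1.5, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), queue, dist])
beta, *_ = np.linalg.lstsq(X, taxi_out, rcond=None)

pred = X @ beta
rmse = np.sqrt(np.mean((pred - taxi_out) ** 2))
```

    On real data, categorical variables such as concourse, runway and departure fix would be one-hot encoded into additional columns of `X`, and RMSE would be computed on held-out flights rather than the training set.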

  8. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
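    The idea can be illustrated on a single-degree-of-freedom analogue (not the paper's beam model): for frequency w(m) = sqrt(k/m) perturbed in a tip mass m, the sensitivity equation dw/dm = -w/(2m), treated as an ODE and solved in closed form, recovers the exact dependence, whereas the linear Taylor series does not. All numbers below are illustrative assumptions.

```python
import numpy as np

k, m0 = 1000.0, 2.0          # hypothetical stiffness and nominal tip mass
w0 = np.sqrt(k / m0)         # nominal natural frequency

def exact(m):
    return np.sqrt(k / m)

# Sensitivity at the nominal design: dw/dm = -w/(2m)
dwdm = -w0 / (2 * m0)

def taylor(m):
    # Linear Taylor series approximation about m0
    return w0 + dwdm * (m - m0)

def deb(m):
    # DEB-style approximation: solve dw/dm = -w/(2m) as an ODE,
    # giving w = w0 * sqrt(m0 / m) (exact for this simple system)
    return w0 * np.sqrt(m0 / m)

m = 3.0  # a 50% mass perturbation
print(abs(taylor(m) - exact(m)), abs(deb(m) - exact(m)))
```

    For this system the DEB closed form is exact for any perturbation size, while the Taylor error grows with the perturbation, which mirrors the comparison reported in the abstract.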

  10. Integrated circuits for accurate linear analogue electric signal processing

    NASA Astrophysics Data System (ADS)

    Huijsing, J. H.

    1981-11-01

    The main lines in the design of integrated circuits for accurate linear analogue electric signal processing in a frequency range including DC are investigated. A categorization of universal active electronic devices is presented on the basis of the connections of one of the terminals of the input and output ports to the common ground potential. The means for quantifying the attributes of four types of universal active electronic devices are included. The design of integrated operational voltage amplifiers (OVA) is discussed. Several important applications in the field of general instrumentation are numerically evaluated, and the design of operational floating amplifiers is presented.

  11. Simplified biased random walk model for RecA-protein-mediated homology recognition offers rapid and accurate self-assembly of long linear arrays of binding sites

    NASA Astrophysics Data System (ADS)

    Kates-Harbeck, Julian; Tilloy, Antoine; Prentiss, Mara

    2013-07-01

    Inspired by RecA-protein-based homology recognition, we consider the pairing of two long linear arrays of binding sites. We propose a fully reversible, physically realizable biased random walk model for rapid and accurate self-assembly due to the spontaneous pairing of matching binding sites, where the statistics of the searched sample are included. In the model, there are two bound conformations, and the free energy for each conformation is a weakly nonlinear function of the number of contiguous matched bound sites.
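    A biased random walk of the kind invoked here can be simulated in a few lines. The sketch below is a toy 1-D version with invented parameters (step probability, walker count); it only illustrates the drift a pairing bias produces, not the paper's two-conformation free-energy model.

```python
import numpy as np

# Toy 1-D biased random walk: each step advances (+1) with probability
# p > 1/2 (bias from the free-energy gain of pairing matched sites) and
# retreats (-1) otherwise. All values are illustrative assumptions.
rng = np.random.default_rng(1)
p, steps, walkers = 0.6, 1000, 200
moves = rng.choice([1, -1], size=(walkers, steps), p=[p, 1 - p])
displacement = moves.sum(axis=1)
print(displacement.mean())  # mean drift ~ steps * (2p - 1)
```

    Even a modest per-step bias yields a large net drift over many steps, which is why such a search can be both rapid and, with reversibility, accurate.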

  12. The Lyα forest and the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Meiksin, Avery

    2016-10-01

    The accurate description of the properties of the Lyman-α forest is a spectacular success of the Cold Dark Matter theory of cosmological structure formation. After a brief review of early models, it is shown how numerical simulations have demonstrated that the Lyman-α forest emerges from the cosmic web in the quasi-linear regime of overdensity. The quasi-linear nature of the structures allows accurate modeling, providing constraints on cosmological models over a unique range of scales and enabling the Lyman-α forest to serve as a bridge to the more complex problem of galaxy formation.

  13. On the identifiability of inertia parameters of planar Multi-Body Space Systems

    NASA Astrophysics Data System (ADS)

    Nabavi-Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2018-04-01

    This work describes a new formulation to study the identifiability characteristics of Serially Linked Multi-body Space Systems (SLMBSS). The process exploits the so-called Lagrange formulation to develop a form of the equations of motion that is linear with respect to the system Inertia Parameters (IPs). Having developed a specific form of regressor matrix, we aim to expedite the identification process. The new approach allows analytical as well as numerical identification and identifiability analysis for different SLMBSS configurations. Moreover, explicit forms of the SLMBSS identifiable parameters are derived by analyzing the identifiability characteristics of the robot. We further show that any SLMBSS designed with variable-configuration joints allows all IPs to be identified by comparing two successive identification outcomes. This feature paves the way to designing a new class of SLMBSS for which accurate identification of all IPs is at hand. Different case studies reveal that the proposed formulation provides fast and accurate results, as required by space applications. Further studies might be necessary for cases where the planar-body assumption becomes inaccurate.
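    The identifiability question for a linear-in-parameters model tau = Y(q) theta reduces to the rank of the regressor matrix Y. The sketch below uses a hypothetical 3-parameter regressor (the paper's actual regressor is not reproduced): in one configuration two columns are proportional, so only a combination of those parameters is identifiable; changing the excitation restores full rank.

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=50)  # sampled joint coordinates (illustrative)

# Hypothetical regressor: columns 2 and 3 are proportional in this
# configuration, so theta is not fully identifiable.
Y = np.column_stack([np.cos(q), np.sin(q), 2.0 * np.sin(q)])
rank = np.linalg.matrix_rank(Y)
print(rank)  # rank 2 < 3 parameters

# A changed configuration (richer excitation) separates the columns.
Y2 = np.column_stack([np.cos(q), np.sin(q), np.sin(2 * q)])
print(np.linalg.matrix_rank(Y2))  # full rank: all parameters identifiable
```

    Comparing identification outcomes across two such configurations is, in spirit, how the variable-configuration-joint argument in the abstract makes all IPs identifiable.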

  14. Evaluation of indirect impedance for measuring microbial growth in complex food matrices.

    PubMed

    Johnson, N; Chang, Z; Bravo Almeida, C; Michel, M; Iversen, C; Callanan, M

    2014-09-01

    The suitability of indirect impedance to accurately measure microbial growth in real food matrices was investigated. A variety of semi-solid and liquid food products were inoculated with Bacillus cereus, Listeria monocytogenes, Staphylococcus aureus, Lactobacillus plantarum, Pseudomonas aeruginosa, Escherichia coli, Salmonella enteritidis, Candida tropicalis or Zygosaccharomyces rouxii and CO2 production was monitored using a conductimetric (Don Whitley R.A.B.I.T.) system. The majority (80%) of food and microbe combinations produced a detectable growth signal. The linearity of conductance responses in selected food products was investigated and a good correlation (R(2) ≥ 0.84) was observed between inoculum levels and times to detection. Specific growth rate estimations from the data were sufficiently accurate for predictive modeling in some cases. This initial evaluation of the suitability of indirect impedance to generate microbial growth data in complex food matrices indicates significant potential for the technology as an alternative to plating methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Calculation of protein-ligand binding affinities.

    PubMed

    Gilson, Michael K; Zhou, Huan-Xiang

    2007-01-01

    Accurate methods of computing the affinity of a small molecule with a protein are needed to speed the discovery of new medications and biological probes. This paper reviews physics-based models of binding, beginning with a summary of the changes in potential energy, solvation energy, and configurational entropy that influence affinity, and a theoretical overview to frame the discussion of specific computational approaches. Important advances are reported in modeling protein-ligand energetics, such as the incorporation of electronic polarization and the use of quantum mechanical methods. Recent calculations suggest that changes in configurational entropy strongly oppose binding and must be included if accurate affinities are to be obtained. The linear interaction energy (LIE) and molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) methods are analyzed, as are free energy pathway methods, which show promise and may be ready for more extensive testing. Ultimately, major improvements in modeling accuracy will likely require advances on multiple fronts, as well as continued validation against experiment.
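    The LIE method mentioned above estimates binding free energy as a linear combination of ensemble-averaged interaction-energy differences. The arithmetic is trivial; the sketch below uses commonly quoted coefficient values and invented energy averages purely for illustration, not numbers from this review.

```python
# Linear interaction energy (LIE) estimate of a binding free energy.
# alpha/beta are in the range often used in the LIE literature; the
# interaction-energy differences are hypothetical placeholders.
alpha, beta, gamma = 0.18, 0.5, 0.0   # empirical LIE coefficients
dE_vdw = -30.0   # <V_vdw>_bound - <V_vdw>_free, kcal/mol (illustrative)
dE_el = -8.0     # <V_el>_bound  - <V_el>_free,  kcal/mol (illustrative)

dG_bind = alpha * dE_vdw + beta * dE_el + gamma
print(f"Estimated dG_bind = {dG_bind:.1f} kcal/mol")
```

    The averages on the right-hand side come from molecular dynamics simulations of the bound and free states; in practice the coefficients are calibrated against experimental affinities.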

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, K; Li, X; Liu, B

    Purpose: To accurately measure CT bow-tie profiles from various manufacturers and to provide non-proprietary information for CT system modeling. Methods: A GOS-based linear detector (0.8 mm per pixel and 51.2 cm in length) with a fast data sampling speed (0.24 ms/sample) was used to measure the relative profiles of bow-tie filters from a collection of eight CT scanners by three different vendors: GE (LS Xtra, LS VCT, Discovery HD750), Siemens (Sensation 64, Edge, Flash, Force), and Philips (iBrilliance 256). The linear detector was first calibrated for its energy response within typical CT beam quality ranges and compared with an ion chamber and analytical modeling (SPECTRA and TASMIP). A geometrical calibration process was developed to determine key parameters, including the distance from the focal spot to the linear detector, the angular increment of the gantry at each data sampling, the location of the central x-ray on the linear detector, and the angular response of the detector pixel. Measurements were performed under axial-scan modes for the most representative bow-tie filters and kV selections from each scanner. Bow-tie profiles were determined by re-binning the measured rotational data with an angular accuracy of 0.1 degree using the calibrated geometrical parameters. Results: The linear detector demonstrated the energy response of a solid-state detector, close to that of the CT imaging detector. The geometrical calibration was proven to be sufficiently accurate (<1 mm error for distances >550 mm), and the bow-tie profiles measured in rotational mode matched closely those from the gantry-stationary mode. Accurate profiles were determined for a total of 21 bow-tie filters and 83 filter/kV combinations from the abovementioned scanner models. Conclusion: A new, improved approach to CT bow-tie measurement was proposed, and accurate bow-tie profiles were provided for a broad list of CT scanner models.

  17. Evaluation of empirical rule of linearly correlated peptide selection (ERLPS) for proteotypic peptide-based quantitative proteomics.

    PubMed

    Liu, Kehui; Zhang, Jiyang; Fu, Bin; Xie, Hongwei; Wang, Yingchun; Qian, Xiaohong

    2014-07-01

    Precise protein quantification is essential in comparative proteomics. Currently, quantification bias is inevitable when using a proteotypic peptide-based quantitative proteomics strategy because of differences in peptide measurability. To improve quantification accuracy, we proposed an "empirical rule for linearly correlated peptide selection (ERLPS)" in quantitative proteomics in our previous work. However, a systematic evaluation of the general application of ERLPS in quantitative proteomics under diverse experimental conditions needed to be conducted. In this study, the practical workflow of ERLPS was explicitly illustrated; different experimental variables, such as different MS systems, sample complexities, sample preparations, elution gradients, matrix effects, loading amounts, and other factors were comprehensively investigated to evaluate the applicability, reproducibility, and transferability of ERLPS. The results demonstrated that ERLPS was highly reproducible and transferable within appropriate loading amounts, and that linearly correlated response peptides should be selected for each specific experiment. ERLPS was applied to proteome samples from yeast to mouse and human, and to quantitative methods from label-free to 18O/16O-labeled and SILAC analysis, and enabled accurate measurements for all proteotypic peptide-based quantitative proteomics over a large dynamic range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Height and Weight Estimation From Anthropometric Measurements Using Machine Learning Regressions

    PubMed Central

    Fernandes, Bruno J. T.; Roque, Alexandre

    2018-01-01

    Height and weight are measurements used in tracking nutritional diseases, energy expenditure, clinical conditions, drug dosages, and infusion rates. Many patients are not ambulant or may be unable to communicate, and these factors may prevent accurate direct measurement; in such cases, height and weight can be estimated approximately by anthropometric means. Different groups have proposed linear or non-linear equations whose coefficients are obtained by using single or multiple linear regressions. In this paper, we present a complete study of the application of different learning models to estimate height and weight from anthropometric measurements: support vector regression, Gaussian process regression, and artificial neural networks. The predicted values are significantly more accurate than those obtained with conventional linear regressions. In all cases, the predictions are non-sensitive to ethnicity, and to gender if more than two anthropometric parameters are analyzed. The learning model analysis creates new opportunities for anthropometric applications in industry, textile technology, security, and health care. PMID:29651366

  19. Linear LIDAR versus Geiger-mode LIDAR: impact on data properties and data quality

    NASA Astrophysics Data System (ADS)

    Ullrich, A.; Pfennigbauer, M.

    2016-05-01

    LIDAR has become an indispensable technology for providing accurate 3D data quickly and reliably, even in adverse measurement situations and harsh environments. It provides highly accurate point clouds with a significant number of additional valuable attributes per point. LIDAR systems based on Geiger-mode avalanche photodiode arrays, also called single-photon avalanche photodiode arrays, earlier employed for military applications, now seek to enter the commercial market of 3D data acquisition, advertising higher point acquisition speeds from longer ranges compared to conventional techniques. Publications pointing out the advantages of these new systems refer to the other category of LIDAR as "linear LIDAR", as the prime receiver elements for detecting the laser echo pulses - avalanche photodiodes - are operated in a linear mode. We analyze the differences between the two LIDAR technologies and the fundamental differences in the data they provide. The limitations imposed by physics on both approaches to LIDAR are also addressed, and advantages of linear LIDAR over the photon-counting approach are discussed.

  20. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
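    The simple-versus-multiple regression comparison at the heart of this abstract can be reproduced on synthetic sensor traces. The values and variable relationships below are invented for illustration, not WSN measurements from the paper.

```python
import numpy as np

# Synthetic sensor traces: humidity depends on both temperature and light
# (illustrative coefficients, not measured data).
rng = np.random.default_rng(2)
n = 300
temp = rng.normal(25.0, 3.0, n)
light = rng.normal(500.0, 100.0, n)
humidity = 80.0 - 1.2 * temp + 0.01 * light + rng.normal(0.0, 1.0, n)

def fit_rmse(X, y):
    # Least-squares fit with intercept, returning the in-sample RMSE
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sqrt(np.mean((X @ coef - y) ** 2))

rmse_simple = fit_rmse(temp.reshape(-1, 1), humidity)            # one covariate
rmse_multi = fit_rmse(np.column_stack([temp, light]), humidity)  # multivariate
print(rmse_simple, rmse_multi)
```

    When the target genuinely depends on several correlated inputs, the multivariate fit leaves a smaller residual, which is the effect the paper exploits to improve prediction over time-only models.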

  1. A Novel Blast-mitigation Concept for Light Tactical Vehicles

    DTIC Science & Technology

    2013-01-01

    analysis which utilizes the mass and energy (but not linear momentum) conservation equations is provided. It should be noted that the identical final ... results could be obtained using an analogous analysis which combines the mass and the linear momentum conservation equations. For a calorically ... governing mass, linear momentum and energy conservation and heat conduction equations are solved within ABAQUS/Explicit with a second-order accurate

  2. Isolating the cow-specific part of residual energy intake in lactating dairy cows using random regressions.

    PubMed

    Fischer, A; Friggens, N C; Berry, D P; Faverdin, P

    2018-07-01

    The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include both model-fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted: in one, the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation average net energy intake (NEI) on lactation average milk energy output, average metabolic BW, as well as lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept using fortnightly repeated measures for the variables. This method split the predicted NEI into two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows, all fed a constant energy-rich diet. Mixed models fitting cow-specific intercepts and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model, or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.

  3. School system evaluation by value added analysis under endogeneity.

    PubMed

    Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien

    2014-01-01

    Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain the prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression of the value added in an endogenous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large data set of individual scores of about 200,000 students in Chile.
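    The instrumental-variable idea invoked here can be shown with a toy endogenous regression (this is a generic IV illustration with invented coefficients, not the paper's hierarchical model or the Chilean data): when a regressor is correlated with the error term, ordinary least squares is biased, while a valid instrument recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 20_000, 2.0
z = rng.normal(size=n)                              # instrument: affects x, not u
u = rng.normal(size=n)                              # structural error
x = 0.8 * z + 0.6 * u + 0.3 * rng.normal(size=n)    # endogenous regressor
y = beta * x + u

ols = (x @ y) / (x @ x)   # biased: picks up cov(x, u)
iv = (z @ y) / (z @ x)    # simple IV (Wald) estimator: consistent
print(ols, iv)
```

    With a strong instrument and a large sample, the IV estimate lands near the true beta while OLS stays visibly off, which is the failure mode endogeneity creates for naive value-added estimates.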

  4. An MCNP-based model of a medical linear accelerator x-ray photon beam.

    PubMed

    Ajaj, F A; Ghassal, N M

    2003-09-01

    The major components in the x-ray photon beam path of the treatment head of the VARIAN Clinac 2300 EX medical linear accelerator were modeled and simulated using the Monte Carlo N-Particle radiation transport computer code (MCNP). Simulated components include x-ray target, primary conical collimator, x-ray beam flattening filter and secondary collimators. X-ray photon energy spectra and angular distributions were calculated using the model. The x-ray beam emerging from the secondary collimators were scored by considering the total x-ray spectra from the target as the source of x-rays at the target position. The depth dose distribution and dose profiles at different depths and field sizes have been calculated at a nominal operating potential of 6 MV and found to be within acceptable limits. It is concluded that accurate specification of the component dimensions, composition and nominal accelerating potential gives a good assessment of the x-ray energy spectra.

  5. Development and Validation of RP-LC Method for the Determination of Cinnarizine/Piracetam and Cinnarizine/Heptaminol Acefyllinate in Presence of Cinnarizine Reported Degradation Products

    PubMed Central

    EL-Houssini, Ola M.; Zawilla, Nagwan H.; Mohammad, Mohammad A.

    2013-01-01

    Specific stability indicating reverse-phase liquid chromatography (RP-LC) assay method (SIAM) was developed for the determination of cinnarizine (Cinn)/piracetam (Pira) and cinnarizine (Cinn)/heptaminol acefyllinate (Hept) in the presence of the reported degradation products of Cinn. A C18 column and gradient mobile phase was applied for good resolution of all peaks. The detection was achieved at 210 nm and 254 nm for Cinn/Pira and Cinn/Hept, respectively. The responses were linear over concentration ranges of 20–200, 20–1000 and 25–1000 μgmL−1 for Cinn, Pira, and Hept respectively. The proposed method was validated for linearity, accuracy, repeatability, intermediate precision, and robustness via statistical analysis of the data. The method was shown to be precise, accurate, reproducible, sensitive, and selective for the analysis of Cinn/Pira and Cinn/Hept in laboratory prepared mixtures and in pharmaceutical formulations. PMID:24137049

  6. Development and validation of RP-UHPLC procedure for estimation of 5-amino salicylic acid in 5-amino salicylic acid rectal suppositories

    NASA Astrophysics Data System (ADS)

    Balaji, Jayagopal; Shivashankar, Murugesh

    2017-11-01

    The present study describes a simple and robust reverse phase ultra performance liquid chromatography (RP-UPLC) method for the quantification of 5-amino salicylic acid in 5-amino salicylic acid rectal capsules. Successful separation of the Mesalamine peak from excipient peaks and diluent was achieved on an Acquity C8 column (50 × 2.1 mm, 1.7 μm) with a UV detector at 254 nm, a flow rate of 0.3 mL/min, and an injection volume of 3 μL. For the RP-UPLC method, phosphate buffer and methanol were used as mobile phases at a ratio of 83:17 and the column temperature was 25 °C. Percentage recovery was obtained in the range of 98.7–99.7% and the method is linear for Mesalamine over the specified concentration range with a correlation coefficient (r) of not less than 0.99. The proposed RP-UPLC method was found to be specific, linear, precise, accurate and robust.

  7. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.

  8. Evaluation of airborne lidar data to predict vegetation Presence/Absence

    USGS Publications Warehouse

    Palaseanu-Lovejoy, M.; Nayegandhi, A.; Brock, J.; Woodman, R.; Wright, C.W.

    2009-01-01

    This study evaluates the capabilities of the Experimental Advanced Airborne Research Lidar (EAARL) in delineating vegetation assemblages in Jean Lafitte National Park, Louisiana. Five-meter-resolution grids of bare earth, canopy height, canopy-reflection ratio, and height of median energy were derived from EAARL data acquired in September 2006. Ground-truth data were collected along transects to assess species composition, canopy cover, and ground cover. To decide which model is more accurate, comparisons of general linear models and generalized additive models were conducted using conventional evaluation methods (i.e., sensitivity, specificity, Kappa statistics, and area under the curve) and two new indexes, net reclassification improvement and integrated discrimination improvement. Generalized additive models were superior to general linear models in modeling presence/absence in training vegetation categories, but no statistically significant differences between the two models were achieved in determining the classification accuracy at validation locations using conventional evaluation methods, although statistically significant improvements in net reclassifications were observed. ?? 2009 Coastal Education and Research Foundation.

  9. Revisiting Isotherm Analyses Using R: Comparison of Linear, Non-linear, and Bayesian Techniques

    EPA Science Inventory

    Extensive adsorption isotherm data exist for an array of chemicals of concern on a variety of engineered and natural sorbents. Several isotherm models exist that can accurately describe these data from which the resultant fitting parameters may subsequently be used in numerical ...

  10. Simultaneous quantification of withanolides in Withania somnifera by a validated high-performance thin-layer chromatographic method.

    PubMed

    Srivastava, Pooja; Tiwari, Neerja; Yadav, Akhilesh K; Kumar, Vijendra; Shanker, Karuna; Verma, Ram K; Gupta, Madan M; Gupta, Anil K; Khanuja, Suman P S

    2008-01-01

    This paper describes a sensitive, selective, specific, robust, and validated densitometric high-performance thin-layer chromatographic (HPTLC) method for the simultaneous determination of 3 key withanolides, namely, withaferin-A, 12-deoxywithastramonolide, and withanolide-A, in Ashwagandha (Withania somnifera) plant samples. The separation was performed on aluminum-backed silica gel 60F254 HPTLC plates using dichloromethane-methanol-acetone-diethyl ether (15 + 1 + 1 + 1, v/v/v/v) as the mobile phase. The withanolides were quantified by densitometry in the reflection/absorption mode at 230 nm. Precise and accurate quantification could be performed in the linear working concentration range of 66-330 ng/band with good correlation (r2 = 0.997, 0.999, and 0.996, respectively). The method was validated for recovery, precision, accuracy, robustness, limit of detection, limit of quantitation, and specificity according to International Conference on Harmonization guidelines. Specificity of quantification was confirmed using retention factor (Rf) values, UV-Vis spectral correlation, and electrospray ionization mass spectra of marker compounds in sample tracks.

  11. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  12. A multiplex serologic platform for diagnosis of tick-borne diseases.

    PubMed

    Tokarz, Rafal; Mishra, Nischay; Tagliafierro, Teresa; Sameroff, Stephen; Caciula, Adrian; Chauhan, Lokendrasingh; Patel, Jigar; Sullivan, Eric; Gucwa, Azad; Fallon, Brian; Golightly, Marc; Molins, Claudia; Schriefer, Martin; Marques, Adriana; Briese, Thomas; Lipkin, W Ian

    2018-02-16

    Tick-borne diseases are the most common vector-borne diseases in the United States, with serology being the primary method of diagnosis. We developed the first multiplex, array-based assay for serodiagnosis of tick-borne diseases called the TBD-Serochip. The TBD-Serochip was designed to discriminate antibody responses to 8 major tick-borne pathogens present in the United States, including Anaplasma phagocytophilum, Babesia microti, Borrelia burgdorferi, Borrelia miyamotoi, Ehrlichia chaffeensis, Rickettsia rickettsii, Heartland virus and Powassan virus. Each assay contains approximately 170,000 12-mer linear peptides that tile along the protein sequence of the major antigens from each agent with 11 amino acid overlap. This permits accurate identification of a wide range of specific immunodominant IgG and IgM epitopes that can then be used to enhance diagnostic accuracy and integrate differential diagnosis into a single assay. To test the performance of the TBD-Serochip, we examined sera from patients with confirmed Lyme disease, babesiosis, anaplasmosis, and Powassan virus disease. We identified a wide range of specific discriminatory epitopes that facilitated accurate diagnosis of each disease. We also identified previously undiagnosed infections. Our results indicate that the TBD-Serochip is a promising tool for a differential diagnosis not available with currently employed serologic assays for TBDs.
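    The peptide tiling described (12-mers stepping one residue along the antigen, giving an 11-amino-acid overlap) can be sketched as follows; the sequence shown is a hypothetical placeholder, not an actual antigen from the array.

```python
def tile_peptides(sequence, k=12, overlap=11):
    """Generate k-mer peptides tiling a protein sequence with the given overlap."""
    step = k - overlap  # an 11-residue overlap between 12-mers means a step of 1
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, step)]

# hypothetical antigen fragment, for illustration only
peptides = tile_peptides("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```

A sequence of length L yields L - 11 overlapping 12-mers, which is how a few antigens expand to ~170,000 probes across eight pathogens.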

  13. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    PubMed

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in place of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
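    A minimal sketch of the underlying geometric idea: a sphere occupies two thirds of its circumscribing cylinder, so a volume estimate can be formed from a 2D cross-sectional area and a linear dimension. How the 'unellipticity' coefficient enters is an assumption here; the paper defines it precisely.

```python
import math

def biovolume(area, diameter, unellipticity=1.0):
    """Estimate volume from 2D measurements: a sphere occupies 2/3 of its
    circumscribing cylinder, whose volume is cross-sectional area x diameter.
    The placement of the 'unellipticity' coefficient is an assumption."""
    return (2.0 / 3.0) * area * diameter * unellipticity

r = 3.0
v = biovolume(math.pi * r**2, 2.0 * r)   # recovers (4/3)*pi*r**3 for a sphere
```

For a perfect sphere the estimate is exact; the coefficient corrects shapes that deviate from rotational symmetry.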

  14. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667

  15. Comparison of osmolality and refractometric readings of Hispaniolan Amazon parrot (Amazona ventralis) urine.

    PubMed

    Brock, A Paige; Grunkemeyer, Vanessa L; Fry, Michael M; Hall, James S; Bartges, Joseph W

    2013-12-01

    To evaluate the relationship between osmolality and specific gravity of urine samples from clinically normal adult parrots and to determine a formula to convert urine specific gravity (USG) measured on a reference scale to a more accurate USG value for an avian species, urine samples were collected opportunistically from a colony of Hispaniolan Amazon parrots (Amazona ventralis). Samples were analyzed by using a veterinary refractometer, and specific gravity was measured on both canine and feline scales. Osmolality was measured by vapor pressure osmometry. Specific gravity and osmolality measurements were highly correlated (r = 0.96). The linear relationship between refractivity measurements on a reference scale and osmolality was determined. An equation was calculated to allow specific gravity results from a medical refractometer to be converted to specific gravity values of Hispaniolan Amazon parrots: USG(HAP) = 0.201 + 0.798 × USG(ref). Use of the reference-canine scale to approximate the osmolality of parrot urine leads to an overestimation of the true osmolality of the sample. In addition, this error increases as the concentration of urine increases. Compared with the human-canine scale, the feline scale provides a closer approximation to urine osmolality of Hispaniolan Amazon parrots but still results in overestimation of osmolality.
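    The reported conversion is a simple linear map and can be applied directly; a minimal sketch using the coefficients quoted above:

```python
def usg_hap(usg_ref):
    """Convert a refractometer urine specific gravity reading on the
    reference scale to the Hispaniolan Amazon parrot scale, using the
    equation reported in the study: USG(HAP) = 0.201 + 0.798 * USG(ref)."""
    return 0.201 + 0.798 * usg_ref

usg_hap(1.030)  # ≈ 1.023: the reference scale overestimates concentration
```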

  16. An effective description of dark matter and dark energy in the mildly non-linear regime

    DOE PAGES

    Lewandowski, Matthew; Maleknejad, Azadeh; Senatore, Leonardo

    2017-05-18

    In the next few years, we are going to probe the low-redshift universe with unprecedented accuracy. Among the various fruits that this will bear, it will greatly improve our knowledge of the dynamics of dark energy, though for this there is a strong theoretical preference for a cosmological constant. We assume that dark energy is described by the so-called Effective Field Theory of Dark Energy, which assumes that dark energy is the Goldstone boson of time translations. Such a formalism makes it easy to ensure that our signatures are consistent with well-established principles of physics. Since most of the information resides at high wavenumbers, it is important to be able to make predictions at the highest wavenumber that is possible. Furthermore, the Effective Field Theory of Large-Scale Structure (EFTofLSS) is a theoretical framework that has allowed us to make accurate predictions in the mildly non-linear regime. In this paper, we derive the non-linear equations that extend the EFTofLSS to include the effect of dark energy both on the matter fields and on the biased tracers. For the specific case of clustering quintessence, we then perturbatively solve to cubic order the resulting non-linear equations and construct the one-loop power spectrum of the total density contrast.

  17. The role of shoe design on the prediction of free torque at the shoe-surface interface using pressure insole technology.

    PubMed

    Weaver, Brian Thomas; Fitzsimons, Kathleen; Braman, Jerrod; Haut, Roger

    2016-09-01

    The goal of the current study was to expand on previous work to validate the use of pressure insole technology in conjunction with linear regression models to predict the free torque generated at the shoe-surface interface while wearing different athletic shoes. Three distinctly different shoe designs were utilised. The stiffness of each shoe was determined with a materials testing machine. Six participants wore each shoe, which was fitted with an insole pressure measurement device, and performed rotation trials on an embedded force plate. A pressure sensor mask was constructed from those sensors having a high linear correlation with free torque values. Linear regression models were developed to predict free torques from these pressure sensor data. The models accurately predicted their own free torque (RMS error 3.72 ± 0.74 Nm), but not that of the other shoes (RMS error 10.43 ± 3.79 Nm). Models performing self-prediction were also able to detect differences in shoe stiffness. The results of the current study show the need for participant-shoe specific linear regression models to ensure high prediction accuracy of free torques from pressure sensor data during isolated internal and external rotations of the body with respect to a planted foot.
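    The participant-shoe specific models are ordinary least-squares maps from insole pressures to free torque; a minimal sketch with synthetic stand-in data (the sensor count, units, and weights are hypothetical, not the study's sensor mask):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-ins: 20 rotation-trial samples x 5 insole pressure sensors
pressures = rng.uniform(0.0, 200.0, size=(20, 5))        # kPa, hypothetical
true_weights = np.array([0.02, -0.01, 0.03, 0.005, -0.015])
torque = pressures @ true_weights                         # Nm, noise-free toy data

# fit the linear model torque ~ pressures @ w by ordinary least squares
w, *_ = np.linalg.lstsq(pressures, torque, rcond=None)
rms_error = np.sqrt(np.mean((pressures @ w - torque) ** 2))
```

With noise-free synthetic data the fit recovers the weights exactly; cross-shoe prediction fails in the study precisely because these weights are shoe-specific.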

  18. An effective description of dark matter and dark energy in the mildly non-linear regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewandowski, Matthew; Maleknejad, Azadeh; Senatore, Leonardo

    In the next few years, we are going to probe the low-redshift universe with unprecedented accuracy. Among the various fruits that this will bear, it will greatly improve our knowledge of the dynamics of dark energy, though for this there is a strong theoretical preference for a cosmological constant. We assume that dark energy is described by the so-called Effective Field Theory of Dark Energy, which assumes that dark energy is the Goldstone boson of time translations. Such a formalism makes it easy to ensure that our signatures are consistent with well-established principles of physics. Since most of the information resides at high wavenumbers, it is important to be able to make predictions at the highest wavenumber that is possible. Furthermore, the Effective Field Theory of Large-Scale Structure (EFTofLSS) is a theoretical framework that has allowed us to make accurate predictions in the mildly non-linear regime. In this paper, we derive the non-linear equations that extend the EFTofLSS to include the effect of dark energy both on the matter fields and on the biased tracers. For the specific case of clustering quintessence, we then perturbatively solve to cubic order the resulting non-linear equations and construct the one-loop power spectrum of the total density contrast.

  19. An effective description of dark matter and dark energy in the mildly non-linear regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewandowski, Matthew; Senatore, Leonardo; Maleknejad, Azadeh, E-mail: matthew.lewandowski@cea.fr, E-mail: azade@ipm.ir, E-mail: senatore@stanford.edu

    In the next few years, we are going to probe the low-redshift universe with unprecedented accuracy. Among the various fruits that this will bear, it will greatly improve our knowledge of the dynamics of dark energy, though for this there is a strong theoretical preference for a cosmological constant. We assume that dark energy is described by the so-called Effective Field Theory of Dark Energy, which assumes that dark energy is the Goldstone boson of time translations. Such a formalism makes it easy to ensure that our signatures are consistent with well-established principles of physics. Since most of the information resides at high wavenumbers, it is important to be able to make predictions at the highest wavenumber that is possible. The Effective Field Theory of Large-Scale Structure (EFTofLSS) is a theoretical framework that has allowed us to make accurate predictions in the mildly non-linear regime. In this paper, we derive the non-linear equations that extend the EFTofLSS to include the effect of dark energy both on the matter fields and on the biased tracers. For the specific case of clustering quintessence, we then perturbatively solve to cubic order the resulting non-linear equations and construct the one-loop power spectrum of the total density contrast.

  20. Accurate core position control in polymer optical waveguides using the Mosquito method for three-dimensional optical wiring

    NASA Astrophysics Data System (ADS)

    Date, Kumi; Ishigure, Takaaki

    2017-02-01

    Polymer optical waveguides with graded-index (GI) circular cores are fabricated using the Mosquito method, in which the positions of parallel cores are accurately controlled. Such an accurate arrangement is of great importance for a high optical coupling efficiency with other optical components such as fiber ribbons. In the Mosquito method that we developed, a viscous liquid core monomer is dispensed into another liquid cladding monomer via a syringe needle. Hence, the core positions are likely to shift during or after the dispensing process due to several factors. We investigate the factors specifically affecting the core height. When the core and cladding monomers are selected appropriately, the effect of gravity is negligible, so the core height remains uniform, resulting in accurate core heights. The height variation is controlled to within ±2 μm for the 12 cores. Meanwhile, a larger shift in the core height is observed when the needle tip is positioned farther from the substrate surface. One possible reason for this needle-tip height dependence is asymmetric volume contraction during monomer curing. We find a linear relationship between the original needle-tip height and the observed core height. This relationship is implemented in the needle-scan program to stabilize the core height in different layers. As a result, the core heights are accurately controlled even when the cores are aligned at various heights. These results indicate that the Mosquito method enables the fabrication of waveguides in which the cores are three-dimensionally aligned with high positional accuracy.
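    The reported linear needle-tip/core-height relationship can be inverted so the scan program pre-compensates each layer; the slope and offset below are hypothetical placeholders, not values from the study.

```python
# Assumed linear relation: core_height = A * needle_height + B  (micrometers).
# A and B are hypothetical; in practice they would be fitted from calibration
# structures like those described in the paper.
A, B = 0.92, 6.0

def needle_height_for(target_core_height):
    """Needle-tip height that should yield the desired core height."""
    return (target_core_height - B) / A

setting = needle_height_for(50.0)   # needle height aiming at a 50 um core height
```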

  1. From Spiking Neuron Models to Linear-Nonlinear Models

    PubMed Central

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-01

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777

  2. From spiking neuron models to linear-nonlinear models.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
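    A minimal LN-cascade sketch: convolve the input current with a linear temporal filter, then apply a static nonlinearity to obtain the firing rate. The exponential filter and threshold-linear function here are illustrative choices, not the parameter-free forms derived analytically in the paper.

```python
import numpy as np

def ln_rate(current, dt=0.001, tau=0.02, gain=50.0, threshold=1.0):
    """Linear-nonlinear cascade: exponential temporal filter followed by a
    static threshold-linear nonlinearity (illustrative parameter choices)."""
    t = np.arange(0.0, 5 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()                      # unit-area linear filter
    filtered = np.convolve(current, kernel)[:len(current)]
    return gain * np.maximum(filtered - threshold, 0.0)   # firing rate (Hz)

rate = ln_rate(np.full(1000, 1.5))              # response to a step current
```

For this constant input the rate relaxes to gain × (1.5 − threshold) = 25 Hz once the filter has filled.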

  3. Steps toward quantitative infrasound propagation modeling

    NASA Astrophysics Data System (ADS)

    Waxler, Roger; Assink, Jelle; Lalande, Jean-Marie; Velea, Doru

    2016-04-01

    Realistic propagation modeling requires propagation models capable of incorporating the relevant physical phenomena as well as sufficiently accurate atmospheric specifications. The wind speed and temperature gradients in the atmosphere provide multiple ducts in which low frequency sound, infrasound, can propagate efficiently. The winds in the atmosphere are quite variable, both temporally and spatially, causing the sound ducts to fluctuate. For ground-to-ground propagation the ducts can be borderline in that small perturbations can create or destroy a duct. In such cases the signal propagation is very sensitive to fluctuations in the wind, often producing highly dispersed signals. The accuracy of atmospheric specifications is constantly improving as sounding technology develops. There is, however, a disconnect between sound propagation and atmospheric specification in that atmospheric specifications are necessarily statistical in nature while sound propagates through a particular atmospheric state. In addition, infrasonic signals can travel to great altitudes, on the order of 120 km, before refracting back to earth. At such altitudes the atmosphere becomes quite rarefied, causing sound propagation to become highly non-linear and attenuating. Approaches to these problems will be presented.

  4. Estimating thermal diffusivity and specific heat from needle probe thermal conductivity data

    USGS Publications Warehouse

    Waite, W.F.; Gilbert, L.Y.; Winters, W.J.; Mason, D.H.

    2006-01-01

    Thermal diffusivity and specific heat can be estimated from thermal conductivity measurements made using a standard needle probe and a suitably high data acquisition rate. Thermal properties are calculated from the measured temperature change in a sample subjected to heating by a needle probe. Accurate thermal conductivity measurements are obtained from a linear fit to many tens or hundreds of temperature change data points. In contrast, thermal diffusivity calculations require a nonlinear fit to the measured temperature change occurring in the first few tenths of a second of the measurement, resulting in a lower accuracy than that obtained for thermal conductivity. Specific heat is calculated from the ratio of thermal conductivity to diffusivity, and thus can have an uncertainty no better than that of the diffusivity estimate. Our thermal conductivity measurements of ice Ih and of tetrahydrofuran (THF) hydrate, made using a 1.6 mm outer diameter needle probe and a data acquisition rate of 18.2 points/s, agree with published results. Our thermal diffusivity and specific heat results reproduce published results within 25% for ice Ih and 3% for THF hydrate. © 2006 American Institute of Physics.
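    The estimation chain described (a linear fit for conductivity, then a ratio for specific heat) can be sketched with the standard line-source model ΔT = (q/4πk)·ln t + c. The numbers below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

# Line-source (needle-probe) model at late times: dT = (q / (4*pi*k)) * ln(t) + c
q, k_true = 2.0, 2.2                        # W/m heating power, W/(m K) (ice-like)
t = np.linspace(1.0, 60.0, 200)             # s
dT = q / (4 * np.pi * k_true) * np.log(t) + 0.3   # synthetic temperature rise

slope, _ = np.polyfit(np.log(t), dT, 1)     # linear fit: slope = q / (4*pi*k)
k_est = q / (4 * np.pi * slope)             # recovered conductivity

alpha = 1.2e-6                              # m^2/s, assumed thermal diffusivity
volumetric_heat = k_est / alpha             # J/(m^3 K): specific heat per volume
```

The specific-heat uncertainty inherits the diffusivity uncertainty through this final ratio, as the abstract notes.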

  5. Testing approximations for non-linear gravitational clustering

    NASA Technical Reports Server (NTRS)

    Coles, Peter; Melott, Adrian L.; Shandarin, Sergei F.

    1993-01-01

    The accuracy of various analytic approximations for following the evolution of cosmological density fluctuations into the nonlinear regime is investigated. The Zel'dovich approximation is found to be consistently the best approximation scheme. It is extremely accurate for power spectra characterized by n = -1 or less; when the approximation is 'enhanced' by truncating highly nonlinear Fourier modes the approximation is excellent even for n = +1. The performance of linear theory is less spectrum-dependent, but this approximation is less accurate than the Zel'dovich one for all cases because of the failure to treat dynamics. The lognormal approximation generally provides a very poor fit to the spatial pattern.
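    The Zel'dovich approximation moves particles ballistically along their initial displacement field, x(q) = q + D(t)·ψ(q); a one-dimensional, single-mode sketch (amplitudes are illustrative):

```python
import numpy as np

# 1-D Zel'dovich sketch: x(q) = q + D * psi(q) with one sine displacement mode
q = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)   # Lagrangian coordinates
psi = 0.5 * np.sin(q)                                    # initial displacement
D = 1.0                                                  # linear growth factor
x = q + D * psi                                          # Eulerian positions

# density from the Jacobian of the mapping; |D * psi'| < 1 -> no shell crossing
density = 1.0 / (1.0 + D * 0.5 * np.cos(q))
```

The density contrast grows nonlinearly (here peaking at twice the mean) even though each particle's trajectory is linear in D, which is why the scheme outperforms pure linear theory.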

  6. The solution of the point kinetics equations via converged accelerated Taylor series (CATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.; Picca, P.; Previti, A.

    This paper deals with finding accurate solutions of the point kinetics equations including non-linear feedback, in a fast, efficient and straightforward way. A truncated Taylor series is coupled to continuous analytical continuation to provide the recurrence relations to solve the ordinary differential equations of point kinetics. Non-linear (Wynn-epsilon) and linear (Romberg) convergence accelerations are employed to provide highly accurate results for the evaluation of Taylor series expansions and extrapolated values of neutron and precursor densities at desired edits. The proposed Converged Accelerated Taylor Series, or CATS, algorithm automatically performs successive mesh refinements until the desired accuracy is obtained, making use of the intermediate results for converged initial values at each interval. Numerical performance is evaluated using case studies available from the literature. Nearly perfect agreement is found with the literature results generally considered most accurate. Benchmark quality results are reported for several cases of interest including step, ramp, zigzag and sinusoidal prescribed insertions and insertions with adiabatic Doppler feedback. A larger than usual (9) number of digits is included to encourage honest benchmarking. The benchmark is then applied to the enhanced piecewise constant algorithm (EPCA) currently being developed by the second author. (authors)

  7. A computationally efficient scheme for the non-linear diffusion equation

    NASA Astrophysics Data System (ADS)

    Termonia, P.; Van de Vyver, H.

    2009-04-01

    This Letter proposes a new numerical scheme for integrating the non-linear diffusion equation. It is shown to be linearly stable. Tests are presented comparing this scheme to a popular decentered version of the linearized Crank-Nicolson scheme, showing that, although the new scheme is slightly less accurate in treating the highly resolved waves, it (i) better treats highly non-linear systems, (ii) better handles the short waves, (iii) turns out to be three to four times computationally cheaper on a given test bed, and (iv) is easier to implement.

  8. 3D patient-specific models for left atrium characterization to support ablation in atrial fibrillation patients.

    PubMed

    Valinoti, Maddalena; Fabbri, Claudio; Turco, Dario; Mantovan, Roberto; Pasini, Antonio; Corsi, Cristiana

    2018-01-01

    Radiofrequency ablation (RFA) is an important and promising therapy for atrial fibrillation (AF) patients. Optimization of patient selection and the availability of an accurate anatomical guide could improve the RFA success rate. In this study we propose a unified, fully automated approach to build a 3D patient-specific left atrium (LA) model, including pulmonary veins (PVs) in order to provide an accurate anatomical guide during RFA, and without PVs in order to characterize LA volumetry and support patient selection for AF ablation. Magnetic resonance data from twenty-six patients referred for AF RFA were processed by applying an edge-based level set approach guided by a phase-based edge detector to obtain the 3D LA model with PVs. An automated technique based on the shape diameter function was designed and applied to remove the PVs and compute LA volume. The 3D LA models were qualitatively compared with 3D LA surfaces acquired during the ablation procedure. An expert radiologist manually traced the LA on MR images twice. LA surfaces from the automatic approach and manual tracing were compared by mean surface-to-surface distance. In addition, LA volumes were compared with volumes from manual segmentation by linear and Bland-Altman analyses. Qualitative comparison of the 3D LA models showed several inaccuracies; in particular, PV reconstruction was not accurate and the left atrial appendage was missing in the model obtained during the RFA procedure. LA surfaces were very similar (mean surface-to-surface distance: 2.3 ± 0.7 mm). LA volumes were in excellent agreement (y = 1.03x - 1.4, r = 0.99, bias = -1.37 ml (-1.43%), SD = 2.16 ml (2.3%), mean percentage difference = 1.3% ± 2.1%). Results showed that the proposed 3D patient-specific LA model with PVs describes LA anatomy better than models derived from the navigation system, thus potentially improving the localization of electrogram and voltage information and reducing fluoroscopic time during RFA. Quantitative assessment of LA volume derived from our 3D LA model without PVs is also accurate and may provide important information for patient selection for RFA. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures.

    PubMed

    Yehia, Ali M

    2013-05-15

    A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra was developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as in the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.

    2013-05-01

    A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra was developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as in the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines.
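    Methods like this ultimately rest on linear calibration curves over a validated range; a generic Beer-Lambert-style sketch (the absorbance values are synthetic, not the paper's data):

```python
import numpy as np

# Calibration standards across the ITO linearity range quoted above (5-60 ug/mL);
# absorbances are simulated with a hypothetical slope and intercept.
conc = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # ug/mL
absorb = 0.012 * conc + 0.004                                 # simulated readings

slope, intercept = np.polyfit(conc, absorb, 1)   # least-squares calibration line
unknown = (0.30 - intercept) / slope    # concentration of a sample with A = 0.30
```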

  11. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
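    The non-linear affinity-to-probability relationship the abstract points to can be seen in a biophysical (Fermi-Dirac-style) occupancy model, where binding probability saturates for high-affinity sites; the energies and chemical potential below are hypothetical.

```python
import math

def occupancy(energy, mu=0.0):
    """P(bound) = 1 / (1 + exp(E - mu)), with energies in kT units.
    A saturating, non-linear map from binding energy to binding probability."""
    return 1.0 / (1.0 + math.exp(energy - mu))

strong, stronger = -4.0, -8.0   # hypothetical binding energies
# both sites are near saturation, so a large affinity difference barely changes
# binding probability -- a product-of-position-probabilities model misses this
p1, p2 = occupancy(strong), occupancy(stronger)
```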

  12. Simultaneous Determination of Potassium Clavulanate and Amoxicillin Trihydrate in Bulk, Pharmaceutical Formulations and in Human Urine Samples by UV Spectrophotometry

    PubMed Central

    Gujral, Rajinder Singh; Haque, Sk Manirul

    2010-01-01

    A simple and sensitive UV spectrophotometric method was developed and validated for the simultaneous determination of Potassium Clavulanate (PC) and Amoxicillin Trihydrate (AT) in bulk, pharmaceutical formulations and in human urine samples. The method was linear in the range of 0.2–8.5 μg/ml for PC and 6.4–33.6 μg/ml for AT. The absorbance was measured at 205 and 271 nm for PC and AT respectively. The method was validated with respect to accuracy, precision, specificity, ruggedness, robustness, limit of detection and limit of quantitation. This method was used successfully for the quality assessment of four PC and AT drug products and in human urine samples with good precision and accuracy. This is found to be simple, specific, precise, accurate, reproducible and low cost UV Spectrophotometric method. PMID:23675211

  13. Novel spectrophotometric determination of flumethasone pivalate and clioquinol in their binary mixture and pharmaceutical formulation.

    PubMed

    Abdel-Aleem, Eglal A; Hegazy, Maha A; Sayed, Nour W; Abdelkawy, M; Abdelfatah, Rehab M

    2015-02-05

    This work is concerned with the development and validation of three simple, specific, accurate and precise spectrophotometric methods for determination of flumethasone pivalate (FP) and clioquinol (CL) in their binary mixture and ear drops. Method A is a ratio subtraction spectrophotometric method (RSM), method B is a ratio difference spectrophotometric method (RDSM), and method C is a mean centering of ratio spectra method (MCR). The calibration curves are linear over the concentration range of 3-45 μg/mL for FP, and 2-25 μg/mL for CL. The specificity of the developed methods was assessed by analyzing different laboratory-prepared mixtures of FP and CL. The three methods were validated as per ICH guidelines; accuracy, precision and repeatability were found to be within the acceptable limits. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Estimation of sex and stature using anthropometry of the upper extremity in an Australian population.

    PubMed

    Howley, Donna; Howley, Peter; Oxenham, Marc F

    2018-06-01

    Stature and a further 8 anthropometric dimensions were recorded from the arms and hands of a sample of 96 staff and students from the Australian National University and The University of Newcastle, Australia. These dimensions were used to create simple and multiple logistic regression models for sex estimation and simple and multiple linear regression equations for stature estimation of a contemporary Australian population. Overall sex classification accuracies using the models created were comparable to similar studies. The stature estimation models achieved standard errors of estimate (SEE) which were comparable to and in many cases lower than those achieved in similar research. Generic, non-sex-specific models achieved similar SEEs and R² values to the sex-specific models, indicating that stature may be accurately estimated when sex is unknown. Copyright © 2018 Elsevier B.V. All rights reserved.
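The accuracy measure quoted above, the standard error of estimate (SEE), is straightforward to reproduce. A minimal pure-Python sketch with invented hand-length/stature data (not the study's measurements):

```python
import math

def fit_simple_linear(x, y):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def standard_error_of_estimate(x, y, a, b):
    """SEE = sqrt(SSE / (n - 2)), the accuracy measure quoted for
    stature-estimation equations."""
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return math.sqrt(sse / (len(x) - 2))

# Hypothetical data: hand length (cm) vs stature (cm).
hand = [17.1, 18.0, 18.4, 19.2, 19.8, 20.5, 21.1, 21.9]
stature = [158.0, 162.5, 164.0, 168.5, 171.0, 175.5, 177.0, 182.0]

a, b = fit_simple_linear(hand, stature)
see = standard_error_of_estimate(hand, stature, a, b)
print(round(a, 2), round(b, 2), round(see, 2))
```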

  15. On real-space Density Functional Theory for non-orthogonal crystal systems: Kronecker product formulation of the kinetic energy operator

    NASA Astrophysics Data System (ADS)

    Sharma, Abhiraj; Suryanarayana, Phanish

    2018-05-01

    We present an accurate and efficient real-space Density Functional Theory (DFT) framework for the ab initio study of non-orthogonal crystal systems. Specifically, employing a local reformulation of the electrostatics, we develop a novel Kronecker product formulation of the real-space kinetic energy operator that significantly reduces the number of operations associated with the Laplacian-vector multiplication, the dominant cost in practical computations. In particular, we reduce the scaling with respect to finite-difference order from quadratic to linear, thereby significantly bridging the gap in computational cost between non-orthogonal and orthogonal systems. We verify the accuracy and efficiency of the proposed methodology through selected examples.
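The Kronecker-sum structure the abstract exploits can be demonstrated on a toy grid. An illustrative NumPy sketch (not the authors' implementation): applying the Laplacian via one small contraction per grid direction reproduces the full matrix-vector product.

```python
import numpy as np

# The finite-difference Laplacian on an n x n x n tensor-product grid is
# the Kronecker sum
#     L = D (x) I (x) I + I (x) D (x) I + I (x) I (x) D,
# so L @ v can be applied with three small dense products on the
# reshaped vector instead of one large matrix-vector product.
n = 6

def second_difference(n, h=1.0):
    """Standard second-order central-difference matrix."""
    D = -2.0 * np.eye(n)
    idx = np.arange(n - 1)
    D[idx, idx + 1] = 1.0
    D[idx + 1, idx] = 1.0
    return D / h ** 2

D = second_difference(n)
I = np.eye(n)
L = (np.kron(np.kron(D, I), I)
     + np.kron(np.kron(I, D), I)
     + np.kron(np.kron(I, I), D))

v = np.random.default_rng(0).standard_normal(n ** 3)
V = v.reshape(n, n, n)
# Kronecker-product application: one contraction per grid direction.
Lv_fast = (np.einsum('ia,ajk->ijk', D, V)
           + np.einsum('ja,iak->ijk', D, V)
           + np.einsum('ka,ija->ijk', D, V))
print(np.allclose(L @ v, Lv_fast.ravel()))
```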

  16. Nonlinear Modeling by Assembling Piecewise Linear Models

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
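A toy version of the idea, with invented sampling states and a scalar test function (sin) standing in for the flow solution: first-order Taylor local models are blended with normalized Gaussian radial basis function weights, so the nearest local model dominates the prediction.

```python
import math

# Sketch (not the authors' code): blend two local linear (first-order
# Taylor) models of a nonlinear function f(x) = sin(x) with normalized
# Gaussian radial basis function weights centered at the sampling states.
centers = [0.5, 2.0]

def local_linear(x, c):
    """First-order Taylor expansion of sin about the sampling state c."""
    return math.sin(c) + math.cos(c) * (x - c)

def blended(x, width=0.8):
    """RBF-weighted assembly of the piecewise linear local solutions."""
    w = [math.exp(-((x - c) / width) ** 2) for c in centers]
    s = sum(w)
    return sum(wi / s * local_linear(x, c) for wi, c in zip(w, centers))

# Near x = 0.7 the weight on the nearby sampling state (0.5) dominates,
# so the assembled model tracks sin(x) far better than the distant
# local model alone.
x = 0.7
print(blended(x), math.sin(x), local_linear(x, centers[1]))
```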

  17. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  18. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    PubMed

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
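For intuition, the classical barycentric coordinates that the method generalizes can be computed exactly in two dimensions; the paper replaces this exact solve with a linear program that scales to high-dimensional phase space and permits explicit approximation error. A minimal sketch:

```python
# Barycentric coordinates of a point with respect to a 2-D simplex
# (triangle), solved exactly via Cramer's rule.
def barycentric_2d(p, a, b, c):
    """Solve w_a*a + w_b*b + w_c*c = p with w_a + w_b + w_c = 1."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    wa = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    wb = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return wa, wb, 1.0 - wa - wb

tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
w = barycentric_2d((1.0, 1.0), *tri)
# Reconstruct the point as the weighted combination of the vertices.
x = sum(wi * v[0] for wi, v in zip(w, tri))
y = sum(wi * v[1] for wi, v in zip(w, tri))
print(w, (x, y))
```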

  19. Can Mathematical Models Predict the Outcomes of Prostate Cancer Patients Undergoing Intermittent Androgen Deprivation Therapy?

    NASA Astrophysics Data System (ADS)

    Everett, R. A.; Packer, A. M.; Kuang, Y.

    Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.

  20. Can Mathematical Models Predict the Outcomes of Prostate Cancer Patients Undergoing Intermittent Androgen Deprivation Therapy?

    NASA Astrophysics Data System (ADS)

    Everett, R. A.; Packer, A. M.; Kuang, Y.

    2014-04-01

    Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.

  1. A novel Alu-based real-time PCR method for the quantitative detection of plasma circulating cell-free DNA: Sensitivity and specificity for the diagnosis of myocardial infarction

    PubMed Central

    LOU, XIAOLI; HOU, YANQIANG; LIANG, DONGYU; PENG, LIANG; CHEN, HONGWEI; MA, SHANYUAN; ZHANG, LURONG

    2015-01-01

    In the present study, we aimed to develop and validate a rapid and sensitive, Alu-based real-time PCR method for the detection of circulating cell-free DNA (cfDNA). This method targeted the repetitive Alu elements in the human genome, followed by signal amplification using fluorescence quantification. Standard Alu-puc57 vectors were constructed and 5 pairs of specific primers were designed. Validation was conducted concerning linearity, variation and recovery. We found 5 linear responses (R1–5=0.998–0.999). The average intra- and inter-assay coefficients of variance were 12.98 and 10.75%, respectively. The recovery was 82.33–114.01%, with a mean recovery index of 101.26%. This Alu-based assay was reliable, accurate and sensitive for the quantitative detection of cfDNA. Plasma samples from normal controls and patients with myocardial infarction (MI) were analyzed, and the baseline levels of cfDNA were higher in the MI group. The area under the receiver operating characteristic (ROC) curve for Alu1, Alu2, Alu3, Alu4, Alu5 and Alu (Alu1 + Alu2 + Alu3 + Alu4 + Alu5) was 0.887, 0.758, 0.857, 0.940, 0.968 and 0.933, respectively. The optimal cut-off value for Alu1, Alu2, Alu3, Alu4, Alu5 and Alu to predict MI was 3.71, 1.93, 0.22, 3.73, 6.13 and 6.40 log copies/ml. We demonstrate that this new method is a reliable, accurate and sensitive method for the quantitative detection of cfDNA and that it is useful for studying the regulation of cfDNA in certain pathological conditions. Alu4, Alu5 and Alu showed better sensitivity and specificity for the diagnosis of MI compared with cardiac troponin I (cTnI), creatine kinase MB (CK-MB) isoenzyme and lactate dehydrogenase (LDH). Alu5 had the best prognostic ability. PMID:25374065
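The reported areas under the ROC curve follow from the standard Mann-Whitney computation. A short sketch with invented cfDNA levels (not the study's data):

```python
def roc_auc(case_scores, control_scores):
    """Mann-Whitney form of the area under the ROC curve: the fraction
    of (case, control) pairs ranked correctly, ties counting half."""
    wins = 0.0
    for s in case_scores:
        for t in control_scores:
            if s > t:
                wins += 1.0
            elif s == t:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical cfDNA levels (log copies/ml) for MI patients vs controls.
mi = [6.8, 7.1, 6.5, 7.4, 6.9]
controls = [5.9, 6.2, 6.5, 5.7, 6.0]
print(roc_auc(mi, controls))
```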

  2. Algorithms for Hyperspectral Endmember Extraction and Signature Classification with Morphological Dendritic Networks

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Ritter, G.

    Accurate multispectral or hyperspectral signature classification is key to the nonimaging detection and recognition of space objects. Additionally, signature classification accuracy depends on accurate spectral endmember determination [1]. Previous approaches to endmember computation and signature classification were based on linear operators or neural networks (NNs) expressed in terms of the algebra (R, +, ×) [1,2]. Unfortunately, class separation in these methods tends to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of NN inputs. This can lead to poor endmember distinction, as well as potentially significant classification errors in the presence of noise or densely interleaved signatures. In contrast to traditional CNNs, autoassociative morphological memories (AMM) are a construct similar to Hopfield autoassociative memories defined on the (R, +, ∨, ∧) lattice algebra [3]. Unlimited storage and perfect recall of noiseless real-valued patterns has been proven for AMMs [4]. However, AMMs suffer from sensitivity to specific noise models, which can be characterized as erosive and dilative noise. On the other hand, the prior definition of a set of endmembers corresponds to material spectra lying on vertices of the minimum convex region covering the image data. These vertices can be characterized as morphologically independent patterns. It has further been shown that AMMs can be based on dendritic computation [3,6]. These techniques yield improved accuracy and class segmentation/separation ability in the presence of highly interleaved signature data. In this paper, we present a procedure for endmember determination based on AMM noise sensitivity, which employs morphological dendritic computation.
We show that detected endmembers can be exploited by AMM based classification techniques, to achieve accurate signature classification in the presence of noise, closely spaced or interleaved signatures, and simulated camera optical distortions. In particular, we examine two critical cases: (1) classification of multiple closely spaced signatures that are difficult to separate using distance measures, and (2) classification of materials in simulated hyperspectral images of spaceborne satellites. In each case, test data are derived from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to classical NN based techniques.

  3. Reverse engineering and analysis of large genome-scale gene networks

    PubMed Central

    Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas

    2013-01-01

    Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large number of genes and gene expression datasets, more accurate models are compute intensive limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
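The central quantity TINGe computes, mutual information between expression profiles, can be illustrated with a plain histogram estimator on discretized data; the tool itself uses a B-spline formulation for linear-time MI plus direct permutation testing, which this toy sketch omits.

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Histogram estimate of MI (in nats) between two discrete
    sequences: sum over joint bins of p*log(p / (p_x * p_y))."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[a] * py[b]))
    return mi

# A profile shares log(2) nats with itself, and exactly 0 with a
# profile whose joint distribution factorizes.
x = [0, 0, 1, 1] * 25
print(mutual_information(x, x), mutual_information(x, [0, 1] * 50))
```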

  4. Linear Self-Referencing Techniques for Short-Optical-Pulse Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorrer, C.; Kang, I.

    2008-04-04

    Linear self-referencing techniques for the characterization of the electric field of short optical pulses are presented. The theoretical and practical advantages of these techniques are developed. Experimental implementations are described, and their performance is compared to the performance of their nonlinear counterparts. Linear techniques demonstrate unprecedented sensitivity and are a perfect fit in many domains where the precise, accurate measurement of the electric field of an optical pulse is required.

  5. Quantitative phase-filtered wavelength-modulated differential photoacoustic radar tumor hypoxia imaging toward early cancer detection.

    PubMed

    Dovlo, Edem; Lashkari, Bahman; Soo Sean Choi, Sung; Mandelis, Andreas; Shi, Wei; Liu, Fei-Fei

    2017-09-01

    Overcoming the limitations of conventional linear spectroscopy used in multispectral photoacoustic imaging, wherein a linear relationship is assumed between the absorbed optical energy and the absorption spectra of the chromophore at a specific location, is crucial for obtaining accurate spatially-resolved quantitative functional information by exploiting known chromophore-specific spectral characteristics. This study introduces a non-invasive phase-filtered differential photoacoustic technique, wavelength-modulated differential photoacoustic radar (WM-DPAR) imaging, that addresses this issue by eliminating the effect of the unknown wavelength-dependent fluence. It employs two laser wavelengths modulated out-of-phase to significantly suppress background absorption while amplifying the difference between the two photoacoustic signals. This facilitates pre-malignant tumor identification and hypoxia monitoring, as minute changes in total hemoglobin concentration and hemoglobin oxygenation are detectable. The system can be tuned for specific applications such as cancer screening and SO2 quantification by regulating the amplitude ratio and phase shift of the signal. The WM-DPAR imaging of a head and neck carcinoma tumor grown in the thigh of a nude rat demonstrates the functional PA imaging of small animals in vivo. The PA appearance of the tumor in relation to tumor vascularity is investigated by immunohistochemistry. Phase-filtered WM-DPAR imaging is also illustrated, maximizing quantitative SO2 imaging fidelity of tissues. Oxygenation levels within a tumor grown in the thigh of a nude rat were quantified using the two-wavelength phase-filtered differential PAR method. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Growth and yield in Eucalyptus globulus

    Treesearch

    James A. Rinehart; Richard B. Standiford

    1983-01-01

    A study of the major Eucalyptus globulus stands throughout California conducted by Woodbridge Metcalf in 1924 provides a complete and accurate data set for generating variable site-density yield models. Two models were developed using linear regression techniques. Model I depicts a linear relationship between age and yield best used for stands between five and fifteen...

  7. Identification and compensation of friction for a novel two-axis differential micro-feed system

    NASA Astrophysics Data System (ADS)

    Du, Fuxin; Zhang, Mingyang; Wang, Zhaoguo; Yu, Chen; Feng, Xianying; Li, Peigang

    2018-06-01

    Non-linear friction in a conventional drive feed system (CDFS) feeding at low speed is one of the main factors that lead to the complexity of the feed drive. The CDFS will inevitably enter or approach a non-linear creeping work area at extremely low speed. A novel two-axis differential micro-feed system (TDMS) is developed in this paper to overcome the accuracy limitation of CDFS. A dynamic model of TDMS is first established. Then, a novel all-component friction parameter identification method (ACFPIM) using a genetic algorithm (GA) to identify the friction parameters of a TDMS is introduced. The friction parameters of the ball screw and linear motion guides are identified independently using the method, assuring the accurate modelling of friction force at all components. A proportional-derivative feed drive position controller with an observer-based friction compensator is implemented to achieve an accurate trajectory tracking performance. Finally, comparative experiments demonstrate the effectiveness of the TDMS in inhibiting the disadvantageous influence of non-linear friction and the validity of the proposed identification method for TDMS.

  8. Quantifying circular RNA expression from RNA-seq data using model-based framework.

    PubMed

    Li, Musheng; Xie, Xueying; Zhou, Jing; Sheng, Mengying; Yin, Xiaofeng; Ko, Eun-A; Zhou, Tong; Gu, Wanjun

    2017-07-15

    Circular RNAs (circRNAs) are a class of non-coding RNAs that are widely expressed in various cell lines and tissues of many organisms. Although the exact function of many circRNAs is largely unknown, the cell type- and tissue-specific circRNA expression has implicated their crucial functions in many biological processes. Hence, quantifying circRNA expression from high-throughput RNA-seq data is becoming increasingly important. Although many model-based methods have been developed to quantify linear RNA expression from RNA-seq data, these methods are not applicable to circRNA quantification. Here, we proposed a novel strategy that transforms circular transcripts to pseudo-linear transcripts and estimates the expression values of both circular and linear transcripts using an existing model-based algorithm, Sailfish. The new strategy can accurately estimate transcript expression of both linear and circular transcripts from RNA-seq data. Several factors, such as gene length, amount of expression and the ratio of circular to linear transcripts, had impacts on quantification performance of circular transcripts. In comparison to count-based tools, the new computational framework had superior performance in estimating the amount of circRNA expression from both simulated and real ribosomal RNA-depleted (rRNA-depleted) RNA-seq datasets. On the other hand, the consideration of circular transcripts in expression quantification from rRNA-depleted RNA-seq data showed substantially increased accuracy of linear transcript expression. Our proposed strategy was implemented in a program named Sailfish-cir. Sailfish-cir is freely available at https://github.com/zerodel/Sailfish-cir . tongz@medicine.nevada.edu or wanjun.gu@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
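The circular-to-pseudo-linear transformation can be sketched in a few lines. The function name and details below are illustrative, not the Sailfish-cir API: the common trick is to append the first read_length - 1 bases of the circular sequence to its end, so that reads spanning the back-splice junction find a linear alignment target.

```python
# Illustrative sketch of circular -> pseudo-linear conversion.
def to_pseudo_linear(circ_seq, read_length):
    """Append the first (read_length - 1) bases so junction-spanning
    reads map onto a linear reference."""
    if read_length < 2:
        return circ_seq
    return circ_seq + circ_seq[: read_length - 1]

circ = "ACGTACGTGG"           # 10 nt circular transcript
pseudo = to_pseudo_linear(circ, read_length=4)
# A 4 nt read spanning the junction (last 2 + first 2 bases) now maps
# onto the pseudo-linear sequence even though it never occurs in the
# plain linear sequence.
junction_read = circ[-2:] + circ[:2]
print(pseudo, junction_read in pseudo, junction_read in circ)
```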

  9. A new modal superposition method for nonlinear vibration analysis of structures using hybrid mode shapes

    NASA Astrophysics Data System (ADS)

    Ferhatoglu, Erhan; Cigeroglu, Ender; Özgüven, H. Nevzat

    2018-07-01

    In this paper, a new modal superposition method based on a hybrid mode shape concept is developed for the determination of steady state vibration response of nonlinear structures. The method is developed specifically for systems having nonlinearities where the stiffness of the system may take different limiting values. Stiffness variation of these nonlinear systems enables one to define different linear systems corresponding to each value of the limiting equivalent stiffness. Moreover, the response of the nonlinear system is bounded by the confinement of these linear systems. In this study, a modal superposition method utilizing novel hybrid mode shapes which are defined as linear combinations of the modal vectors of the limiting linear systems is proposed to determine periodic response of nonlinear systems. In this method the response of the nonlinear system is written in terms of hybrid modes instead of the modes of the underlying linear system. This provides decrease of the number of modes that should be retained for an accurate solution, which in turn reduces the number of nonlinear equations to be solved. In this way, computational time for response calculation is directly curtailed. In the solution, the equations of motion are converted to a set of nonlinear algebraic equations by using describing function approach, and the numerical solution is obtained by using Newton's method with arc-length continuation. The method developed is applied on two different systems: a lumped parameter model and a finite element model. Several case studies are performed and the accuracy and computational efficiency of the proposed modal superposition method with hybrid mode shapes are compared with those of the classical modal superposition method which utilizes the mode shapes of the underlying linear system.

  10. Evaluation of Specific Absorption Rate as a Dosimetric Quantity for Electromagnetic Fields Bioeffects

    PubMed Central

    Panagopoulos, Dimitris J.; Johansson, Olle; Carlo, George L.

    2013-01-01

    Purpose To evaluate SAR as a dosimetric quantity for EMF bioeffects, and identify ways for increasing the precision in EMF dosimetry and bioactivity assessment. Methods We discuss the interaction of man-made electromagnetic waves with biological matter and calculate the energy transferred to a single free ion within a cell. We analyze the physics and biology of SAR and evaluate the methods of its estimation. We discuss the experimentally observed non-linearity between electromagnetic exposure and biological effect. Results We find that: a) The energy absorbed by living matter during exposure to environmentally accounted EMFs is normally well below the thermal level. b) All existing methods for SAR estimation, especially those based upon tissue conductivity and internal electric field, have serious deficiencies. c) The only method to estimate SAR without large error is by measuring temperature increases within biological tissue, which normally are negligible for environmental EMF intensities, and thus cannot be measured. Conclusions SAR actually refers to thermal effects, while the vast majority of the recorded biological effects from man-made non-ionizing environmental radiation are non-thermal. Even if SAR could be accurately estimated for a whole tissue, organ, or body, the biological/health effect is determined by tiny amounts of energy/power absorbed by specific biomolecules, which cannot be calculated. Moreover, it depends upon field parameters not taken into account in SAR calculation. Thus, SAR should not be used as the primary dosimetric quantity, but used only as a complementary measure, always reporting the estimating method and the corresponding error. 
Radiation/field intensity along with additional physical parameters (such as frequency, modulation etc) which can be directly and in any case more accurately measured on the surface of biological tissues, should constitute the primary measure for EMF exposures, in spite of similar uncertainty to predict the biological effect due to non-linearity. PMID:23750202

  11. Tailored liquid chromatography-mass spectrometry analysis improves the coverage of the intracellular metabolome of HepaRG cells.

    PubMed

    Cuykx, Matthias; Negreira, Noelia; Beirnaert, Charlie; Van den Eede, Nele; Rodrigues, Robim; Vanhaecke, Tamara; Laukens, Kris; Covaci, Adrian

    2017-03-03

    Metabolomics protocols are often combined with Liquid Chromatography-Mass Spectrometry (LC-MS) using mostly reversed phase chromatography coupled to accurate mass spectrometry, e.g. quadrupole time-of-flight (QTOF) mass spectrometers to measure as many metabolites as possible. In this study, we optimised the LC-MS separation of cell extracts after fractionation in polar and non-polar fractions. Both phases were analysed separately in a tailored approach in four different runs (two for the non-polar and two for the polar-fraction), each of them specifically adapted to improve the separation of the metabolites present in the extract. This approach improves the coverage of a broad range of the metabolome of the HepaRG cells and the separation of intra-class metabolites. The non-polar fraction was analysed using a C18-column with end-capping, mobile phase compositions were specifically adapted for each ionisation mode using different co-solvents and buffers. The polar extracts were analysed with a mixed mode Hydrophilic Interaction Liquid Chromatography (HILIC) system. Acidic metabolites from glycolysis and the Krebs cycle, together with phosphorylated compounds, were best detected with a method using ion pairing (IP) with tributylamine and separation on a phenyl-hexyl column. Accurate mass detection was performed with the QTOF in MS-mode only using an extended dynamic range to improve the quality of the dataset. Parameters with the greatest impact on the detection were the balance between mass accuracy and linear range, the fragmentor voltage, the capillary voltage, the nozzle voltage, and the nebuliser pressure. By using a tailored approach for the intracellular HepaRG metabolome, consisting of three different LC techniques, over 2200 metabolites can be measured with a high precision and acceptable linear range. The developed method is suited for qualitative untargeted LC-MS metabolomics studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Accurate ocean bottom seismometer positioning method inspired by multilateration technique

    USGS Publications Warehouse

    Benazzouz, Omar; Pinheiro, Luis M.; Matias, Luis M. A.; Afilhado, Alexandra; Herold, Daniel; Haines, Seth S.

    2018-01-01

    The positioning of ocean bottom seismometers (OBS) is a key step in the processing flow of OBS data, especially in the case of self pop-up types of OBS instruments. The use of first arrivals from airgun shots, rather than relying on the acoustic transponders mounted in the OBS, is becoming a trend and generally leads to more accurate positioning due to the statistics from a large number of shots. In this paper, a linearization of the OBS positioning problem via the multilateration technique is discussed. The discussed linear solution solves jointly for the average water layer velocity and the OBS position using only shot locations and first arrival times as input data.
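The joint linear solve for position and water velocity can be reconstructed from the abstract. The sketch below is our own linearization, not the authors' code, and assumes surface shots at z = 0: expanding |s_i - p|² = v²t_i² gives equations linear in the horizontal position, q = |p|² and u = v², after which depth follows from q.

```python
import math
import numpy as np

# Linearized multilateration: with shots s_i = (x_i, y_i, 0) and
# first-arrival times t_i, |s_i - p|^2 = v^2 * t_i^2 expands to
#     -2*x_i*px - 2*y_i*py + q - t_i**2 * u = -(x_i**2 + y_i**2),
# which is linear in (px, py, q, u) with q = |p|^2 and u = v^2.
def locate_obs(shots, times):
    A = np.array([[-2.0 * x, -2.0 * y, 1.0, -t ** 2]
                  for (x, y), t in zip(shots, times)])
    b = np.array([-(x ** 2 + y ** 2) for x, y in shots])
    px, py, q, u = np.linalg.lstsq(A, b, rcond=None)[0]
    pz = -math.sqrt(q - px ** 2 - py ** 2)  # OBS sits below the surface
    return (px, py, pz), math.sqrt(u)

# Synthetic check: OBS at (300, -200, -1500) m, water velocity 1500 m/s.
true_p, true_v = (300.0, -200.0, -1500.0), 1500.0
shots = [(x, y) for x in (-2000.0, 0.0, 2000.0)
         for y in (-2000.0, 0.0, 2000.0)]
times = [math.dist((x, y, 0.0), true_p) / true_v for x, y in shots]
est_p, est_v = locate_obs(shots, times)
print(est_p, est_v)
```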

  13. HCMM hydrological analysis in Utah

    NASA Technical Reports Server (NTRS)

    Miller, A. W. (Principal Investigator)

    1982-01-01

    The feasibility of applying a linear model to HCMM data in hopes of obtaining an accurate linear correlation was investigated. The relationship among HCMM-sensed surface temperature and red reflectivity data on Utah Lake and water quality factors, including algae concentrations, algae type, and nutrient and turbidity concentrations, was established and evaluated. Correlation (composite) images of day infrared and reflectance imagery were assessed to determine if remote sensing offers the capability of using masses of accurate and comprehensive data in calculating evaporation. The effects of algae on temperature and evaporation were studied and the possibility of using satellite thermal data to locate areas within Utah Lake where significant thermal sources exist and areas of near surface groundwater was examined.

  14. Uncertainties in the estimation of specific absorption rate during radiofrequency alternating magnetic field induced non-adiabatic heating of ferrofluids

    NASA Astrophysics Data System (ADS)

    Lahiri, B. B.; Ranoo, Surojit; Philip, John

    2017-11-01

    Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of the experimental studies, SAR is evaluated from temperature rise curves obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intensive slope-corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and the slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and the coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shield. The delayed heating is found to contribute up to ~25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of the linear regression analysis is necessary to reproduce the results. 
The effect of sample volume to area ratio on the linear heat loss rate is systematically studied and the results are compared using a lumped system thermal model. The various uncertainties involved in SAR estimation are categorized as material uncertainties, thermodynamic uncertainties and parametric uncertainties. The adiabatic reconstruction is found to decrease the uncertainties in SAR measurement by approximately a factor of three. Additionally, a set of experimental guidelines for accurate SAR estimation using the adiabatic reconstruction protocol is recommended. These results warrant a universal experimental and data analysis protocol for SAR measurements during field induced heating of magnetic fluids under non-adiabatic conditions.
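The sensitivity to the initial-slope window can be made concrete with a toy non-adiabatic heating curve: SAR follows from the fitted slope as SAR = (c_p * m_sample / m_magnetic) * dT/dt, with the slope taken by linear regression over an explicitly stated early-time window. All numbers below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np

# Synthetic non-adiabatic heating curve (hypothetical values):
# T(t) = T0 + dT*(1 - exp(-t/tau)); the true initial slope is dT/tau.
T0, dT, tau = 25.0, 20.0, 120.0          # deg C, deg C, s
t = np.linspace(0.0, 10.0, 51)           # stated fit window: first 10 s
T = T0 + dT * (1.0 - np.exp(-t / tau))

# Initial slope from linear regression over the stated window only.
slope, intercept = np.polyfit(t, T, 1)   # slope in K/s

# SAR = (heat capacity of sample / magnetic mass) * initial slope.
c_p = 4186.0         # J/(kg K), water-like carrier fluid (assumption)
m_sample = 1.0e-3    # kg of ferrofluid
m_magnetic = 5.0e-6  # kg of magnetic material in the sample
sar = (c_p * m_sample / m_magnetic) * slope   # W/kg
print(slope, sar)
```

Even on this clean synthetic curve the fitted slope underestimates dT/tau by a few percent because the curve already flattens within the window, which is exactly why the abstract asks authors to state the regression range.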

  15. Accurate electrostatic and van der Waals pull-in prediction for fully clamped nano/micro-beams using linear universal graphs of pull-in instability

    NASA Astrophysics Data System (ADS)

    Tahani, Masoud; Askari, Amir R.

    2014-09-01

    In spite of the fact that pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has been presented yet which can predict pull-in voltage based on a geometrically non-linear and distributed parameter model. The objective of the present paper is to introduce a simple and accurate formula to predict this value for a fully clamped electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, geometric non-linearity of mid-plane stretching, distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through a single-term Galerkin based reduced order model (ROM). The solutions are validated through direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads is also investigated using universal graphs. Based on the results of these graphs, non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly versus the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict pull-in voltage, the maximum allowable length, the so-called detachment length, and the minimum allowable gap for a nano/micro-system. These linear equations are also reduced to a couple of universal pull-in formulas for systems with a small initial gap. The accuracy of the universal pull-in formulas is also validated by comparing their results with available experimental and some previous geometrically linear and closed-form findings published in the literature.

  16. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE PAGES

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.; ...

    2017-07-12

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
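The surrogate idea can be illustrated with a toy stand-in: a tiny NN fitted by gradient descent to a cheap nonlinear function playing the role of the expensive theory-based model (TGLF/EPED1 are not reproduced here; everything below is a hypothetical sketch in plain NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "theory-based model" standing in for an expensive physics code.
def expensive_model(x):
    return np.sin(3.0 * x) + 0.5 * x

X = rng.uniform(-1, 1, (256, 1))
Y = expensive_model(X)

# One-hidden-layer MLP surrogate trained by plain full-batch gradient descent.
H = 32
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - Y)**2)          # error before training
for _ in range(2000):
    h, pred = forward(X)
    g = 2.0 * (pred - Y) / len(X)        # dL/dpred for mean squared error
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1.0 - h**2)         # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = np.mean((pred - Y)**2)            # error after training
print(loss0, loss)
```

Once trained, evaluating the surrogate is a couple of matrix products, which is the source of the orders-of-magnitude speedup the abstract reports for the real NN models.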

  17. Estimation of L-dopa from Mucuna pruriens LINN and formulations containing M. pruriens by HPTLC method.

    PubMed

    Modi, Ketan Pravinbhai; Patel, Natvarlal Manilal; Goyal, Ramesh Kishorilal

    2008-03-01

    A selective, precise, and accurate high-performance thin-layer chromatographic (HPTLC) method has been developed for the analysis of L-dopa in Mucuna pruriens seed extract and its formulations. The method involves densitometric evaluation of L-dopa after resolving it by HPTLC on silica gel plates with n-butanol-acetic acid-water (4.0+1.0+1.0, v/v) as the mobile phase. Densitometric analysis of L-dopa was carried out in the absorbance mode at 280 nm. The relationship between the concentration of L-dopa and the corresponding peak areas was found to be linear in the range of 100 to 1200 ng/spot. The method was validated for precision (inter- and intraday), repeatability, and accuracy. Mean recovery was 100.30%. The relative standard deviation (RSD) values of the precision were found to be in the range 0.64-1.52%. In conclusion, the proposed TLC method was found to be precise, specific and accurate, and can be used for the identification and quantitative determination of L-dopa in herbal extract and its formulations.

  18. Continuous movement decoding using a target-dependent model with EMG inputs.

    PubMed

    Sachs, Nicholas A; Corbett, Elaine A; Miller, Lee E; Perreault, Eric J

    2011-01-01

    Trajectory-based models that incorporate target position information have been shown to accurately decode reaching movements from bio-control signals, such as muscle (EMG) and cortical activity (neural spikes). One major hurdle in implementing such models for neuroprosthetic control is that they are inherently designed to decode single reaches from a position of origin to a specific target. Gaze direction can be used to identify appropriate targets; however, information regarding movement intent is needed to determine when a reach is meant to begin and when it has been completed. We used linear discriminant analysis to classify limb states into movement classes based on recorded EMG from a sparse set of shoulder muscles. We then used the detected state transitions to update target information in a mixture of Kalman filters that incorporated target position explicitly in the state, and used EMG activity to decode arm movements. Updating the target position initiated movement along new trajectories, allowing a sequence of appropriately timed single reaches to be decoded in series and enabling highly accurate continuous control.

  19. Phone camera detection of glucose blood level based on magnetic particles entrapped inside bubble wrap.

    PubMed

    Martinkova, Pavla; Pohanka, Miroslav

    2016-12-18

    Glucose is an important diagnostic biochemical marker of diabetes but also of organophosphate, carbamate, acetaminophen or salicylate poisoning. Hence, the development of accurate and fast detection assays remains a priority in biomedical research. A glucose sensor based on magnetic particles (MPs) with the immobilized enzymes glucose oxidase (GOx) and horseradish peroxidase (HRP) was developed, and the GOx catalyzed reaction was visualized by a smart-phone-integrated camera. An exponential decay concentration curve with correlation coefficient 0.997 and a limit of detection of 0.4 mmol/l was achieved. Interfering and matrix substances were tested for possible influence on the assay, and no effect of the tested substances was observed. Spiked plasma samples were also measured and no influence of the plasma matrix on the assay was found. The presented assay showed results complying with the reference method (standard spectrophotometry based on the enzymes glucose oxidase and peroxidase inside plastic cuvettes), with linear dependence and correlation coefficient 0.999 in the concentration range between 0 and 4 mmol/l. Based on the measured results, the method was considered a highly specific, accurate and fast assay for the detection of glucose.
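An exponential decay calibration curve like the one reported can be fitted by log-linearisation and then inverted to read concentrations off measured signals. A sketch with invented numbers (only the y = a*exp(-b*c) shape is taken from the abstract):

```python
import numpy as np

# Hypothetical calibration: colour intensity decays exponentially with
# glucose concentration, y = a*exp(-b*c). All numbers are invented.
c = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # mmol/l standards
a_true, b_true = 200.0, 0.25
y = a_true * np.exp(-b_true * c)                 # noise-free signals

# Log-linearise: ln(y) = ln(a) - b*c, then ordinary least squares.
slope, intercept = np.polyfit(c, np.log(y), 1)
b_est, a_est = -slope, np.exp(intercept)

# Invert the fitted curve to read a concentration off a measured signal.
def concentration(signal):
    return -np.log(signal / a_est) / b_est

c_unknown = concentration(a_true * np.exp(-b_true * 3.3))  # "unknown" sample
print(a_est, b_est, c_unknown)
```

With real, noisy intensities a weighted fit (or a direct nonlinear fit) would be preferable, since log-linearisation distorts the error structure.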

  20. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  1. Accurate and agile digital control of optical phase, amplitude and frequency for coherent manipulation of atomic systems.

    PubMed

    Thom, Joseph; Wilpers, Guido; Riis, Erling; Sinclair, Alastair G

    2013-08-12

    We demonstrate a system for fast and agile digital control of laser phase, amplitude and frequency for applications in coherent atomic systems. The full versatility of a direct digital synthesis radiofrequency source is faithfully transferred to laser radiation via acousto-optic modulation. Optical beatnotes are used to measure phase steps up to 2π, which are accurately implemented with a resolution of ≤ 10 mrad. By linearizing the optical modulation process, amplitude-shaped pulses of durations ranging from 500 ns to 500 ms, in excellent agreement with the programmed functional form, are demonstrated. Pulse durations are limited only by the 30 ns rise time of the modulation process, and a measured extinction ratio of > 5 × 10^11 is achieved. The system presented here was developed specifically for controlling the quantum state of trapped ions with sequences of multiple laser pulses, including composite and bichromatic pulses. The demonstrated techniques are widely applicable to other atomic systems ranging across quantum information processing, frequency metrology, atom interferometry, and single-photon generation.

  2. Self-consistent core-pedestal transport simulations with neural network accelerated models

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.

    2017-08-01

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  3. Therapeutic Drug Monitoring of Phenytoin by Simple, Rapid, Accurate, Highly Sensitive and Novel Method and Its Clinical Applications.

    PubMed

    Shaikh, Abdul S; Guo, Ruichen

    2017-01-01

    Phenytoin has very challenging pharmacokinetic properties. To prevent its toxicity and ensure efficacy, continuous therapeutic monitoring is required. It is hard to find a simple, accurate, rapid, easily available, economical and highly sensitive assay in one method for therapeutic monitoring of phenytoin. The present study is directed towards establishing and validating a simple, rapid, accurate, highly sensitive, novel and environment-friendly liquid chromatography/mass spectrometry (LC/MS) method for offering rapid and reliable TDM results of phenytoin in epileptic patients to physicians and clinicians for making immediate and rational decisions. 27 epileptic patients with uncontrolled seizures or suspected of non-compliance or toxicity of phenytoin were selected and advised for TDM of phenytoin by neurologists of Qilu Hospital Jinan, China. The LC/MS assay was used for performing therapeutic monitoring of phenytoin. The Agilent 1100 LC/MS system was used for TDM. The mobile phase was a mixture of 5 mM ammonium acetate and methanol (35:65, v/v). A Diamonsil C18 (150 mm × 4.6 mm, 5 μm) column was used for the separation of analytes in plasma. The samples were prepared with a simple one-step protein precipitation method. The technique was validated according to the guidelines of the International Conference on Harmonisation (ICH). The calibration curve demonstrated good linearity within the 0.2-20 µg/mL concentration range, with the regression equation y = 0.0667855x + 0.00241785 and correlation coefficient (R²) of 0.99928. The specificity, recovery, linearity, accuracy, precision and stability results were within the accepted limits. The concentration of 0.2 µg/mL was observed as the lower limit of quantitation (LLOQ), which is 12.5 times lower than that of the currently available enzyme-multiplied immunoassay technique (EMIT) for the measurement of phenytoin in epilepsy patients. 
A rapid, simple, economical, precise, highly sensitive and novel LC/MS assay has been established, validated and applied successfully in the TDM of 27 epileptic patients. Alarmingly, the TDM results of all these patients except two were outside the safe range. However, this needs further evaluation. Besides TDM, the stated method can also be applied in bioequivalence, pharmacokinetic, toxicokinetic and pharmacovigilance studies. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
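The reported calibration line can be applied directly to back-calculate concentrations from measured responses; a short sketch using the slope and intercept from the abstract (the range check against the validated 0.2-20 µg/mL interval is our addition):

```python
# Calibration line reported in the abstract: y = 0.0667855*x + 0.00241785,
# where x is the phenytoin concentration (µg/mL) and y the detector response.
SLOPE, INTERCEPT = 0.0667855, 0.00241785
LLOQ, ULOQ = 0.2, 20.0  # validated range, µg/mL

def concentration_from_response(y):
    """Back-calculate a concentration and flag whether it is in range."""
    x = (y - INTERCEPT) / SLOPE
    return x, LLOQ <= x <= ULOQ

# Round trip: simulate the response of a hypothetical 10 µg/mL sample.
y10 = SLOPE * 10.0 + INTERCEPT
x, ok = concentration_from_response(y10)
print(x, ok)
```

Responses mapping below the LLOQ are flagged as out of range rather than reported, which mirrors standard bioanalytical practice.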

  4. Accurate electronic and chemical properties of 3d transition metal oxides using a calculated linear response U and a DFT + U(V) method.

    PubMed

    Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R

    2015-04-14

    We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find calculated values of U lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reaction energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean average error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.

  5. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multi-components in traditional Chinese medicines requires many reference substances to identify the chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of those reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns due to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative and simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated with two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories with a lower cost of reference substances.
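The two-point prediction step can be sketched simply: assuming retention times on a given column are linearly related to those on a standard column, two reference substances fix the line, and the tR of any other compound follows. All retention times below are invented for illustration:

```python
# LCTRS sketch (hypothetical numbers): retention times on the column in use
# are assumed linearly related to those tabulated on a standard column, so
# two reference substances determine the calibration line.
t_std = {"ref_A": 5.2, "ref_B": 14.8, "analyte": 9.1}  # standard column (min)
t_meas = {"ref_A": 4.6, "ref_B": 13.1}                 # this column (min)

# Line through the two reference points.
slope = (t_meas["ref_B"] - t_meas["ref_A"]) / (t_std["ref_B"] - t_std["ref_A"])
intercept = t_meas["ref_A"] - slope * t_std["ref_A"]

def predict_tr(t_standard):
    """Predict retention time on this column from the standard-column value."""
    return slope * t_standard + intercept

t_pred = predict_tr(t_std["analyte"])
print(round(t_pred, 3))
```

The validation step in the abstract then checks this line against additional compounds by multiple-point regression before it is trusted for peak identification.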

  6. The Columbia Thyroid Eye Disease-Compressive Optic Neuropathy Formula.

    PubMed

    Callahan, Alison B; Campbell, Ashley A; Oropesa, Susel; Baraban, Aryeh; Kazim, Michael

    2018-06-13

    Diagnosing thyroid eye disease-compressive optic neuropathy (TED-CON) is challenging, particularly in cases lacking a relative afferent pupillary defect. Large case series of TED-CON patients and accessible diagnostic tools are lacking in the current literature. This study aims to create a mathematical formula that accurately predicts the presence or absence of CON based on the most salient clinical measures of optic neuropathy. A retrospective case series compares 108 patients (216 orbits) with either unilateral or bilateral TED-CON and 41 age-matched patients (82 orbits) with noncompressive TED. Utilizing clinical variables assessing optic nerve function and/or risk of compressive disease, and with the aid of generalized linear regression modeling, the authors create a mathematical formula that weighs the relative contribution of each clinical variable in the overall prediction of CON. Data from 213 orbits in 110 patients derived the formula: y = -0.69 + 2.58 × (afferent pupillary defect) - 0.31 × (summed limitation of ductions) - 0.2 × (mean deviation on Humphrey visual field testing) - 0.02 × (% color plates). This accurately predicted the presence of CON (y > 0) versus non-CON (y < 0) in 82% of cases with 83% sensitivity and 81% specificity. When there was no relative afferent pupillary defect, which was the case in 63% of CON orbits, the formula correctly predicted CON in 78% of orbits with 73% sensitivity and 83% specificity. The authors developed a mathematical formula, the Columbia TED-CON Formula (CTD Formula), that can help guide clinicians in accurately diagnosing TED-CON, particularly in the presence of bilateral disease and when no relative afferent pupillary defect is present.
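The published formula is simple enough to apply directly; a sketch with hypothetical orbit measurements (the variable encodings are our reading of the abstract, with APD coded 0/1):

```python
def ted_con_score(apd, summed_duction_limit, hvf_mean_deviation, pct_color_plates):
    """Columbia TED-CON formula as reported in the abstract; y > 0 predicts CON.

    apd: 1 if a relative afferent pupillary defect is present, else 0.
    summed_duction_limit: summed limitation of ductions.
    hvf_mean_deviation: mean deviation on Humphrey visual field testing (dB).
    pct_color_plates: percentage of colour plates identified (0-100).
    """
    return (-0.69 + 2.58 * apd
            - 0.31 * summed_duction_limit
            - 0.2 * hvf_mean_deviation
            - 0.02 * pct_color_plates)

# Hypothetical orbits for illustration only:
y_con = ted_con_score(apd=1, summed_duction_limit=0,
                      hvf_mean_deviation=-5.0, pct_color_plates=100)
y_normal = ted_con_score(apd=0, summed_duction_limit=0,
                         hvf_mean_deviation=0.0, pct_color_plates=100)
print(y_con > 0, y_normal > 0)
```

Note that a depressed visual field (negative mean deviation) and missed colour plates both push the score towards the CON side, consistent with the signs in the formula.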

  7. The Role of Graphic Elements in the Accurate Portrayal of Instructional Design.

    ERIC Educational Resources Information Center

    Branch, Robert C.; Bloom, Janet R.

    This study explores the interpretation of two types of flow diagrams composed of different visual elements intended to communicate the same meaning. Using linear and cyclical diagrams, the study focused on whether, given a series of diagrams using linear elements and a series using cyclical elements, both types of visuals convey the same message…

  8. Predicting birth weight with conditionally linear transformation models.

    PubMed

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
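The gap between point prediction and interval prediction can be illustrated with a crude baseline: ordinary least squares for the centre plus an interval from unconditional residual quantiles. This is deliberately simpler than a CLTM, which models the full conditional distribution and so adapts the interval width to each fetus; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the BW problem: one ultrasound-like predictor and
# heteroscedastic noise (spread grows with x), which a single point
# prediction cannot convey.
n = 2000
x = rng.uniform(0, 1, n)
y = 3000.0 + 800.0 * x + rng.normal(0, 100.0 + 300.0 * x)

# Point prediction: ordinary least squares.
b, a = np.polyfit(x, y, 1)            # slope, intercept
resid = y - (a + b * x)

# Crude 90% prediction interval from unconditional residual quantiles
# (a CLTM would instead let the interval width depend on x).
lo, hi = np.quantile(resid, [0.05, 0.95])

def predict_interval(x_new):
    centre = a + b * x_new
    return centre + lo, centre + hi

covered = np.mean((y >= a + b * x + lo) & (y <= a + b * x + hi))
print(predict_interval(0.5), covered)
```

The fixed-width interval attains the right overall coverage but is too wide for low-x cases and too narrow for high-x ones, which is precisely the deficiency conditional distribution models address.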

  9. Thermal Property Parameter Estimation of TPS Materials

    NASA Technical Reports Server (NTRS)

    Maddren, Jesse

    1998-01-01

    Accurate knowledge of the thermophysical properties of TPS (thermal protection system) materials is necessary for pre-flight design and post-flight data analysis. Thermal properties, such as thermal conductivity and the volumetric specific heat, can be estimated from transient temperature measurements using non-linear parameter estimation methods. Property values are derived by minimizing a functional of the differences between measured and calculated temperatures. High temperature thermal response testing of TPS materials is usually done in arc-jet or radiant heating facilities which provide a quasi one-dimensional heating environment. Last year, under the NASA-ASEE-Stanford Fellowship Program, my work focused on developing a radiant heating apparatus. This year, I have worked on increasing the fidelity of the experimental measurements, optimizing the experimental procedures and interpreting the data.

  10. ΛCDM Cosmology for Astronomers

    NASA Astrophysics Data System (ADS)

    Condon, J. J.; Matthews, A. M.

    2018-07-01

    The homogeneous, isotropic, and flat ΛCDM universe favored by observations of the cosmic microwave background can be described using only Euclidean geometry, locally correct Newtonian mechanics, and the basic postulates of special and general relativity. We present simple derivations of the most useful equations connecting astronomical observables (redshift, flux density, angular diameter, brightness, local space density, ...) with the corresponding intrinsic properties of distant sources (lookback time, distance, spectral luminosity, linear size, specific intensity, source counts, ...). We also present an analytic equation for lookback time that is accurate within 0.1% for all redshifts z. The exact equation for comoving distance is an elliptic integral that must be evaluated numerically, but we found a simple approximation with errors <0.2% for all redshifts up to z ≈ 50.
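The one numerical step the abstract mentions, the comoving distance, is a short quadrature: D_C(z) = (c/H0) * Integral_0^z dz'/E(z') with E(z) = sqrt(Om*(1+z)^3 + OL) for a flat universe. A sketch with illustrative parameter values (H0 = 70, Om = 0.3 are assumptions, not taken from the paper):

```python
import numpy as np

# Flat LambdaCDM comoving distance by trapezoidal quadrature.
c_km_s = 299792.458      # speed of light, km/s
H0, Om = 70.0, 0.3       # km/s/Mpc and matter density (assumed values)
OL = 1.0 - Om            # flatness fixes the dark-energy density

def comoving_distance(z, n=20000):
    """D_C(z) in Mpc: (c/H0) * integral of 1/E(z') from 0 to z."""
    zp = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(Om * (1.0 + zp)**3 + OL)
    return (c_km_s / H0) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zp))

d1 = comoving_distance(1.0)
print(d1)
```

At small z the result approaches the Hubble-law value c*z/H0, a useful sanity check on the quadrature.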

  11. Harmonic wavelet packet transform for on-line system health diagnosis

    NASA Astrophysics Data System (ADS)

    Yan, Ruqiang; Gao, Robert X.

    2004-07-01

    This paper presents a new approach to on-line health diagnosis of mechanical systems, based on the wavelet packet transform. Specifically, signals acquired from vibration sensors are decomposed into sub-bands by means of the discrete harmonic wavelet packet transform (DHWPT). Based on the Fisher linear discriminant criterion, features in the selected sub-bands are then used as inputs to three classifiers (Nearest Neighbor rule-based and two Neural Network-based) for system health condition assessment. Experimental results have confirmed that, compared to the conventional approach in which statistical parameters from raw signals are used, the presented approach enabled a higher signal-to-noise ratio for more effective and intelligent use of the sensory information, thus leading to more accurate system health diagnosis.
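The Fisher linear discriminant criterion used here for sub-band selection scores each candidate feature by between-class separation over within-class scatter, (mu1 - mu2)^2 / (var1 + var2). A toy sketch on synthetic features (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two machine conditions, three sub-band features; only feature 0 actually
# separates the classes (class means differ in that dimension alone).
healthy = rng.normal([0.0, 1.0, -0.5], 1.0, (200, 3))
faulty = rng.normal([3.0, 1.0, -0.5], 1.0, (200, 3))

def fisher_scores(a, b):
    """Fisher criterion per feature: (mu_a - mu_b)^2 / (var_a + var_b)."""
    return (a.mean(0) - b.mean(0))**2 / (a.var(0) + b.var(0))

scores = fisher_scores(healthy, faulty)
best = int(np.argmax(scores))
print(scores, best)
```

Ranking sub-bands by this score keeps only the discriminative ones as classifier inputs, which is how the approach raises the effective signal-to-noise ratio over raw-signal statistics.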

  12. Solvers for the Cardiac Bidomain Equations

    PubMed Central

    Vigmond, E.J.; Weber dos Santos, R.; Prassl, A.J.; Deo, M.; Plank, G.

    2010-01-01

    The bidomain equations are widely used for the simulation of electrical activity in cardiac tissue. They are especially important for accurately modelling extracellular stimulation, as evidenced by their prediction of virtual electrode polarization before experimental verification. However, solution of the equations is computationally expensive due to the fine spatial and temporal discretization needed. This limits the size and duration of the problem which can be modeled. Regardless of the specific form into which they are cast, the computational bottleneck becomes the repeated solution of a large, linear system. The purpose of this review is to give an overview of the equations, and the methods by which they have been solved. Of particular note are recent developments in multigrid methods, which have proven to be the most efficient. PMID:17900668

  13. Validation of a CD1b tetramer assay for studies of human mycobacterial infection or vaccination.

    PubMed

    Layton, Erik D; Yu, Krystle K Q; Smith, Malisa T; Scriba, Thomas J; De Rosa, Stephen C; Seshadri, Chetan

    2018-07-01

    CD1 tetramers loaded with lipid antigens facilitate the identification of rare lipid-antigen specific T cells present in human blood and tissue. Because CD1 proteins are structurally non-polymorphic, these tetramers can be applied to genetically diverse human populations, unlike MHC-I and MHC-II tetramers. However, there are no standardized assays to quantify and characterize lipid antigen-specific T cells present within clinical samples. We incorporated CD1b tetramers loaded with the mycobacterial lipid glucose monomycolate (GMM) into a multi-parameter flow cytometry assay. Using a GMM-specific T-cell line, we demonstrate that the assay is linear, reproducible, repeatable, precise, accurate, and has a limit of detection of approximately 0.007%. Having formally validated this assay, we performed a cross-sectional study of healthy U.S. controls and South African adolescents with and without latent tuberculosis infection (LTBI). We show that GMM-specific T cells are specifically detected in South African subjects with LTBI and not in U.S. healthy controls. This assay can be expanded to include additional tetramers or phenotypic markers to characterize GMM-specific T cells in studies of mycobacterial infection, disease, or vaccination. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. WE-FG-207B-02: Material Reconstruction for Spectral Computed Tomography with Detector Response Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, J; Gao, H

    2016-06-15

    Purpose: Different from the conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide the unprecedented material composition. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge-sharing. In this work, we propose material reconstruction methods for spectral CT with DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. Then the image reconstruction problem is regularized with the isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with DRF, which provided more accurate material compositions than the standard methods without DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
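The invertible-DRF idea behind the linear-data-fidelity variant can be shown in miniature: if rebinning makes the response matrix square and well conditioned, the spectral mixing can be undone with a linear solve. The 4-bin response below is invented for illustration:

```python
import numpy as np

# Idealised energy-resolved detector: recorded counts are the true spectrum
# mixed through a detector response function (DRF) matrix R. After spectral
# rebinning R is square and invertible (hypothetical 4-bin example).
R = np.array([[0.80, 0.15, 0.03, 0.02],
              [0.15, 0.70, 0.10, 0.05],
              [0.04, 0.12, 0.74, 0.10],
              [0.01, 0.03, 0.13, 0.83]])
true_spectrum = np.array([1000.0, 800.0, 500.0, 200.0])

counts = R @ true_spectrum              # what the detector records
recovered = np.linalg.solve(R, counts)  # undo the DRF mixing
print(recovered)
```

In the full problem this linear data fidelity is embedded in a TV-regularized reconstruction solved by ADMM, as the abstract describes; with noisy counts a regularized solve replaces the exact inverse.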

  15. [Cost variation in care groups?]

    PubMed

    Mohnen, S M; Molema, C C M; Steenbeek, W; van den Berg, M J; de Bruin, S R; Baan, C A; Struijs, J N

    2017-01-01

    Is the simple mean of the costs per diabetes patient a suitable tool with which to compare care groups? Do the total costs of care per diabetes patient really give the best insight into care group performance? Cross-sectional, multi-level study. The 2009 insurance claims of 104,544 diabetes patients managed by care groups in the Netherlands were analysed. The data were obtained from the Vektis care information centre. For each care group we determined the mean costs per patient of all the curative care and of diabetes-specific hospital care using the simple mean method, then repeated this using a generalized linear mixed model. We also calculated the proportion of the observed differences that could be attributed to the care groups themselves. The mean costs of the total curative care per patient were €3,092 - €6,546; there were no significant differences between care groups. The mixed-model method resulted in less variation (€2,884 - €3,511), and there were a few significant differences. We found a similar result for diabetes-specific hospital care, and the ranking position of the care groups proved to be dependent on the method used. The care group effect was limited, although it was greater in the diabetes-specific hospital costs than in the total costs of curative care (6.7% vs. 0.4%). The method used to benchmark care groups carries considerable weight. Simply stated, determining the mean costs of care (as is still often done) leads to an overestimation of the differences between care groups. The generalized linear mixed model is more accurate and yields better comparisons. However, the fact remains that 'total costs of care' is a faulty indicator since care groups have little impact on them. A more informative indicator is 'costs of diabetes-specific hospital care' as these costs are more influenced by care groups.
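
    The shrinkage behind such a mixed-model benchmark can be sketched in a few lines: each raw group mean is pulled toward the grand mean by the estimated ratio of between-group to total variance, which narrows the apparent spread between groups exactly as the abstract reports. This is an illustrative toy with invented costs, not the Vektis data or the paper's model:

```python
import random
import statistics

random.seed(7)

# Simulated per-patient costs for 10 care groups drawn from one population:
# true group effects are small, so most of the observed spread in group
# means is sampling noise. All numbers are invented.
n_groups, n_per_group = 10, 20
true_effects = [random.gauss(0, 50) for _ in range(n_groups)]
data = [[3000 + eff + random.gauss(0, 800) for _ in range(n_per_group)]
        for eff in true_effects]

raw_means = [statistics.mean(g) for g in data]
grand = statistics.mean(raw_means)

# Method-of-moments variance components (within and between groups).
sigma2 = statistics.mean(statistics.variance(g) for g in data)
tau2 = max(0.0, statistics.variance(raw_means) - sigma2 / n_per_group)

# Random-effects (mixed-model style) estimate: shrink each raw group mean
# toward the grand mean by the reliability ratio.
shrink = tau2 / (tau2 + sigma2 / n_per_group)
shrunk_means = [grand + shrink * (m - grand) for m in raw_means]

spread_raw = max(raw_means) - min(raw_means)
spread_shrunk = max(shrunk_means) - min(shrunk_means)
```

    The shrunken spread is always narrower than the raw spread, which is why simple means overstate between-group differences.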

  16. Resonance Rayleigh scattering method for highly sensitive detection of chitosan using aniline blue as probe

    NASA Astrophysics Data System (ADS)

    Zhang, Weiai; Ma, Caijuan; Su, Zhengquan; Bai, Yan

    2016-11-01

    This paper describes a highly sensitive and accurate approach using water-soluble aniline blue (AB) as a probe to determine chitosan (CTS) through resonance Rayleigh scattering (RRS). Under optimum experimental conditions, the RRS intensities were linearly proportional to the concentration of CTS in the range from 0.01 to 3.5 μg/mL, and the limit of detection (LOD) was 6.94 ng/mL. Therefore, a new and highly sensitive method based on RRS for the determination of CTS has been developed. Furthermore, the effects of the molecular weight and the degree of deacetylation of CTS on its accurate quantification were studied. The experimental data were analyzed by linear regression, which indicated that the molecular weight and the degree of deacetylation of CTS had no statistically significant effect, so this method can be used to determine CTS accurately. The assay was also applied to CTS determination in health products with satisfactory results.
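
    The linear-regression workflow behind such a calibration (least-squares fit, coefficient of determination, and ICH-style 3.3σ/S and 10σ/S detection and quantification limits) can be sketched as follows, using invented calibration points rather than the paper's measurements:

```python
# Hypothetical calibration points: concentration (ug/mL) vs. RRS intensity.
conc = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
sig = [2.1, 51.8, 102.3, 150.9, 201.2, 249.7, 300.5]

n = len(conc)
mx = sum(conc) / n
my = sum(sig) / n
sxx = sum((x - mx) ** 2 for x in conc)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, sig))
slope = sxy / sxx
intercept = my - slope * mx

# Coefficient of determination and residual standard deviation.
pred = [intercept + slope * x for x in conc]
ss_res = sum((y - p) ** 2 for y, p in zip(sig, pred))
ss_tot = sum((y - my) ** 2 for y in sig)
r2 = 1.0 - ss_res / ss_tot
s_resid = (ss_res / (n - 2)) ** 0.5

# ICH-style limits derived from the calibration residuals and slope.
lod = 3.3 * s_resid / slope
loq = 10.0 * s_resid / slope

# Invert the calibration to quantify an unknown sample reading.
unknown_conc = (155.0 - intercept) / slope
```

    The paper's own range and LOD differ; the point is the arithmetic linking slope, residual scatter, and the 3.3/10 multipliers.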

  17. [Calculating the Stark broadening of welding arc spectra by the Fourier transform method].

    PubMed

    Pan, Cheng-Gang; Hua, Xue-Ming; Zhang, Wang; Li, Fang; Xiao, Xiao

    2012-07-01

    Calculating the electron density of a plasma from the Stark width of its spectral lines is the most effective and accurate method available. However, it is difficult to separate the Stark width from a composite line profile produced by several broadening mechanisms. In the present paper, a Fourier transform was used to separate the Lorentzian profile from the observed spectrum and thus obtain an accurate Stark width, from which we calculated the electron density distribution of the TIG welding arc plasma. This method does not require an accurate measurement of the arc temperature or of the instrumental broadening of the spectral lines, and it also rejects noisy data. The results show that, on the axis, the electron density of the TIG welding arc decreases with increasing distance from the tungsten electrode, with values between 1.21 x 10(17) cm(-3) and 1.58 x 10(17) cm(-3); radially, the electron density decreases with increasing distance from the axis, and near the tungsten electrode the maximum electron density occurs off-axis.
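
    The core idea, that the Fourier transform of a Lorentzian of half-width γ decays as exp(-2πγ|ν|), so the log-magnitude is linear in frequency and its slope yields the Lorentzian (Stark) width, can be checked numerically on a synthetic profile. This is a sketch on a pure Lorentzian, not the welding-arc data:

```python
import math

GAMMA = 1.0  # true Lorentzian half-width at half-maximum (arbitrary units)

# Sample a pure Lorentzian line profile on a wide grid (the slow tails matter).
dx = 0.1
xs = [i * dx for i in range(-2000, 2001)]
profile = [(GAMMA / math.pi) / (x * x + GAMMA * GAMMA) for x in xs]

def fourier_mag(nu):
    """|integral of profile(x) * exp(-2*pi*i*nu*x) dx| by direct summation."""
    re = sum(p * math.cos(2.0 * math.pi * nu * x) for p, x in zip(profile, xs))
    im = sum(p * math.sin(2.0 * math.pi * nu * x) for p, x in zip(profile, xs))
    return math.hypot(re, im) * dx

# For a Lorentzian, |F(nu)| = exp(-2*pi*GAMMA*|nu|): the log-magnitude is
# linear in frequency, so the slope between two frequencies recovers the
# width. In a composite (e.g. Voigt) profile the Gaussian factor would be
# divided out in the Fourier domain first.
nu1, nu2 = 0.1, 0.3
gamma_est = -(math.log(fourier_mag(nu2)) - math.log(fourier_mag(nu1))) \
            / (2.0 * math.pi * (nu2 - nu1))
```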

  18. Dynamical discrete/continuum linear response shells theory of solvation: convergence test for NH4+ and OH- ions in water solution using DFT and DFTB methods.

    PubMed

    de Lima, Guilherme Ferreira; Duarte, Hélio Anderson; Pliego, Josefredo R

    2010-12-09

    A new dynamical discrete/continuum solvation model was tested for NH(4)(+) and OH(-) ions in water solvent. The method is similar to continuum solvation models in the sense that the linear response approximation is used. However, unlike pure continuum models, explicit solvent molecules are included in the inner shell, which allows adequate treatment of the specific solute-solvent interactions present in the first solvation shell, the main drawback of continuum models. Molecular dynamics calculations coupled with the SCC-DFTB method are used to generate the configurations of the solute in a box with 64 water molecules, while the interaction energies are calculated at the DFT level. We tested the convergence of the method using a variable number of explicit water molecules and found that even a small number of waters (as few as 14) is able to produce converged values. Our results also point out that the Born model, often used for long-range correction, is not reliable, and our method should be applied for more accurate calculations.
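
    The Born long-range correction questioned above has a simple closed form, ΔG = -(q²/2a)(1 - 1/ε) in atomic units; a quick sketch with illustrative charge and cavity radius (not the paper's systems):

```python
HARTREE_TO_KCAL = 627.509  # conversion factor, hartree -> kcal/mol

def born_solvation_energy(charge_e, radius_bohr, eps):
    """Born estimate dG = -(q^2 / 2a)(1 - 1/eps); atomic units in, kcal/mol out."""
    dg_hartree = -0.5 * (charge_e ** 2 / radius_bohr) * (1.0 - 1.0 / eps)
    return dg_hartree * HARTREE_TO_KCAL

# Illustrative numbers only: a unit charge with a 4-bohr cavity radius in
# water (eps ~ 78.4), roughly the scale of a small solvated ion cluster.
dg = born_solvation_energy(1.0, 4.0, 78.4)
```

    Note the strong sensitivity to the cavity radius, one reason such a correction can be unreliable.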

  19. Simultaneous Determination of Withanolide A and Bacoside A in Spansules by High-Performance Thin-Layer Chromatography

    PubMed Central

    Shinde, P B; Aragade, P D; Agrawal, M R; Deokate, U A; Khadabadi, S S

    2011-01-01

    The objective of this work was to develop and validate a simple, rapid, precise, and accurate high-performance thin-layer chromatography method for the simultaneous determination of withanolide A and bacoside A in a combined dosage form. The stationary phase used was silica gel G60F254. The mobile phase was a mixture of ethyl acetate:methanol:toluene:water (4:1:1:0.5 v/v/v/v). The detection of spots was carried out at 320 nm using absorbance reflectance mode. The method was validated in terms of linearity, accuracy, precision and specificity. The calibration curve was found to be linear from 200 to 800 ng/spot for withanolide A and from 50 to 350 ng/spot for bacoside A. The limit of detection and limit of quantification were found to be 3.05 and 10.06 ng/spot, respectively, for withanolide A, and 8.3 and 27.39 ng/spot, respectively, for bacoside A. The proposed method can be successfully used to determine the drug content of marketed formulations. PMID:22303073

  1. Prediction of consonant recognition in quiet for listeners with normal and impaired hearing using an auditory model.

    PubMed

    Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas

    2014-03-01

    Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations, this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely the linearization often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.

  2. Scattering matrix approach to the dissociative recombination of HCO{sup +} and N{sub 2}H{sup +}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca dos Santos, S.; Douguet, N.; Orel, A. E.

    We present a theoretical study of the indirect dissociative recombination of linear polyatomic ions at low collisional energies. The approach is based on the computation of the scattering matrix just above the ionization threshold and enables the explicit determination of all diabatic electronic couplings responsible for dissociative recombination. In addition, we use multi-channel quantum-defect theory to demonstrate the precision of the scattering matrix by accurately reproducing ab initio Rydberg state energies of the neutral molecule. We consider the molecular ions N{sub 2}H{sup +} and HCO{sup +} as benchmark systems of astrophysical interest and improve on former theoretical studies, which had repeatedly produced smaller cross sections than experimentally measured. Specifically, we demonstrate the crucial role of the previously overlooked stretching modes for linear polyatomic ions with a large permanent dipole moment. The theoretical cross sections for both ions agree well with experimental data over a wide energy range. Finally, we consider the potential role of the HOC{sup +} isomer in the experimental cross sections of HCO{sup +} at energies below 10 meV.

  3. Modeling demand for public transit services in rural areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attaluri, P.; Seneviratne, P.N.; Javid, M.

    1997-05-01

    Accurate estimates of demand are critical for planning, designing, and operating public transit systems. Previous research has demonstrated that the expected demand in rural areas is a function of both demographic and transit system variables. Numerous models have been proposed to describe the relationship between these variables. However, most of them are site specific, and their validity over time and space is not reported or perhaps has not been tested. Moreover, input variables are in some cases extremely difficult to quantify. In this article, the estimation of demand using the generalized linear modeling technique is discussed. Two separate models, one for fixed-route and another for demand-responsive services, are presented. These models, calibrated with data from systems in nine different states, are used to demonstrate the appropriateness and validity of generalized linear models compared to regression models. They explain over 70% of the variation in expected demand for fixed-route services and 60% of the variation in expected demand for demand-responsive services. It was found that the models are spatially transferable and that data for calibration are easily obtainable.
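
    A minimal generalized linear model of this kind, Poisson counts with a log link fitted by Newton-Raphson (equivalently, iteratively reweighted least squares), can be sketched on invented ridership data. The variables and values below are hypothetical, not the article's calibrated models:

```python
import math

# Hypothetical data: service-area population (thousands) vs. weekly trips.
pop = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0]
trips = [14, 20, 31, 40, 62, 77, 131, 228]

# Poisson GLM with log link: E[trips] = exp(b0 + b1 * pop).
b0 = math.log(sum(trips) / len(trips))  # start at the null model
b1 = 0.0
for _ in range(50):  # Newton-Raphson on the Poisson log-likelihood
    mu = [math.exp(b0 + b1 * x) for x in pop]
    g0 = sum(y - m for y, m in zip(trips, mu))              # score vector
    g1 = sum((y - m) * x for y, m, x in zip(trips, mu, pop))
    h00 = sum(mu)                                           # Fisher information
    h01 = sum(m * x for m, x in zip(mu, pop))
    h11 = sum(m * x * x for m, x in zip(mu, pop))
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det

expected_trips_5k = math.exp(b0 + b1 * 5.0)
```

    Unlike ordinary least squares, the log link keeps predicted demand positive and lets the variance grow with the mean, which suits count data such as ridership.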

  4. Development and validation of a HPTLC method for simultaneous estimation of lornoxicam and thiocolchicoside in combined dosage form.

    PubMed

    Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A; Raut, Rahul P; Choudhari, Vishnu P; Kuchekar, Bhanudas S

    2011-07-01

    To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. The separation of the active compounds from the pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase, and no immiscibility issues were found. The densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60-360 ng/band for LOR and 30-180 ng/band for THIO, with correlation coefficients r(2) = 0.998 and 0.999, respectively. The percentage recovery for both analytes was in the range of 98.7-101.2%. The proposed method was optimized and validated as per the ICH guidelines.

  5. Synthesis and spectral properties of Methyl-Phenyl pyrazoloquinoxaline fluorescence emitters: Experiment and DFT/TDDFT calculations

    NASA Astrophysics Data System (ADS)

    Gąsiorski, P.; Matusiewicz, M.; Gondek, E.; Uchacz, T.; Wojtasik, K.; Danel, A.; Shchur, Ya.; Kityk, A. V.

    2018-01-01

    This paper reports the synthesis and spectroscopic study of two novel 1-Methyl-3-phenyl-1H-pyrazolo[3,4-b]quinoxaline (PQX) derivatives with 6-substituted methyl (MeMPPQX) or methoxy (MeOMPPQX) side groups. The optical absorption and fluorescence emission spectra are recorded in solvents of different polarity. Steady-state and time-resolved spectroscopy provide the photophysical characterization of the MeMPPQX and MeOMPPQX dyes as materials for potential luminescence or electroluminescence applications. The measured optical absorption and fluorescence emission spectra are compared with quantum-chemical DFT/TDDFT calculations using the long-range corrected xc-functionals LRC-BLYP and CAM-B3LYP in combination with a self-consistent reaction field model based on linear response (LR), state-specific (SS) or corrected linear response (CLR) solvation. The performance of the relevant theoretical models and approaches is compared. The reparameterized LRC-BLYP functional (ω = 0.231 Bohr-1) in combination with CLR solvation provides the most accurate prediction of both excitation and emission energies. The MeMPPQX and MeOMPPQX dyes are efficient fluorescence emitters in the blue-green region of the visible spectrum.

  6. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
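
    The flavor of such closed-form relationships is easiest to see in the scalar case, where the LQR algebraic Riccati equation solves by hand and the closed-loop eigenvalue is -sqrt(a² + b²q/r), an explicit function of the cost-function weight ratio. This is a generic scalar sketch, not the dissertation's aeroelastic model:

```python
import math

def lqr_scalar(a, b, q, r):
    """Closed-form LQR for dx/dt = a*x + b*u with cost J = integral(q x^2 + r u^2)."""
    # Scalar algebraic Riccati equation: 2*a*p - p^2 * b^2 / r + q = 0.
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    k = b * p / r       # optimal state feedback u = -k x
    pole = a - b * k    # closed-loop eigenvalue: -sqrt(a^2 + b^2 q / r)
    return k, pole

# Unstable scalar plant (a > 0): the weight ratio q/r sets the closed-loop
# pole location explicitly, the kind of analytical design insight sought above.
k, pole = lqr_scalar(a=1.0, b=1.0, q=3.0, r=1.0)
```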

  7. Monthly monsoon rainfall forecasting using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ganti, Ravikumar

    2014-10-01

    The Indian agriculture sector depends heavily on monsoon rainfall for successful harvesting. In the past, prediction of rainfall was mainly performed using regression models, which provide reasonable accuracy in the modelling and forecasting of complex physical systems. Recently, Artificial Neural Networks (ANNs) have been proposed as efficient tools for modelling and forecasting. A feed-forward multi-layer perceptron type of ANN architecture trained using the popular back-propagation algorithm was employed in this study. Other techniques investigated for modelling monthly monsoon rainfall include linear and non-linear regression models, for comparison purposes. The data employed in this study comprise monthly rainfall and the monthly average of the daily maximum temperature in the North Central region of India. Specifically, four regression models and two ANN models were developed. The performance of the various models was evaluated using a wide variety of standard statistical parameters and scatter plots. The results obtained in this study for forecasting monsoon rainfall using ANNs are encouraging. India's economy and agricultural activities can be managed more effectively with the help of accurate monsoon rainfall forecasts.
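
    A feed-forward multi-layer perceptron trained by back-propagation can be sketched from scratch in a few dozen lines. This toy fits a smooth one-dimensional function rather than rainfall data; the network size, learning rate, and data are all illustrative:

```python
import math
import random

random.seed(0)

# Toy regression data standing in for normalized predictor/target series.
xs = [i / 10 for i in range(-10, 11)]
ys = [math.sin(x) for x in xs]

H = 6  # hidden units in the single hidden layer
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]  # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]  # hidden -> output weights
b2 = 0.0
lr = 0.05

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse()
for _ in range(3000):  # batch gradient descent via back-propagation
    g_w1, g_b1 = [0.0] * H, [0.0] * H
    g_w2, g_b2 = [0.0] * H, 0.0
    for x, y in zip(xs, ys):
        h, out = forward(x)
        d_out = 2.0 * (out - y) / len(xs)  # dLoss/d(output)
        for j in range(H):
            g_w2[j] += d_out * h[j]
            d_h = d_out * w2[j] * (1.0 - h[j] ** 2)  # back through tanh
            g_w1[j] += d_h * x
            g_b1[j] += d_h
        g_b2 += d_out
    for j in range(H):
        w1[j] -= lr * g_w1[j]
        b1[j] -= lr * g_b1[j]
        w2[j] -= lr * g_w2[j]
    b2 -= lr * g_b2
loss_after = mse()
```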

  8. Development and validation of an HPLC method to quantify camptothecin in polymeric nanocapsule suspensions.

    PubMed

    Granada, Andréa; Murakami, Fabio S; Sartori, Tatiane; Lemos-Senna, Elenara; Silva, Marcos A S

    2008-01-01

    A simple, rapid, and sensitive reversed-phase column high-performance liquid chromatographic method was developed and validated to quantify camptothecin (CPT) in polymeric nanocapsule suspensions. The chromatographic separation was performed on a Supelcosil LC-18 column (15 cm x 4.6 mm id, 5 microm) using a mobile phase consisting of methanol-10 mM KH2PO4 (60 + 40, v/v; pH 2.8) at a flow rate of 1.0 mL/min and ultraviolet detection at 254 nm. The calibration graph was linear from 0.5 to 3.0 microg/mL with a correlation coefficient of 0.9979, and the limit of quantitation was 0.35 microg/mL. The assay recovery ranged from 97.3 to 105.0%. The intraday and interday relative standard deviation values were < 5.0%. The validation results confirmed that the developed method is specific, linear, accurate, and precise for its intended use. The current method was successfully applied to the evaluation of CPT entrapment efficiency and drug content in polymeric nanocapsule suspensions during the early stage of formulation development.

  9. Muscle parameters estimation based on biplanar radiography.

    PubMed

    Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W

    2016-11-01

    The evaluation of muscle and joint forces in vivo is still a challenge. Musculoskeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposed an original two-step method to obtain a subject-specific musculoskeletal model in 30 min, based solely on biplanar X-rays. First, the subject-specific 3D geometry of the bones and skin envelope was reconstructed from biplanar X-ray radiography. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shapes of 21 lower-limb muscles were estimated using a non-linear transformation between the control points in order to fit the muscle shapes of the reference model to the X-ray model. Twelve musculoskeletal models were reconstructed and compared to their references. Muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, the method provided an accurate estimate of the muscle line of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with an SD lower than 15% for most muscles, significantly better than a scaled generic model. This method opens the way to quick subject-specific modeling for gait analysis based on biplanar radiography.

  10. Rapid and sensitive gas chromatography ion-trap mass spectrometry method for the determination of tobacco specific N-nitrosamines in secondhand smoke

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SLEIMAN, Mohamad; MADDALENA, Randy L.; GUNDEL, Lara A.

    Tobacco-specific nitrosamines (TSNAs) are some of the most potent carcinogens in tobacco and cigarette smoke. Accurate quantification of these chemicals is needed to help assess public health risks. We developed and validated a specific and sensitive method to measure four TSNAs in both the gas and particle phases of secondhand smoke (SHS) using gas chromatography and ion-trap tandem mass spectrometry. A smoking machine in an 18-m3 room-sized chamber generated relevant concentrations of SHS that were actively sampled on Teflon-coated fiber glass (TCFG) filters and passively sampled on cellulose substrates. A simple solid-liquid extraction protocol using methanol as the solvent was successfully applied to both filters, with high recoveries ranging from 85 to 115%. Tandem MS parameters were optimized to obtain the best sensitivity in terms of signal-to-noise ratio (S/N) for the target compounds. For each TSNA, the major fragmentation pathways as well as ion structures were elucidated and compared with previously published data. The method showed excellent performance, with a linear dynamic range between 2 and 1000 ng/mL, low detection limits (S/N > 3) of 30-300 pg/mL, and precision with experimental errors below 10% for all compounds. Moreover, no interfering peaks were observed, indicating a high selectivity of MS/MS without the need for a sample clean-up step. The sampling and analysis method provides a sensitive and accurate tool to detect and quantify traces of TSNAs in SHS-polluted indoor environments.

  11. Modeling the relationships between quality and biochemical composition of fatty liver in mule ducks.

    PubMed

    Theron, L; Cullere, M; Bouillier-Oudot, M; Manse, H; Dalle Zotte, A; Molette, C; Fernandez, X; Vitezica, Z G

    2012-09-01

    The fatty liver of mule ducks (i.e., French "foie gras") is the most valuable product in duck production systems. Its quality is measured by the technological yield, which is the opposite of the fat loss during cooking. The purpose of this study was to determine whether biochemical measures of fatty liver could be used to accurately predict the technological yield (TY). Ninety-one male mule ducks were bred, overfed, and slaughtered under commercial conditions. Fatty liver weight (FLW) and biochemical variables, such as DM, lipid (LIP), and protein content (PROT), were collected. To evaluate evidence for nonlinear fat loss during cooking, we compared regression models describing linear and nonlinear relations between biochemical measures and TY. We detected a significantly (P = 0.02) nonlinear relation between DM and TY. Our results indicate that LIP and PROT follow a different (linear) pattern than DM and showed that LIP and PROT are nonexclusive contributing factors to TY. Other components not measured in this study, such as carbohydrates, could contribute to DM. Stepwise regression for TY was performed. The traditional model with FLW was tested. The results showed that the weight of the liver is of limited value in the determination of fat loss during cooking (R(2) = 0.14). The most accurate TY prediction equation included DM (in linear and quadratic terms), FLW, and PROT (R(2) = 0.43). Biochemical measures in the fatty liver were more accurate predictors of TY than FLW. The model is useful in commercial conditions because DM, PROT, and FLW are noninvasive measures.

  12. Equivalent linearization for fatigue life estimates of a nonlinear structure

    NASA Technical Reports Server (NTRS)

    Miles, R. N.

    1989-01-01

    An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
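
    The equivalent-linearization idea can be sketched for a Duffing oscillator under Gaussian white noise: the cubic stiffness is replaced by an equivalent linear stiffness k_eq = k(1 + 3·eps·sigma²), which is exact for a Gaussian response, and the response variance sigma² is then found self-consistently from the standard linear SDOF result sigma² = pi·S0/(c·k_eq). All parameter values below are illustrative, not the plate model in the report:

```python
import math

# Duffing oscillator x'' + (c/m) x' + (k/m)(x + eps x^3) = w(t)/m, driven by
# Gaussian white noise of two-sided spectral density S0. (The stationary
# displacement variance of the linear SDOF system is pi*S0/(c*k),
# independent of the mass.)
c, k, eps, S0 = 0.2, 1.0, 0.5, 0.05

# Fixed-point iteration: Gaussian closure gives k_eq = k (1 + 3 eps sigma^2),
# and the linear result sigma^2 = pi S0 / (c k_eq) closes the loop.
sigma2 = math.pi * S0 / (c * k)  # start from the purely linear response
for _ in range(200):
    k_eq = k * (1.0 + 3.0 * eps * sigma2)
    sigma2 = math.pi * S0 / (c * k_eq)

linear_sigma2 = math.pi * S0 / (c * k)
```

    The hardening cubic term stiffens the equivalent system, so the self-consistent variance comes out below the purely linear prediction.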

  13. A 100-Year Review: Methods and impact of genetic selection in dairy cattle-From daughter-dam comparisons to deep learning algorithms.

    PubMed

    Weigel, K A; VanRaden, P M; Norman, H D; Grosu, H

    2017-12-01

    In the early 1900s, breed society herdbooks had been established and milk-recording programs were in their infancy. Farmers wanted to improve the productivity of their cattle, but the foundations of population genetics, quantitative genetics, and animal breeding had not been laid. Early animal breeders struggled to identify genetically superior families using performance records that were influenced by local environmental conditions and herd-specific management practices. Daughter-dam comparisons were used for more than 30 yr and, although genetic progress was minimal, the attention given to performance recording, genetic theory, and statistical methods paid off in future years. Contemporary (herdmate) comparison methods allowed more accurate accounting for environmental factors and genetic progress began to accelerate when these methods were coupled with artificial insemination and progeny testing. Advances in computing facilitated the implementation of mixed linear models that used pedigree and performance data optimally and enabled accurate selection decisions. Sequencing of the bovine genome led to a revolution in dairy cattle breeding, and the pace of scientific discovery and genetic progress accelerated rapidly. Pedigree-based models have given way to whole-genome prediction, and Bayesian regression models and machine learning algorithms have joined mixed linear models in the toolbox of modern animal breeders. Future developments will likely include elucidation of the mechanisms of genetic inheritance and epigenetic modification in key biological pathways, and genomic data will be used with data from on-farm sensors to facilitate precision management on modern dairy farms. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. A method to characterize average cervical spine ligament response based on raw data sets for implementation into injury biomechanics models.

    PubMed

    Mattucci, Stephen F E; Cronin, Duane S

    2015-01-01

    Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments has not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5s(-1), 20s(-1), and 150-250s(-1), to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000 demonstrating excellent fit. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with a toe, linear, and traumatic region, as often observed in ligaments and tendons, and could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
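
    A piecewise fit with first-derivative continuity of the kind described can be sketched as a quadratic toe region joined to a linear region whose slope matches at the transition point. The brute-force least-squares search below runs on synthetic data with an assumed two-parameter form, not the authors' formulation or the cervical-ligament measurements:

```python
import random

random.seed(1)

def piecewise(d, a, d0):
    """Quadratic toe region, then a linear region with matching slope at d0."""
    if d <= d0:
        return a * d * d
    return a * d0 * d0 + 2.0 * a * d0 * (d - d0)  # first derivative continuous

# Synthetic "experimental" force-displacement data from a known curve + noise.
A_TRUE, D0_TRUE = 50.0, 1.0
disp = [i * 0.1 for i in range(31)]  # displacements 0.0 .. 3.0
force = [piecewise(d, A_TRUE, D0_TRUE) + random.gauss(0.0, 1.0) for d in disp]

# Transparent brute-force least squares over the two parameters (a, d0).
best = None
for ai in range(20, 81):      # candidate a = 20 .. 80
    for d0i in range(5, 21):  # candidate d0 = 0.5 .. 2.0
        a, d0 = float(ai), d0i * 0.1
        sse = sum((piecewise(d, a, d0) - f) ** 2 for d, f in zip(disp, force))
        if best is None or sse < best[0]:
            best = (sse, a, d0)
_, a_fit, d0_fit = best
```

    Enforcing the matching slope at d0 is what prevents the kink between regions that ad hoc region-by-region fits can introduce.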

  15. The Linear Bicharacteristic Scheme for Computational Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Chan, Siew-Loong

    2000-01-01

    The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on electromagnetic wave propagation problems. This paper extends the Linear Bicharacteristic Scheme for computational electromagnetics to treat lossy dielectric and magnetic materials and perfect electrical conductors. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media; the treatment of perfect electrical conductors (PECs) is shown to follow directly in the limit of high conductivity. Heterogeneous media are treated through the implementation of surface boundary conditions, and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for one-dimensional model problems on both uniform and nonuniform grids, and the FDTD algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet has approximately one-third the phase velocity error. The LBS is also more accurate on nonuniform grids.

  16. Nonlocal kinetic energy functional from the jellium-with-gap model: Applications to orbital-free density functional theory

    NASA Astrophysics Data System (ADS)

    Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio

    2018-05-01

    Orbital-free density functional theory (OF-DFT) promises to describe the electronic structure of very large quantum systems, since its computational cost scales linearly with system size. However, the accuracy of OF-DFT strongly depends on the approximation made for the kinetic energy (KE) functional. To date, the most accurate KE functionals are nonlocal functionals based on the linear-response kernel of the homogeneous electron gas, i.e., the jellium model. Here, we use the linear-response kernel of the jellium-with-gap model to construct a simple nonlocal KE functional (named KGAP) which depends on the band-gap energy. In the limit of vanishing energy gap (i.e., in the case of metals), the KGAP is equivalent to the Smargiassi-Madden (SM) functional, which is accurate for metals. For a series of semiconductors (with different energy gaps), the KGAP performs much better than SM, and the results are close to those of state-of-the-art functionals with sophisticated density-dependent kernels.

  17. Vibration Control Using a State Observer that Considers Disturbances of a Golf Swing Robot

    NASA Astrophysics Data System (ADS)

    Hoshino, Yohei; Kobayashi, Yukinori; Yamada, Gen

    In this paper, optimal control of a golf swing robot that is used to evaluate the performance of golf clubs is described. The robot has two joints, a rigid link and a flexible link that is a golf club. A mathematical model of the golf club is derived by Hamilton's principle, taking into account bending and torsional stiffness as well as the eccentricity of the center of gravity of the club head relative to the shaft axis. A linear quadratic regulator (LQR) that considers the vibration of the club shaft is used to stop the robot during the follow-through. Since the robot moves fast and has strong non-linearity, an ordinary state observer for a linear system cannot accurately estimate the states of the system. A state observer that considers disturbances accurately estimates the state variables that cannot be measured. The results of numerical simulation are compared with experimental results obtained by using a swing robot.

  18. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  19. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function, because a sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
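
    The forward-backward idea above — predicting each sample from both its past and its future samples with a shared coefficient set — can be sketched as a stacked least-squares problem. The sketch below is a plain covariance-style formulation without the temporal weighting that defines QCP, applied to a single synthetic damped resonance rather than real speech; all names and values are illustrative.

```python
import numpy as np

def fb_lp(x, order):
    """Forward-backward linear prediction: stack forward equations
    (predict x[t] from the preceding `order` samples) and backward
    equations (predict x[t] from the following `order` samples),
    then solve the combined system by least squares."""
    rows, targets = [], []
    for t in range(order, len(x)):                 # forward predictions
        rows.append(x[t - order:t][::-1])
        targets.append(x[t])
    for t in range(len(x) - order):                # backward predictions
        rows.append(x[t + 1:t + order + 1])
        targets.append(x[t])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return a  # prediction coefficients [a1, ..., a_order]

# formant-like resonance recovery from a damped 500 Hz sinusoid
fs = 8000.0
t = np.arange(400) / fs
x = np.exp(-40.0 * t) * np.cos(2 * np.pi * 500.0 * t)
a = fb_lp(x, order=2)
poles = np.roots(np.concatenate(([1.0], -a)))      # roots of z^2 - a1*z - a2
freq = abs(np.angle(poles[0])) * fs / (2 * np.pi)  # pole angle -> frequency, Hz
```

    In a full formant tracker, the pole angles of a higher-order model would be converted to candidate formant frequencies in the same way.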

  20. NLT and extrapolated DLT:3-D cinematography alternatives for enlarging the volume of calibration.

    PubMed

    Hinrichs, R N; McLean, S P

    1995-10-01

    This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that, when possible, one should use the DLT with a control object sufficiently large to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.

  1. Biotransformation of lignan glycoside to its aglycone by Woodfordia fruticosa flowers: quantification of compounds using a validated HPTLC method.

    PubMed

    Mishra, Shikha; Aeri, Vidhu

    2017-12-01

    Saraca asoca Linn. (Caesalpiniaceae) is an important traditional remedy for gynaecological disorders and contains lyoniside, an aryl tetralin lignan glycoside. The aglycone of lyoniside, lyoniresinol, possesses structural similarity to enterolignan precursors, which are established phytoestrogens. This work illustrates the biotransformation of lyoniside to lyoniresinol using Woodfordia fruticosa Kurz. (Lythraceae) flowers and the simultaneous quantification of lyoniside and lyoniresinol using a validated HPTLC method. The aqueous extract prepared from S. asoca bark was fermented using W. fruticosa flowers. The substrate and fermented product were simultaneously analyzed using the solvent system toluene:ethyl acetate:formic acid (4:3:0.4) at 254 nm. The method was validated for specificity, accuracy, precision, linearity, sensitivity and robustness as per ICH guidelines. The substrate showed the presence of lyoniside; however, its content decreased as the fermentation proceeded. On the 3rd day, lyoniresinol started appearing in the medium, and within 8 days most of the lyoniside was converted to lyoniresinol. The developed method was specific for lyoniside and lyoniresinol, which showed linearity in the ranges of 250-3000 and 500-2500 ng, respectively. The method was accurate, with recoveries of 99.84% and 99.83% for lyoniside and lyoniresinol, respectively. In summary, the aryl tetralin lignan glycoside lyoniside was successfully transformed into lyoniresinol using W. fruticosa flowers, and the contents of both compounds were simultaneously analyzed using the developed and validated HPTLC method.

  2. Measuring Solar Coronal Magnetism during the Total Solar Eclipse of 2017

    NASA Astrophysics Data System (ADS)

    Gibson, K. L.; Tomczyk, S.

    2017-12-01

    The total solar eclipse on August 21, 2017 provided a notable opportunity to measure the solar corona at specific emission wavelengths to gain information about coronal magnetic fields. Solar magnetic fields are intimately related to the generation of space weather and its effects on Earth, and the infrared imaging and polarization information collected on coronal emission lines here will enhance the scientific value of several other ongoing experiments, as well as benefit the astrophysics and upper atmosphere communities. Coronal measurements were collected during the 2 minute and 24 second totality period from Casper Mountain, WY. Computer-controlled telescopes automatically inserted four different narrow band-pass filters to capture images in the visible range on a 4D PolCam, and in the infrared range on the FLIR 8501c camera. Each band-pass filter selects a specific wavelength range that corresponds to a known coronal emission line possessing magnetic sensitivity. The 4D PolCam incorporated a novel grid of linear polarizers precisely aligned with the micron-scale pixels. This allowed for direct measurement of the degree of linear polarization in a very small instrument with none of the external moving parts typically required. The FLIR offers short exposure times to freeze motion and output accurate thermal measurements. This allowed a new observation of the sun's corona using thermal infrared technology.

  3. Self-rated and observer-rated measures of well-being and distress in adolescence: an exploratory study.

    PubMed

    Vescovelli, Francesca; Albieri, Elisa; Ruini, Chiara

    2014-01-01

    The evaluation of eudaimonic well-being in adolescence is hampered by the lack of specific assessment tools. Moreover, with younger populations, the assessment of positive functioning may be biased by self-report data only, and may be made more accurate by adding significant adults' evaluations. The objective of this research was to measure adolescents' well-being and prosocial behaviours using self-rated and observer-rated instruments, and to examine their pattern of associations. The sample included 150 Italian high school adolescents. Observer evaluation was performed by their school teachers using the Strengths and Difficulties Questionnaire. Adolescents completed Ryff's Psychological Well-being Scales and the Symptom Questionnaire. Pearson's r correlations and linear regressions were performed. Self-rated dimensions of psychological well-being significantly correlated with all observer-rated dimensions except the Strengths and Difficulties Emotional Symptoms scale. Multiple linear regression showed that the self-rated dimensions Environmental Mastery and Personal Growth, and surprisingly not Positive Relations, are related to the observer-rated dimension Prosocial Behaviour. Adolescents with higher levels of well-being in specific dimensions tend to be perceived as less problematic by their teachers. However, some dimensions of positive functioning present discrepancies between self-rated and observer-rated instruments. Thus, the conjunct use of self-report and observer-rated tools for a more comprehensive assessment of students' eudaimonic well-being is recommended.

  4. Analogous modified DNA probe and immune competition method-based electrochemical biosensor for RNA modification.

    PubMed

    Dai, Tao; Pu, Qinli; Guo, Yongcan; Zuo, Chen; Bai, Shulian; Yang, Yujun; Yin, Dan; Li, Yi; Sheng, Shangchun; Tao, Yiyi; Fang, Jie; Yu, Wen; Xie, Guoming

    2018-08-30

    N6-methyladenosine (m6A), one of the most abundant RNA modifications and ubiquitous in eukaryotic RNA, plays vital roles in many biological processes. Therefore, the rapid and accurate quantitative detection of m6A is particularly important for its functional research. Herein, a label-free and highly selective electrochemical immunosensor was developed for the detection of m6A. The method is based on the fact that the anti-m6A antibody (anti-m6A-Ab) can recognize both m6A-RNA and m6A-DNA. An analogous modified DNA probe (L1) serves as a signal molecule, competing with m6A-RNA for binding to the antibody in order to broaden the linear range. The detection of m6A-RNA by this method is unaffected by the lengths and base sequences of the RNA. Under optimal conditions, the proposed immunosensor presented a wide linear range from 0.05 to 200 nM with a detection limit as low as 0.016 nM (S/N = 3). The specificity and reproducibility of the method are satisfactory. Furthermore, the developed immunosensor was validated for m6A determination in human cell lines. Thus, the immunosensor provides a promising platform for m6A-RNA detection with simplicity, high specificity and sensitivity. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. RP-HPLC ANALYSIS OF ACIDIC AND BASIC DRUGS IN SYSTEMS WITH DIETHYLAMINE AS ELUENTS ADDITIVE.

    PubMed

    Petruczynik, Anna; Wroblewski, Karol; Strozek, Szymon; Waksmundzka-Hajnos, Monika

    2016-11-01

    The chromatographic behavior of some basic and acidic drugs was studied on C18, Phenyl-Hexyl and Polar RP columns with methanol or acetonitrile as organic modifiers of aqueous mobile phases containing an addition of diethylamine. Diethylamine plays a double role, acting as a silanol-blocking reagent in the analysis of basic drugs and as an ion-pair reagent in the analysis of acidic drugs. The most symmetrical peaks and highest system efficiency were obtained on the Phenyl-Hexyl and Polar RP columns in the tested mobile phase systems, compared to results obtained on the C18 column. A new rapid, simple, specific and accurate reversed-phase liquid chromatographic method was developed for the simultaneous determination of atorvastatin (an antihyperlipidemic drug) and amlodipine (a calcium channel blocker) in one pharmaceutical formulation. Atorvastatin is an acidic compound while amlodipine is a basic substance. The chromatographic separation was carried out on a Phenyl-Hexyl column in gradient elution mode with acetonitrile as organic modifier, acetate buffer at pH 3.5 and 0.025 M diethylamine. The proposed method was validated for specificity, precision, accuracy, linearity, and robustness. Linearity was obtained for atorvastatin and amlodipine in the range of 5-100 μg/mL, with limits of detection (LOD) of 3.2750 μg/mL and 3.2102 μg/mL, respectively. The proposed method made use of DAD as a tool for peak identity and purity confirmation.

  6. Stress Degradation Studies on Varenicline Tartrate and Development of a Validated Stability-Indicating HPLC Method

    PubMed Central

    Pujeri, Sudhakar S.; Khader, Addagadde M. A.; Seetharamappa, Jaldappagari

    2012-01-01

    A simple, rapid and stability-indicating reversed-phase liquid chromatographic method was developed for the assay of varenicline tartrate (VRT) in the presence of its degradation products generated from forced decomposition studies. The HPLC separation was achieved on a C18 Inertsil column (250 mm × 4.6 mm i.d., 5 μm particle size) employing a mobile phase consisting of ammonium acetate buffer containing trifluoroacetic acid (0.02 M; pH 4) and acetonitrile in gradient mode with a flow rate of 1.0 mL min−1. The UV detector was operated at 237 nm while the column temperature was maintained at 40 °C. The developed method was validated as per ICH guidelines with respect to specificity, linearity, precision, accuracy, robustness and limit of quantification. The method was found to be simple, specific, precise and accurate. Selectivity of the proposed method was validated by subjecting the stock solution of VRT to acidic, basic, photolytic, oxidative and thermal degradation. The calibration curve was found to be linear in the concentration range of 0.1–192 μg mL−1 (R2 = 0.9994). The peaks of degradation products did not interfere with that of pure VRT. The utility of the developed method was examined by analyzing tablets containing VRT. The results of the analysis were subjected to statistical evaluation. PMID:22396908

  7. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
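
    The bias-corrected, transformed-linear fit can be sketched in a few lines: regress log(load) on log(flow) by ordinary least squares, then correct the retransformation bias with a smearing-type factor. The sketch below uses synthetic data and a Duan-style smearing estimator, which is one common choice of bias correction rather than necessarily the report's exact estimator; names and values are illustrative.

```python
import numpy as np

def fit_rating_curve(flow, load):
    """Bias-corrected, transformed-linear rating curve:
    fit log10(load) = b0 + b1*log10(flow) by least squares, then
    estimate a Duan-style 'smearing' factor to correct the bias
    introduced when predictions are retransformed out of log space."""
    x, y = np.log10(flow), np.log10(load)
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    smear = np.mean(10.0 ** resid)   # >= 1 by Jensen's inequality
    return b0, b1, smear

def predict_load(flow, b0, b1, smear):
    """Retransformed prediction with the bias-correction factor applied."""
    return smear * 10.0 ** (b0 + b1 * np.log10(flow))

# synthetic rating data: true power law load = 2 * flow^1.5, lognormal noise
rng = np.random.default_rng(0)
flow = rng.uniform(1.0, 100.0, 200)
load = 2.0 * flow ** 1.5 * 10.0 ** rng.normal(0, 0.1, 200)
b0, b1, smear = fit_rating_curve(flow, load)
```

    The mean load would then come from integrating predict_load over the flow-duration curve, as in the report's flow-duration, rating-curve method.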

  8. A Method for Modeling the Intrinsic Dynamics of Intraindividual Variability: Recovering the Parameters of Simulated Oscillators in Multi-Wave Panel Data.

    ERIC Educational Resources Information Center

    Boker, Steven M.; Nesselroade, John R.

    2002-01-01

    Examined two methods for fitting models of intrinsic dynamics to intraindividual variability data by testing these techniques' behavior in equations through simulation studies. Among the main results is the demonstration that a local linear approximation of derivatives can accurately recover the parameters of a simulated linear oscillator, with…
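
    The local linear approximation (LLA) idea can be illustrated as follows: derivatives are estimated from time-delayed copies of the series, and the oscillator parameters are then recovered by regressing the estimated acceleration on position and velocity. This is a sketch on a noise-free simulated oscillator, not the authors' implementation; the lag, step size, and parameter values are arbitrary illustrative choices.

```python
import numpy as np

def lla_derivatives(x, tau, dt):
    """Local linear approximation (LLA) of derivatives: estimate position,
    velocity, and acceleration at the midpoints of time-delayed triplets
    (x[t-tau], x[t], x[t+tau]) sampled at interval dt."""
    x0, x1, x2 = x[:-2 * tau], x[tau:-tau], x[2 * tau:]
    vel = (x2 - x0) / (2 * tau * dt)               # central first difference
    acc = (x2 - 2 * x1 + x0) / (tau * dt) ** 2     # second difference
    return x1, vel, acc

# simulated undamped linear oscillator: d2x/dt2 = eta*x with eta = -omega^2
dt, omega = 0.01, 2.0
time = np.arange(0.0, 20.0, dt)
x = np.cos(omega * time)

pos, vel, acc = lla_derivatives(x, tau=4, dt=dt)
# recover the frequency parameter eta (and a damping term zeta, here ~0)
# by linear regression of acceleration on position and velocity
A = np.column_stack([pos, vel])
eta, zeta = np.linalg.lstsq(A, acc, rcond=None)[0]
```

    Here eta should come out near -omega^2 = -4 and zeta near zero; with noisy multi-wave panel data, the same regression is applied to LLA estimates pooled across observation windows.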

  9. Application of a local linearization technique for the solution of a system of stiff differential equations associated with the simulation of a magnetic bearing assembly

    NASA Technical Reports Server (NTRS)

    Kibler, K. S.; Mcdaniel, G. A.

    1981-01-01

    A digital local linearization technique was used to solve a system of stiff differential equations which simulate a magnetic bearing assembly. The results prove the technique to be accurate, stable, and efficient when compared to a general purpose variable order Adams method with a stiff option.

  10. Small-Sample DIF Estimation Using Log-Linear Smoothing: A SIBTEST Application. Research Report. ETS RR-07-10

    ERIC Educational Resources Information Center

    Puhan, Gautam; Moses, Tim P.; Yu, Lei; Dorans, Neil J.

    2007-01-01

    The purpose of the current study was to examine whether log-linear smoothing of observed score distributions in small samples results in more accurate differential item functioning (DIF) estimates under the simultaneous item bias test (SIBTEST) framework. Data from a teacher certification test were analyzed using White candidates in the reference…

  11. Mechanisms Inducing Jet Rotation in Shear-Formed Shaped-Charge Liners.

    DTIC Science & Technology

    1990-03-01

    ...of deviatoric strain, and compressibility affects only the equation of state, not the deviatoric stress/strain relation. An anisotropic formulation is... strains, a more accurate scalar equation of state should simultaneously be employed to account for non-linear compressibility effects. ...Elastic... obtainable knowing the previous and present cycles' average stress. However, many non-linear equations...

  12. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    ERIC Educational Resources Information Center

    Atas, Dogu; Karadag, Özge

    2017-01-01

    In the field of education, most of the data collected are multi-level structured. Cities, city based schools, school based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results compared to standard models when the data set has a structure going far as individuals,…

  13. Extending the Coyote emulator to dark energy models with standard w0-wa parametrization of the equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casarini, L.; Bonometto, S.A.; Tessarotto, E.

    2016-08-01

    We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale factor dependent equation of state of the form w = w0 + (1-a)wa. The extension is based on the mapping rule between non-linear spectra of DE models with a constant equation of state and those with a time-varying one, originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to the sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with the w0-wa parametrization. According to the same criteria we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, which can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.

  14. High-throughput quantitative biochemical characterization of algal biomass by NIR spectroscopy; multiple linear regression and multivariate linear regression analysis.

    PubMed

    Laurens, L M L; Wolfrum, E J

    2013-12-18

    One of the challenges associated with microalgal biomass characterization and the comparison of microalgal strains and conversion processes is the rapid determination of the composition of algae. We have developed and applied a high-throughput screening technology based on near-infrared (NIR) spectroscopy for the rapid and accurate determination of algal biomass composition. We show that NIR spectroscopy can accurately predict the full composition using multivariate linear regression analysis of varying lipid, protein, and carbohydrate content of algal biomass samples from three strains. We also demonstrate a high quality of predictions of an independent validation set. A high-throughput 96-well configuration for spectroscopy gives equally good prediction relative to a ring-cup configuration, and thus, spectra can be obtained from as little as 10-20 mg of material. We found that lipids exhibit a dominant, distinct, and unique fingerprint in the NIR spectrum that allows for the use of single and multiple linear regression of respective wavelengths for the prediction of the biomass lipid content. This is not the case for carbohydrate and protein content, and thus, the use of multivariate statistical modeling approaches remains necessary.
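
    The single/multiple linear regression idea — predicting lipid content from absorbance at a few lipid-sensitive wavelengths — can be sketched as below. The bands, noise levels, and data are synthetic stand-ins, not the paper's actual NIR band assignments or calibration set.

```python
import numpy as np

# Sketch: predict algal lipid content (% dry weight) from absorbance at a
# few "lipid-sensitive" NIR wavelengths by multiple linear regression.
rng = np.random.default_rng(42)
n = 120
lipid = rng.uniform(5.0, 40.0, n)                # reference lipid content
band1 = 0.020 * lipid + rng.normal(0, 0.01, n)   # absorbance, lipid band 1
band2 = 0.015 * lipid + rng.normal(0, 0.01, n)   # absorbance, lipid band 2
band3 = rng.normal(0.5, 0.05, n)                 # uninformative band

# multiple linear regression on the selected wavelengths
X = np.column_stack([np.ones(n), band1, band2, band3])
coef, *_ = np.linalg.lstsq(X, lipid, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((lipid - pred) ** 2) / np.sum((lipid - lipid.mean()) ** 2)
```

    For protein and carbohydrate, where no single band dominates, the abstract's point is that a full multivariate calibration over the whole spectrum is needed instead of a few hand-picked wavelengths.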

  15. A new method for the prediction of combustion instability

    NASA Astrophysics Data System (ADS)

    Flanagan, Steven Meville

    This dissertation presents a new approach to the prediction of combustion instability in solid rocket motors. Previous attempts at developing computational tools to solve this problem have been largely unsuccessful, showing very poor agreement with experimental results and having little or no predictive capability. This is due primarily to deficiencies in the linear stability theory upon which these efforts have been based. Recent advances in linear instability theory by Flandro have demonstrated the importance of including unsteady rotational effects, previously considered negligible. Previous versions of the theory also neglected corrections to the unsteady flow field of first order in the mean flow Mach number. This research explores the stability implications of extending the solution to include these corrections. Also, the corrected linear stability theory, based upon a rotational unsteady flow field extended to first order in mean flow Mach number, has been implemented in two computer programs developed for the Macintosh platform. A quasi one-dimensional version of the program has been developed which is based upon an approximate solution to the cavity acoustics problem. The three-dimensional program applies Green's Function Discretization (GFD) to the solution for the acoustic mode shapes and frequency. GFD is a recently developed numerical method for finding fully three-dimensional solutions for this class of problems. The analysis of complex motor geometries, previously a tedious and time consuming task, has also been greatly simplified through the development of a drawing package designed specifically to facilitate the specification of typical motor geometries. The combination of the drawing package, improved acoustic solutions, and new analysis results in a tool which is capable of producing more accurate and meaningful predictions than have been possible in the past.

  16. Region specific optimization of continuous linear attenuation coefficients based on UTE (RESOLUTE): application to PET/MR brain imaging

    NASA Astrophysics Data System (ADS)

    Ladefoged, Claes N.; Benoit, Didier; Law, Ian; Holm, Søren; Kjær, Andreas; Højgaard, Liselotte; Hansen, Adam E.; Andersen, Flemming L.

    2015-10-01

    The reconstruction of PET brain data in a PET/MR hybrid scanner is challenging in the absence of transmission sources, where MR images are used for MR-based attenuation correction (MR-AC). The main challenge of MR-AC is to separate bone and air, as neither have a signal in traditional MR images, and to assign the correct linear attenuation coefficient to bone. The ultra-short echo time (UTE) MR sequence was proposed as a basis for MR-AC as this sequence shows a small signal in bone. The purpose of this study was to develop a new clinically feasible MR-AC method with patient specific continuous-valued linear attenuation coefficients in bone that provides accurate reconstructed PET image data. A total of 164 [18F]FDG PET/MR patients were included in this study, of which 10 were used for training. MR-AC was based on either standard CT (reference), UTE or our method (RESOLUTE). The reconstructed PET images were evaluated in the whole brain, as well as regionally in the brain using a ROI-based analysis. Our method segments air, brain, cerebral spinal fluid, and soft tissue voxels on the unprocessed UTE TE images, and uses a mapping of R2* values to CT Hounsfield Units (HU) to measure the density in bone voxels. The average error of our method in the brain was 0.1% and less than 1.2% in any region of the brain. On average 95% of the brain was within  ±10% of PETCT, compared to 72% when using UTE. The proposed method is clinically feasible, reducing both the global and local errors on the reconstructed PET images, as well as limiting the number and extent of the outliers.

  17. Are ethnic and gender specific equations needed to derive fat free mass from bioelectrical impedance in children of South asian, black african-Caribbean and white European origin? Results of the assessment of body composition in children study.

    PubMed

    Nightingale, Claire M; Rudnicka, Alicja R; Owen, Christopher G; Donin, Angela S; Newton, Sian L; Furness, Cheryl A; Howard, Emma L; Gillings, Rachel D; Wells, Jonathan C K; Cook, Derek G; Whincup, Peter H

    2013-01-01

    Bioelectrical impedance analysis (BIA) is a potentially valuable method for assessing lean mass and body fat levels in children from different ethnic groups. We examined the need for ethnic- and gender-specific equations for estimating fat free mass (FFM) from BIA in children from different ethnic groups and examined their effects on the assessment of ethnic differences in body fat. Cross-sectional study of children aged 8-10 years in London Primary schools including 325 South Asians, 250 black African-Caribbeans and 289 white Europeans with measurements of height, weight and arm-leg impedance (Z; Bodystat 1500). Total body water was estimated from deuterium dilution and converted to FFM. Multilevel models were used to derive three types of equation {A: FFM = linear combination(height+weight+Z); B: FFM = linear combination(height²/Z); C: FFM = linear combination(height²/Z+weight)}. Ethnicity and gender were important predictors of FFM and improved model fit in all equations. The models of best fit were ethnicity and gender specific versions of equation A, followed by equation C; these provided accurate assessments of ethnic differences in FFM and FM. In contrast, the use of generic equations led to underestimation of both the negative South Asian-white European FFM difference and the positive black African-Caribbean-white European FFM difference (by 0.53 kg and by 0.73 kg respectively for equation A). The use of generic equations underestimated the positive South Asian-white European difference in fat mass (FM) and overestimated the positive black African-Caribbean-white European difference in FM (by 4.7% and 10.1% respectively for equation A). Consistent results were observed when the equations were applied to a large external data set. Ethnic- and gender-specific equations for predicting FFM from BIA provide better estimates of ethnic differences in FFM and FM in children, while generic equations can misrepresent these ethnic differences.
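
    Equation C above (FFM as a linear combination of the impedance index height²/Z and weight) amounts to an ordinary least-squares fit against criterion FFM. A minimal sketch on synthetic data follows; the coefficients and value ranges are illustrative stand-ins, not the study's published estimates.

```python
import numpy as np

def fit_ffm_equation_c(height_cm, impedance_ohm, weight_kg, ffm_kg):
    """Fit equation C: FFM = a + b*(height^2/Z) + c*weight by least squares.
    Coefficients a, b, c are whatever the data give; the study additionally
    fits them separately by ethnicity and gender via multilevel models."""
    index = height_cm ** 2 / impedance_ohm         # BIA impedance index
    X = np.column_stack([np.ones_like(index), index, weight_kg])
    coef, *_ = np.linalg.lstsq(X, ffm_kg, rcond=None)
    return coef                                    # [a, b, c]

# purely illustrative synthetic data in child-sized ranges
rng = np.random.default_rng(1)
h = rng.uniform(125.0, 150.0, 300)   # height, cm
z = rng.uniform(500.0, 800.0, 300)   # arm-leg impedance, ohm
w = rng.uniform(25.0, 45.0, 300)     # weight, kg
ffm = 2.0 + 0.6 * h ** 2 / z + 0.2 * w + rng.normal(0, 0.5, 300)

a, b, c = fit_ffm_equation_c(h, z, w, ffm)
```

    Ethnic- and gender-specific versions simply repeat this fit within each subgroup, or add group terms to a multilevel model as the study does.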

  18. Are Ethnic and Gender Specific Equations Needed to Derive Fat Free Mass from Bioelectrical Impedance in Children of South Asian, Black African-Caribbean and White European Origin? Results of the Assessment of Body Composition in Children Study

    PubMed Central

    Nightingale, Claire M.; Rudnicka, Alicja R.; Owen, Christopher G.; Donin, Angela S.; Newton, Sian L.; Furness, Cheryl A.; Howard, Emma L.; Gillings, Rachel D.; Wells, Jonathan C. K.; Cook, Derek G.; Whincup, Peter H.

    2013-01-01

    Background Bioelectrical impedance analysis (BIA) is a potentially valuable method for assessing lean mass and body fat levels in children from different ethnic groups. We examined the need for ethnic- and gender-specific equations for estimating fat free mass (FFM) from BIA in children from different ethnic groups and examined their effects on the assessment of ethnic differences in body fat. Methods Cross-sectional study of children aged 8–10 years in London Primary schools including 325 South Asians, 250 black African-Caribbeans and 289 white Europeans with measurements of height, weight and arm-leg impedance (Z; Bodystat 1500). Total body water was estimated from deuterium dilution and converted to FFM. Multilevel models were used to derive three types of equation {A: FFM = linear combination(height+weight+Z); B: FFM = linear combination(height²/Z); C: FFM = linear combination(height²/Z+weight)}. Results Ethnicity and gender were important predictors of FFM and improved model fit in all equations. The models of best fit were ethnicity and gender specific versions of equation A, followed by equation C; these provided accurate assessments of ethnic differences in FFM and FM. In contrast, the use of generic equations led to underestimation of both the negative South Asian-white European FFM difference and the positive black African-Caribbean-white European FFM difference (by 0.53 kg and by 0.73 kg respectively for equation A). The use of generic equations underestimated the positive South Asian-white European difference in fat mass (FM) and overestimated the positive black African-Caribbean-white European difference in FM (by 4.7% and 10.1% respectively for equation A). Consistent results were observed when the equations were applied to a large external data set. Conclusions Ethnic- and gender-specific equations for predicting FFM from BIA provide better estimates of ethnic differences in FFM and FM in children, while generic equations can misrepresent these ethnic differences. PMID:24204625

  19. Linear SFM: A hierarchical approach to solving structure-from-motion problems by decoupling the linear and nonlinear components

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Huang, Shoudong; Dissanayake, Gamini

    2018-07-01

    This paper presents a novel hierarchical approach to solving structure-from-motion (SFM) problems. The algorithm begins with small local reconstructions based on nonlinear bundle adjustment (BA). These are then joined in a hierarchical manner using a strategy that requires solving a linear least squares optimization problem followed by a nonlinear transform. The algorithm can handle ordered monocular and stereo image sequences. Two stereo images or three monocular images are adequate for building each initial reconstruction. The bulk of the computation involves solving a linear least squares problem and, therefore, the proposed algorithm avoids three major issues associated with most of the nonlinear optimization algorithms currently used for SFM: the need for a reasonably accurate initial estimate, the need for iterations, and the possibility of being trapped in a local minimum. Also, by summarizing all the original observations into the small local reconstructions with associated information matrices, the proposed Linear SFM manages to preserve all the information contained in the observations. The paper also demonstrates that the proposed problem formulation results in a sparse structure that leads to an efficient numerical implementation. The experimental results using publicly available datasets show that the proposed algorithm yields solutions that are very close to those obtained using a global BA starting with an accurate initial estimate. The C/C++ source code of the proposed algorithm is publicly available at https://github.com/LiangZhaoPKUImperial/LinearSFM.

  20. Multidimensional gas chromatography in combination with accurate mass, tandem mass spectrometry, and element-specific detection for identification of sulfur compounds in tobacco smoke.

    PubMed

    Ochiai, Nobuo; Mitsui, Kazuhisa; Sasamoto, Kikuo; Yoshimura, Yuta; David, Frank; Sandra, Pat

    2014-09-05

    A method is developed for identification of sulfur compounds in tobacco smoke extract. The method is based on large volume injection (LVI) of 10 μL of tobacco smoke extract followed by selectable one-dimensional ((1)D) or two-dimensional ((2)D) gas chromatography (GC) coupled to a hybrid quadrupole time-of-flight mass spectrometer (Q-TOF-MS) using electron ionization (EI) and positive chemical ionization (PCI), with parallel sulfur chemiluminescence detection (SCD). In order to identify each individual sulfur compound, sequential heart-cuts of 28 sulfur fractions from (1)D GC to (2)D GC were performed with the three MS detection modes (SCD/EI-TOF-MS, SCD/PCI-TOF-MS, and SCD/PCI-Q-TOF-MS). Thirty sulfur compounds were positively identified by MS library search, linear retention indices (LRI), molecular mass determination using PCI accurate mass spectra, formula calculation using EI and PCI accurate mass spectra, and structure elucidation using collision-activated dissociation (CAD) of the protonated molecule. Additionally, 11 molecular formulas were obtained for unknown sulfur compounds. The determined values of the identified and unknown sulfur compounds were in the range of 10-740 ng/mg total particulate matter (TPM) (RSD: 1.2-12%, n = 3). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  1. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model of microalgae Botryococcus braunii sp. growth by the Least-Squares method. The Monod equation is a non-linear equation that can be transformed into linear form and solved by the Least-Squares linear regression method. Alternatively, the Gauss-Newton method solves the non-linear Least-Squares problem directly, obtaining the Monod parameter values by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for microalgae Botryococcus braunii sp. can be estimated by the Least-Squares method; however, the parameter values obtained by the non-linear Least-Squares method are more accurate than those from the linear Least-Squares method, since the SSE of the non-linear fit is smaller.
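
    The two fitting routes described above can be sketched directly: a linearized (Lineweaver-Burk) least-squares fit of the Monod equation, followed by a Gauss-Newton refinement that minimizes the SSE of the untransformed model. The substrate values, noise level, and true parameters below are synthetic assumptions for illustration.

```python
import numpy as np

# Hypothetical substrate concentrations S and measured specific growth rates mu;
# the true values mu_max = 1.2, K_s = 0.5 are assumptions for this sketch.
rng = np.random.default_rng(0)
S = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
mu = 1.2 * S / (0.5 + S) * (1 + 0.03 * rng.standard_normal(S.size))

def monod(S, mu_max, Ks):
    return mu_max * S / (Ks + S)

# 1) Linearized fit (Lineweaver-Burk): 1/mu = (Ks/mu_max)*(1/S) + 1/mu_max
slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# 2) Gauss-Newton on the untransformed residuals r = mu - monod(S, ...)
theta = np.array([mu_max_lin, Ks_lin])      # warm start from the linear fit
for _ in range(20):
    r = mu - monod(S, *theta)
    J = np.column_stack([S / (theta[1] + S),                    # df/d mu_max
                         -theta[0] * S / (theta[1] + S) ** 2])  # df/d Ks
    theta += np.linalg.solve(J.T @ J, J.T @ r)

sse_lin = np.sum((mu - monod(S, mu_max_lin, Ks_lin)) ** 2)
sse_gn = np.sum((mu - monod(S, *theta)) ** 2)
```

    With noisy data the Gauss-Newton fit attains an SSE no larger than the linearized fit, because the linearization minimizes error in the reciprocal space rather than in the original measurement space.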

  2. Novel semi-automated kidney volume measurements in autosomal dominant polycystic kidney disease.

    PubMed

    Muto, Satoru; Kawano, Haruna; Isotani, Shuji; Ide, Hisamitsu; Horie, Shigeo

    2018-06-01

    We assessed the effectiveness and convenience of a novel semi-automatic kidney volume (KV) measuring high-speed 3D-image analysis system SYNAPSE VINCENT® (Fuji Medical Systems, Tokyo, Japan) for autosomal dominant polycystic kidney disease (ADPKD) patients. We developed a novel semi-automated KV measurement software for patients with ADPKD to be included in the imaging analysis software SYNAPSE VINCENT®. The software extracts renal regions using image recognition software and measures KV (VINCENT KV). The algorithm was designed to work with the manual designation of a long axis of a kidney including cysts. After using the software to assess the predictive accuracy of the VINCENT method, we performed an external validation study and compared accurate KV and ellipsoid KV based on geometric modeling by linear regression analysis and Bland-Altman analysis. Median eGFR was 46.9 mL/min/1.73 m². Median accurate KV, Vincent KV and ellipsoid KV were 627.7, 619.4 mL (IQR 431.5-947.0) and 694.0 mL (IQR 488.1-1107.4), respectively. Compared with ellipsoid KV (r = 0.9504), Vincent KV correlated strongly with accurate KV (r = 0.9968), without systematic underestimation or overestimation (ellipsoid KV: 14.2 ± 22.0%; Vincent KV: -0.6 ± 6.0%). There were no significant slice thickness-specific differences (p = 0.2980). The VINCENT method is an accurate and convenient semi-automatic method to measure KV in patients with ADPKD compared with the conventional ellipsoid method.
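
    The agreement analysis reported above can be reproduced in outline: Bland-Altman statistics quantify systematic under- or overestimation, while linear correlation measures association. The paired volumes below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical paired kidney-volume measurements (mL): a reference method vs.
# a semi-automated method.
ref = np.array([430.0, 520.0, 610.0, 700.0, 850.0, 960.0, 1100.0])
new = np.array([425.0, 530.0, 600.0, 710.0, 845.0, 975.0, 1090.0])

def bland_altman(a, b):
    """Return mean bias and 95% limits of agreement for paired measurements."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

bias, (lo, hi) = bland_altman(new, ref)
r = np.corrcoef(ref, new)[0, 1]   # linear correlation, as in the abstract
```

    A bias near zero with narrow limits of agreement, together with a high r, is the pattern the abstract reports for the VINCENT method against the accurate KV reference.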

  3. A comparison between state-specific and linear-response formalisms for the calculation of vertical electronic transition energy in solution with the CCSD-PCM method.

    PubMed

    Caricato, Marco

    2013-07-28

    The calculation of vertical electronic transition energies of molecular systems in solution with accurate quantum mechanical methods requires the use of approximate and yet reliable models to describe the effect of the solvent on the electronic structure of the solute. The polarizable continuum model (PCM) of solvation represents a computationally efficient way to describe this effect, especially when combined with coupled cluster (CC) methods. Two formalisms are available to compute transition energies within the PCM framework: State-Specific (SS) and Linear-Response (LR). The former provides a more complete account of the solute-solvent polarization in the excited states, while the latter is computationally very efficient (i.e., comparable to gas phase) and transition properties are well defined. In this work, I review the theory for the two formalisms within CC theory with a focus on their computational requirements, and present the first implementation of the LR-PCM formalism with the coupled cluster singles and doubles method (CCSD). Transition energies computed with LR- and SS-CCSD-PCM are presented, as well as a comparison between solvation models in the LR approach. The numerical results show that the two formalisms provide different absolute values of transition energy, but similar relative solvatochromic shifts (from nonpolar to polar solvents). The LR formalism may then be used to explore the solvent effect on multiple states and evaluate transition probabilities, while the SS formalism may be used to refine the description of specific states and for the exploration of excited state potential energy surfaces of solvated systems.

  4. A Planar Quasi-Static Constraint Mode Tire Model

    DTIC Science & Technology

    2015-07-10

    This model strikes a balance between heuristic tire models (such as a linear point-follower) that lack the fidelity to make accurate chassis load predictions and computationally intensive models. UNCLASSIFIED: Distribution Statement A. Cleared for public release. Rui Ma, John B. Ferris.

  5. Excited states with internally contracted multireference coupled-cluster linear response theory.

    PubMed

    Samanta, Pradipta Kumar; Mukherjee, Debashis; Hanauer, Matthias; Köhn, Andreas

    2014-04-07

    In this paper, the linear response (LR) theory for the variant of internally contracted multireference coupled cluster (ic-MRCC) theory described by Hanauer and Köhn [J. Chem. Phys. 134, 204211 (2011)] has been formulated and implemented for the computation of the excitation energies relative to a ground state of pronounced multireference character. We find that straightforward application of the linear-response formalism to the time-averaged ic-MRCC Lagrangian leads to unphysical second-order poles. However, the coupling matrix elements that cause this behavior are shown to be negligible whenever the internally contracted approximation as such is justified. Hence, for the numerical implementation of the method, we adopt a Tamm-Dancoff-type approximation and neglect these couplings. This approximation is also consistent with an equation-of-motion based derivation, which neglects these couplings right from the start. We have implemented the linear-response approach in the ic-MRCC singles-and-doubles framework and applied our method to calculate excitation energies for a number of molecules ranging from CH2 to p-benzyne and conjugated polyenes (up to octatetraene). The computed excitation energies are found to be very accurate, even for the notoriously difficult case of doubly excited states. The ic-MRCC-LR theory is also applicable to systems with open-shell ground-state wavefunctions and is by construction not biased towards a particular reference determinant. We have also compared the linear-response approach to the computation of energy differences by direct state-specific ic-MRCC calculations. We finally compare to Mk-MRCC-LR theory for which spurious roots have been reported [T.-C. Jagau and J. Gauss, J. Chem. Phys. 137, 044116 (2012)], being due to the use of sufficiency conditions to solve the Mk-MRCC equations. No such problem is present in ic-MRCC-LR theory.

  6. Characterization of a signal recording system for accurate velocity estimation using a VISAR

    NASA Astrophysics Data System (ADS)

    Rav, Amit; Joshi, K. D.; Singh, Kulbhushan; Kaushik, T. C.

    2018-02-01

    The linearity of a signal recording system (SRS) in time as well as in amplitude is important for the accurate estimation of the free-surface velocity history of a moving target during shock loading and unloading when measured using optical interferometers such as a velocity interferometer system for any reflector (VISAR). Signal recording being the first step in a long sequence of signal processes, the incorporation of errors due to nonlinearity and low signal-to-noise ratio (SNR) affects the overall accuracy and precision of the velocity-history estimate. In shock experiments the short duration (a few µs) of loading/unloading, the reflectivity of the moving target surface, and the properties of the optical components control the amount of light input to the SRS of a VISAR, and this in turn affects the linearity and SNR of the overall measurement. These factors make it essential to develop in situ procedures for (i) minimizing the effect of signal-induced noise and (ii) determining the linear region of operation of the SRS. Here we report on a procedure for the optimization of SRS parameters such as photodetector gain, optical power, and aperture, so as to achieve a linear region of operation with a high SNR. The linear region of operation so determined has been utilized successfully to estimate the temporal history of the free-surface velocity of the moving target in shock experiments.

  7. Relativistic electron scattering by magnetosonic waves: Effects of discrete wave emission and high wave amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Artemyev, A. V., E-mail: ante0226@gmail.com; Mourenas, D.; Krasnoselskikh, V. V.

    2015-06-15

    In this paper, we study relativistic electron scattering by fast magnetosonic waves. We compare results of test particle simulations and quasi-linear theory for different wave spectra to investigate how a fine structure of the wave emission can influence electron resonant scattering. We show that for a realistically wide distribution of wave normal angles θ (i.e., when the dispersion δθ ≥ 0.5°), relativistic electron scattering is similar for a wide wave spectrum and for a spectrum consisting of well-separated ion cyclotron harmonics. Comparisons of test particle simulations with quasi-linear theory show that for δθ > 0.5°, the quasi-linear approximation describes resonant scattering correctly for a large enough plasma frequency. For a very narrow θ distribution (when δθ ∼ 0.05°), however, the effect of a fine structure in the wave spectrum becomes important. In this case, quasi-linear theory clearly fails to describe electron scattering by fast magnetosonic waves accurately. We also study the effect of high wave amplitudes on relativistic electron scattering. For typical conditions in the Earth's radiation belts, the quasi-linear approximation cannot accurately describe electron scattering for waves with averaged amplitudes >300 pT. We discuss various applications of the obtained results for modeling electron dynamics in the radiation belts and in the Earth's magnetotail.

  8. Highly sensitive and specific colorimetric detection of cancer cells via dual-aptamer target binding strategy.

    PubMed

    Wang, Kun; Fan, Daoqing; Liu, Yaqing; Wang, Erkang

    2015-11-15

    Simple, rapid, sensitive and specific detection of cancer cells is of great importance for early and accurate cancer diagnostics and therapy. By coupling nanotechnology with a dual-aptamer target binding strategy, we developed a colorimetric assay for visually detecting cancer cells with high sensitivity and specificity. The nanotechnology, including the high catalytic activity of PtAuNPs and magnetic separation and concentration, plays a vital role in signal amplification and the improvement of detection sensitivity. The color change caused by a small number of target cancer cells (10 cells/mL) can be clearly distinguished by the naked eye. The dual-aptamer target binding strategy guarantees detection specificity: large numbers of non-cancer cells and different cancer cells (10(4) cells/mL) cannot cause an obvious color change. A detection limit as low as 10 cells/mL with a linear detection range from 10 to 10(5) cells/mL was reached in phosphate buffer solution as well as in a serum sample. The developed enzyme-free and cost-effective colorimetric assay is simple and requires no instrumentation while still providing excellent sensitivity, specificity and repeatability, with potential application in point-of-care cancer diagnosis. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Simulated bi-SQUID Arrays Performing Direction Finding

    DTIC Science & Technology

    2015-09-01

    First, we applied the multiple signal classification (MUSIC) algorithm on linearly polarized signals. We included multiple signals in the output...both of the same frequency and different frequencies. Next, we explored a modified MUSIC algorithm called dimensionality reduction MUSIC (DR-MUSIC)... The MUSIC algorithm is able to determine the AoA from the simulated SQUID data for linearly polarized signals. The MUSIC algorithm could accurately find
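
    For reference, a minimal MUSIC sketch for a uniform linear array and a single narrowband source is shown below; the array geometry, noise level, and signal model are illustrative assumptions and do not reproduce the report's bi-SQUID configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_snapshots, d = 8, 200, 0.5   # d = element spacing in wavelengths
true_deg = 20.0                           # assumed angle of arrival

def steering(theta_deg):
    k = np.arange(n_sensors)
    return np.exp(2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

# Simulated snapshots: one linearly polarized source plus white noise
s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
X = np.outer(steering(true_deg), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / n_snapshots          # sample covariance matrix
w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, :-1]                            # noise subspace (one source assumed)

# MUSIC pseudo-spectrum: peaks where steering vector is orthogonal to En
grid = np.linspace(-90, 90, 1801)
spec = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
aoa = grid[int(np.argmax(spec))]
```

    The peak of the pseudo-spectrum recovers the assumed angle of arrival; multiple sources would require keeping more signal eigenvectors out of the noise subspace.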

  10. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
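
    The reconstruction-plus-upwind structure, and the use of the median function to express the slope constraint, can be sketched for linear advection as follows. This is a generic minmod-limited MUSCL step on a periodic domain, not the paper's full scheme for the Euler equations.

```python
import numpy as np

def median3(a, b, c):
    """Elementwise median of three arrays; minmod(p, q) = median(0, p, q)."""
    return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

def muscl_step(u, c):
    """One explicit MUSCL step for u_t + a u_x = 0 (a > 0), CFL c in (0, 1]."""
    dl = u - np.roll(u, 1)                 # backward differences
    dr = np.roll(u, -1) - u                # forward differences
    s = median3(np.zeros_like(u), dl, dr)  # limited slopes via the median
    ul = u + 0.5 * (1.0 - c) * s           # reconstructed states at i+1/2
    return u - c * (ul - np.roll(ul, 1))   # upwind update (periodic domain)

u = np.where(np.arange(100) < 50, 1.0, 0.0)   # step profile
for _ in range(40):
    u = muscl_step(u, 0.5)
```

    With the minmod slopes the update is monotonicity-preserving: the advected step stays inside its initial bounds and the total mass is conserved on the periodic grid.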

  11. The instantaneous linear motion information measurement method based on inertial sensors for ships

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Huang, Jing; Gao, Chen; Quan, Wei; Li, Ming; Zhang, Yanshun

    2018-05-01

    Ship instantaneous linear motion information is an important foundation for ship control, and it needs to be measured accurately. For this purpose, an instantaneous linear motion measurement method based on inertial sensors is put forward for ships. By introducing a half-fixed coordinate system to separate the instantaneous linear motion from the ship's master movement, the instantaneous linear motion acceleration of ships can be obtained with higher accuracy. Then, a digital high-pass filter is applied to suppress the velocity error caused by low-frequency signals such as the Schuler period. Finally, the instantaneous linear motion displacement of ships can be measured accurately. Simulation experimental results show that the method is reliable and effective, and can realize the precise measurement of velocity and displacement of instantaneous linear motion for ships.
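
    The drift-suppression idea can be illustrated with a first-order digital high-pass filter applied to a signal containing both fast motion and a slowly accumulating error; the cutoff, signal, and drift below are illustrative assumptions, not the paper's filter design.

```python
import numpy as np

def highpass(x, alpha):
    """First-order digital high-pass: y[n] = alpha*(y[n-1] + x[n] - x[n-1]).
    alpha close to 1 gives a low cutoff, passing fast motion and rejecting
    slow drift."""
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

t = np.linspace(0, 10, 2001)               # 200 Hz sampling, illustrative
fast = np.sin(2 * np.pi * 5.0 * t)         # instantaneous motion of interest
drift = 0.5 * t                            # slow error accumulating over time
y = highpass(fast + drift, alpha=0.99)
```

    After the transient settles, the 5 Hz component passes nearly unchanged while the ramp-like drift is reduced to a small constant offset, which is the behavior exploited to clean the integrated velocity.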

  12. Charge-based MOSFET model based on the Hermite interpolation polynomial

    NASA Astrophysics Data System (ADS)

    Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt

    2017-04-01

    An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
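
    The third-order Hermite interpolation used above matches both function values and derivatives at the interval endpoints. The generic sketch below verifies this on a cubic, which the interpolant reproduces exactly; the MOSFET model's actual surface-potential/charge relation is not reproduced here.

```python
import numpy as np

def hermite3(t, p0, p1, m0, m1):
    """Cubic Hermite interpolant on [0, 1] with endpoint values p0, p1 and
    endpoint derivatives m0, m1."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# Interpolate f(x) = x**3 on [0, 1]: endpoint values 0, 1; derivatives 0, 3.
t = np.linspace(0, 1, 11)
approx = hermite3(t, 0.0, 1.0, 0.0, 3.0)
```

    Because the interpolant is the unique cubic with those endpoint values and derivatives, it reproduces any cubic exactly, which is what makes it attractive for smooth charge-potential relations without crude linearization.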

  13. Fast and local non-linear evolution of steep wave-groups on deep water: A comparison of approximate models to fully non-linear simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adcock, T. A. A.; Taylor, P. H.

    2016-01-15

    The non-linear Schrödinger equation and its higher order extensions are routinely used for analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger equation accurately models the relatively large scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as extreme waves form, there is a local non-linear contraction of the wave-group around the crest which leads to a localised broadening of the wave spectrum which the bandwidth-limited non-linear Schrödinger equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum.
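
    The lowest-order member of the model hierarchy discussed above, the cubic non-linear Schrödinger equation, is commonly integrated with a split-step Fourier method. The sketch below evolves a sech soliton of i u_t + (1/2) u_xx + |u|² u = 0; the grid and step sizes are illustrative, and both substeps are pure phase rotations, so the L2 norm is conserved.

```python
import numpy as np

n, L, dt, steps = 256, 40.0, 0.01, 200
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular wavenumbers
u = (1.0 / np.cosh(x)).astype(complex)        # sech soliton initial condition

norm0 = np.sum(np.abs(u) ** 2) * (L / n)
for _ in range(steps):
    # Strang splitting: half linear step, full nonlinear step, half linear step
    u = np.fft.ifft(np.exp(-0.5j * k**2 * (dt / 2)) * np.fft.fft(u))
    u = u * np.exp(1j * np.abs(u) ** 2 * dt)  # |u| is constant in this substep
    u = np.fft.ifft(np.exp(-0.5j * k**2 * (dt / 2)) * np.fft.fft(u))
norm = np.sum(np.abs(u) ** 2) * (L / n)
```

    For this equation the sech profile is a soliton, so its amplitude also stays essentially constant over the integration, a useful sanity check on the solver.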

  14. Capture cross sections on unstable nuclei

    NASA Astrophysics Data System (ADS)

    Tonchev, A. P.; Escher, J. E.; Scielzo, N.; Bedrossian, P.; Ilieva, R. S.; Humby, P.; Cooper, N.; Goddard, P. M.; Werner, V.; Tornow, W.; Rusev, G.; Kelley, J. H.; Pietralla, N.; Scheck, M.; Savran, D.; Löher, B.; Yates, S. W.; Crider, B. P.; Peters, E. E.; Tsoneva, N.; Goriely, S.

    2017-09-01

    Accurate neutron-capture cross sections on unstable nuclei near the line of beta stability are crucial for understanding the s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure due to the fact that the measurements require both highly radioactive samples and intense neutron sources. Essential ingredients for describing the γ decays following neutron capture are the γ-ray strength function and level densities. We will compare different indirect approaches for obtaining the most relevant observables that can constrain Hauser-Feshbach statistical-model calculations of capture cross sections. Specifically, we will consider photon scattering using monoenergetic and 100% linearly polarized photon beams. Challenges that exist on the path to obtaining neutron-capture cross sections for reactions on isotopes near and far from stability will be discussed.

  15. A summary of the OV1-19 satellite dose, depth dose, and linear energy transfer spectral measurements

    NASA Technical Reports Server (NTRS)

    Cervini, J. T.

    1972-01-01

    Measurements of the biophysical and physical parameters in the near earth space environment, specifically, the Inner Van Allen Belt are discussed. This region of space is of great interest to planners of the Skylab and the Space Station programs because of the high energy proton environment, especially during periods of increased solar activity. Many physical measurements of charged particle flux, spectra, and pitch angle distribution have been conducted and are programmed in the space radiation environment. Such predictions are not sufficient to accurately predict the effects of space radiations on critical biological and electronic systems operating in these environments. Some of the difficulties encountered in transferring from physical data to a prediction of the effects of space radiation on operational systems are discussed.

  16. [Simultaneous determination of five active constitutents in Xiaochaihu Tang by HPLC].

    PubMed

    Liu, Qingchun; Zhao, Junning; Yan, Liangchun; Yi, Jinhai; Song, Jun

    2010-03-01

    To establish an HPLC-PDA method for the determination of baicalin, wogonoside, baicalein, wogonin and glycyrrhizic acid in Xiaochaihu Tang. A Symmetry Shield RP18 column (4.6 mm × 250 mm, 5.0 μm) was used with a mobile phase of acetonitrile-0.01% H3PO4 in gradient elution. The detection wavelength was 251 nm, the flow rate was 0.45 mL·min⁻¹ and the column temperature was maintained at 30 °C. The accuracy, precision, sensitivity, specificity and linearity of this method met the requirements. The contents of the five effective fractions were determined simultaneously. The method is rapid, simple and accurate, and it is suitable for the simultaneous determination of baicalin, wogonoside, baicalein, wogonin and glycyrrhizic acid in Xiaochaihu Tang.

  17. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, Peter G., E-mail: maginot1@llnl.gov; Ragusa, Jean C., E-mail: jean.ragusa@tamu.edu; Morel, Jim E., E-mail: morel@tamu.edu

    This work presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.
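
    The time integration named above can be illustrated with the simplest member of the family: a two-stage, second-order, L-stable singly diagonally implicit Runge-Kutta (SDIRK2) method applied to a linear scalar test equation. This is a generic sketch, not the radiative-transfer solver itself.

```python
import numpy as np

# SDIRK2 (Alexander): a11 = gamma; a21 = 1-gamma, a22 = gamma; b = (1-gamma, gamma)
gamma = 1.0 - 1.0 / np.sqrt(2.0)

def sdirk2_step(y, lam, dt):
    """One SDIRK2 step for y' = lam*y. Each stage k_i = lam*(y + dt*sum a_ij k_j)
    is implicit only in its own diagonal entry, so for a linear problem each
    stage reduces to a single scalar solve."""
    k1 = lam * y / (1.0 - dt * lam * gamma)
    k2 = lam * (y + dt * (1.0 - gamma) * k1) / (1.0 - dt * lam * gamma)
    return y + dt * ((1.0 - gamma) * k1 + gamma * k2)

lam, dt, y = -2.0, 0.01, 1.0
for _ in range(100):                  # integrate to t = 1
    y = sdirk2_step(y, lam, dt)
exact = np.exp(lam * 1.0)
```

    The single repeated diagonal coefficient is what makes DIRK methods attractive for transport: each stage reuses the same implicit operator, and this particular tableau is L-stable, so stiff decay modes are damped without oscillation.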

  19. STUDIES ON ANALYTICAL METHODS FOR TRACE ELEMENTS IN METALS BY USING RADIOACTIVE ISOTOPE. III. DETERMINATION OF TANTALUM BY MEANS OF ISOTOPE DILUTION METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amano, H.

    1959-10-01

    The determination of tantalum by the isotope dilution method in the presence of niobium was investigated using the radioisotope Ta-185. Tantalum was separated from niobium as a tantalum-tannin precipitate under the optimum conditions of a pH of 1.9 to 2.5 and a tantalum/niobium ratio of up to 1/50. If niobium was present in amounts 100 times or more that of tantalum, reprecipitation was needed. The reciprocal of the specific activity of the tantalum pentoxide precipitate was in a linear relation to the change in the amount of tantalum added. The recommended method gave an accurate result in the determination of tantalum in steel.

  20. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE PAGES

    Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.

    2016-09-29

    This paper presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.

  1. Improving the sensitivity and specificity of a bioanalytical assay for the measurement of certolizumab pegol.

    PubMed

    Smeraglia, John; Silva, John-Paul; Jones, Kieran

    2017-08-01

    In order to evaluate placental transfer of certolizumab pegol (CZP), a more sensitive and selective bioanalytical assay was required to accurately measure low CZP concentrations in infant and umbilical cord blood. Results & methodology: A new electrochemiluminescence immunoassay was developed to measure CZP levels in human plasma. Validation experiments demonstrated improved selectivity (no matrix interference observed) and a detection range of 0.032-5.0 μg/ml. Accuracy and precision met acceptance criteria (mean total error ≤20.8%). Dilution linearity and sample stability were acceptable and sufficient to support the method. The electrochemiluminescence immunoassay was validated for measuring low CZP concentrations in human plasma. The method demonstrated a more than tenfold increase in sensitivity compared with previous assays, and improved selectivity for intact CZP.

  2. Determination of Betaine in Jujube by Capillary Electrophoresis

    NASA Astrophysics Data System (ADS)

    Han, Likun; Liu, Haixing; Peng, Xuewei

    2017-12-01

    This paper presents the determination of betaine content in jujube by a high performance capillary electrophoresis (HPCE) method. Borax solution was chosen as the buffer solution, at a concentration of 40 mmol/L, with a constant voltage of 20 kV and an injection pressure time of 10 s at 14 °C. Linearity was maintained over the concentration range of 0.0113-1.45 mg of betaine with a correlation coefficient of 0.9. The content of betaine in jujube was 85.91 mg/g (RSD = 16.6%, n = 6). The recovery of betaine in the jujube sample was in the range of 86.2%-116.6% (n = 3). This method is specific, simple, rapid and accurate, and is suitable for the detection of betaine content in jujube.
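
    The validation quantities quoted in assays like this one and the two that follow (linearity, recovery, RSD) come from a standard external-calibration workflow, sketched below with invented numbers rather than the paper's data.

```python
import numpy as np

conc = np.array([0.05, 0.1, 0.3, 0.6, 1.0, 1.45])         # standard amounts (mg)
area = np.array([10.2, 20.5, 61.0, 122.5, 203.0, 295.0])  # peak areas (arbitrary)

slope, intercept = np.polyfit(conc, area, 1)   # linear calibration curve
r = np.corrcoef(conc, area)[0, 1]              # linearity check

def measured(a):
    """Back-calculate amount from a peak area via the calibration curve."""
    return (a - intercept) / slope

# Spike recovery: a 0.50 mg spike whose measured area is slightly perturbed
spiked = 0.50
found = measured(slope * spiked + intercept + 1.5)
recovery = 100.0 * found / spiked              # percent recovery

# Repeatability: replicate injections of one sample, reported as RSD (%)
reps = measured(np.array([203.0, 205.1, 201.2, 204.0, 202.3, 206.0]))
rsd = 100.0 * reps.std(ddof=1) / reps.mean()
```

    Recovery near 100% and a small RSD are what justify calling such a method accurate and repeatable over its stated linear range.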

  3. Determination of Betaine in Lycii Cortex by Capillary Electrophoresis

    NASA Astrophysics Data System (ADS)

    Peng, Xuewei; Liu, Haixing

    2017-12-01

    This paper presents the determination of betaine content in Lycii Cortex by a high performance capillary electrophoresis (HPCE) method. Borax solution was chosen as the buffer solution, at a concentration of 40 mmol/L, with a constant voltage of 20 kV and an injection pressure time of 10 s at 14 °C. Linearity was maintained over the concentration range of 0.0113-1.45 mg of betaine with a correlation coefficient of 0.9. The content of betaine in Lycii Cortex was 61.9 mg/g (RSD = 13.4%, n = 7). The recovery was in the range of 86.6%-118.1% (n = 4). This method is specific, simple, rapid and accurate, and is suitable for the detection of betaine content in Lycii Cortex.

  4. Determination of Betaine in Lycium Barbarum L. by High Performance Capillary Electrophoresis

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Wang, Chunyan; Peng, Xuewei

    2017-12-01

    This paper presents the determination of betaine content in Lycium barbarum L. by a high performance capillary electrophoresis (HPCE) method. Borax solution was chosen as the buffer solution, at a concentration of 40 mmol/L, with a constant voltage of 20 kV and an injection pressure time of 10 s at 20 °C. Linearity was maintained over the concentration range of 0.0113-1.45 mg of betaine with a correlation coefficient of 0.9. The recovery was in the range of 97.95%-126% (n = 4). The sample content of betaine was 29.3 mg/g (RSD = 6.4%, n = 6). This method is specific, simple, rapid and accurate, and is suitable for the detection of betaine content in Lycium barbarum L.

  5. Force-Field Prediction of Materials Properties in Metal-Organic Frameworks

    PubMed Central

    2016-01-01

    In this work, MOF bulk properties are evaluated and compared using several force fields on several well-studied MOFs, including IRMOF-1 (MOF-5), IRMOF-10, HKUST-1, and UiO-66. It is found that, surprisingly, UFF and DREIDING provide good values for the bulk modulus and linear thermal expansion coefficients for these materials, excluding those materials they are not parametrized for. Force fields developed specifically for MOFs, including UFF4MOF, BTW-FF, and the DWES force field, are also found to provide accurate values for these materials’ properties. While we find that each force field offers a moderately good picture of these properties, noticeable deviations can be observed when looking at properties sensitive to framework vibrational modes. This observation is more pronounced upon the introduction of framework charges. PMID:28008758

  6. In vivo classification of human skin burns using machine learning and quantitative features captured by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Singla, Neeru; Srivastava, Vishal; Singh Mehta, Dalip

    2018-02-01

    We report the first fully automated detection of human skin burn injuries in vivo, with the goal of automatic surgical margin assessment based on optical coherence tomography (OCT) images. Our proposed automated procedure entails building a machine-learning-based classifier by extracting quantitative features from normal and burn tissue images recorded by OCT. In this study, 56 samples (28 normal, 28 burned) were imaged by OCT and eight features were extracted. A linear model classifier was trained using 34 samples and 22 samples were used to test the model. Sensitivity of 91.6% and specificity of 90% were obtained. Our results demonstrate the capability of a computer-aided technique for accurately and automatically identifying burn tissue resection margins during surgical treatment.
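A minimal sketch of this workflow, with synthetic features standing in for the OCT-derived ones: train a linear classifier on a 34-sample training set and report sensitivity and specificity on the 22 held-out samples. The regularised least-squares linear model is our own stand-in, since the abstract does not specify which linear classifier was used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two tissue classes described by 8 quantitative features, split 34/22 into
# train and test sets. The feature values are synthetic; only the sample
# counts and workflow mirror the abstract.
n_feat = 8
burn = rng.normal(1.0, 1.0, size=(28, n_feat))
normal = rng.normal(-1.0, 1.0, size=(28, n_feat))

X_train = np.vstack([burn[:17], normal[:17]])
y_train = np.array([1] * 17 + [0] * 17)          # 1 = burn, 0 = normal
X_test = np.vstack([burn[17:], normal[17:]])
y_test = np.array([1] * 11 + [0] * 11)

# Linear model via regularised least squares on +/-1 targets.
A = np.hstack([X_train, np.ones((34, 1))])
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_feat + 1),
                    A.T @ (2.0 * y_train - 1.0))

pred = (np.hstack([X_test, np.ones((22, 1))]) @ w) > 0
sensitivity = np.mean(pred[y_test == 1])          # true-positive rate
specificity = np.mean(~pred[y_test == 0])         # true-negative rate
print(sensitivity, specificity)
```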

  7. Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data

    NASA Technical Reports Server (NTRS)

    Brasunas, J.; Mamoutkine, A.; Gorius, N.

    2016-01-01

    Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.
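For context, the classical two-target complex calibration that such parametric models refine (often attributed to Revercomb et al.) can be sketched as follows; the gain, offset, and radiances below are synthetic. The complex ratio cancels the unknown instrument gain and offset:

```python
import numpy as np

# Synthetic instrument model: complex gain G and offset O are unknown to the
# calibration; only the hot/cold target radiances are known.
n = 64
G = (1.5 + 0.3j) * np.exp(1j * np.linspace(0, 0.5, n))   # unknown gain/phase
O = 0.2 + 0.05j                                          # unknown offset

B_hot, B_cold = 10.0, 2.0              # known calibration-target radiances
L_true = np.linspace(3.0, 8.0, n)      # unknown scene radiance

C_hot = G * B_hot + O                  # measured (complex) raw spectra
C_cold = G * B_cold + O
C_scene = G * L_true + O

# The ratio of differences cancels G and O exactly for this linear model.
L_est = ((C_scene - C_cold) / (C_hot - C_cold)) * (B_hot - B_cold) + B_cold
print(np.max(np.abs(L_est - L_true)))
```

In the abstract's scenario the three targets are recorded at different instrument temperatures, so G and O are not truly common to all three measurements, which is precisely what the proposed parametric model corrects for.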

  8. Energy-switching potential energy surface for the water molecule revisited: A highly accurate singled-sheeted form.

    PubMed

    Galvão, B R L; Rodrigues, S P J; Varandas, A J C

    2008-07-28

    A global ab initio potential energy surface is proposed for the water molecule by energy-switching/merging a highly accurate isotope-dependent local potential function reported by Polyansky et al. [Science 299, 539 (2003)] with a global form of the many-body expansion type suitably adapted to account explicitly for the dynamical correlation and parametrized from extensive accurate multireference configuration interaction energies extrapolated to the complete basis set limit. The new function mimics also the complicated Sigma/Pi crossing that arises at linear geometries of the water molecule.

  9. Back in the saddle: large-deviation statistics of the cosmic log-density field

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.

    2016-08-01

    We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.

  10. A new accurate quadratic equation model for isothermal gas chromatography and its comparison with the linear model

    PubMed Central

    Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.

    2013-01-01

    The gas holdup time (tM) is a dominant parameter in gas chromatographic retention models. The difference equation (DE) model proposed by Wu et al. (J. Chromatogr. A 2012, http://dx.doi.org/10.1016/j.chroma.2012.07.077) excluded tM. In the present paper, we propose that the relationship between the adjusted retention time t′Rz and carbon number z of n-alkanes follows a quadratic equation (QE) when an accurate tM is obtained. This QE model is the same as or better than the DE model for accurately expressing the retention behavior of n-alkanes and for model applications. The QE model covers a larger range of n-alkanes with better curve fittings than the linear equation (LE) model. The accuracy of the QE model was approximately 2–6 times better than that of the DE model and 18–540 times better than that of the LE model. Standard deviations of the QE model were approximately 2–3 times smaller than those of the DE model. PMID:22989489
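The model comparison can be sketched as follows: fit a quadratic (QE) and a linear (LE) polynomial to adjusted retention times versus carbon number and compare residual standard deviations. The retention data are synthetic, generated from idealised isothermal behaviour, not the paper's measurements.

```python
import numpy as np

# Mock isothermal retention data: adjusted retention time grows roughly
# exponentially with carbon number, so neither polynomial is exact, but the
# quadratic tracks the curvature that the linear model misses.
z = np.arange(6, 16, dtype=float)        # carbon numbers C6 to C15
t_adj = 0.05 * np.exp(0.45 * z)          # mock adjusted retention times (min)

qe = np.polyfit(z, t_adj, 2)             # quadratic (QE) model
le = np.polyfit(z, t_adj, 1)             # linear (LE) model
res_qe = t_adj - np.polyval(qe, z)
res_le = t_adj - np.polyval(le, z)

sd = lambda r: np.sqrt(np.mean(r ** 2))  # residual standard deviation
print(sd(res_qe) < sd(res_le))
```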

  11. Forecasting daily patient volumes in the emergency department.

    PubMed

    Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L

    2008-02-01

    Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. 
This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
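The benchmark method endorsed above, multiple linear regression on calendar variables, can be sketched with day-of-week dummies; the weekly volume pattern and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# One year of synthetic daily ED arrivals with a weekly pattern plus noise.
n_days = 365
dow = np.arange(n_days) % 7
weekly = np.array([120, 100, 95, 93, 94, 98, 110], dtype=float)  # Mon..Sun (assumed labels)
volume = weekly[dow] + rng.normal(0, 5, n_days)

# Design matrix: intercept + 6 day-of-week dummies (day 0 as reference).
X = np.column_stack([np.ones(n_days)] +
                    [(dow == d).astype(float) for d in range(1, 7)])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)

fitted_monday = beta[0]                 # reference-day mean volume
fitted_tuesday = beta[0] + beta[1]      # reference + dummy effect
print(round(fitted_monday, 1), round(fitted_tuesday, 1))
```

A production model of the kind the study recommends would add site-specific special-day (holiday) indicators and a term for residual autocorrelation.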

  12. Melanoma metastases in regional lymph nodes are accurately detected by proton magnetic resonance spectroscopy of fine-needle aspirate biopsy samples.

    PubMed

    Stretch, Jonathan R; Somorjai, Ray; Bourne, Roger; Hsiao, Edward; Scolyer, Richard A; Dolenko, Brion; Thompson, John F; Mountford, Carolyn E; Lean, Cynthia L

    2005-11-01

    Nonsurgical assessment of sentinel nodes (SNs) would offer advantages over surgical SN excision by reducing morbidity and costs. Proton magnetic resonance spectroscopy (MRS) of fine-needle aspirate biopsy (FNAB) specimens identifies melanoma lymph node metastases. This study was undertaken to determine the accuracy of the MRS method and thereby establish a basis for the future development of a nonsurgical technique for assessing SNs. FNAB samples were obtained from 118 biopsy specimens from 77 patients during SN biopsy and regional lymphadenectomy. The specimens were histologically evaluated and correlated with MRS data. Histopathologic analysis established that 56 specimens contained metastatic melanoma and that 62 specimens were benign. A linear discriminant analysis-based classifier was developed for benign tissues and metastases. The presence of metastatic melanoma in lymph nodes was predicted with a sensitivity of 92.9%, a specificity of 90.3%, and an accuracy of 91.5% in a primary data set. In a second data set that used FNAB samples separate from the original tissue samples, melanoma metastases were predicted with a sensitivity of 87.5%, a specificity of 90.3%, and an accuracy of 89.1%, thus supporting the reproducibility of the method. Proton MRS of FNAB samples may provide a robust and accurate diagnosis of metastatic disease in the regional lymph nodes of melanoma patients. These data indicate the potential for SN staging of melanoma without surgical biopsy and histopathological evaluation.

  13. Optimal control of build height utilizing optical profilometry in cold spray deposits

    NASA Astrophysics Data System (ADS)

    Chakraborty, Abhijit; Shishkin, Sergey; Birnkrant, Michael J.

    2017-04-01

    Part-to-part variability and poor part quality due to failure to maintain geometric specifications pose a challenge for adopting Additive Manufacturing (AM) as a viable manufacturing process. In recent years, In-process Monitoring and Control (InPMC) has received considerable attention as an approach to overcoming these obstacles. The ability to sense the geometry of the deposited layers accurately enables effective process monitoring and control in AM applications. This paper demonstrates an application of a geometry sensing technique to the Cold Spray coating deposition process, in which solid powders are accelerated through a nozzle, collide with the substrate, and adhere to it. The deposited surface often has shape irregularities. This paper proposes an approach to suppress these irregularities by controlling the deposition height. An analytical control-oriented model is developed that expresses the resulting deposit height as an integral function of nozzle velocity and angle. To obtain height information at each layer, a Micro-Epsilon laser line scanner was used for surface profiling after each deposition. This surface profile information, specifically the layer height, was then fed back to an optimal control algorithm that manipulated the nozzle speed to drive the layer height to a pre-specified value. While the problem is heavily non-linear, we were able to transform it into an equivalent optimal control problem that is linear with respect to the input. This enabled the development of two solution methods: one fast and approximate, the other more accurate but still efficient.

  14. Warping of a computerized 3-D atlas to match brain image volumes for quantitative neuroanatomical and functional analysis

    NASA Astrophysics Data System (ADS)

    Evans, Alan C.; Dai, Weiqian; Collins, D. Louis; Neelin, Peter; Marrett, Sean

    1991-06-01

    We describe the implementation, experience and preliminary results obtained with a 3-D computerized brain atlas for topographical and functional analysis of brain sub-regions. A volume-of-interest (VOI) atlas was produced by manual contouring on 64 adjacent 2 mm-thick MRI slices to yield 60 brain structures in each hemisphere which could be adjusted, originally by global affine transformation or local interactive adjustments, to match individual MRI datasets. We have now added a non-linear deformation (warp) capability (Bookstein, 1989) into the procedure for fitting the atlas to the brain data. Specific target points are identified in both atlas and MRI spaces which define a continuous 3-D warp transformation that maps the atlas on to the individual brain image. The procedure was used to fit MRI brain image volumes from 16 young normal volunteers. Regional volume and positional variability were determined, the latter in such a way as to assess the extent to which previous linear models of brain anatomical variability fail to account for the true variation among normal individuals. Using a linear model for atlas deformation yielded 3-D fits of the MRI data which, when pooled across subjects and brain regions, left a residual mis-match of 6 - 7 mm as compared to the non-linear model. The results indicate a substantial component of morphometric variability is not accounted for by linear scaling. This has profound implications for applications which employ stereotactic coordinate systems which map individual brains into a common reference frame: quantitative neuroradiology, stereotactic neurosurgery and cognitive mapping of normal brain function with PET. In the latter case, the combination of a non-linear deformation algorithm would allow for accurate measurement of individual anatomic variations and the inclusion of such variations in inter-subject averaging methodologies used for cognitive mapping with PET.
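The non-linear deformation cited above is Bookstein's thin-plate spline warp. A minimal 2D sketch follows; the real atlas fit is 3D and uses anatomical target points, whereas the landmarks below are made up. The spline maps each source landmark exactly onto its target.

```python
import numpy as np

def tps_warp(src, dst):
    """Return a 2D thin-plate spline mapping src landmarks exactly to dst."""
    n = len(src)
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    # TPS kernel U(r) = r^2 log r^2, with U(0) = 0 handled explicitly.
    K = np.where(d2 > 0, d2 * np.log(np.maximum(d2, 1e-300)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])          # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    params = np.linalg.solve(A, np.vstack([dst, np.zeros((3, 2))]))
    w, a = params[:n], params[n:]

    def f(pts):
        d2p = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
        U = np.where(d2p > 0, d2p * np.log(np.maximum(d2p, 1e-300)), 0.0)
        return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return f

# Made-up landmark pairs: a unit square plus its centre, slightly displaced.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
dst = src + np.array([[0., 0.], [.1, 0.], [0., .1], [.1, .1], [.05, .08]])
warp = tps_warp(src, dst)
print(np.max(np.abs(warp(src) - dst)))
```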

  15. Ultrasonographic Evaluation of Zone II Partial Flexor Tendon Lacerations of the Fingers: A Cadaveric Study.

    PubMed

    Kazmers, Nikolas H; Gordon, Joshua A; Buterbaugh, Kristen L; Bozentka, David J; Steinberg, David R; Khoury, Viviane

    2018-04-01

    Accurate assessment of zone II partial flexor tendon lacerations in the finger is clinically important. Surgical repair is recommended for lacerations of greater than 50% to 60%. Our goal was to evaluate ultrasonographic test characteristics and accuracy in identifying partial flexor tendon lacerations in a cadaveric model. From fresh-frozen above-elbow human cadaveric specimens, 32 flexor digitorum profundus tendons were randomly selected to remain intact or receive low- or high-grade lacerations involving 10% to 40% and 60% to 90% of the radioulnar width within Verdan Zone II, respectively. Static and dynamic ultrasonography using a linear array 14-MHz transducer was performed by a blinded musculoskeletal radiologist. Sensitivities, specificities, and other standard test performance metrics were calculated. Actual and measured percentages of tendon laceration were compared by the paired t test. After randomization, 24 tendons were lacerated (12 low- and 12 high-grade), whereas 8 remained intact. The sensitivity and specificity in detecting the presence versus absence of a partial laceration were 0.54 and 0.75, respectively, with positive and negative likelihood ratio values of 2.17 and 0.61. For low-grade lacerations, the sensitivity and specificity were 0.25 and 0.85, compared to 0.83 and 0.85 for high-grade lacerations. Ultrasonography underestimated the percentage of tendon involvement by a mean of 18.1% for the study population as a whole (95% confidence interval, 9.0% to 27.2%; P < .001) but accurately determined the extent for correctly diagnosed high-grade lacerations (-6.7%; 95% confidence interval, -18.7% to 5.2%; P = .22). Ultrasonography was useful in identifying and characterizing clinically relevant high-grade zone II partial flexor digitorum profundus lacerations in a cadaveric model. © 2017 by the American Institute of Ultrasound in Medicine.

  16. Development of a method to estimate organ doses for pediatric CT examinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papadakis, Antonios E., E-mail: apapadak@pagni.gr; Perisinakis, Kostas; Damilakis, John

    Purpose: To develop a method for estimating doses to primarily exposed organs in pediatric CT by taking into account patient size and automatic tube current modulation (ATCM). Methods: A Monte Carlo CT dosimetry software package, which creates patient-specific voxelized phantoms, accurately simulates CT exposures, and generates dose images depicting the energy imparted on the exposed volume, was used. Routine head, thorax, and abdomen/pelvis CT examinations in 92 pediatric patients, ranging from 1 month to 14 yr old (49 boys and 43 girls), were simulated on a 64-slice CT scanner. Two sets of simulations were performed in each patient using (i) a fixed tube current (FTC) value over the entire examination length and (ii) the ATCM profile extracted from the DICOM header of the reconstructed images. Organ dose normalized to CTDIvol was derived for all primarily irradiated radiosensitive organs. Normalized dose data were correlated to the patient’s water equivalent diameter using log-transformed linear regression analysis. Results: The maximum percent difference in normalized organ dose between FTC and ATCM acquisitions was 10% for the eyes in head, 26% for the thymus in thorax, and 76% for the kidneys in abdomen/pelvis examinations. In most organs, the correlation between dose and water equivalent diameter was significantly improved in ATCM compared to FTC acquisitions (P < 0.001). Conclusions: The proposed method employs size-specific CTDIvol-normalized organ dose coefficients for ATCM-activated and FTC acquisitions in pediatric CT. These coefficients are substantially different between the ATCM and FTC modes of operation and enable a more accurate assessment of patient-specific organ dose in the clinical setting.
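The fitting step described in the Methods can be sketched as a linear regression on log-transformed normalised dose versus water-equivalent diameter; the coefficients and patient sizes below are hypothetical, for illustration only.

```python
import numpy as np

# CTDIvol-normalised organ dose falls off roughly exponentially with patient
# size, so the regression is linear in log space: ln(dose) = ln(a) + b * Dw.
d_w = np.array([10., 14., 18., 22., 26., 30.])   # water-equivalent diameter, cm
norm_dose = 2.4 * np.exp(-0.045 * d_w)           # mock normalised organ dose

b, ln_a = np.polyfit(d_w, np.log(norm_dose), 1)  # slope b, intercept ln(a)
a = np.exp(ln_a)

# Dose coefficient predicted for a patient with Dw = 24 cm:
predicted = a * np.exp(b * 24.0)
print(round(a, 3), round(b, 4), round(predicted, 4))
```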

  17. Quantitation of specific binding ratio in 123I-FP-CIT SPECT: accurate processing strategy for cerebral ventricular enlargement with use of 3D-striatal digital brain phantom.

    PubMed

    Furuta, Akihiro; Onishi, Hideo; Amijima, Hizuru

    2018-06-01

    This study aimed to evaluate the effect of ventricular enlargement on the specific binding ratio (SBR) and to validate the cerebrospinal fluid (CSF)-Mask algorithm for quantitative SBR assessment of 123I-FP-CIT single-photon emission computed tomography (SPECT) images with the use of a 3D-striatum digital brain (SDB) phantom. Ventricular enlargement was simulated by three-dimensional extensions in a 3D-SDB phantom comprising segments representing the striatum, ventricle, brain parenchyma, and skull bone. The Evans Index (EI) was measured in 3D-SDB phantom images of an enlarged ventricle. Projection data sets were generated from the 3D-SDB phantoms with blurring, scatter, and attenuation. Images were reconstructed using the ordered subset expectation maximization (OSEM) algorithm and corrected for attenuation, scatter, and resolution recovery. We bundled DaTView (Southampton method) with the CSF-Mask processing software and assessed the SBR with the use of various coefficients (f factors) of the CSF-Mask. SBRs of 1, 2, 3, 4, and 5 served as the true values in the SDB phantom simulations. Measured SBRs were underestimated by more than 50% relative to the true values as EI increased, and this trend was most pronounced at low SBR. The CSF-Mask corrected 20% underestimates and brought the measured SBR closer to the true values at an f factor of 1.0, despite the increase in EI. We connected the EI and the f factor through the linear regression function (y = -3.53x + 1.95; r = 0.95), selected by root-mean-square error. Processing with CSF-Mask generates accurate quantitative SBRs from dopamine transporter SPECT images of patients with ventricular enlargement.
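Using the regression between EI and f factor reported in the abstract, a minimal helper might look like this; clamping the coefficient to the range [0, 1] is our own assumption, since the abstract only describes f factors up to 1.0.

```python
# Reported regression: f = -3.53 * EI + 1.95 (r = 0.95).
# The [0, 1] clamp is an assumption made for this sketch.
def f_factor(evans_index):
    f = -3.53 * evans_index + 1.95
    return max(0.0, min(1.0, f))

# Example: a clearly enlarged ventricle (EI ~ 0.40) vs. a normal one (~0.25).
print(f_factor(0.40), f_factor(0.25))
```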

  18. Technical description of endoscopic ultrasonography with fine-needle aspiration for the staging of lung cancer.

    PubMed

    Kramer, Henk; van Putten, John W G; Douma, W Rob; Smidt, Alie A; van Dullemen, Hendrik M; Groen, Harry J M

    2005-02-01

    Endoscopic ultrasonography (EUS) is a novel method for staging of the mediastinum in lung cancer patients. The recent development of linear scanners enables safe and accurate fine-needle aspiration (FNA) of mediastinal and upper abdominal structures under real-time ultrasound guidance. However, various methods and equipment for mediastinal EUS-FNA are in use throughout the world, and a detailed description of the procedure is lacking; a thorough description of linear EUS-FNA is therefore needed. A step-by-step description of the linear EUS-FNA procedure as performed in our hospital will be provided, with ultrasonographic landmarks shown on images. The procedure will be related to the published literature through a systematic literature search. EUS-FNA is an outpatient procedure performed under conscious sedation. The typical linear EUS-FNA procedure starts with examination of the retroperitoneal area. After this, systematic scanning of the mediastinum is performed at intervals of 1-2 cm. Abnormalities are noted, and FNA of the abnormalities can be performed. Specimens are assessed for cellularity on-site. The entire procedure takes 45-60 min. EUS-FNA is minimally invasive, accurate, and fast. Anatomical areas can be reached that are inaccessible to cervical mediastinoscopy. EUS-FNA is useful for the staging of lung cancer and for the assessment and diagnosis of abnormalities in the posterior mediastinum.

  19. Simulation of white light generation and near light bullets using a novel numerical technique

    NASA Astrophysics Data System (ADS)

    Zia, Haider

    2018-01-01

    An accurate and efficient simulation has been devised, employing a new numerical technique, to simulate the derivative generalised non-linear Schrödinger equation in all three spatial dimensions and time. The simulation models all pertinent effects, such as self-steepening and plasma, for the non-linear propagation of ultrafast optical radiation in bulk material. Simulation results are compared to published experimental spectral data of an example yttrium aluminium garnet system at 3.1 μm radiation and fit to within a factor of 5. The simulation shows that there is a stability point near the end of the 2 mm crystal where a quasi-light bullet (spatio-temporal soliton) is present. Within this region, the pulse is collimated at a reduced diameter (by a factor of ∼2) and there exists a near temporal soliton at the spatial center. The temporal intensity within this stable region is compressed by a factor of ∼4 compared to the input. This study shows that the simulation highlights new physical phenomena, based on the interplay of various linear, non-linear and plasma effects, that go beyond the experiment and is thus integral to achieving accurate designs of white light generation systems for optical applications. An adaptive error reduction algorithm tailor-made for this simulation is also presented in the appendix.
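As a baseline for the class of solver described, the standard split-step (Strang) Fourier method for the plain 1D cubic non-linear Schrödinger equation can be sketched as below; the paper's technique goes far beyond this (derivative terms, plasma, 3D plus time). For the textbook equation, the fundamental soliton u(z, t) = sech(t) e^{iz/2} should propagate with its shape preserved.

```python
import numpy as np

# Normalised NLSE: i u_z + (1/2) u_tt + |u|^2 u = 0, solved by alternating
# a linear (dispersion) step in Fourier space with a non-linear step in
# the time domain, in symmetric (Strang) order.
n, T = 256, 20.0
t = np.linspace(-T / 2, T / 2, n, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(n, d=T / n)        # angular frequencies

u = 1.0 / np.cosh(t)                              # fundamental soliton
dz, steps = 1e-3, 2000
half_disp = np.exp(-0.5j * w ** 2 * dz / 2)       # half-step of dispersion

for _ in range(steps):
    u = np.fft.ifft(half_disp * np.fft.fft(u))    # half linear step
    u = u * np.exp(1j * np.abs(u) ** 2 * dz)      # full non-linear step
    u = np.fft.ifft(half_disp * np.fft.fft(u))    # half linear step

# After z = 2, the envelope |u| should still match sech(t).
print(np.max(np.abs(np.abs(u) - 1.0 / np.cosh(t))))
```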

  20. A spectrally accurate boundary-layer code for infinite swept wings

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1994-01-01

    This report documents the development, validation, and application of a spectrally accurate boundary-layer code, WINGBL2, which has been designed specifically for use in stability analyses of swept-wing configurations. Currently, we consider only the quasi-three-dimensional case of an infinitely long wing of constant cross section. The effects of streamwise curvature, streamwise pressure gradient, and wall suction and/or blowing are taken into account in the governing equations and boundary conditions. The boundary-layer equations are formulated both for the attachment-line flow and for the evolving boundary layer. The boundary-layer equations are solved by marching in the direction perpendicular to the leading edge, for which high-order (up to fifth) backward differencing techniques are used. In the wall-normal direction, a spectral collocation method, based upon Chebyshev polynomial approximations, is exploited. The accuracy, efficiency, and user-friendliness of WINGBL2 make it well suited for applications to linear stability theory, parabolized stability equation methodology, direct numerical simulation, and large-eddy simulation. The method is validated against existing schemes for three test cases, including incompressible swept Hiemenz flow and Mach 2.4 flow over an airfoil swept at 70 deg to the free stream.
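The wall-normal discretisation described above uses Chebyshev collocation. A minimal sketch of the standard Chebyshev differentiation matrix (Trefethen's construction) follows; it illustrates the spectral-accuracy idea only and is unrelated to the WINGBL2 source itself.

```python
import numpy as np

def cheb(N):
    """Chebyshev points and first-derivative matrix of order N on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)      # Chebyshev-Gauss-Lobatto points
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                   # negative-sum trick for diagonal
    return D, x

# Differentiating exp(x) reproduces exp(x) to near machine precision.
D, x = cheb(20)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(err)
```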

  1. Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.

    PubMed

    Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon

    2017-05-01

    Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
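The smoothing step can be sketched with a Gaussian-kernel local linear estimator applied to a synthetic, monotonically increasing degradation signal; the degradation curve and noise level are invented, and the neural-network stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_linear(x, y, x_eval, bandwidth):
    """Gaussian-kernel local linear regression evaluated at x_eval."""
    out = np.empty_like(x_eval)
    for k, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        out[k] = beta[0]          # local intercept = fitted value at x0
    return out

# Synthetic monotone "vibration level" over normalised operating time.
t = np.linspace(0.0, 1.0, 200)
true_level = 0.1 + t ** 3
noisy = true_level + rng.normal(0, 0.02, t.size)
smooth = local_linear(t, noisy, t, bandwidth=0.05)
print(np.max(np.abs(smooth - true_level)))
```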

  2. Data mining for the analysis of hippocampal zones in Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Ovando Vázquez, Cesaré M.

    2012-02-01

    In this work, a methodology to classify people with Alzheimer's disease (AD), healthy controls (HC), and people with mild cognitive impairment (MCI) is presented. The methodology consists of an ensemble of Support Vector Machines (SVMs) with hippocampal boxes (HBs) as input data; these hippocampal zones are taken from magnetic resonance imaging (MRI) and positron emission tomography (PET) images. Two ways of constructing the ensemble are presented: the first consists of linear SVM models and the second of non-linear SVM models. Results demonstrate that the linear models classify HBs between HC and MCI more accurately than the non-linear models, and that there are no differences between HC and AD.

  3. Compatible diagonal-norm staggered and upwind SBP operators

    NASA Astrophysics Data System (ADS)

    Mattsson, Ken; O'Reilly, Ossian

    2018-01-01

    The main motivation with the present study is to achieve a provably stable high-order accurate finite difference discretisation of linear first-order hyperbolic problems on a staggered grid. The use of a staggered grid makes it non-trivial to discretise advective terms. To overcome this difficulty we discretise the advective terms using upwind Summation-By-Parts (SBP) operators, while the remaining terms are discretised using staggered SBP operators. The upwind and staggered SBP operators (for each order of accuracy) are compatible, here meaning that they are based on the same diagonal norms, allowing for energy estimates to be formulated. The boundary conditions are imposed using a penalty (SAT) technique, to guarantee linear stability. The resulting SBP-SAT approximations lead to fully explicit ODE systems. The accuracy and stability properties are demonstrated for linear hyperbolic problems in 1D, and for the 2D linearised Euler equations with constant background flow. The newly derived upwind and staggered SBP operators lead to significantly more accurate numerical approximations, compared with the exclusive usage of (previously derived) central-difference first derivative SBP operators.
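The energy-estimate argument sketched above rests on the SBP property Q + Qᵀ = B with Q = HD. A minimal check for the classical second-order diagonal-norm operator follows (the paper's staggered and upwind pairs satisfy analogous compatibility conditions, which this sketch does not reproduce).

```python
import numpy as np

def sbp_2nd(n, h):
    """Second-order diagonal-norm SBP first-derivative pair (H, D)."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2                 # boundary-modified diagonal norm
    D = np.zeros((n, n))
    D[0, 0], D[0, 1] = -1 / h, 1 / h            # one-sided boundary rows
    D[-1, -2], D[-1, -1] = -1 / h, 1 / h
    for i in range(1, n - 1):                   # central interior stencil
        D[i, i - 1], D[i, i + 1] = -1 / (2 * h), 1 / (2 * h)
    return H, D

n, h = 11, 0.1
H, D = sbp_2nd(n, h)
Q = H @ D
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
# SBP property: Q + Q^T equals the boundary matrix B = diag(-1, 0, ..., 0, 1).
print(np.allclose(Q + Q.T, B))
```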

  4. Confinement properties of tokamak plasmas with extended regions of low magnetic shear

    NASA Astrophysics Data System (ADS)

    Graves, J. P.; Cooper, W. A.; Kleiner, A.; Raghunathan, M.; Neto, E.; Nicolas, T.; Lanthaler, S.; Patten, H.; Pfefferle, D.; Brunetti, D.; Lutjens, H.

    2017-10-01

    Extended regions of low magnetic shear can be advantageous to tokamak plasmas, but the core and edge can be susceptible to non-resonant ideal fluctuations due to the weakened restoring force associated with magnetic field line bending. This contribution shows how saturated non-linear phenomenology, such as 1/1 Long-Lived Modes and the Edge Harmonic Oscillations associated with QH-modes, can be modelled accurately using the non-linear stability code XTOR, the free-boundary 3D equilibrium code VMEC, and non-linear analytic theory. The validity of the equilibrium approach is particularly valuable because it enables advanced particle confinement studies to be undertaken in the ordinarily difficult environment of strongly 3D magnetic fields. The VENUS-LEVIS code exploits the Fourier description of the VMEC equilibrium fields, such that full Lorentzian and guiding-centre-approximated differential operators in curvilinear angular coordinates can be evaluated analytically. Consequently, the confinement properties of minority ions such as energetic particles and high-Z impurities can be calculated accurately over slowing-down timescales in experimentally relevant 3D plasmas.

  5. MRI-based, wireless determination of the transfer function of a linear implant: Introduction of the transfer matrix.

    PubMed

    Tokaya, Janot P; Raaijmakers, Alexander J E; Luijten, Peter R; van den Berg, Cornelis A T

    2018-04-24

    We introduce the transfer matrix (TM) that makes MR-based wireless determination of transfer functions (TFs) possible. TFs are implant specific measures for RF-safety assessment of linear implants. The TF relates an incident tangential electric field on an implant to a scattered electric field at its tip that generally governs local heating. The TM extends this concept and relates an incident tangential electric field to a current distribution in the implant therewith characterizing the RF response along the entire implant. The TM is exploited to measure TFs with MRI without hardware alterations. A model of rightward and leftward propagating attenuated waves undergoing multiple reflections is used to derive an analytical expression for the TM. This allows parameterization of the TM of generic implants, e.g., (partially) insulated single wires, in a homogeneous medium in a few unknowns that simultaneously describe the TF. These unknowns can be determined with MRI making it possible to measure the TM and, therefore, also the TF. The TM is able to predict an induced current due to an incident electric field and can be accurately parameterized with a limited number of unknowns. Using this description the TF is determined accurately (with a Pearson correlation coefficient R ≥ 0.9 between measurements and simulations) from MRI acquisitions. The TM enables measuring of TFs with MRI of the tested generic implant models. The MR-based method does not need hardware alterations and is wireless hence making TF determination in more realistic scenarios conceivable. © 2018 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.

  6. Player's success prediction in rugby union: From youth performance to senior level placing.

    PubMed

    Fontana, Federico Y; Colosio, Alessandro L; Da Lozzo, Giorgio; Pogliaghi, Silvia

    2017-04-01

    The study questioned whether, and to what extent, specific anthropometric and functional characteristics measured in youth draft camps can accurately predict subsequent career progression in rugby union. Original research. Anthropometric and functional characteristics of 531 male players (U16) were retrospectively analysed in relation to senior-level team representation at age 21-24. Players were classified as International (Int: national team and international clubs) or National (Nat: 1st, 2nd and other divisions, and dropout). Multivariate analysis of variance (one-way MANOVA) tested differences between Int and Nat along a combination of anthropometric (body mass, height, body fat, fat-free mass) and functional variables (SJ, CMJ, t15m, t30m, VO2max). A discriminant function (DF) was determined to predict group assignment based on the linear combination of variables that best discriminates the groups. Correct level assignment was expressed as a % hit rate. A combination of anthropometric and functional characteristics reflects future level assignment (Int vs. Nat). Players' success can be accurately predicted (hit rate = 81% and 77% for Int and Nat, respectively) by a DF that combines anthropometric and functional variables measured at ∼15 years of age, with percent body fat and speed being the most influential predictors of group stratification. Within a group of 15-year-olds with exceptional physical characteristics, future players' success can be predicted using a linear combination of anthropometric and functional variables, among which a lower percent body fat and higher speed over a 15-m sprint provide the most important predictors of the highest career success. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
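The discriminant function can be sketched with Fisher's linear discriminant on two synthetic groups standing in for International and National players; the group sizes, means, and variables below are invented, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(4)

# Five mock standardised variables (e.g. mass, height, %fat, sprint, jump);
# the International group is given a modest advantage on each.
n_var = 5
int_grp = rng.normal(0.8, 1.0, size=(60, n_var))
nat_grp = rng.normal(-0.2, 1.0, size=(200, n_var))

# Fisher discriminant: w = Sw^-1 (mu1 - mu0), with pooled within-class scatter.
mu1, mu0 = int_grp.mean(0), nat_grp.mean(0)
Sw = np.cov(int_grp, rowvar=False) * 59 + np.cov(nat_grp, rowvar=False) * 199
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = 0.5 * (int_grp @ w).mean() + 0.5 * (nat_grp @ w).mean()

hit_int = np.mean(int_grp @ w > threshold)   # in-sample hit rates
hit_nat = np.mean(nat_grp @ w < threshold)
print(round(hit_int, 2), round(hit_nat, 2))
```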

  7. On the sensitivity of teleseismic full-waveform inversion to earth parametrization, initial model and acquisition design

    NASA Astrophysics Data System (ADS)

    Beller, S.; Monteiller, V.; Combe, L.; Operto, S.; Nolet, G.

    2018-02-01

    Full-waveform inversion (FWI) is not yet a mature imaging technology for lithospheric imaging from teleseismic data. Therefore, its promise and pitfalls need to be assessed more accurately according to the specifications of teleseismic experiments. Three important issues are related to (1) the choice of the lithospheric parametrization for optimization and visualization, (2) the initial model and (3) the acquisition design, in particular in terms of receiver spread and sampling. These three issues are investigated with a realistic synthetic example inspired by the CIFALPS experiment in the Western Alps. Isotropic elastic FWI is implemented with an adjoint-state formalism and aims to update three parameter classes by minimization of a classical least-squares difference-based misfit function. Three different subsurface parametrizations, combining density (ρ) with P and S wave speeds (Vp and Vs), P and S impedances (Ip and Is), or elastic moduli (λ and μ), are first discussed based on their radiation patterns before their assessment by FWI. We conclude that the (ρ, λ, μ) parametrization provides the FWI models that best correlate with the true ones after recombining a posteriori the (ρ, λ, μ) optimization parameters into Ip and Is. Owing to the low frequency content of teleseismic data, global 1-D reference models such as PREM provide sufficiently accurate initial models for FWI after the smoothing necessary to remove the imprint of the layering. Two kinds of station deployments are assessed: a coarse areal geometry versus a dense linear one. We unambiguously conclude that a coarse areal geometry should be favoured as it dramatically increases the penetration depth of the imaging as well as the horizontal resolution. This is because the areal geometry significantly increases local wavenumber coverage, through a broader sampling of the scattering and dip angles, compared to a linear deployment.

  8. Innovation indices: the need for positioning them where they properly belong.

    PubMed

    Kozłowski, Jan

    A specific quality of the discussion about innovation indices (scoreboards) is that, more often than not, the subject is dealt with from a purely technical point of view. Such a narrow approach silently assumes that indices used as a policy tool are an accurate reflection of the phenomenon and should not be questioned, and also that the whole discussion concerning them should refer to methodological aspects and is best left to the statisticians. This author is of the opinion that, for an accurate evaluation of the value of indices as a policy tool, it is necessary to consider the matter from a broader point of view and in the context in which such indices are generated and used. This article puts forward the thesis that progress in science and innovation policy studies depends on a diversity of issues, approaches and perspectives. If that is the case, maintaining thematic and methodological variety may be more important than creating coherent and closed analytical tools, i.e. indices. The advantage of indices is that they focus attention on those variables which are deemed to be key. Among their disadvantages, however, are their highly abstract nature (in order to understand innovation-related phenomena, it is necessary to study them in tangible, composite forms); their tendency to skip unmeasurable determinants; their prior acceptance of definitions and concepts of innovation (instead of searching for them); the way they apply a single yardstick to diverse countries and regions; their assumed linearity and causality in a complex and non-linear world; and the way they direct policy towards implementing indicators (rather than identifying and solving problems). It is suggested that the big data revolution will allow the emergence of new measurement tools that will replace innovation indices.

  9. Multi-material decomposition of spectral CT images

    NASA Astrophysics Data System (ADS)

    Mendonça, Paulo R. S.; Bhotika, Rahul; Maddah, Mahnaz; Thomsen, Brian; Dutta, Sandeep; Licato, Paul E.; Joshi, Mukta C.

    2010-04-01

    Spectral Computed Tomography (Spectral CT), and in particular fast kVp switching dual-energy computed tomography, is an imaging modality that extends the capabilities of conventional computed tomography (CT). Spectral CT enables the estimation of the full linear attenuation curve of the imaged subject at each voxel in the CT volume, instead of a scalar image in Hounsfield units. Because the space of linear attenuation curves in the energy ranges of medical applications can be accurately described through a two-dimensional manifold, material decomposition would, in principle, be limited to two materials. This paper describes an algorithm that overcomes this limitation, allowing for the estimation of N-tuples of material-decomposed images. The algorithm works by assuming that the mixing of substances and tissue types in the human body has the physicochemical properties of an ideal solution, which yields a model for the density of the imaged material mix. Under this model the mass attenuation curve of each voxel in the image can be estimated, immediately resulting in a material-decomposed image triplet. Decomposition into an arbitrary number of pre-selected materials can be achieved by automatically selecting adequate triplets from an application-specific material library. The decomposition is expressed in terms of the volume fractions of each constituent material in the mix; this provides for a straightforward, physically meaningful interpretation of the data. One important application of this technique is in the digital removal of contrast agent from a dual-energy exam, producing a virtual nonenhanced image, as well as in the quantification of the concentration of contrast observed in a targeted region, thus providing an accurate measure of tissue perfusion.
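The triplet decomposition described above reduces, per voxel, to a small linear solve once the basis materials are fixed: two spectral measurements plus the volume-conservation constraint determine three volume fractions. A minimal sketch follows; all attenuation values are hypothetical, and the actual algorithm additionally models ideal-solution mixing and selects triplets from a material library:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of three basis
# materials at the two effective energies of a dual-energy scan.
# Columns: water, iodine solution, calcium. Rows: low kVp, high kVp.
mu = np.array([[0.227, 4.10, 1.30],
               [0.184, 1.90, 0.70]])

# Measured effective attenuation of one voxel at the two energies.
measured = np.array([0.60, 0.35])

# Append the volume-conservation row: fractions must sum to one.
A = np.vstack([mu, np.ones(3)])
b = np.append(measured, 1.0)
fractions = np.linalg.solve(A, b)
print(fractions)            # volume fraction of each basis material
```

A real implementation would also reject triplets that give negative fractions and fall back to other triplets from the library.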

  10. Stoichiometric determination of moisture in edible oils by Mid-FTIR spectroscopy.

    PubMed

    van de Voort, F R; Tavassoli-Kafrani, M H; Curtis, J M

    2016-04-28

    A simple and accurate method for the determination of moisture in edible oils by differential FTIR spectroscopy has been devised based on the stoichiometric reaction of the moisture in oil with toluenesulfonyl isocyanate (TSI) to produce CO2. Calibration standards were devised by gravimetrically spiking dry dioxane with water, followed by the addition of neat TSI and examination of the differential spectra relative to the dry dioxane. In the method, CO2 peak area changes are measured at 2335 cm(-1) and were shown to be related to the amount of moisture added, with any CO2 inherent to residual moisture in the dry dioxane ratioed out. CO2 volatility issues were determined to be minimal, with the overall SD of dioxane calibrations being ∼18 ppm over a range of 0-1000 ppm. Gravimetrically blended dry and water-saturated oils analysed in a similar manner produced linear CO2 responses with SDs of <15 ppm on average. One set of dry-wet blends was analysed in duplicate by FTIR and by two independent laboratories using coulometric Karl Fischer (KF) procedures. All three methods produced highly linear moisture relationships with SDs of 7, 16 and 28 ppm, respectively, over a range of 200-1500 ppm. Although the absolute moisture values obtained by each method did not exactly coincide, each tracked the expected moisture changes proportionately. The FTIR-TSI-H2O method provides a simple and accurate instrumental means of determining moisture in oils rivaling the accuracy and specificity of standard KF procedures and has the potential to be automated. It could also be applied to other hydrophobic matrices and possibly evolve into a more generalized method, if combined with polar aprotic solvent extraction. Copyright © 2016 Elsevier B.V. All rights reserved.
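The calibration logic of this record is an ordinary linear fit of CO2 peak area against gravimetrically spiked moisture, inverted to read unknown samples. A sketch with made-up numbers; the areas and ppm values below are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical calibration: gravimetric water spikes (ppm) in dry dioxane
# versus the measured differential CO2 peak area at 2335 cm^-1.
moisture_ppm = np.array([0., 200., 400., 600., 800., 1000.])
peak_area    = np.array([0.00, 0.41, 0.83, 1.22, 1.65, 2.04])

slope, intercept = np.polyfit(moisture_ppm, peak_area, 1)

# Invert the calibration line to predict the moisture of an unknown oil
# from its measured peak area.
unknown_area = 1.00
predicted_ppm = (unknown_area - intercept) / slope
print(f"{predicted_ppm:.0f} ppm")
```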

  11. Development and Utility of a Four-Channel Scanner for Wildland Fire Research and Applications

    NASA Technical Reports Server (NTRS)

    Ambrosia, Vincent G.; Brass, James A.; Higgins, Robert G.; Hildum, Edward; Peterson, David L. (Technical Monitor)

    1996-01-01

    The Airborne Infrared Disaster Assessment System (AIRDAS) is a four-channel scanner designed and built at NASA-Ames for the specific task of supporting research and applications on fire impacts on terrestrial and atmospheric processes and also of serving as a vital instrument in the assessment of natural and man-induced disasters. The system has been flown on numerous airframes including the Navajo, King-Air, C-130, and Lear Jet 310 and a 206. The system configuration comprises a 386 PC workstation, a non-linear detector amplifier, a sixteen-bit digitizer, dichroic filters, Exabyte 8500 5 GB tape output, VHS tape output, a Rockwell GPS, and a 2-axis gyro. The AIRDAS collects digital data in four wavelength regions, which can be filtered: band 1 (0.61-0.68 microns), band 2 (1.57-1.7 microns), band 3 (3.6-5.5 microns), and band 4 (5.5-13.0 microns), with an FOV of 108 degrees, an IFOV of 2.62 mrads, and a digitized swath width of 720 pixels. The inclusion of the non-linear detector amplifier allows for the accurate measurement of emitted temperature from fires and hot spots. Lab testing of the scanner has indicated temperature assessments of 800 C without detector saturation. This is an advantage over previous systems, which were designed for thermal measurement of earth background temperatures and were ill-equipped for accurate determination of high intensity conditions. The scanner has been flown successfully on data collection missions since 1992 in the western US as well as Brazil. These and other research and applications responses will be presented along with an assessment of future directions for the system.

  12. Comparison of two-concentration with multi-concentration linear regressions: Retrospective data analysis of multiple regulated LC-MS bioanalytical projects.

    PubMed

    Musuku, Adrien; Tan, Aimin; Awaiye, Kayode; Trabelsi, Fethi

    2013-09-01

    Linear calibration is usually performed using eight to ten calibration concentration levels in regulated LC-MS bioanalysis because a minimum of six are specified in regulatory guidelines. However, we have previously reported that two-concentration linear calibration is as reliable as, or even better than, using multiple concentrations. The purpose of this research is to compare two-concentration with multiple-concentration linear calibration through retrospective data analysis of multiple bioanalytical projects that were conducted in an independent regulated bioanalytical laboratory. A total of 12 bioanalytical projects were randomly selected: two validations and two studies for each of the three most commonly used types of sample extraction methods (protein precipitation, liquid-liquid extraction, solid-phase extraction). When the existing data were retrospectively linearly regressed using only the lowest and the highest concentration levels, no extra batch failure/QC rejection was observed and the differences in accuracy and precision between the original multi-concentration regression and the new two-concentration linear regression are negligible. Specifically, the differences in overall mean apparent bias (square root of mean individual bias squares) are within the ranges of -0.3% to 0.7% and 0.1% to 0.7% for the validations and studies, respectively. The differences in mean QC concentrations are within the ranges of -0.6% to 1.8% and -0.8% to 2.5% for the validations and studies, respectively. The differences in %CV are within the ranges of -0.7% to 0.9% and -0.3% to 0.6% for the validations and studies, respectively. The average differences in study sample concentrations are within the range of -0.8% to 2.3%. With two-concentration linear regression, an average of 13% of time and cost could have been saved for each batch, together with a 53% saving in the lead-in for each project (the preparation of working standard solutions, spiking, and aliquoting). Furthermore, examples are given of how to evaluate the linearity over the entire concentration range when only two concentration levels are used for linear regression. To conclude, two-concentration linear regression is accurate and robust enough for routine use in regulated LC-MS bioanalysis and it significantly saves time and cost as well. Copyright © 2013 Elsevier B.V. All rights reserved.
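The comparison in this record can be reproduced in miniature: regress a calibration line on all levels versus only the lowest and highest levels, then back-calculate a QC sample with each. Everything below is synthetic (concentrations, response factor, noise level and the 1/x² weighting are assumptions; 1/x² is a common bioanalytical choice), meant only to show the mechanics:

```python
import numpy as np

# Hypothetical 8-level calibration (ng/mL) with a proportional response
# plus small random error, mimicking a typical LC-MS calibration batch.
rng = np.random.default_rng(1)
conc = np.array([1., 2., 5., 10., 50., 100., 500., 1000.])
resp = 0.02 * conc * (1 + rng.normal(0, 0.02, conc.size))

# Weighted (1/x^2) multi-concentration regression: np.polyfit squares the
# supplied weights, so pass sqrt(1/x^2) = 1/x.
multi = np.polyfit(conc, resp, 1, w=1 / conc)

# Two-concentration regression: lowest and highest levels only.
two = np.polyfit(conc[[0, -1]], resp[[0, -1]], 1)

# Back-calculate a mid-range QC sample (nominal 250 ng/mL) with both curves.
qc_resp = 0.02 * 250.0
qc_multi = (qc_resp - multi[1]) / multi[0]
qc_two = (qc_resp - two[1]) / two[0]
print(qc_multi, qc_two)
```

With well-behaved data both back-calculated QC concentrations land close to nominal, which is the paper's retrospective observation at scale.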

  13. High Performance Liquid Chromatography-Diode Array Detector Method for the Simultaneous Determination of Five Compounds in the Pulp and Seed of Sea Buckthorn.

    PubMed

    Zhao, Lu; Wen, E; Upur, Halmuart; Tian, Shuge

    2017-01-01

    Sea buckthorn (Hippophae rhamnoides L.) as a traditional Chinese medicinal plant has various uses in Xinjiang. A reversed-phase rapid-resolution liquid-chromatography method with diode array detector was developed for simultaneous determination of protocatechuic acid, rutin, quercetin, kaempferol, and isorhamnetin in the pulp and seed of sea buckthorn, a widely used traditional Chinese medicine for promoting metabolism and treating scurvy and other diseases. Compounds were separated on an Agilent ZORBAX SB-C18 column (4.6 mm × 250 mm, 5 μm; USA) with gradient elution using methanol and 0.4% phosphoric acid (v/v) at 1.0 mL/min. The detection wavelength was set at 280 nm. The fruits of wild sea buckthorn were collected from Wushi County in Aksu, Xinjiang Province. The RSDs of the precision test of the five compounds were in the range of 0.60-2.22%, and the average recoveries ranged from 97.36% to 101.19%. Good linearity between the specific chromatographic peaks and component quantities was observed in the investigated ranges for all the analytes (R² > 0.9997). The proposed method was successfully applied to determine the levels of five active components in sea buckthorn samples from Aksu in Xinjiang. The proposed method is simple, fast, sensitive, accurate, and suitable for quantitative assessment of the pulp and seed of sea buckthorn. 
A quantitative analysis method for protocatechuic acid, rutin, quercetin, kaempferol, and isorhamnetin in the extract of sea buckthorn pulp and seed is developed by high-performance liquid chromatography (HPLC) with diode array detection. This method is simple and accurate; has strong specificity, good precision, and a high recovery rate; and provides a reliable basis for further development of the substances in the pulp and seed of sea buckthorn. The method is widely used for content determination of active ingredients or physiologically active components in traditional Chinese medicine and its preparations. Abbreviations used: PR: protocatechuic acid; RU: rutin; QU: quercetin; KA: kaempferol; IS: isorhamnetin; HPLC: high-performance liquid chromatography; HPLC-DAD: high-performance liquid chromatography-diode array detector; LOD: limit of detection; LOQ: limit of quantitation; RSD: relative standard deviation.

  14. Investigation of phase distribution using Phame® in-die phase measurements

    NASA Astrophysics Data System (ADS)

    Buttgereit, Ute; Perlitz, Sascha

    2009-03-01

    As lithography mask processes move toward the 45 nm and 32 nm nodes, mask complexity increases steadily, mask specifications tighten and process control becomes extremely important. Driven by this fact, the requirements for metrology tools increase as well. Efforts in metrology have been focused on accurately measuring CD linearity and uniformity across the mask, and accurately measuring phase variation on Alternating/Attenuated PSM and transmission for Attenuated PSM. CD control on photo masks is usually done through the following processes: exposure dose/focus change, resist develop and dry etch. The key requirement is to maintain correct CD linearity and uniformity across the mask. For PSM specifically, the effect of CD uniformity for both Alternating PSM and Attenuated PSM, and of etch depth for Alternating PSM, also becomes important. So far phase measurement has been limited to either measuring large-feature phase using interferometer-based metrology tools or measuring etch depth using AFM and converting etch depth into phase under the assumption that the trench profile and optical properties of the layers remain constant. However, recent investigations show that the trench profile and optical properties of the layers impact the phase. This effect grows larger for smaller CDs. The currently used phase measurement methods run into limitations because they are not able to capture 3D mask effects, diffraction limitations or polarization effects. The new phase metrology system Phame®, developed by Carl Zeiss SMS, overcomes those limitations and enables laterally resolved phase measurement in any kind of production feature on the mask. The resolution of the system goes down to 120 nm half pitch at mask level. We will report on tool performance data with respect to static and dynamic phase repeatability, focusing on Alternating PSM. 
Furthermore, the phase metrology system was used to investigate mask process signatures on Alternating PSM in order to further improve the overall PSM process performance. In particular, global loading effects caused by the pattern density and micro-loading effects caused by the feature size itself have been evaluated using the capability of measuring phase in the small production features. The results of this study will be reported in this paper.

  15. MO-AB-BRA-10: Cancer Therapy Outcome Prediction Based On Dempster-Shafer Theory and PET Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, C; University of Rouen, QuantIF - EA 4108 LITIS, 76000 Rouen; Li, H

    2015-06-15

    Purpose: In cancer therapy, utilizing 18F-FDG PET image-based features for accurate outcome prediction is challenging because of 1) limited discriminative information within a small number of PET image sets, and 2) fluctuating feature characteristics caused by the inferior spatial resolution and system noise of PET imaging. In this study, we proposed a new Dempster-Shafer theory (DST) based approach, evidential low-dimensional transformation with feature selection (ELT-FS), to accurately predict cancer therapy outcome with both PET imaging features and clinical characteristics. Methods: First, a specific loss function with a sparse penalty was developed to learn an adaptive low-rank distance metric for representing the dissimilarity between different patients' feature vectors. By minimizing this loss function, a linear low-dimensional transformation of input features was achieved. Also, imprecise features were excluded simultaneously by applying an l2,1-norm regularization of the learnt dissimilarity metric in the loss function. Finally, the learnt dissimilarity metric was applied in an evidential K-nearest-neighbor (EK-NN) classifier to predict treatment outcome. Results: Twenty-five patients with stage II-III non-small-cell lung cancer and thirty-six patients with esophageal squamous cell carcinomas treated with chemo-radiotherapy were collected. For the two groups of patients, 52 and 29 features, respectively, were utilized. The leave-one-out cross-validation (LOOCV) protocol was used for evaluation. Compared to three existing linear transformation methods (PCA, LDA, NCA), the proposed ELT-FS leads to higher prediction accuracy for the training and testing sets both for lung-cancer patients (100.0±0.0, 88.0±33.17) and for esophageal-cancer patients (97.46±1.64, 83.33±37.8). The ELT-FS also provides superior class separation in both test data sets. 
Conclusion: A novel DST-based approach has been proposed to predict cancer treatment outcome using PET image features and clinical characteristics. A specific loss function has been designed for robust accommodation of feature set incertitude and imprecision, facilitating adaptive learning of the dissimilarity metric for the EK-NN classifier.

  16. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry I.; Kasimov, Aslan R.

    2018-03-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
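The dynamic mode decomposition step of the method above extracts eigenvalues from a sequence of computed snapshots; the growth rates and frequencies of the linearized solution are then read off the discrete spectrum. A minimal exact-DMD sketch on synthetic snapshots of a single oscillatory growth mode (the dynamics, time step and mode parameters are invented for illustration, not taken from the paper):

```python
import numpy as np

# Synthetic snapshots of a state evolving as exp((sigma + i*omega) t):
# DMD should recover sigma +/- i*omega from the discrete-time spectrum.
dt = 0.05
t = np.arange(0, 10, dt)
sigma, omega = 0.1, 2.0
snapshots = np.vstack([np.exp(sigma * t) * np.cos(omega * t),
                       np.exp(sigma * t) * np.sin(omega * t)])

# Exact DMD: split into shifted snapshot matrices, project through the SVD.
X, Y = snapshots[:, :-1], snapshots[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1 / s)

# Discrete eigenvalues -> continuous-time growth rates and frequencies.
eigs = np.linalg.eigvals(Atilde)
growth_rates = np.log(eigs) / dt
print(growth_rates)
```

In the stability analysis itself, the snapshots come from the shock-fitting integration of the linearized reactive Euler equations, and the real parts of the recovered spectrum decide stability.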

  17. Development and validation of sensitive LC/MS/MS method for quantitative bioanalysis of levonorgestrel in rat plasma and application to pharmacokinetics study.

    PubMed

    Ananthula, Suryatheja; Janagam, Dileep R; Jamalapuram, Seshulatha; Johnson, James R; Mandrell, Timothy D; Lowe, Tao L

    2015-10-15

    A rapid, sensitive, selective and accurate LC/MS/MS method was developed for the quantitative determination of levonorgestrel (LNG) in rat plasma and further validated for specificity, linearity, accuracy, precision, sensitivity, matrix effect, recovery efficiency and stability. A liquid-liquid extraction procedure using a hexane:ethyl acetate mixture at an 80:20 v/v ratio was employed to efficiently extract LNG from rat plasma. A reversed-phase Luna C18(2) column (50 × 2.0 mm i.d., 3 μm) installed on an AB SCIEX Triple Quad™ 4500 LC/MS/MS system was used to perform the chromatographic separation. LNG was identified within 2 min with high specificity. A linear calibration curve was obtained within the 0.5-50 ng·mL(-1) concentration range. The developed method was validated for intra-day and inter-day accuracy and precision, whose values fell within the acceptable limits. The matrix effect was found to be minimal. Recovery efficiency at the three quality control (QC) concentrations, 0.5 (low), 5 (medium) and 50 (high) ng·mL(-1), was found to be >90%. Stability of LNG at various stages of the experiment, including storage, extraction and analysis, was evaluated using QC samples, and the results showed that LNG was stable under all conditions. This validated method was successfully used to study the pharmacokinetics of LNG in rats after SubQ injection, demonstrating its applicability in relevant preclinical studies. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. What is the Best Model Specification and Earth Observation Product for Predicting Regional Grain Yields in Food Insecure Countries?

    NASA Astrophysics Data System (ADS)

    Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.

    2017-12-01

    We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation product can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatially and temporally varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for predictive accuracy. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
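The core of such an out-of-sample comparison can be sketched in a few lines: fit competing model specifications on a training window, score them on held-out data, and keep the specification with the lowest error. The data below are synthetic (a stand-in "rainfall index" and a yield with an invented quadratic response), so only the mechanics carry over:

```python
import numpy as np

# Synthetic panel: predictor stands in for a seasonal rainfall index,
# response for county grain yield. True relationship is quadratic.
rng = np.random.default_rng(2)
rain = rng.uniform(0, 1, 120)
yields = 1.0 + 2.0 * rain - 1.2 * rain**2 + rng.normal(0, 0.05, 120)

# Hold out the last 40 observations for out-of-sample scoring.
train, test = np.arange(80), np.arange(80, 120)

def rmse_of(degree):
    """Fit a polynomial specification on the training set, score held-out RMSE."""
    coef = np.polyfit(rain[train], yields[train], degree)
    pred = np.polyval(coef, rain[test])
    return np.sqrt(np.mean((pred - yields[test]) ** 2))

print(rmse_of(1), rmse_of(2))   # linear vs quadratic specification
```

Here the quadratic specification wins on held-out data because the synthetic truth is quadratic; the study's comparison is the same idea across many real specifications, products, and regions.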

  19. HPTLC and Spectrophotometric Estimation of Febuxostat and Diclofenac Potassium in Their Combined Tablets.

    PubMed

    El-Yazbi, Fawzi A; Amin, Omayma A; El-Kimary, Eman I; Khamis, Essam F; Younis, Sameh E

    2016-08-01

    An accurate, precise, rapid, specific and economical high-performance thin-layer chromatographic (HPTLC) method has been developed for the simultaneous quantitative determination of febuxostat (FEB) and diclofenac potassium (DIC). The chromatographic separation was performed on precoated silica gel 60 GF254 plates with chloroform-methanol 7:3 (v/v) as the mobile phase. The developed plates were scanned and quantified at 289 nm. Experimental conditions including band size, mobile phase composition and chamber-saturation time were critically studied, and the optimum conditions were selected. A satisfactory resolution (Rs = 2.67) with RF values of 0.48 and 0.69 and high sensitivity with limits of detection of 4 and 7 ng/band for FEB and DIC, respectively, were obtained. In addition, derivative ratio and ratio difference spectrophotometric methods were established for the analysis of such a mixture. All methods were validated as per the ICH guidelines. In the HPTLC method, the calibration plots were linear over the ranges 0.01-0.55 and 0.02-0.60 µg/band for FEB and DIC, respectively. For the spectrophotometric methods, the calibration graphs were linear over the ranges 2-14 and 4-18 µg/mL for FEB and DIC, respectively. The simplicity and specificity of the proposed methods suggest their application in quality control analysis of FEB and DIC in their raw materials and tablets. A comparison of the proposed methods with the existing methods is presented. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Reduction of astrometric plates

    NASA Technical Reports Server (NTRS)

    Stock, J.

    1984-01-01

    A rapid and accurate method for the reduction of comet or asteroid plates is described. Projection equations, scale length correction, rotation of coordinates, linearization, the search for additional reference stars, and the final solution are examined.
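The linear core of such a reduction, the six-constant plate solution, is a pair of least-squares fits mapping measured plate coordinates of reference stars to their standard coordinates. A sketch with fabricated star positions and plate constants (the record does not give numbers, so everything below is illustrative):

```python
import numpy as np

# Measured plate coordinates (mm) of five hypothetical reference stars.
x = np.array([10.2, 85.1, 42.7, 63.9, 21.5])
y = np.array([15.8, 22.4, 77.3, 55.0, 90.1])

# Synthetic "true" standard coordinates generated from known constants,
# so the fit below can be checked against them.
xi  = 0.001 * x - 0.0002 * y + 0.05
eta = 0.0002 * x + 0.001 * y - 0.03

# Least-squares solve xi = a*x + b*y + c and eta = d*x + e*y + f.
A = np.column_stack([x, y, np.ones_like(x)])
abc, *_ = np.linalg.lstsq(A, xi, rcond=None)
def_, *_ = np.linalg.lstsq(A, eta, rcond=None)
print(abc, def_)
```

The fitted constants then convert the measured position of the comet or asteroid on the same plate into standard coordinates; scale correction, rotation, and projection equations refine this linear step.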

  1. HPTLC Determination of Artemisinin and Its Derivatives in Bulk and Pharmaceutical Dosage

    NASA Astrophysics Data System (ADS)

    Agarwal, Suraj P.; Ahuja, Shipra

    A simple, selective, accurate, and precise high-performance thin-layer chromatographic (HPTLC) method has been established and validated for the analysis of artemisinin and its derivatives (artesunate, artemether, and arteether) in the bulk drugs and formulations. The artemisinin, artesunate, artemether, and arteether were separated on aluminum-backed silica gel 60 F254 plates with toluene:ethyl acetate (10:1), toluene: ethyl acetate: acetic acid (2:8:0.2), toluene:butanol (10:1), and toluene:dichloro methane (0.5:10) mobile phase, respectively. The linear detector response for concentrations between 100 and 600 ng/spot showed good linear relationship with r value 0.9967, 0.9989, 0.9981 and 0.9989 for artemisinin, artesunate, artemether, and arteether, respectively. Statistical analysis proves that the method is precise, accurate, and reproducible and hence can be employed for the routine analysis.

  2. Improving Photometric Calibration of Meteor Video Camera Systems.

    PubMed

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
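The reference-star step above amounts to fitting a photometric zero-point against the synthetic-bandpass magnitudes; the scatter of the per-star residuals is what bounds the quoted zero-point accuracy. A sketch with invented fluxes and magnitudes (not MEO data):

```python
import numpy as np

# Hypothetical instrumental fluxes (counts) of reference stars and their
# catalog magnitudes in the synthetic EX bandpass.
flux  = np.array([15200., 8300., 30100., 4700., 12600.])
m_cat = np.array([  4.85,  5.51,   4.12,  6.13,   5.06])

# Fit the zero-point in m_cat = -2.5 log10(flux) + ZP.
instrumental = -2.5 * np.log10(flux)
residuals = m_cat - instrumental
zp = residuals.mean()
scatter = residuals.std(ddof=1)
print(f"ZP = {zp:.3f} mag, scatter = {scatter:.3f} mag")
```

Linearity-corrected fluxes would be substituted for the raw counts before this fit; with many stars, the standard error of the mean zero-point falls well below the per-star scatter.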

  3. A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium

    NASA Astrophysics Data System (ADS)

    Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand

    2014-05-01

    The atmospheric radiation field has seen the development of more accurate and faster methods to take into account absorption in participating media. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Fog formation modelling requires a sufficiently accurate method to compute cooling rates. Thanks to High Performance Computing, a multi-spectral approach to the resolution of the Radiative Transfer Equation (RTE) is most often used. Nevertheless, the coupling of three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses analytical absorption functions fitted by Sasamori (1968) on Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating the emissivity functions to the linear absorption coefficient. In the case of the cooling-to-space approximation, this analytical expression gives very accurate results compared to the correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm. One-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method. Then, the three-dimensional RTE under the grey medium assumption is solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature and gas concentrations.

  4. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.

  5. Perfusion Neuroimaging Abnormalities Alone Distinguish National Football League Players from a Healthy Population.

    PubMed

    Amen, Daniel G; Willeumier, Kristen; Omalu, Bennet; Newberg, Andrew; Raghavendra, Cauligi; Raji, Cyrus A

    2016-04-25

    National Football League (NFL) players are exposed to multiple head collisions during their careers. Increasing awareness of the adverse long-term effects of repetitive head trauma has raised substantial concern among players, medical professionals, and the general public. To determine whether low perfusion in specific brain regions on neuroimaging can accurately separate professional football players from healthy controls. A cohort of retired and current NFL players (n = 161) were recruited in a longitudinal study starting in 2009 with ongoing interval follow up. A healthy control group (n = 124) was separately recruited for comparison. Assessments included medical examinations, neuropsychological tests, and perfusion neuroimaging with single photon emission computed tomography (SPECT). Perfusion estimates of each scan were quantified using a standard atlas. We hypothesized that hypoperfusion particularly in the orbital frontal, anterior cingulate, anterior temporal, hippocampal, amygdala, insular, caudate, superior/mid occipital, and cerebellar sub-regions alone would reliably separate controls from NFL players. Cerebral perfusion differences were calculated using a one-way ANOVA and diagnostic separation was determined with discriminant and automatic linear regression predictive models. NFL players showed lower cerebral perfusion on average (p < 0.01) in 36 brain regions. The discriminant analysis subsequently distinguished NFL players from controls with 90% sensitivity, 86% specificity, and 94% accuracy (95% CI 95-99). Automatic linear modeling achieved similar results. Inclusion of age and clinical co-morbidities did not improve diagnostic classification. Specific brain regions commonly damaged in traumatic brain injury show abnormally low perfusion on SPECT in professional NFL players. These same regions alone can distinguish this group from healthy subjects with high diagnostic accuracy. 
This study carries implications for the neurological safety of NFL players.
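The discriminant analysis used for diagnostic separation can be sketched as a two-class Fisher linear discriminant on regional perfusion values. Everything below is synthetic and illustrative (made-up means, variances, and region count), not the study's SPECT data or its exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic perfusion values (arbitrary units) in three regions:
# "players" drawn with lower mean perfusion than "controls".
players = rng.normal(45.0, 5.0, size=(100, 3))
controls = rng.normal(55.0, 5.0, size=(100, 3))

# Fisher linear discriminant: solve Sw w = (mu_controls - mu_players).
mu_p, mu_c = players.mean(0), controls.mean(0)
Sw = np.cov(players.T) + np.cov(controls.T)   # pooled within-class scatter
w = np.linalg.solve(Sw, mu_c - mu_p)          # discriminant direction
midpoint = 0.5 * ((players @ w).mean() + (controls @ w).mean())

# Projections above the midpoint are classified as controls.
correct = ((players @ w) <= midpoint).sum() + ((controls @ w) > midpoint).sum()
accuracy = correct / 200.0
```

With well-separated group means, even this minimal discriminant achieves high accuracy, which is the mechanism behind the sensitivity/specificity figures reported above.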

  6. Perfusion Neuroimaging Abnormalities Alone Distinguish National Football League Players from a Healthy Population

    PubMed Central

    Amen, Daniel G.; Willeumier, Kristen; Omalu, Bennet; Newberg, Andrew; Raghavendra, Cauligi; Raji, Cyrus A.

    2016-01-01

    Background: National Football League (NFL) players are exposed to multiple head collisions during their careers. Increasing awareness of the adverse long-term effects of repetitive head trauma has raised substantial concern among players, medical professionals, and the general public. Objective: To determine whether low perfusion in specific brain regions on neuroimaging can accurately separate professional football players from healthy controls. Method: A cohort of retired and current NFL players (n = 161) were recruited in a longitudinal study starting in 2009 with ongoing interval follow up. A healthy control group (n = 124) was separately recruited for comparison. Assessments included medical examinations, neuropsychological tests, and perfusion neuroimaging with single photon emission computed tomography (SPECT). Perfusion estimates of each scan were quantified using a standard atlas. We hypothesized that hypoperfusion particularly in the orbital frontal, anterior cingulate, anterior temporal, hippocampal, amygdala, insular, caudate, superior/mid occipital, and cerebellar sub-regions alone would reliably separate controls from NFL players. Cerebral perfusion differences were calculated using a one-way ANOVA and diagnostic separation was determined with discriminant and automatic linear regression predictive models. Results: NFL players showed lower cerebral perfusion on average (p < 0.01) in 36 brain regions. The discriminant analysis subsequently distinguished NFL players from controls with 90% sensitivity, 86% specificity, and 94% accuracy (95% CI 95-99). Automatic linear modeling achieved similar results. Inclusion of age and clinical co-morbidities did not improve diagnostic classification. Conclusion: Specific brain regions commonly damaged in traumatic brain injury show abnormally low perfusion on SPECT in professional NFL players. These same regions alone can distinguish this group from healthy subjects with high diagnostic accuracy. 
This study carries implications for the neurological safety of NFL players. PMID:27128374

  7. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  8. A geometric nonlinear degenerated shell element using a mixed formulation with independently assumed strain fields. Final Report; Ph.D. Thesis, 1989

    NASA Technical Reports Server (NTRS)

    Graf, Wiley E.

    1991-01-01

    A mixed formulation is chosen to overcome deficiencies of the standard displacement-based shell model. Element development is traced from the incremental variational principle on through to the final set of equilibrium equations. Particular attention is paid to developing specific guidelines for selecting the optimal set of strain parameters. A discussion of constraint index concepts and their predictive capability related to locking is included. Performance characteristics of the elements are assessed in a wide variety of linear and nonlinear plate/shell problems. Despite limiting the study to geometric nonlinear analysis, a substantial amount of additional insight concerning the finite element modeling of thin plate/shell structures is provided. For example, in nonlinear analysis, given the same mesh and load step size, mixed elements converge in fewer iterations than equivalent displacement-based models. It is also demonstrated that, in mixed formulations, lower order elements are preferred. Additionally, meshes used to obtain accurate linear solutions do not necessarily converge to the correct nonlinear solution. Finally, a new form of locking was identified associated with employing elements designed for biaxial bending in uniaxial bending applications.

  9. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE PAGES

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    2016-05-25

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  10. Instantaneous phase mapping deflectometry for dynamic deformable mirror characterization

    NASA Astrophysics Data System (ADS)

    Trumper, Isaac; Choi, Heejoo

    2017-09-01

    We present an instantaneous phase mapping deflectometry (PMD) system in the context of measuring a continuous surface deformable mirror (DM). Deflectometry has a high dynamic range, enabling the full range of surfaces generated by the DM to be measured. The recent development of an instantaneous PMD system leverages the simple setup of the PMD system to measure dynamic objects with accuracy similar to an interferometer. To demonstrate the capabilities of this technology, we perform a linearity measurement of the actuator motion in a continuous surface DM, which is critical for closed loop control in adaptive optics applications. We measure the entire set of actuators across the DM as they traverse their full range of motion with a Shack-Hartmann wavefront sensor, thereby obtaining the influence function. Given the influence function of each actuator, the DM can produce specific Zernike terms on its surface. We then measure the linearity of the Zernike modes available in the DM software using the instantaneous PMD system. By obtaining the relationship between modes, we can more accurately generate surface profiles composed of Zernike terms. This ability is useful for other dynamic freeform metrology applications that utilize the DM as a null component.

  11. Cosmic bubble and domain wall instabilities II: fracturing of colliding walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braden, Jonathan; Bond, J. Richard; Mersini-Houghton, Laura, E-mail: j.braden@ucl.ac.uk, E-mail: bond@cita.utoronto.ca, E-mail: mersini@physics.unc.edu

    2015-08-01

    We study collisions between nearly planar domain walls including the effects of small initial nonplanar fluctuations. These perturbations represent the small fluctuations that must exist in a quantum treatment of the problem. In a previous paper, we demonstrated that at the linear level a subset of these fluctuations experience parametric amplification as a result of their coupling to the planar symmetric background. Here we study the full three-dimensional nonlinear dynamics using lattice simulations, including both the early time regime when the fluctuations are well described by linear perturbation theory as well as the subsequent stage of fully nonlinear evolution. We find that the nonplanar fluctuations have a dramatic effect on the overall evolution of the system. Specifically, once these fluctuations begin to interact nonlinearly, the split into a planar symmetric part of the field and the nonplanar fluctuations loses its utility. At this point the colliding domain walls dissolve, with the endpoint of this process being the creation of a population of oscillons in the collision region. The original (nearly) planar symmetry has been completely destroyed at this point, and an accurate study of the system requires the full three-dimensional simulation.

  12. Cosmic bubble and domain wall instabilities II: fracturing of colliding walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braden, Jonathan; Department of Physics, University of Toronto, 60 St. George Street, Toronto, ON, M5S 3H8; Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT

    2015-08-26

    We study collisions between nearly planar domain walls including the effects of small initial nonplanar fluctuations. These perturbations represent the small fluctuations that must exist in a quantum treatment of the problem. In a previous paper, we demonstrated that at the linear level a subset of these fluctuations experience parametric amplification as a result of their coupling to the planar symmetric background. Here we study the full three-dimensional nonlinear dynamics using lattice simulations, including both the early time regime when the fluctuations are well described by linear perturbation theory as well as the subsequent stage of fully nonlinear evolution. We find that the nonplanar fluctuations have a dramatic effect on the overall evolution of the system. Specifically, once these fluctuations begin to interact nonlinearly, the split into a planar symmetric part of the field and the nonplanar fluctuations loses its utility. At this point the colliding domain walls dissolve, with the endpoint of this process being the creation of a population of oscillons in the collision region. The original (nearly) planar symmetry has been completely destroyed at this point, and an accurate study of the system requires the full three-dimensional simulation.

  13. Development and validation of a HPTLC method for simultaneous estimation of lornoxicam and thiocolchicoside in combined dosage form

    PubMed Central

    Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A.; Raut, Rahul P.; Choudhari, Vishnu P.; Kuchekar, Bhanudas S.

    2011-01-01

    Aim: To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. Materials and Methods: The separation of the active compounds from pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase and no immiscibility issues were found. The densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. Results: The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60–360 ng/band for LOR and 30–180 ng/band for THIO with correlation coefficients r2 = 0.998 and 0.999, respectively. The percentage recovery for both the analytes was in the range of 98.7–101.2 %. Conclusion: The proposed method was optimized and validated as per the ICH guidelines. PMID:23781452

  14. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. To test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (n = 88) and brain and visual cortex (n = 99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes and (iii) different visual cortical areas, independently of overall brain volume. In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices.

  15. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed Central

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    Background In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. Aim To test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Subjects & Methods Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (N=88), and brain and visual cortex (N=99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. Results A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes, (iii) different visual cortical areas, independently of overall brain volume. Conclusion In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices. PMID:23879766

  16. Reduction of chemical formulas from the isotopic peak distributions of high-resolution mass spectra.

    PubMed

    Roussis, Stilianos G; Proulx, Richard

    2003-03-15

    A method has been developed for the reduction of the chemical formulas of compounds in complex mixtures from the isotopic peak distributions of high-resolution mass spectra. The method is based on the principle that the observed isotopic peak distribution of a mixture of compounds is a linear combination of the isotopic peak distributions of the individual compounds in the mixture. All possible chemical formulas that meet specific criteria (e.g., type and number of atoms in structure, limits of unsaturation, etc.) are enumerated, and theoretical isotopic peak distributions are generated for each formula. The relative amount of each formula is obtained from the accurately measured isotopic peak distribution and the calculated isotopic peak distributions of all candidate formulas. The formulas of compounds in simple spectra, where peak components are fully resolved, are rapidly determined by direct comparison of the calculated and experimental isotopic peak distributions. The singular value decomposition linear algebra method is used to determine the contributions of compounds in complex spectra containing unresolved peak components. The principles of the approach and typical application examples are presented. The method is most useful for the characterization of complex spectra containing partially resolved peaks and structures with multiisotopic elements.
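The core linear-algebra step described above — the observed isotopic distribution as a linear combination of candidate theoretical distributions, solved by SVD-based least squares — can be sketched with toy numbers. The distributions below are invented for illustration, not real isotopic patterns:

```python
import numpy as np

# Columns: theoretical isotopic peak distributions for three hypothetical
# candidate formulas, sampled on a common m/z grid (invented values).
A = np.array([
    [1.00, 0.00, 0.00],
    [0.10, 1.00, 0.00],
    [0.01, 0.11, 1.00],
    [0.00, 0.01, 0.12],
])
true_amounts = np.array([2.0, 1.0, 0.5])
observed = A @ true_amounts  # observed spectrum = linear combination

# Least-squares solution; numpy.linalg.lstsq uses the SVD internally,
# which handles poorly conditioned (partially overlapping) candidate sets.
amounts, *_ = np.linalg.lstsq(A, observed, rcond=None)
```

In the fully resolved case the system is overdetermined but consistent, so the recovered amounts match the true mixture exactly; for unresolved, noisy spectra the SVD yields the minimum-norm least-squares estimate.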

  17. A Unified Point Process Probabilistic Framework to Assess Heartbeat Dynamics and Autonomic Cardiovascular Control

    PubMed Central

    Chen, Zhe; Purdon, Patrick L.; Brown, Emery N.; Barbieri, Riccardo

    2012-01-01

    In recent years, time-varying inhomogeneous point process models have been introduced for assessment of instantaneous heartbeat dynamics as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model’s statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second-order non-linearities, with subsequent bispectral quantification in the frequency domain, further allows for definition of instantaneous metrics of non-linearity. We here present a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in cardiorespiratory elicited interactions, and pointing at our mathematical approach as a promising monitoring tool for an accurate, non-invasive assessment in clinical practice. We also discuss the limitations and other alternative modeling strategies of our point process approach. PMID:22375120

  18. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
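The matrix measure mentioned above can be computed, for the 2-norm, as the largest eigenvalue of the symmetric part of the system matrix. The sketch below uses an arbitrary stable matrix for illustration, not the paper's adaptive-system dynamics or its full delay-margin formula:

```python
import numpy as np

def matrix_measure_2(A):
    """2-norm matrix measure (logarithmic norm): mu_2(A) = lambda_max((A + A^T)/2)."""
    return float(np.linalg.eigvalsh(0.5 * (A + A.T)).max())

# Arbitrary stable example matrix (illustrative only).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
mu = matrix_measure_2(A)  # negative => contracting in the 2-norm
```

A negative matrix measure certifies exponential stability and appears in standard upper bounds on how much input delay a locally linear system can tolerate, which is the role it plays in the estimation method described above.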

  19. 3D TOCSY-HSQC NMR for metabolic flux analysis using non-uniform sampling

    DOE PAGES

    Reardon, Patrick N.; Marean-Reardon, Carrie L.; Bukovec, Melanie A.; ...

    2016-02-05

    13C-Metabolic Flux Analysis ( 13C-MFA) is rapidly being recognized as the authoritative method for determining fluxes through metabolic networks. Site-specific 13C enrichment information obtained using NMR spectroscopy is a valuable input for 13C-MFA experiments. Chemical shift overlaps in the 1D or 2D NMR experiments typically used for 13C-MFA frequently hinder assignment and quantitation of site-specific 13C enrichment. Here we propose the use of a 3D TOCSY-HSQC experiment for 13C-MFA. We employ Non-Uniform Sampling (NUS) to reduce the acquisition time of the experiment to a few hours, making it practical for use in 13C-MFA experiments. Our data show that the NUSmore » experiment is linear and quantitative. Identification of metabolites in complex mixtures, such as a biomass hydrolysate, is simplified by virtue of the 13C chemical shift obtained in the experiment. In addition, the experiment reports 13C-labeling information that reveals the position specific labeling of subsets of isotopomers. As a result, the information provided by this technique will enable more accurate estimation of metabolic fluxes in larger metabolic networks.« less

  20. Quantifying in situ growth rate of a filamentous bacterial species in activated sludge using rRNA:rDNA ratio.

    PubMed

    Nguyen, Vivi L; He, Xia; de Los Reyes, Francis L

    2016-11-01

    If the in situ growth rate of filamentous bacteria in activated sludge can be quantified, researchers can more accurately assess the effect of operating conditions on the growth of filaments and improve the mathematical modeling of filamentous bulking. We developed a method to quantify the in situ specific growth rate of Sphaerotilus natans (a model filament) in activated sludge using the species-specific 16S rRNA:rDNA ratio. Primers targeting the 16S rRNA of S. natans were designed, and real-time PCR and RT-PCR were used to quantify DNA and RNA levels of S. natans, respectively. A positive linear relationship was found between the rRNA:rDNA ratio (from 440 to 4500) and the specific growth rate of S. natans (from 0.036 to 0.172 h⁻¹) using chemostat experiments. The in situ growth rates of S. natans in activated sludge samples from three water reclamation facilities were quantified, illustrating how the approach can be applied in a complex environment such as activated sludge. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
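The calibration idea above — a linear map from rRNA:rDNA ratio to specific growth rate, fitted on chemostat data and then applied to field samples — can be sketched as follows. The calibration points are invented, merely mimicking the reported ranges, and the fit is an assumption for illustration:

```python
import numpy as np

# Invented calibration points spanning roughly the reported ranges
# (ratios ~440-4500, growth rates ~0.036-0.172 h^-1).
ratio = np.array([440.0, 1500.0, 2800.0, 4500.0])
mu = np.array([0.036, 0.072, 0.118, 0.172])

# Least-squares calibration line: mu = slope * ratio + intercept.
slope, intercept = np.polyfit(ratio, mu, 1)

def growth_rate(r):
    """Estimate in situ specific growth rate (h^-1) from an rRNA:rDNA ratio."""
    return slope * r + intercept
```

Once fitted, the line is only trustworthy within the calibrated ratio range; extrapolating beyond it would assume the rRNA:rDNA relationship stays linear.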

  1. Resting Energy Expenditure Prediction in Recreational Athletes of 18–35 Years: Confirmation of Cunningham Equation and an Improved Weight-Based Alternative

    PubMed Central

    ten Haaf, Twan; Weijs, Peter J. M.

    2014-01-01

    Introduction Resting energy expenditure (REE) is expected to be higher in athletes because of their relatively high fat free mass (FFM). Therefore, a REE predictive equation specific to recreational athletes may be required. The aim of this study was to validate existing REE predictive equations and to develop a new recreational-athlete-specific equation. Methods 90 (53M, 37F) adult athletes, exercising on average 9.1±5.0 hours a week and 5.0±1.8 times a week, were included. REE was measured using indirect calorimetry (Vmax Encore n29); FFM and FM were measured using air displacement plethysmography. Multiple linear regression analysis was used to develop a new FFM-based and a new weight-based REE predictive equation. The percentage accurate predictions (within 10% of measured REE), percentage bias, root mean square error and limits of agreement were calculated. Results The Cunningham equation, the new weight-based equation, and the new FFM-based equation performed equally well. De Lorenzo's equation predicted REE less accurately, though still better than the other generally used REE predictive equations. Harris-Benedict, WHO, Schofield, Mifflin and Owen all showed less than 50% accuracy. Conclusion For a population of (Dutch) recreational athletes, the REE can accurately be predicted with the existing Cunningham equation. Since body composition measurement is not always possible, and other generally used equations fail, the new weight-based equation is advised for use in sports nutrition. PMID:25275434
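The Cunningham equation that the study confirms is, in its commonly cited 1980 form, REE = 500 + 22 × FFM (kcal/day, FFM in kg). A one-line sketch follows; note these coefficients are the standard published ones, and the study may use a slightly different variant:

```python
def cunningham_ree(ffm_kg):
    """Resting energy expenditure (kcal/day) from fat-free mass (kg),
    using Cunningham's 1980 coefficients: REE = 500 + 22 * FFM."""
    return 500 + 22 * ffm_kg

ree = cunningham_ree(60)  # e.g. 60 kg fat-free mass -> 1820 kcal/day
```

Because this formula needs a body-composition measurement for FFM, the study's weight-based alternative exists precisely for settings where only body weight is available.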

  2. Performance Models for the Spike Banded Linear System Solver

    DOE PAGES

    Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...

    2011-01-01

    With availability of large-scale parallel platforms comprised of tens-of-thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners, compared to the state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, (iii) we show the excellent prediction capabilities of our model, based on which we argue for the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. 
All of our results are validated on diverse heterogeneous multiclusters – platforms for which performance prediction is particularly challenging. Finally, we predict the scalability of the Spike algorithm up to 65,536 cores with our model. In this paper we extend the results presented in the Ninth International Symposium on Parallel and Distributed Computing.

  3. Application of singular value decomposition to structural dynamics systems with constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Pinson, L. D.

    1985-01-01

    Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and convenient in eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
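The SVD-based coordinate transformation described above can be sketched directly: the right singular vectors associated with zero singular values of the constraint matrix span its null space, and restricting the coordinates to that subspace satisfies the constraints identically. The constraint matrix below is a hypothetical example, not one from the paper:

```python
import numpy as np

# Hypothetical linear homogeneous constraints C x = 0 on 4 coordinates.
C = np.array([[1.0, 1.0, 0.0,  0.0],
              [0.0, 0.0, 1.0, -1.0]])

# SVD: right singular vectors beyond the numerical rank span the null space.
U, s, Vt = np.linalg.svd(C)
rank = int(np.sum(s > 1e-12))
N = Vt[rank:].T   # columns: orthonormal basis of the null space of C

# Any x = N @ q automatically satisfies C x = 0, so the dynamic system can
# be reformulated in the reduced coordinates q with no dependent DOFs.
q = np.array([0.3, -1.2])
x = N @ q
```

Unlike Gaussian elimination, this basis is orthonormal, which is the numerical-accuracy advantage the abstract refers to.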

  4. Protein quantitation using Ru-NHS ester tagging and isotope dilution high-pressure liquid chromatography-inductively coupled plasma mass spectrometry determination.

    PubMed

    Liu, Rui; Lv, Yi; Hou, Xiandeng; Yang, Lu; Mester, Zoltan

    2012-03-20

    An accurate, simple, and sensitive method for the direct determination of proteins by nonspecies specific isotope dilution and external calibration high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) is described. The labeling of myoglobin (17 kDa), transferrin (77 kDa), and thyroglobulin (670 kDa) proteins was accomplished in a single-step reaction with a commercially available bis(2,2'-bipyridine)-4'-methyl-4-carboxybipyridine-ruthenium N-succinimidyl ester-bis(hexafluorophosphate) (Ru-NHS ester). Using excess amounts of Ru-NHS ester compared to the protein concentration at optimized labeling conditions, constant ratios for Ru to proteins were obtained. Bioconjugate solutions containing both labeled and unlabeled proteins as well as excess Ru-NHS ester reagent were injected onto a size exclusion HPLC column for separation and ICPMS detection without any further treatment. A ⁹⁹Ru-enriched spike was used for nonspecies specific ID calibration. The accuracy of the method was confirmed at various concentration levels. An average recovery of 100% ± 3% (1 standard deviation (SD), n = 9) was obtained with a typical precision of better than 5% RSD at 100 μg mL⁻¹ for nonspecies specific ID. Detection limits (3SD) of 1.6, 3.2, and 7.0 fmol estimated from three procedure blanks were obtained for myoglobin, transferrin, and thyroglobulin, respectively. These detection limits are suitable for the direct determination of intact proteins at trace levels. For simplicity, external calibration was also tested. Good linear correlation coefficients, 0.9901, 0.9921, and 0.9980 for myoglobin, transferrin, and thyroglobulin, respectively, were obtained. The measured concentrations of proteins in a solution were in good agreement with their volumetrically prepared values. 
To the best of our knowledge, this is the first application of nonspecies specific ID for the accurate and direct determination of proteins using a Ru-NHS ester labeling reagent.
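    The core of nonspecies-specific isotope dilution is the standard IDMS ratio algebra: spike the sample with an isotopically enriched standard, measure one isotope ratio, and solve for the analyte amount. A minimal sketch of that calculation is below; the isotope abundances and amounts are illustrative placeholders, not values from the paper, and the real method of course involves HPLC separation and mass-bias correction.

    ```python
    # Minimal sketch of the isotope-dilution algebra behind nonspecies-specific ID
    # (generic IDMS equation, not the authors' implementation; numbers illustrative).

    def amount_from_ratio(n_spike, r_meas, a1_s, a2_s, a1_sp, a2_sp):
        """Solve the isotope-dilution equation for the analyte amount n_s.

        The measured ratio of isotope 1 to isotope 2 in the blend is
            R = (n_s*a1_s + n_sp*a1_sp) / (n_s*a2_s + n_sp*a2_sp),
        which rearranges to
            n_s = n_sp * (a1_sp - R*a2_sp) / (R*a2_s - a1_s).
        """
        return n_spike * (a1_sp - r_meas * a2_sp) / (r_meas * a2_s - a1_s)

    # Hypothetical abundances: a natural-Ru tag vs a 99Ru-enriched spike.
    a1_nat, a2_nat = 0.127, 0.316   # isotope 1 (99Ru), isotope 2 (102Ru) in the tag
    a1_spk, a2_spk = 0.950, 0.020   # the same isotopes in the enriched spike

    # Forward-simulate a blend of 2.0 fmol tagged analyte with 5.0 fmol spike ...
    n_s_true, n_sp = 2.0, 5.0
    r = (n_s_true * a1_nat + n_sp * a1_spk) / (n_s_true * a2_nat + n_sp * a2_spk)

    # ... then invert the measured ratio to recover the analyte amount.
    n_s_est = amount_from_ratio(n_sp, r, a1_nat, a2_nat, a1_spk, a2_spk)
    print(round(n_s_est, 6))  # recovers 2.0
    ```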

  5. Application of Linearized Kalman Filter-Smoother to Aircraft Trajectory Estimation.

    DTIC Science & Technology

    1988-06-01

... The kinematic relationships between wind-axis Euler angles and angular rates are given in the report (Etkin, 1972: 150). ... values, and those for RP-2 were chosen in order to explore less accurate range measurements combined with more accurate angular measurements. This was of interest because of the uncertainty in position introduced by large angular measurement uncertainties at long ranges. Finally, radar models RR ...

  6. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  7. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion.

    PubMed

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan

    2013-09-01

Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and no accurate measurement method is currently recognized. This study explored the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and analyzed the correlation between the volume of a free pleural effusion and its various diameters. The 64-slice CT volume-rendering technique was used in three parts of the study. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze its correlation with the diameters of the effusion, which was then used to calculate regression equations. When the fluid volume of the self-made thoracic model measured by the 64-slice CT volume-rendering technique was compared with the actual injection volume, no significant difference was found (P = 0.836). For the 25 patients with drained pleural effusions, the comparison of the volume reduction with the actual volume of the liquid extract likewise revealed no significant difference (P = 0.989). The following linear regression equation related the pleural effusion volume (V), measured by the CT volume-rendering technique, to the greatest depth of the effusion (d): V = 158.16 × d - 116.01 (r = 0.91, P < 0.001). A second linear regression related the volume to the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P < 0.001).
The 64-slice CT volume-rendering technique can accurately measure the volume in pleural effusion patients, and a linear regression equation can be used to estimate the volume of the free pleural effusion.
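    The two reported regression equations are simple enough to evaluate directly. The sketch below codes them with the coefficients quoted in the abstract; the input measurements are hypothetical examples, and the abstract does not state the units of the diameters (centimetres are assumed here).

    ```python
    # The two regression equations reported in the abstract for estimating free
    # pleural effusion volume from CT diameter measurements. Coefficients are taken
    # from the abstract; the example inputs are hypothetical.

    def volume_from_depth(d):
        """V = 158.16 * d - 116.01, with d the greatest depth of the effusion."""
        return 158.16 * d - 116.01

    def volume_from_diameters(l, h, d):
        """V = 0.56 * (l * h * d) + 39.44, using the product of three diameters."""
        return 0.56 * (l * h * d) + 39.44

    # Illustrative measurements (assumed cm), not data from the paper:
    print(round(volume_from_depth(3.0), 2))                  # 358.47
    print(round(volume_from_diameters(12.0, 10.0, 3.0), 2))  # 241.04
    ```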

  8. High Resolution Mapping of Soil Properties Using Remote Sensing Variables in South-Western Burkina Faso: A Comparison of Machine Learning and Multiple Linear Regression Models

    PubMed Central

    Welp, Gerhard; Thiel, Michael

    2017-01-01

Accurate and detailed spatial soil information is essential for environmental modelling, risk assessment and decision making. The use of Remote Sensing data as secondary sources of information in digital soil mapping has been found to be cost effective and less time consuming compared to traditional soil mapping approaches. However, the potential of Remote Sensing data to improve knowledge of local-scale soil information in West Africa has not been fully explored. This study investigated the use of high spatial resolution satellite data (RapidEye and Landsat), terrain/climatic data and laboratory analysed soil samples to map the spatial distribution of six soil properties - sand, silt, clay, cation exchange capacity (CEC), soil organic carbon (SOC) and nitrogen - in a 580 km2 agricultural watershed in south-western Burkina Faso. Four statistical prediction models - multiple linear regression (MLR), random forest regression (RFR), support vector machine (SVM) and stochastic gradient boosting (SGB) - were tested and compared. Internal validation was conducted by cross-validation, while the predictions were validated against an independent set of soil samples covering both the modelling area and an extrapolation area. Model performance statistics revealed that the machine learning techniques performed marginally better than the MLR, with the RFR providing in most cases the highest accuracy. The inability of MLR to handle non-linear relationships between dependent and independent variables was found to be a limitation in accurately predicting soil properties at unsampled locations. Satellite data acquired during ploughing or early crop development stages (e.g. May, June) were found to be the most important spectral predictors, while elevation, temperature and precipitation emerged as prominent terrain/climatic variables in predicting soil properties. 
The results further showed that shortwave infrared and near infrared channels of Landsat 8, as well as soil-specific indices of redness, coloration and saturation, were prominent predictors in digital soil mapping. Considering the increasing availability of free Remote Sensing data (e.g. Landsat, SRTM, Sentinels), soil information at local and regional scales in data-poor regions such as West Africa can be improved with relatively little financial and human resources. PMID:28114334

  9. High Resolution Mapping of Soil Properties Using Remote Sensing Variables in South-Western Burkina Faso: A Comparison of Machine Learning and Multiple Linear Regression Models.

    PubMed

    Forkuor, Gerald; Hounkpatin, Ozias K L; Welp, Gerhard; Thiel, Michael

    2017-01-01

Accurate and detailed spatial soil information is essential for environmental modelling, risk assessment and decision making. The use of Remote Sensing data as secondary sources of information in digital soil mapping has been found to be cost effective and less time consuming compared to traditional soil mapping approaches. However, the potential of Remote Sensing data to improve knowledge of local-scale soil information in West Africa has not been fully explored. This study investigated the use of high spatial resolution satellite data (RapidEye and Landsat), terrain/climatic data and laboratory analysed soil samples to map the spatial distribution of six soil properties - sand, silt, clay, cation exchange capacity (CEC), soil organic carbon (SOC) and nitrogen - in a 580 km2 agricultural watershed in south-western Burkina Faso. Four statistical prediction models - multiple linear regression (MLR), random forest regression (RFR), support vector machine (SVM) and stochastic gradient boosting (SGB) - were tested and compared. Internal validation was conducted by cross-validation, while the predictions were validated against an independent set of soil samples covering both the modelling area and an extrapolation area. Model performance statistics revealed that the machine learning techniques performed marginally better than the MLR, with the RFR providing in most cases the highest accuracy. The inability of MLR to handle non-linear relationships between dependent and independent variables was found to be a limitation in accurately predicting soil properties at unsampled locations. Satellite data acquired during ploughing or early crop development stages (e.g. May, June) were found to be the most important spectral predictors, while elevation, temperature and precipitation emerged as prominent terrain/climatic variables in predicting soil properties. 
The results further showed that shortwave infrared and near infrared channels of Landsat 8, as well as soil-specific indices of redness, coloration and saturation, were prominent predictors in digital soil mapping. Considering the increasing availability of free Remote Sensing data (e.g. Landsat, SRTM, Sentinels), soil information at local and regional scales in data-poor regions such as West Africa can be improved with relatively little financial and human resources.
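    The reported limitation of MLR with non-linear predictor-response relationships, and the cross-validation scheme used for internal validation, can both be illustrated with a toy example. The sketch below (synthetic data, not the study's soil dataset) runs k-fold cross-validation on a deliberately non-linear response and compares an ordinary least-squares line against a simple non-linear learner (1-nearest-neighbour, standing in for the tree-based models).

    ```python
    # Toy stdlib illustration: k-fold cross-validation showing a linear model
    # underperforming a non-linear learner when the relationship is non-linear.
    # Synthetic data only; not the study's soil samples or models.
    import math

    xs = [i / 25.0 - 1.0 for i in range(50)]   # 50 predictor values on [-1, 1)
    ys = [x * x for x in xs]                   # strongly non-linear response

    def linear_fit(x, y):
        """Ordinary least squares for y ~ a + b*x."""
        n = len(x); mx = sum(x) / n; my = sum(y) / n
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
        a = my - b * mx
        return lambda v: a + b * v

    def nn_fit(x, y):
        """1-nearest-neighbour regressor (a stand-in non-linear learner)."""
        pairs = list(zip(x, y))
        return lambda v: min(pairs, key=lambda p: abs(p[0] - v))[1]

    def cv_rmse(fit, k=5):
        """k-fold cross-validated RMSE with interleaved folds."""
        sq_errs = []
        for fold in range(k):
            test = set(range(fold, len(xs), k))
            tr_x = [xs[i] for i in range(len(xs)) if i not in test]
            tr_y = [ys[i] for i in range(len(ys)) if i not in test]
            model = fit(tr_x, tr_y)
            sq_errs += [(model(xs[i]) - ys[i]) ** 2 for i in test]
        return math.sqrt(sum(sq_errs) / len(sq_errs))

    lin_err, nn_err = cv_rmse(linear_fit), cv_rmse(nn_fit)
    print(nn_err < lin_err)  # True: the non-linear learner generalises better here
    ```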

  10. Analytical method for the accurate determination of trichothecenes in grains using LC-MS/MS: a comparison between MRM transition and MS3 quantitation.

    PubMed

    Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn

    2012-07-01

    The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze trichothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS(3) quantitation strategies in tandem. The trichothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyl-deoxynivalenol, 15-acetyldeoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate adducts and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS(3) (linear ion trap) modes. Three experiments were performed for each quantitation mode and matrix, in batches over 6 days, for recovery studies. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg(-1) (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three spiking concentration levels of 40, 80, and 120 μg kg(-1). A quantitation limit of 2-6 μg kg(-1) was achieved by applying the MRM transition quantitation strategy. Under MS(3) mode, a quantitation limit of 4-10 μg kg(-1) was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS(3) quantitation, respectively. 
The successful utilization of MS(3) enabled accurate analyte fragmentation pattern matching and its quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are unavailable under MRM transition.

  11. Estimation of real-time runway surface contamination using flight data recorder parameters

    NASA Astrophysics Data System (ADS)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise, the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components present when an aircraft comes to rest during ground roll. To validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation between simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses were performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced to the simulation. 
After the linear analysis, the results show that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
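    The equilibrium idea can be reduced to a one-axis caricature: during ground roll, the measured longitudinal deceleration is the sum of aerodynamic drag, reverse thrust and tyre friction, so the friction coefficient can be backed out from the other terms. The sketch below uses hypothetical values and ignores moments, gear geometry and antiskid dynamics, all of which the study's full model includes.

    ```python
    # One-axis sketch of backing out the friction coefficient from force
    # equilibrium during ground roll. All numbers hypothetical; the study's
    # six-degree-of-freedom model is far more complete.
    G = 9.81  # gravitational acceleration, m/s^2

    def friction_coefficient(mass, accel_x, drag, thrust, lift):
        """mu from m*a_x = thrust - drag - mu*N, with normal load N = m*g - lift."""
        normal = mass * G - lift
        return (thrust - drag - mass * accel_x) / normal

    # Forward-check: with mu = 0.4, the implied deceleration inverts back exactly.
    m, drag, thrust, lift, mu_true = 60000.0, 15000.0, -40000.0, 100000.0, 0.4
    normal = m * G - lift
    a_x = (thrust - drag - mu_true * normal) / m        # simulated deceleration
    mu_est = friction_coefficient(m, a_x, drag, thrust, lift)
    print(round(mu_est, 6))  # 0.4
    ```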

  12. Effect of environmental torques on short-term attitude prediction for a rolling-wheel spacecraft in a sun-synchronous orbit

    NASA Technical Reports Server (NTRS)

    Hodge, W. F.

    1972-01-01

    A numerical evaluation and an analysis of the effects of environmental disturbance torques on the attitude of a hexagonal cylinder rolling wheel spacecraft were performed. The resulting perturbations caused by five such torques were found to be very small and exhibited linearity such that linearized equations of motion yielded accurate results over short periods and the separate perturbations contributed by each torque were additive in the sense of superposition. Linearity of the torque perturbations was not affected by moderate system design changes and persisted for torque-to-angular momentum ratios up to 100 times the nominal expected value. As these conditions include many possible applications, similar linear behavior might be anticipated for other rolling-wheel spacecraft.

  13. Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage

    NASA Astrophysics Data System (ADS)

    Cepowski, Tomasz

    2017-06-01

    The paper presents mathematical relationships that allow us to forecast the estimated main engine power of new container ships, based on data concerning vessels built in 2005-2015. The presented approximations allow us to estimate the engine power based on the length between perpendiculars and the number of containers the ship will carry. The approximations were developed using simple linear regression and multivariate linear regression analysis. The presented relations have practical application in estimating the container ship engine power needed at the preliminary parametric design stage. The analysis shows that using multiple linear regression to predict the main engine power of a container ship yields more accurate estimates than simple linear regression.
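    The multivariate regression in question fits engine power as a linear function of two predictors, length between perpendiculars (Lpp) and container capacity (TEU). A minimal sketch via the normal equations is below; the fleet data and coefficients are synthetic placeholders, not the paper's regression model.

    ```python
    # Sketch of multivariate linear regression: power ~ b0 + b1*Lpp + b2*TEU,
    # solved via the normal equations. Data and coefficients are synthetic.

    def solve3(A, b):
        """Gauss-Jordan elimination for a 3x3 system (demo only, partial pivoting)."""
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for c in range(3):
            p = max(range(c, 3), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(3):
                if r != c:
                    f = M[r][c] / M[c][c]
                    M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
        return [M[i][3] / M[i][i] for i in range(3)]

    def mlr_fit(lpp, teu, power):
        """Least squares via X'X beta = X'y for the two-predictor model."""
        X = [[1.0, l, t] for l, t in zip(lpp, teu)]
        XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
        Xty = [sum(r[i] * y for r, y in zip(X, power)) for i in range(3)]
        return solve3(XtX, Xty)

    # Synthetic "fleet" generated from known coefficients; the fit recovers them.
    lpp = [150.0, 200.0, 250.0, 300.0, 350.0, 180.0, 280.0]           # m
    teu = [1000.0, 2500.0, 4500.0, 8000.0, 14000.0, 1800.0, 6000.0]   # containers
    power = [-2000.0 + 60.0 * l + 2.5 * t for l, t in zip(lpp, teu)]  # kW
    b0, b1, b2 = mlr_fit(lpp, teu, power)
    print(round(b1, 3), round(b2, 3))  # 60.0 2.5
    ```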

  14. A Technique of Treating Negative Weights in WENO Schemes

    NASA Technical Reports Server (NTRS)

    Shi, Jing; Hu, Changqing; Shu, Chi-Wang

    2000-01-01

    High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. Previous strategies for handling this difficulty either regroup stencils or reduce the order of accuracy to eliminate the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without the need to eliminate them.
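    The linear-weight idea can be checked numerically in the classical uniform-mesh case, where the weights happen to be positive: three third-order stencil reconstructions combined with the standard linear weights (1/10, 6/10, 3/10) reproduce the fifth-order value at the cell interface. The stencil coefficients below are the standard ones from the WENO literature; for a quartic the combination is exact.

    ```python
    # Numerical check of linear weights for the classical fifth-order WENO
    # reconstruction on a uniform mesh (standard coefficients; exact for quartics).

    def avg(f_int, a, b):
        """Cell average of a function, given its antiderivative f_int."""
        return (f_int(b) - f_int(a)) / (b - a)

    h = 0.1
    centers = [h * (i - 2) for i in range(5)]   # cells i-2 .. i+2, with x_i = 0
    # Cell averages of f(x) = x**4 (antiderivative x**5/5):
    v = [avg(lambda x: x**5 / 5.0, c - h / 2, c + h / 2) for c in centers]

    # Third-order reconstructions of f at the interface x_{i+1/2} = h/2:
    p0 = (2 * v[0] - 7 * v[1] + 11 * v[2]) / 6.0   # stencil {i-2, i-1, i}
    p1 = (   -v[1] + 5 * v[2] +  2 * v[3]) / 6.0   # stencil {i-1, i, i+1}
    p2 = (2 * v[2] + 5 * v[3] -      v[4]) / 6.0   # stencil {i, i+1, i+2}

    combined = 0.1 * p0 + 0.6 * p1 + 0.3 * p2      # linear weights -> fifth order
    exact = (h / 2) ** 4
    print(abs(combined - exact) < 1e-12)           # True: exact for quartic data
    ```

    The paper's point is that for other mesh geometries or reconstruction targets the analogous weights can come out negative, and the naive combination then loses stability.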

  15. Quasi-linear regime of gravitational instability: Implication to density-velocity relation

    NASA Technical Reports Server (NTRS)

    Shandarin, Sergei F.

    1993-01-01

    The well known linear relation between density and peculiar velocity distributions is a powerful tool for studying the large-scale structure in the Universe. Potentially it can test the gravitational instability theory and measure Omega. At present it is used in both ways: the velocity is reconstructed, provided the density is given, and vice versa. Reconstructing the density from the velocity field usually makes use of the Zel'dovich approximation. However, the standard linear approximation in Eulerian space is used when the velocity is reconstructed from the density distribution. I show that the linearized Zel'dovich approximation, in other words the linear approximation in the Lagrangian space, is more accurate for reconstructing velocity. In principle, a simple iteration technique can recover both the density and velocity distributions in Lagrangian space, but its practical application may need an additional study.
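    The linear density-velocity relation the abstract builds on can be written explicitly. This is standard linear perturbation theory in comoving coordinates (textbook material, not equations quoted from the paper):

    ```latex
    % Linear (Eulerian) continuity equation for the density contrast \delta
    % and peculiar velocity \mathbf{v} in comoving coordinates:
    \dot{\delta} + \frac{1}{a}\,\nabla\cdot\mathbf{v} = 0 .
    % For the growing mode, \dot{\delta} = H f(\Omega)\,\delta, which gives the
    % density--velocity relation used in reconstructions:
    \delta(\mathbf{x}) = -\frac{1}{a\,H\,f(\Omega)}\,\nabla\cdot\mathbf{v}(\mathbf{x}),
    \qquad f(\Omega) \simeq \Omega^{0.6} .
    ```

    Because the relation involves f(Ω), measuring both fields can in principle constrain Ω; the abstract's argument is that linearizing the Zel'dovich (Lagrangian) displacement, rather than applying this Eulerian linearization directly, reconstructs the velocity more accurately.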

  16. Force-field prediction of materials properties in metal-organic frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, Peter G.; Moosavi, Seyed Mohamad; Witman, Matthew

    In this work, MOF bulk properties are evaluated and compared using several force fields on several well-studied MOFs, including IRMOF-1 (MOF-5), IRMOF-10, HKUST-1, and UiO-66. It is found that, surprisingly, UFF and DREIDING provide good values for the bulk modulus and linear thermal expansion coefficients for these materials, excluding those that they are not parametrized for. Force fields developed specifically for MOFs, including UFF4MOF, BTW-FF, and the DWES force field, are also found to provide accurate values for these materials' properties. While each force field offers a moderately good picture of these properties, noticeable deviations can be observed for properties sensitive to framework vibrational modes, and this observation is more pronounced upon the introduction of framework charges.

  17. A method for simulating a flux-locked DC SQUID

    NASA Technical Reports Server (NTRS)

    Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.

    1993-01-01

    The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
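    The core of the described approach is fitting a measured periodic V-Φ curve with a truncated Fourier series so it can be evaluated cheaply inside a circuit simulator. The sketch below illustrates that idea in miniature: sample a periodic curve over one flux quantum, compute real Fourier coefficients, and evaluate the series at arbitrary flux. The test curve is synthetic, not data from a real SQUID, and the real method also handles the I-V characteristic and flux-locked-loop electronics.

    ```python
    # Sketch of simulating a V-Phi characteristic by Fourier-series fitting:
    # sample one period, extract harmonics, evaluate anywhere. Synthetic curve.
    import math

    PHI0 = 1.0   # flux quantum in normalised units
    N = 64       # samples per period

    def fourier_fit(samples, n_harm):
        """Real Fourier coefficients (a0, [(ak, bk), ...]) from uniform samples."""
        n = len(samples)
        a0 = sum(samples) / n
        coeffs = []
        for k in range(1, n_harm + 1):
            ak = 2.0 / n * sum(s * math.cos(2 * math.pi * k * j / n)
                               for j, s in enumerate(samples))
            bk = 2.0 / n * sum(s * math.sin(2 * math.pi * k * j / n)
                               for j, s in enumerate(samples))
            coeffs.append((ak, bk))
        return a0, coeffs

    def evaluate(a0, coeffs, phi):
        """Evaluate the fitted series at flux phi."""
        t = 2 * math.pi * phi / PHI0
        return a0 + sum(a * math.cos(k * t) + b * math.sin(k * t)
                        for k, (a, b) in enumerate(coeffs, start=1))

    # Synthetic V-Phi curve with two harmonics; the fit reproduces it exactly.
    def v_of_phi(phi):
        t = 2 * math.pi * phi / PHI0
        return 10.0 + 4.0 * math.cos(t) + 1.5 * math.cos(2 * t)

    samples = [v_of_phi(j * PHI0 / N) for j in range(N)]
    a0, coeffs = fourier_fit(samples, 4)
    err = max(abs(evaluate(a0, coeffs, 0.013 * i) - v_of_phi(0.013 * i))
              for i in range(50))
    print(err < 1e-9)  # True
    ```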

  18. Aquatic Debris Detection Using Embedded Camera Sensors

    PubMed Central

    Wang, Yong; Wang, Dianhong; Lu, Qian; Luo, Dapeng; Fang, Wu

    2015-01-01

    Aquatic debris monitoring is of great importance to human health, aquatic habitats and water transport. In this paper, we first introduce the prototype of an aquatic sensor node equipped with an embedded camera sensor. Based on this sensing platform, we propose a fast and accurate debris detection algorithm. Our method is specifically designed based on compressive sensing theory to give full consideration to the unique challenges in aquatic environments, such as waves, swaying reflections, and tight energy budget. To upload debris images, we use an efficient sparse recovery algorithm in which only a few linear measurements need to be transmitted for image reconstruction. Besides, we implement the host software and test the debris detection algorithm on realistically deployed aquatic sensor nodes. The experimental results demonstrate that our approach is reliable and feasible for debris detection using camera sensors in aquatic environments. PMID:25647741

  19. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include the Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibration methods used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated in the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.

  20. An under-designed RC frame: Seismic assessment through displacement based approach and possible refurbishment with FRP strips and RC jacketing

    NASA Astrophysics Data System (ADS)

    Valente, Marco; Milani, Gabriele

    2017-07-01

    Many existing reinforced concrete buildings in Southern Europe were built (and hence designed) before the introduction of displacement based design in national seismic codes. They are obviously highly vulnerable to seismic actions. In such a situation, simplified methodologies for the seismic assessment and retrofitting of existing structures are required. In this study, a displacement based procedure using non-linear static analyses is applied to a four-story existing RC frame. The aim is to obtain an estimation of its overall structural inadequacy as well as the effectiveness of a specific retrofitting intervention by means of GFRP laminates and RC jacketing. Accurate numerical models are developed within a displacement based approach to reproduce the seismic response of the RC frame in the original configuration and after strengthening.

  1. Lung pair phantom

    DOEpatents

    Olsen, Peter C.; Gordon, N. Ross; Simmons, Kevin L.

    1993-01-01

    The present invention is a material and method of making the material that exhibits improved radiation attenuation simulation of real lungs, i.e., an "authentic lung tissue" or ALT phantom. Specifically, the ALT phantom is a two-part polyurethane medium density foam mixed with calcium carbonate, potassium carbonate if needed for K-40 background, lanthanum nitrate, acetone, and a nitrate or chloride form of a radionuclide. This formulation is found to closely match chemical composition and linear attenuation of real lungs. The ALT phantom material is made according to established procedures but without adding foaming agents or preparing thixotropic concentrate and with a modification for ensuring uniformity of density of the ALT phantom that is necessary for accurate simulation. The modification is that the polyurethane chemicals are mixed at a low temperature prior to pouring the polyurethane mixture into the mold.

  2. [Quality control of Maca (Lepidium meyenii)].

    PubMed

    Shu, Ji-cheng; Cui, Hang-qing; Huang, Ying-zheng; Huang, Xiao-ying; Yang, Ming

    2015-12-01

    To control the quality of Maca, a quality standard was established in this study. According to the methods recorded in the Appendix of the Chinese Pharmacopoeia (2010 Edition), inspections of water, extract, total ash, acid-insoluble substances and heavy metals in Lepidium meyenii were carried out. N-benzyl-9Z,12Z-octadecadienamide in L. meyenii was identified by TLC and determined by HPLC. The results showed that the TLC identification of N-benzyl-9Z,12Z-octadecadienamide was distinct and specific. In the content determination experiment, the linearity of N-benzyl-9Z,12Z-octadecadienamide was in the range of 0.01-2 microg (r = 0.9998), and the average recovery (n = 9) was 99.27% (RSD 2.0%). The methods are simple and accurate, with good reproducibility, and are suitable for the quality control of L. meyenii.

  3. Lung pair phantom

    DOEpatents

    Olsen, P.C.; Gordon, N.R.; Simmons, K.L.

    1993-11-30

    The present invention is a material and method of making the material that exhibits improved radiation attenuation simulation of real lungs, i.e., an "authentic lung tissue" or ALT phantom. Specifically, the ALT phantom is a two-part polyurethane medium density foam mixed with calcium carbonate, potassium carbonate if needed for K-40 background, lanthanum nitrate, acetone, and a nitrate or chloride form of a radionuclide. This formulation is found to closely match chemical composition and linear attenuation of real lungs. The ALT phantom material is made according to established procedures but without adding foaming agents or preparing thixotropic concentrate and with a modification for ensuring uniformity of density of the ALT phantom that is necessary for accurate simulation. The modification is that the polyurethane chemicals are mixed at a low temperature prior to pouring the polyurethane mixture into the mold.

  4. Ultra-High Performance Liquid Chromatography (UHPLC) Method for the Determination of Limonene in Sweet Orange (Citrus sinensis) Oil: Implications for Limonene Stability.

    PubMed

    Bernart, Matthew W

    2015-01-01

    The citrus-derived bioactive monoterpene limonene is an important industrial commodity and fragrance constituent. An RP isocratic elution C18 ultra-HPLC (UHPLC) method using a superficially porous stationary phase and photodiode array (PDA) detector has been developed for determining the limonene content of sweet orange (Citrus sinensis) oil. The method is fast with a cycle time of 1.2 min, linear, precise, accurate, specific, and stability indicating, and it satisfies U.S. Pharmacopeia suitability parameters. The method may be useful in its present form for limonene processing, or modified for research on more polar compounds of the terpenome. A forced-degradation experiment showed that limonene is degraded by heat in hydro-ethanolic solution. PDA detection facilitates classification of minor components of the essential oil, including β-myrcene.

  5. Development of a head impact monitoring "Intelligent Mouthguard".

    PubMed

    Hedin, Daniel S; Gibson, Paul L; Bartsch, Adam J; Samorezov, Sergey

    2016-08-01

    The authors present the development and laboratory system-level testing of an impact-monitoring "Intelligent Mouthguard" intended to help with the identification of potentially concussive head impacts and cumulative head impact dosage. The goal of the Intelligent Mouthguard is to provide an indicator of potential concussion risk and to help caregivers identify athletes needing sideline concussion protocol testing. The Intelligent Mouthguard may also help identify individuals who are at higher risk based on historical dosage. The Intelligent Mouthguard integrates inertial sensors to provide three-degree-of-freedom linear and rotational kinematics. The electronics are fully integrated into a custom mouthguard that couples tightly to the upper teeth. The combination of tight coupling and highly accurate sensor data means the Intelligent Mouthguard meets the National Football League (NFL) Level I validity specification, based on the laboratory system-level test data presented in this study.

  6. A novel disturbance-observer based friction compensation scheme for ball and plate system.

    PubMed

    Wang, Yongkun; Sun, Mingwei; Wang, Zenghui; Liu, Zhongxin; Chen, Zengqiang

    2014-03-01

    Friction is often ignored when designing a controller for the ball and plate system, which can lead to steady-error and stick-slip phenomena, especially for the small amplitude command. It is difficult to achieve high-precision control performance for the ball and plate system because of its friction. A novel reference compensation strategy is presented to attenuate the aftereffects caused by the friction. To realize this strategy, a linear control law is proposed based on a reduced-order observer. Neither the accurate friction model nor the estimation of specific characteristic parameters is needed in this design. Moreover, the describing function method illustrates that the limit cycle can be avoided. Finally, the comparative mathematical simulations and the practical experiments are used to validate the effectiveness of the proposed method. © 2013 ISA Published by ISA All rights reserved.

  7. Force-field prediction of materials properties in metal-organic frameworks

    DOE PAGES

    Boyd, Peter G.; Moosavi, Seyed Mohamad; Witman, Matthew; ...

    2016-12-23

    In this work, MOF bulk properties are evaluated and compared using several force fields on several well-studied MOFs, including IRMOF-1 (MOF-5), IRMOF-10, HKUST-1, and UiO-66. It is found that, surprisingly, UFF and DREIDING provide good values for the bulk modulus and linear thermal expansion coefficients for these materials, excluding those that they are not parametrized for. Force fields developed specifically for MOFs, including UFF4MOF, BTW-FF, and the DWES force field, are also found to provide accurate values for these materials' properties. While each force field offers a moderately good picture of these properties, noticeable deviations can be observed for properties sensitive to framework vibrational modes, and this observation is more pronounced upon the introduction of framework charges.

  8. Capture cross sections on unstable nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonchev, A. P.; Escher, J. E.; Scielzo, N.

    2017-09-13

    Accurate neutron-capture cross sections on unstable nuclei near the line of beta stability are crucial for understanding s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure because the measurements require both highly radioactive samples and intense neutron sources. Essential ingredients for describing the γ decays following neutron capture are the γ-ray strength function and level densities. We will compare different indirect approaches for obtaining the most relevant observables that can constrain Hauser-Feshbach statistical-model calculations of capture cross sections. Specifically, we will consider photon scattering using monoenergetic and 100% linearly polarized photon beams. Challenges that exist on the path to obtaining neutron-capture cross sections for reactions on isotopes near and far from stability will also be discussed.

  9. A comparison of different chemometrics approaches for the robust classification of electronic nose data.

    PubMed

    Gromski, Piotr S; Correa, Elon; Vaughan, Andrew A; Wedge, David C; Turner, Michael L; Goodacre, Royston

    2014-11-01

    Accurate detection of certain chemical vapours is important, as these may be diagnostic for the presence of weapons, drugs of misuse or disease. In order to achieve this, chemical sensors could be deployed remotely. However, the readout from such sensors is a multivariate pattern, and this needs to be interpreted robustly using powerful supervised learning methods. Therefore, in this study, we compared the classification accuracy of four pattern recognition algorithms: linear discriminant analysis (LDA), partial least squares-discriminant analysis (PLS-DA), random forests (RF) and support vector machines (SVM) employing four different kernels. For this purpose, we used electronic nose (e-nose) sensor data (Wedge et al., Sensors Actuators B Chem 143:365-372, 2009). To allow direct comparison between the four algorithms, we employed two model validation procedures based on either 10-fold cross-validation or bootstrapping. The results show that LDA (91.56% accuracy) and SVM with a polynomial kernel (91.66% accuracy) were very effective at analysing these e-nose data. These two models gave superior prediction accuracy, sensitivity and specificity in comparison to the other techniques employed. With respect to the e-nose sensor data studied here, our findings recommend that SVM with a polynomial kernel should be favoured as a classification method over the other statistical models that we assessed. SVMs with non-linear kernels have the advantage that they can handle both non-linear and linear mappings from analytical data space to multi-group classifications, and would thus be a suitable algorithm for the analysis of most e-nose sensor data.
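The comparison described above (10-fold cross-validation of LDA against a polynomial-kernel SVM) can be sketched with scikit-learn; the data below is a synthetic stand-in, since the original e-nose measurements are not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic multivariate "sensor" data standing in for the e-nose readout
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM-poly": make_pipeline(StandardScaler(),
                              SVC(kernel="poly", degree=3, C=1.0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold CV accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The same scaffold extends to PLS-DA, random forests, and other kernels by adding entries to `models`.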

  10. Intervertebral disc response to cyclic loading--an animal model.

    PubMed

    Ekström, L; Kaigle, A; Hult, E; Holm, S; Rostedt, M; Hansson, T

    1996-01-01

    The viscoelastic response of a lumbar motion segment loaded in cyclic compression was studied in an in vivo porcine model (N = 7). Using surgical techniques, a miniaturized servohydraulic exciter was attached to the L2-L3 motion segment via pedicle fixation. A dynamic loading scheme was implemented, which consisted of one hour of sinusoidal vibration at 5 Hz, 50 N peak load, followed by one hour of restitution at zero load and one hour of sinusoidal vibration at 5 Hz, 100 N peak load. The force and displacement responses of the motion segment were sampled at 25 Hz. The experimental data were used for evaluating the parameters of two viscoelastic models: a standard linear solid model (three-parameter) and a linear Burger's fluid model (four-parameter). In this study, the creep behaviour under sinusoidal vibration at 5 Hz closely resembled the creep behaviour under static loading observed in previous studies. Expanding the three-parameter solid model into a four-parameter fluid model made it possible to separate out a progressive linear displacement term. This deformation was not fully recovered during restitution and is therefore an indication of a specific effect caused by the cyclic loading. High variability was observed in the parameters determined from the 50 N experimental data, particularly for the elastic modulus E1. However, at the 100 N load level, significant differences between the models were found. Both models accurately predicted the creep response under the first 800 s of 100 N loading, as displayed by mean absolute errors for the calculated deformation data from the experimental data of 1.26 and 0.97 percent for the solid and fluid models respectively. The linear Burger's fluid model, however, yielded superior predictions particularly for the initial elastic response.
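The four-parameter Burgers-type creep fit described above can be sketched as a least-squares problem; the creep function and all parameter values below are illustrative, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import curve_fit

def burgers_creep(t, e0, rate, e2, tau):
    # instantaneous elastic strain + linear viscous flow
    # + delayed elastic (exponential saturation) term
    return e0 + rate * t + e2 * (1.0 - np.exp(-t / tau))

# Synthetic "creep" data standing in for the measured displacement
t = np.linspace(0.0, 800.0, 200)
rng = np.random.default_rng(1)
data = burgers_creep(t, 0.5, 1e-3, 0.3, 120.0) + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(burgers_creep, t, data, p0=[0.4, 5e-4, 0.2, 100.0])
print(popt)  # recovered (e0, rate, e2, tau)
```

The linear `rate * t` term is the progressive deformation the fluid model separates out; dropping it recovers a three-parameter solid model of the kind compared in the study.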

  11. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    NASA Astrophysics Data System (ADS)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically highlight the consideration of conceptual model uncertainty.

  12. Flexible polyelectrolyte chain in a strong electrolyte solution: Insight into equilibrium properties and force-extension behavior from mesoscale simulation

    NASA Astrophysics Data System (ADS)

    Malekzadeh Moghani, Mahdy; Khomami, Bamin

    2016-01-01

    Macromolecules with ionizable groups are ubiquitous in biological and synthetic systems. Due to the complex interaction between chain and electrostatic decorrelation lengths, both equilibrium properties and micro-mechanical response of dilute solutions of polyelectrolytes (PEs) are more complex than their neutral counterparts. In this work, the bead-rod micromechanical description of a chain is used to perform high-fidelity Brownian dynamics simulation of dilute PE solutions to ascertain the self-similar equilibrium behavior of PE chains with various linear charge densities, the scaling of the Kuhn step length (lE) with salt concentration cs, and the force-extension behavior of the PE chain. In accord with earlier theoretical predictions, our results indicate that for a chain with n Kuhn segments, lE ~ cs^(-0.5) as the linear charge density approaches 1/n. Moreover, the constant-force ensemble simulation results accurately predict the initial non-linear force-extension region of the PE chain recently measured via single-chain experiments. Finally, inspired by Cohen's extraction of Warner's force law from the inverse Langevin force law, a novel numerical scheme is developed to extract a new elastic force law for real chains from our discrete set of force-extension data, similar to a Padé expansion, which accurately depicts the initial non-linear region where the total Kuhn length is less than the thermal screening length.
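Cohen's approach mentioned above replaces the inverse Langevin function with a Padé-type rational approximant, L⁻¹(x) ≈ x(3 − x²)/(1 − x²). A quick numerical check against the exact inverse (this snippet is illustrative and is not the paper's extraction scheme):

```python
import numpy as np
from scipy.optimize import brentq

def langevin(y):
    # Langevin function L(y) = coth(y) - 1/y
    return 1.0 / np.tanh(y) - 1.0 / y

def inv_langevin_cohen(x):
    # Cohen's Pade-type approximant of the inverse Langevin function
    return x * (3.0 - x**2) / (1.0 - x**2)

for x in (0.1, 0.5, 0.9):
    # Exact inverse by root finding on a bracketing interval
    exact = brentq(lambda y: langevin(y) - x, 1e-6, 1e4)
    approx = inv_langevin_cohen(x)
    print(f"x={x}: exact={exact:.4f}, Cohen={approx:.4f}")
```

The relative error stays within a few percent across the extension range, which is why such rational approximants are attractive as closed-form force laws.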

  13. Flexible polyelectrolyte chain in a strong electrolyte solution: Insight into equilibrium properties and force-extension behavior from mesoscale simulation.

    PubMed

    Malekzadeh Moghani, Mahdy; Khomami, Bamin

    2016-01-14

    Macromolecules with ionizable groups are ubiquitous in biological and synthetic systems. Due to the complex interaction between chain and electrostatic decorrelation lengths, both equilibrium properties and micro-mechanical response of dilute solutions of polyelectrolytes (PEs) are more complex than their neutral counterparts. In this work, the bead-rod micromechanical description of a chain is used to perform high-fidelity Brownian dynamics simulation of dilute PE solutions to ascertain the self-similar equilibrium behavior of PE chains with various linear charge densities, the scaling of the Kuhn step length (lE) with salt concentration cs, and the force-extension behavior of the PE chain. In accord with earlier theoretical predictions, our results indicate that for a chain with n Kuhn segments, lE ∼ cs^(-0.5) as the linear charge density approaches 1/n. Moreover, the constant-force ensemble simulation results accurately predict the initial non-linear force-extension region of the PE chain recently measured via single-chain experiments. Finally, inspired by Cohen's extraction of Warner's force law from the inverse Langevin force law, a novel numerical scheme is developed to extract a new elastic force law for real chains from our discrete set of force-extension data, similar to a Padé expansion, which accurately depicts the initial non-linear region where the total Kuhn length is less than the thermal screening length.

  14. [Screening and confirmation of 24 hormones in cosmetics by ultra high performance liquid chromatography-linear ion trap/orbitrap high resolution mass spectrometry].

    PubMed

    Li, Zhaoyong; Wang, Fengmei; Niu, Zengyuan; Luo, Xin; Zhang, Gang; Chen, Junhui

    2014-05-01

    A method of ultra high performance liquid chromatography-linear ion trap/orbitrap high resolution mass spectrometry (UPLC-LTQ/Orbitrap MS) was established to screen and confirm 24 hormones in cosmetics. Various cosmetic samples were extracted with methanol. The extract was loaded onto a Waters ACQUITY UPLC BEH C18 column (50 mm × 2.1 mm, 1.7 µm) using a gradient elution of acetonitrile/water containing 0.1% (v/v) formic acid for the separation. The accurate mass of the quasi-molecular ion was acquired by full scanning of the electrostatic field orbitrap, and rapid screening was carried out on the basis of this accurate mass. The confirmation analysis for targeted compounds was performed with the retention time and qualitative fragments obtained by the data-dependent scan mode. Under the optimal conditions, the 24 hormones were routinely detected with mass accuracy errors below 3 × 10^(-6) (3 ppm), and good linearities were obtained in their respective linear ranges with correlation coefficients higher than 0.99. The LODs (S/N = 3) of the 24 compounds were ≤ 10 µg/kg, which can meet the requirements for the actual screening of cosmetic samples. The developed method was applied to screen the hormones in 50 cosmetic samples. The results demonstrate that the method is a useful tool for the rapid screening and identification of hormones in cosmetics.

  15. Evaluation of the Linear Aerospike SR-71 Experiment (LASRE) Oxygen Sensor

    NASA Technical Reports Server (NTRS)

    Ennix, Kimberly A.; Corpening, Griffin P.; Jarvis, Michele; Chiles, Harry R.

    1999-01-01

    The Linear Aerospike SR-71 Experiment (LASRE) was a propulsion flight experiment for advanced space vehicles such as the X-33 and reusable launch vehicle. A linear aerospike rocket engine was integrated into a semi-span of an X-33-like lifting body shape (model), and carried on top of an SR-71 aircraft at NASA Dryden Flight Research Center. Because no flight data existed for aerospike nozzles, the primary objective of the LASRE flight experiment was to evaluate flight effects on the engine performance over a range of altitudes and Mach numbers. Because it contained a large quantity of energy in the form of fuel, oxidizer, hypergolics, and gases at very high pressures, the LASRE propulsion system posed a major hazard for fire or explosion. Therefore, a propulsion-hazard mitigation system was created for LASRE that included a nitrogen purge system. Oxygen sensors were a critical part of the nitrogen purge system because they measured purge operation and effectiveness. Because the available oxygen sensors were not designed for flight testing, a laboratory study investigated oxygen-sensor characteristics and accuracy over a range of altitudes and oxygen concentrations. Laboratory test data made it possible to properly calibrate the sensors for flight. Such data also provided a more accurate error prediction than the manufacturer's specification. This predictive accuracy increased confidence in the sensor output during critical phases of the flight. This paper presents the findings of this laboratory test.

  16. Chloride and salicylate influence prestin-dependent specific membrane capacitance: support for the area motor model.

    PubMed

    Santos-Sacchi, Joseph; Song, Lei

    2014-04-11

    The outer hair cell is electromotile, its membrane motor identified as the protein SLC26a5 (prestin). An area motor model, based on two-state Boltzmann statistics, was developed about two decades ago and derives from the observation that outer hair cell surface area is voltage-dependent. Indeed, aside from the nonlinear capacitance imparted by the voltage sensor charge movement of prestin, linear capacitance (Clin) also displays voltage dependence as motors move between expanded and compact states. Naturally, motor surface area changes alter membrane capacitance. Unit linear motor capacitance fluctuation (δCsa) is on the order of 140 zeptofarads. A recent three-state model of prestin provides an alternative view, suggesting that voltage-dependent linear capacitance changes are not real but only apparent because the two component Boltzmann functions shift their midpoint voltages (Vh) in opposite directions during treatment with salicylate, a known competitor of required chloride binding. We show here using manipulations of nonlinear capacitance with both salicylate and chloride that an enhanced area motor model, including augmented δCsa by salicylate, can accurately account for our novel findings. We also show that although the three-state model implicitly avoids measuring voltage-dependent motor capacitance, it registers δCsa effects as a byproduct of its assessment of Clin, which increases during salicylate treatment as motors are locked in the expanded state. The area motor model, in contrast, captures the characteristics of the voltage dependence of δCsa, leading to a better understanding of prestin.

  17. The evolution of kicked stellar-mass black holes in star cluster environments

    NASA Astrophysics Data System (ADS)

    Webb, Jeremy J.; Leigh, Nathan W. C.; Singh, Abhishek; Ford, K. E. Saavik; McKernan, Barry; Bellovary, Jillian

    2018-03-01

    We consider how dynamical friction acts on black holes that receive a velocity kick while located at the centre of a gravitational potential, analogous to a star cluster, due to either a natal kick or the anisotropic emission of gravitational waves during a black hole-black hole merger. Our investigation specifically focuses on how well various Chandrasekhar-based dynamical friction models can predict the orbital decay of kicked black holes with mbh ≲ 100 M⊙ due to an inhomogeneous background stellar field. In general, the orbital evolution of a kicked black hole follows that of a damped oscillator where two-body encounters and dynamical friction serve as sources of damping. However, we find models for approximating the effects of dynamical friction do not accurately predict the amount of energy lost by the black hole if the initial kick velocity vk is greater than the stellar velocity dispersion σ. For all kick velocities, we also find that two-body encounters with nearby stars can cause the energy evolution of a kicked BH to stray significantly from standard dynamical friction theory as encounters can sometimes lead to an energy gain. For larger kick velocities, we find the orbital decay of a black hole departs from classical theory completely as the black hole's orbital amplitude decays linearly with time as opposed to exponentially. Therefore, we have developed a linear decay formalism, which scales linearly with black hole mass and vk/σ in order to account for the variations in the local gravitational potential.
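The contrast drawn above between exponential and linear amplitude decay can be illustrated with a toy fit (synthetic data; the decay constants are arbitrary and not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic orbital amplitude that decays linearly with time, plus noise
t = np.linspace(0.0, 40.0, 400)
rng = np.random.default_rng(3)
amp = 1.0 - 0.02 * t + rng.normal(0.0, 0.01, t.size)

def linear(t, a0, k):
    return a0 - k * t          # linear amplitude decay

def expon(t, a0, k):
    return a0 * np.exp(-k * t)  # classical exponential decay

p_lin, _ = curve_fit(linear, t, amp, p0=[1.0, 0.01])
p_exp, _ = curve_fit(expon, t, amp, p0=[1.0, 0.05])

sse_lin = np.sum((linear(t, *p_lin) - amp) ** 2)
sse_exp = np.sum((expon(t, *p_exp) - amp) ** 2)
print(sse_lin, sse_exp)  # the linear model fits the linear decay better
```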

  18. Spatially explicit estimates of N2O emissions from croplands suggest climate mitigation opportunities from improved fertilizer management.

    PubMed

    Gerber, James S; Carlson, Kimberly M; Makowski, David; Mueller, Nathaniel D; Garcia de Cortazar-Atauri, Iñaki; Havlík, Petr; Herrero, Mario; Launay, Marie; O'Connell, Christine S; Smith, Pete; West, Paul C

    2016-10-01

    With increasing nitrogen (N) application to croplands required to support growing food demand, mitigating N2O emissions from agricultural soils is a global challenge. National greenhouse gas emissions accounting typically estimates N2O emissions at the country scale by aggregating all crops, under the assumption that N2O emissions are linearly related to N application. However, field studies and meta-analyses indicate a nonlinear relationship, in which N2O emissions are relatively greater at higher N application rates. Here, we apply a super-linear emissions response model to crop-specific, spatially explicit synthetic N fertilizer and manure N inputs to provide subnational accounting of global N2O emissions from croplands. We estimate 0.66 Tg of N2O-N direct global emissions circa 2000, with 50% of emissions concentrated in 13% of harvested area. Compared to estimates from the IPCC Tier 1 linear model, our updated N2O emissions range from 20% to 40% lower throughout sub-Saharan Africa and Eastern Europe, to >120% greater in some Western European countries. At low N application rates, the weak nonlinear response of N2O emissions suggests that relatively large increases in N fertilizer application would generate relatively small increases in N2O emissions. As aggregated fertilizer data generate underestimation bias in nonlinear models, high-resolution N application data are critical to support accurate N2O emissions estimates. © 2016 John Wiley & Sons Ltd.
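The aggregation bias noted in the last sentence follows from Jensen's inequality: a convex (super-linear) emissions response evaluated at an area-averaged N rate underestimates the area-weighted emissions. A toy illustration, with a purely hypothetical response exponent:

```python
import numpy as np

# Toy super-linear (convex) emissions response E(N) = k * N**1.5;
# the exponent and k are illustrative, not the paper's fitted model.
def emissions(n_rate, k=0.01):
    return k * n_rate ** 1.5

rates = np.array([20.0, 60.0, 220.0])  # heterogeneous field-level N rates
areas = np.array([1.0, 1.0, 1.0])      # equal field areas for simplicity

# Field-by-field total vs the same model applied to the aggregated mean rate
true_total = float(np.sum(areas * emissions(rates)))
aggregated = float(np.sum(areas) * emissions(np.average(rates, weights=areas)))
print(true_total, aggregated)  # the aggregated estimate is biased low
```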

  19. Sensitive bridge circuit measures conductance of low-conductivity electrolyte solutions

    NASA Technical Reports Server (NTRS)

    Schmidt, K.

    1967-01-01

    A compact bridge circuit provides sensitive and accurate measurement of the conductance of low-conductivity electrolyte solutions. The bridge utilizes a phase-sensitive detector to obtain a null-indicator deflection that is linear in the measured conductance.

  20. Strain Measurement - Unidirectional.

    DTIC Science & Technology

    1983-04-20

    of vital importance in obtaining accurate reproducible measurements. This preparation should develop a chemically clean surface having a roughness...testing weapon systems, when parts are stressed to the highest permissible values, and in measuring linear and torsional strains engendered by the

  1. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  2. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.; Anitescu, Mihai

    2009-03-01

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  3. Accurate and scalable social recommendation using mixed-membership stochastic block models.

    PubMed

    Godoy-Lorite, Antonia; Guimerà, Roger; Moore, Cristopher; Sales-Pardo, Marta

    2016-12-13

    With increasing amounts of information available, modeling and predicting user preferences-for books or articles, for example-are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users' ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user's and item's groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets.

  4. On a generalized laminate theory with application to bending, vibration, and delamination buckling in composite laminates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbero, E.J.

    1989-01-01

    In this study, a computational model for accurate analysis of composite laminates, including laminates with delaminated interfaces, is developed. An accurate prediction of stress distributions, including interlaminar stresses, is obtained by using the Generalized Laminate Plate Theory of Reddy, in which a layer-wise linear approximation of the displacements through the thickness is used. Analytical as well as finite-element solutions of the theory are developed for bending and vibrations of laminated composite plates for the linear theory. Geometric nonlinearity, including buckling and postbuckling, is included and used to perform stress analysis of laminated plates. A general two-dimensional theory of laminated cylindrical shells is also developed in this study. Geometric nonlinearity and transverse compressibility are included. Delaminations between layers of composite plates are modelled by jump discontinuity conditions at the interfaces. The theory includes multiple delaminations through the thickness. Geometric nonlinearity is included to capture layer buckling. The strain energy release rate distribution along the boundary of delaminations is computed by a novel algorithm. The computational models presented herein are accurate for global behavior and particularly appropriate for the study of local effects.

  5. Accurate and scalable social recommendation using mixed-membership stochastic block models

    PubMed Central

    Godoy-Lorite, Antonia; Moore, Cristopher

    2016-01-01

    With increasing amounts of information available, modeling and predicting user preferences—for books or articles, for example—are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users’ ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user’s and item’s groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets. PMID:27911773

  6. Communication: modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G

    2014-10-07

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley "bracelet" and "rod" test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, "Charge asymmetries in hydration of polar solutes," J. Phys. Chem. B 112, 2405-2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.

  7. Scanning moiré and spatial-offset phase-stepping for surface inspection of structures

    NASA Astrophysics Data System (ADS)

    Yoneyama, S.; Morimoto, Y.; Fujigaki, M.; Ikeda, Y.

    2005-06-01

    In order to develop a high-speed and accurate surface inspection system for structures such as tunnels, a new surface profile measurement method using linear array sensors is studied. A sinusoidal grating is projected onto the structure surface. The deformed grating is then scanned by linear array sensors that move together with the grating projector. The phase of the grating is analyzed by a spatial-offset phase-stepping method to perform accurate measurement. Surface profile measurements of a brick wall and of the concrete surface of a structure are demonstrated using the proposed method. Changes in the geometry or fabric of structures and defects on structure surfaces can be detected by the proposed method. It is expected that a surface profile inspection system for tunnels, measuring from a running train, can be constructed based on the proposed method.

  8. Linear segmentation algorithm for detecting layer boundary with lidar.

    PubMed

    Mao, Feiyue; Gong, Wei; Logan, Timothy

    2013-11-04

    The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies, but can also be used as input for further data processing. Previous methods have demonstrated limitations in defining the base and top and in window-size setting, and have neglected the in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with a reasonable threshold setting, boundary selecting, and false-positive removing strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately. Our algorithm can be directly applied to uncalibrated data without requiring any additional measurements or window size selections.
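A minimal sketch of top-down linear segmentation, in the spirit of (but not identical to) the algorithm above: recursively split a profile at the point of maximum deviation from the chord, so that breakpoints accumulate at layer boundaries:

```python
import numpy as np

def segment(x, y, tol):
    """Recursive top-down linear segmentation: split at the point of
    maximum deviation from the chord until every piece is within tol.
    Returns breakpoint indices including both endpoints."""
    chord = y[0] + (y[-1] - y[0]) * (x - x[0]) / (x[-1] - x[0])
    dev = np.abs(y - chord)
    i = int(np.argmax(dev))
    if dev[i] <= tol or len(x) < 3:
        return [0, len(x) - 1]
    left = segment(x[: i + 1], y[: i + 1], tol)
    right = segment(x[i:], y[i:], tol)
    return left[:-1] + [j + i for j in right]

# Synthetic range profile: flat background plus one elevated "layer"
z = np.linspace(0.0, 10.0, 200)
sig = np.where((z > 3.0) & (z < 5.0), 1.0 + 0.5 * (z - 3.0), 0.1)
bps = segment(z, sig, tol=0.2)
print([round(z[j], 2) for j in bps])  # breakpoints bracket base and top
```

On this synthetic profile the recovered breakpoints bracket the layer base near z = 3 and the layer top near z = 5; the published algorithm adds threshold setting, boundary selection, and false-positive removal on top of such segments.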

  9. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    PubMed Central

    Bardhan, Jaydeep P.; Knepley, Matthew G.

    2014-01-01

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry. PMID:25296776

  10. Heterodyne interferometry method for calibration of a Soleil-Babinet compensator.

    PubMed

    Zhang, Wenjing; Zhang, Zhiwei

    2016-05-20

    A method based on the common-path heterodyne interferometer system is proposed for the calibration of a Soleil-Babinet compensator. In this heterodyne interferometer system, which consists of two acousto-optic modulators, the compensator being calibrated is inserted into the signal path. By using the reference beam as the benchmark and a lock-in amplifier (SR844) as the phase retardation collector, retardations of 0 and λ (one wavelength) can be located accurately, and an arbitrary retardation between 0 and λ can also be measured accurately and continuously. By fitting a straight line to the experimental data, we obtained a linear correlation coefficient (R) of 0.995, which indicates that this system is capable of linear phase detection. The experimental results demonstrate determination accuracies of 0.212° and 0.26° and measurement precisions of 0.054° and 0.608° for retardations of 0 and λ, respectively.
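Fitting a straight line to calibration data and reporting the linear correlation coefficient R, as done above, is a standard linearity check; a minimal sketch with synthetic phase data (the slope and noise level are stand-ins, not the instrument's):

```python
import numpy as np

# Synthetic calibration data: set retardation (deg) vs measured phase (deg)
x = np.linspace(0.0, 360.0, 19)
rng = np.random.default_rng(0)
y = 0.98 * x + 1.5 + rng.normal(0.0, 2.0, x.size)

slope, intercept = np.polyfit(x, y, 1)   # least-squares straight line
r = np.corrcoef(x, y)[0, 1]              # linear correlation coefficient R
print(f"slope={slope:.3f}, R={r:.4f}")
```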

  11. Novel methods for Solving Economic Dispatch of Security-Constrained Unit Commitment Based on Linear Programming

    NASA Astrophysics Data System (ADS)

    Guo, Sangang

    2017-09-01

    There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC), and the other is the economic dispatch (ED) of power for each unit. For fixed feasible unit states, an accurate solution of the ED is important for enhancing the efficiency of the overall SCUC solution. Two novel methods, the Convex Combinatorial Coefficient Method and the Power Increment Method, are proposed for solving the ED as linear programming problems obtained by piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
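    The Power Increment Method described above can be illustrated with a minimal sketch (not the paper's code): once each convex fuel cost curve is cut into segments of non-decreasing incremental cost, the resulting linear program with a single demand constraint is solved by loading segments in merit order. All unit data below are hypothetical, and demand is assumed to be at least the sum of the unit minimums.

```python
# Sketch: economic dispatch with piecewise-linear fuel costs. Each unit's
# convex cost curve is cut into segments; because incremental costs are
# non-decreasing, the LP with a single demand constraint is solved by
# loading segments in merit order (cheapest MW first).

def dispatch(units, demand):
    """units: list of (p_min, p_max, segments), where segments is a list of
    (segment_width_MW, incremental_cost_per_MW) ordered by cost within a unit.
    Returns unit outputs meeting `demand` at minimum cost."""
    output = [u[0] for u in units]          # start every unit at its minimum
    remaining = demand - sum(output)
    # flatten all segments, remembering which unit each belongs to
    pool = [(cost, width, i)
            for i, (_, _, segs) in enumerate(units)
            for (width, cost) in segs]
    pool.sort()                             # merit order: cheapest MW first
    for cost, width, i in pool:
        if remaining <= 0:
            break
        take = min(width, remaining, units[i][1] - output[i])
        output[i] += take
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return output

# Two hypothetical units, each with two cost segments above its minimum.
units = [
    (10.0, 50.0, [(20.0, 2.0), (20.0, 4.0)]),   # unit 0
    (10.0, 40.0, [(15.0, 3.0), (15.0, 5.0)]),   # unit 1
]
out = dispatch(units, demand=60.0)
```

    Filling the cheapest segments first is exactly the optimality condition of this LP when the only coupling constraint is the total-demand equality.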

  12. Note: Nonpolar solute partial molar volume response to attractive interactions with water.

    PubMed

    Williams, Steven M; Ashbaugh, Henry S

    2014-01-07

    The impact of attractive interactions on the partial molar volumes of methane-like solutes in water is characterized using molecular simulations. Attractions account for a significant 20% volume drop between a repulsive Weeks-Chandler-Andersen and full Lennard-Jones description of methane interactions. The response of the volume to interaction perturbations is characterized by linear fits to our simulations and a rigorous statistical thermodynamic expression for the derivative of the volume to increasing attractions. While a weak non-linear response is observed, an average effective slope accurately captures the volume decrease. This response, however, is anticipated to become more non-linear with increasing solute size.
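    The "average effective slope" mentioned above is simply an ordinary least-squares slope of partial molar volume against the attraction coupling parameter. A minimal sketch with invented numbers (not the paper's simulation data):

```python
# Minimal sketch: fit the average effective slope of the volume response
# as an ordinary least-squares slope of partial molar volume vs. the
# strength of the attractive perturbation. Data are illustrative only.

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

lam = [0.0, 0.25, 0.5, 0.75, 1.0]        # attraction coupling parameter
vol = [60.0, 57.1, 54.0, 51.2, 48.0]     # hypothetical volumes (cm^3/mol)
slope = ols_slope(lam, vol)              # negative: volume drops with attraction
```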

  13. Scintillation decay time and pulse shape discrimination in oxygenated and deoxygenated solutions of linear alkylbenzene for the SNO+ experiment

    NASA Astrophysics Data System (ADS)

    O'Keeffe, H. M.; O'Sullivan, E.; Chen, M. C.

    2011-06-01

    The SNO+ liquid scintillator experiment is under construction in the SNOLAB facility in Canada. The success of this experiment relies upon accurate characterization of the liquid scintillator, linear alkylbenzene (LAB). In this paper, scintillation decay times for alpha and electron excitations in LAB with 2 g/L PPO are presented for both oxygenated and deoxygenated solutions. While deoxygenation is expected to improve pulse shape discrimination in liquid scintillators, it is not commonly demonstrated in the literature. This paper shows that for linear alkylbenzene, deoxygenation improves discrimination between electron and alpha excitations in the scintillator.

  14. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  15. Note: Nonpolar solute partial molar volume response to attractive interactions with water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Steven M.; Ashbaugh, Henry S., E-mail: hanka@tulane.edu

    2014-01-07

    The impact of attractive interactions on the partial molar volumes of methane-like solutes in water is characterized using molecular simulations. Attractions account for a significant 20% volume drop between a repulsive Weeks-Chandler-Andersen and full Lennard-Jones description of methane interactions. The response of the volume to interaction perturbations is characterized by linear fits to our simulations and a rigorous statistical thermodynamic expression for the derivative of the volume to increasing attractions. While a weak non-linear response is observed, an average effective slope accurately captures the volume decrease. This response, however, is anticipated to become more non-linear with increasing solute size.

  16. Laguerre-based method for analysis of time-resolved fluorescence data: application to in-vivo characterization and diagnosis of atherosclerotic lesions.

    PubMed

    Jo, Javier A; Fang, Qiyin; Papaioannou, Thanassis; Baker, J Dennis; Dorafshar, Amir H; Reil, Todd; Qiao, Jian-Hua; Fishbein, Michael C; Freischlag, Julie A; Marcu, Laura

    2006-01-01

    We report the application of the Laguerre deconvolution technique (LDT) to the analysis of in-vivo time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data and the diagnosis of atherosclerotic plaques. TR-LIFS measurements were obtained in vivo from normal and atherosclerotic aortas (eight rabbits, 73 areas), and subsequently analyzed using LDT. Spectral and time-resolved features were used to develop four classification algorithms: linear discriminant analysis (LDA), stepwise LDA (SLDA), principal component analysis (PCA), and artificial neural network (ANN). Accurate deconvolution of TR-LIFS in-vivo measurements from normal and atherosclerotic arteries was provided by LDT. The derived Laguerre expansion coefficients reflected changes in the arterial biochemical composition, and provided a means to discriminate lesions rich in macrophages with high sensitivity (>85%) and specificity (>95%). Classification algorithms (SLDA and PCA) using a selected number of features with maximum discriminating power provided the best performance. This study demonstrates the potential of the LDT for in-vivo tissue diagnosis, and specifically for the detection of macrophage infiltration in atherosclerotic lesions, a key marker of plaque vulnerability.

  17. Laguerre-based method for analysis of time-resolved fluorescence data: application to in-vivo characterization and diagnosis of atherosclerotic lesions

    NASA Astrophysics Data System (ADS)

    Jo, Javier A.; Fang, Qiyin; Papaioannou, Thanassis; Baker, J. Dennis; Dorafshar, Amir; Reil, Todd; Qiao, Jianhua; Fishbein, Michael C.; Freischlag, Julie A.; Marcu, Laura

    2006-03-01

    We report the application of the Laguerre deconvolution technique (LDT) to the analysis of in-vivo time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data and the diagnosis of atherosclerotic plaques. TR-LIFS measurements were obtained in vivo from normal and atherosclerotic aortas (eight rabbits, 73 areas), and subsequently analyzed using LDT. Spectral and time-resolved features were used to develop four classification algorithms: linear discriminant analysis (LDA), stepwise LDA (SLDA), principal component analysis (PCA), and artificial neural network (ANN). Accurate deconvolution of TR-LIFS in-vivo measurements from normal and atherosclerotic arteries was provided by LDT. The derived Laguerre expansion coefficients reflected changes in the arterial biochemical composition, and provided a means to discriminate lesions rich in macrophages with high sensitivity (>85%) and specificity (>95%). Classification algorithms (SLDA and PCA) using a selected number of features with maximum discriminating power provided the best performance. This study demonstrates the potential of the LDT for in-vivo tissue diagnosis, and specifically for the detection of macrophage infiltration in atherosclerotic lesions, a key marker of plaque vulnerability.

  18. Laguerre-based method for analysis of time-resolved fluorescence data: application to in-vivo characterization and diagnosis of atherosclerotic lesions

    PubMed Central

    Jo, Javier A.; Fang, Qiyin; Papaioannou, Thanassis; Baker, J. Dennis; Dorafshar, Amir H.; Reil, Todd; Qiao, Jian-Hua; Fishbein, Michael C.; Freischlag, Julie A.; Marcu, Laura

    2007-01-01

    We report the application of the Laguerre deconvolution technique (LDT) to the analysis of in-vivo time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data and the diagnosis of atherosclerotic plaques. TR-LIFS measurements were obtained in vivo from normal and atherosclerotic aortas (eight rabbits, 73 areas), and subsequently analyzed using LDT. Spectral and time-resolved features were used to develop four classification algorithms: linear discriminant analysis (LDA), stepwise LDA (SLDA), principal component analysis (PCA), and artificial neural network (ANN). Accurate deconvolution of TR-LIFS in-vivo measurements from normal and atherosclerotic arteries was provided by LDT. The derived Laguerre expansion coefficients reflected changes in the arterial biochemical composition, and provided a means to discriminate lesions rich in macrophages with high sensitivity (>85%) and specificity (>95%). Classification algorithms (SLDA and PCA) using a selected number of features with maximum discriminating power provided the best performance. This study demonstrates the potential of the LDT for in-vivo tissue diagnosis, and specifically for the detection of macrophage infiltration in atherosclerotic lesions, a key marker of plaque vulnerability. PMID:16674179

  19. A study of selective spectrophotometric methods for simultaneous determination of Itopride hydrochloride and Rabeprazole sodium binary mixture: Resolving severe overlapping spectra

    NASA Astrophysics Data System (ADS)

    Mohamed, Heba M.

    2015-02-01

    Itopride hydrochloride (IT) and Rabeprazole sodium (RB) are co-formulated for the treatment of gastro-esophageal reflux disease. Three simple, specific and accurate spectrophotometric methods were applied and validated for the simultaneous determination of Itopride hydrochloride (IT) and Rabeprazole sodium (RB), namely the constant center (CC), ratio difference (RD) and mean centering of ratio spectra (MCR) spectrophotometric methods. Linear correlations were obtained in the range of 10-110 μg/mL for Itopride hydrochloride and 4-44 μg/mL for Rabeprazole sodium. No preliminary separation steps were required prior to the analysis of the two drugs using the proposed methods. Specificity was investigated by analyzing synthetic mixtures containing the two cited drugs and their capsule dosage form. The obtained results were statistically compared with those obtained by the reported method; no significant difference was found with respect to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for IT and RB.

  20. A study of selective spectrophotometric methods for simultaneous determination of Itopride hydrochloride and Rabeprazole sodium binary mixture: Resolving severe overlapping spectra.

    PubMed

    Mohamed, Heba M

    2015-02-05

    Itopride hydrochloride (IT) and Rabeprazole sodium (RB) are co-formulated for the treatment of gastro-esophageal reflux disease. Three simple, specific and accurate spectrophotometric methods were applied and validated for the simultaneous determination of Itopride hydrochloride (IT) and Rabeprazole sodium (RB), namely the constant center (CC), ratio difference (RD) and mean centering of ratio spectra (MCR) spectrophotometric methods. Linear correlations were obtained in the range of 10-110 μg/mL for Itopride hydrochloride and 4-44 μg/mL for Rabeprazole sodium. No preliminary separation steps were required prior to the analysis of the two drugs using the proposed methods. Specificity was investigated by analyzing synthetic mixtures containing the two cited drugs and their capsule dosage form. The obtained results were statistically compared with those obtained by the reported method; no significant difference was found with respect to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for IT and RB. Copyright © 2014 Elsevier B.V. All rights reserved.
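    As a hedged illustration of the ratio difference (RD) principle used in the records above, the sketch below builds synthetic two-wavelength Beer-Lambert spectra (invented absorptivities and concentrations, not the paper's data): dividing the mixture spectrum by a unit-concentration spectrum of one drug turns that drug's contribution into a constant, which cancels in the difference of the ratio spectrum at two wavelengths.

```python
# Illustrative sketch of the ratio-difference (RD) principle. For a mixture
# of X and Y, dividing by a normalized spectrum of Y makes Y's contribution
# a constant; the difference between the ratio spectrum at two wavelengths
# therefore cancels Y and is proportional to the concentration of X.

# absorptivity curves of each drug at wavelengths [w1, w2] (arbitrary units)
eps_X = [0.80, 0.20]
eps_Y = [0.30, 0.50]
cX, cY = 5.0, 8.0                        # true concentrations

mixture = [eX * cX + eY * cY for eX, eY in zip(eps_X, eps_Y)]
divisor = eps_Y                          # spectrum of Y at unit concentration

ratio = [a / d for a, d in zip(mixture, divisor)]
# Y contributes the constant cY at every wavelength, so it cancels here:
rd_signal = ratio[0] - ratio[1]          # proportional to cX only

# calibrate the proportionality constant with a pure-X standard
std = [e * 1.0 for e in eps_X]           # 1.0 concentration unit of X
k = (std[0] / divisor[0]) - (std[1] / divisor[1])
cX_estimated = rd_signal / k
```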

  1. Validation of finite element model of transcranial electrical stimulation using scalp potentials: implications for clinical dose

    NASA Astrophysics Data System (ADS)

    Datta, Abhishek; Zhou, Xiang; Su, Yuzhou; Parra, Lucas C.; Bikson, Marom

    2013-06-01

    Objective. During transcranial electrical stimulation, current passage across the scalp generates voltage across the scalp surface. The goal was to characterize these scalp voltages for the purpose of validating subject-specific finite element method (FEM) models of current flow. Approach. Using a recording electrode array, we mapped skin voltages resulting from low-intensity transcranial electrical stimulation. These voltage recordings were used to compare the predictions obtained from the high-resolution model based on the subject undergoing transcranial stimulation. Main results. Each of the four stimulation electrode configurations tested resulted in a distinct distribution of scalp voltages; these spatial maps were linear with applied current amplitude (0.1 to 1 mA) over low frequencies (1 to 10 Hz). The FEM model accurately predicted the distinct voltage distributions and correlated the induced scalp voltages with current flow through cortex. Significance. Our results provide the first direct model validation for these subject-specific modeling approaches. In addition, the monitoring of scalp voltages may be used to verify electrode placement to increase transcranial electrical stimulation safety and reproducibility.

  2. Novel spectrophotometric methods for simultaneous determination of timolol and dorzolamide in their binary mixture.

    PubMed

    Lotfy, Hayam Mahmoud; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom

    2014-05-21

    Two smart and novel spectrophotometric methods, namely absorbance subtraction (AS) and amplitude modulation (AM), were developed and validated for the determination of a binary mixture of timolol maleate (TIM) and dorzolamide hydrochloride (DOR) in the presence of benzalkonium chloride without prior separation, using a unified regression equation. Additionally, simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of the binary mixture, namely simultaneous ratio subtraction (SRS), ratio difference (RD), ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), the constant multiplication method (CM) and mean centering of ratio spectra (MCR). The proposed spectrophotometric procedures do not require any separation steps. Accuracy, precision and linearity ranges of the proposed methods were determined, and specificity was assessed by analyzing synthetic mixtures of both drugs. The methods were applied to the pharmaceutical formulation and the results obtained were statistically compared to those of a reported spectrophotometric method. The statistical comparison showed no significant difference between the proposed methods and the reported one regarding either accuracy or precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Initial conditions for accurate N-body simulations of massive neutrino cosmologies

    NASA Astrophysics Data System (ADS)

    Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.

    2017-04-01

    The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS to 0.1 per cent accuracy or better for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, "rescaled power spectra for initial conditions with massive neutrinos", https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate (i.e. 1 per cent level) numerical simulations for this cosmological scenario.
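    The basic rescaling step underlying this approach can be sketched as follows. This is a simplification (the full REPS method also provides velocities and handles the two-fluid evolution), and the growth-factor and power-spectrum values below are placeholders rather than CAMB/CLASS output.

```python
# Sketch of the basic rescaling step: the target low-redshift power
# spectrum is scaled back to the starting redshift by the squared ratio of
# linear growth factors. In massive-neutrino cosmologies the growth is
# scale-dependent, so a per-k growth factor is used for each bin.

def rescale_power(p_target, growth_ini, growth_target):
    """P(k, z_ini) = P(k, z_target) * (D(k, z_ini) / D(k, z_target))^2,
    evaluated bin by bin to capture the scale dependence of the growth."""
    return [p * (gi / gt) ** 2
            for p, gi, gt in zip(p_target, growth_ini, growth_target)]

# One entry per k bin (placeholder values):
p_z0 = [2.0e4, 8.0e3, 1.0e2]         # target P(k) at z = 0
d_ini = [0.020, 0.019, 0.018]        # D(k) at the starting redshift
d_z0 = [1.0, 1.0, 1.0]               # D(k) at z = 0 (normalized)
p_ini = rescale_power(p_z0, d_ini, d_z0)
```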

  4. Linearity and sex-specificity of impact force prediction during a fall onto the outstretched hand using a single-damper-model.

    PubMed

    Kawalilak, C E; Lanovaz, J L; Johnston, J D; Kontulainen, S A

    2014-09-01

    The aim was to assess the linearity and sex-specificity of damping coefficients used in a single-damper-model (SDM) when predicting impact forces during the worst-case falling scenario from fall heights up to 25 cm. Using 3-dimensional motion tracking and an integrated force plate, impact forces and impact velocities were assessed from 10 young adults (5 males; 5 females) falling from planted knees onto outstretched arms, from a random order of drop heights: 3, 5, 7, 10, 15, 20, and 25 cm. We assessed the linearity and sex-specificity of the relationship between impact forces and impact velocities across all fall heights using an analysis of variance linearity test and linear regression, respectively. Significance was accepted at P<0.05. The association between impact forces and impact velocities up to 25 cm was linear (P=0.02). Damping coefficients appeared sex-specific (males: 627 Ns/m, R(2)=0.70; females: 421 Ns/m, R(2)=0.81; sexes combined: 532 Ns/m, R(2)=0.61). A linear damping coefficient used in the SDM proved valid for predicting impact forces from fall heights up to 25 cm. Results suggested the use of sex-specific damping coefficients when estimating impact force using the SDM and calculating the factor-of-risk for wrist fractures.
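    In a single-damper model the impact force is proportional to impact velocity, F = c·v, so the damping coefficient is a regression-through-origin slope. A minimal sketch with invented velocity/force pairs (not the study's measurements):

```python
# Sketch of the single-damper-model (SDM) idea: fit F = c * v, so the
# damping coefficient c is the least-squares slope of a regression through
# the origin. Velocities and forces below are illustrative only.

def damping_coefficient(velocities, forces):
    """Least-squares c for F = c*v (regression through the origin)."""
    return sum(f * v for f, v in zip(forces, velocities)) / \
           sum(v * v for v in velocities)

v = [0.8, 1.1, 1.4, 1.7, 2.0]            # impact velocities (m/s)
f = [340.0, 470.0, 590.0, 720.0, 850.0]  # peak impact forces (N)
c = damping_coefficient(v, f)            # fitted damping coefficient (Ns/m)
predicted = [c * vi for vi in v]
```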

  5. High-order accurate finite-volume formulations for the pressure gradient force in layered ocean models

    NASA Astrophysics Data System (ADS)

    Engwirda, Darren; Kelley, Maxwell; Marshall, John

    2017-08-01

    Discretisation of the horizontal pressure gradient force in layered ocean models is a challenging task, with non-trivial interactions between the thermodynamics of the fluid and the geometry of the layers often leading to numerical difficulties. We present two new finite-volume schemes for the pressure gradient operator designed to address these issues. In each case, the horizontal acceleration is computed as an integration of the contact pressure force that acts along the perimeter of an associated momentum control-volume. A pair of new schemes are developed by exploring different control-volume geometries. Non-linearities in the underlying equation-of-state definitions and thermodynamic profiles are treated using a high-order accurate numerical integration framework, designed to preserve hydrostatic balance in a non-linear manner. Numerical experiments show that the new methods achieve high levels of consistency, maintaining hydrostatic and thermobaric equilibrium in the presence of strongly-sloping layer geometries, non-linear equations-of-state and non-uniform vertical stratification profiles. These results suggest that the new pressure gradient formulations may be appropriate for general circulation models that employ hybrid vertical coordinates and/or terrain-following representations.

  6. Rapid separation and characterization of diterpenoid alkaloids in processed roots of Aconitum carmichaeli using ultra high performance liquid chromatography coupled with hybrid linear ion trap-Orbitrap tandem mass spectrometry.

    PubMed

    Xu, Wen; Zhang, Jing; Zhu, Dayuan; Huang, Juan; Huang, Zhihai; Bai, Junqi; Qiu, Xiaohui

    2014-10-01

    The lateral root of Aconitum carmichaeli, a popular traditional Chinese medicine, has been widely used to treat rheumatic diseases. For decades, diterpenoid alkaloids have dominated the phytochemical and biomedical research on this plant. In this study, a rapid and sensitive method based on ultra high performance liquid chromatography coupled with linear ion trap-Orbitrap tandem mass spectrometry was developed to characterize the diterpenoid alkaloids in Aconitum carmichaeli. Based on an optimized chromatographic condition, more than 120 diterpenoid alkaloids were separated with good resolution. Using a systematic strategy that combines high resolution separation, highly accurate mass measurements and a good understanding of the diagnostic fragment-based fragmentation patterns, these diterpenoid alkaloids were identified or tentatively identified. The identification of these chemicals provided essential data for further phytochemical studies and toxicity research of Aconitum carmichaeli. Moreover, the ultra high performance liquid chromatography with linear ion trap-Orbitrap mass spectrometry platform was an effective and accurate tool for rapid qualitative analysis of secondary metabolite productions from natural resources. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Estimation of stature using anthropometry of feet and footprints in a Western Australian population.

    PubMed

    Hemy, Naomi; Flavel, Ambika; Ishak, Nur-Intaniah; Franklin, Daniel

    2013-07-01

    The aim of the study is to develop accurate stature estimation models for a contemporary Western Australian population from measurements of the feet and footprints. The sample comprises 200 adults (90 males, 110 females). A stature measurement, three linear measurements from each foot and bilateral footprints were collected from each subject. Seven linear measurements were then extracted from each print. Prior to data collection, a precision test was conducted to determine the repeatability of measurement acquisition. The primary data were then analysed using a range of parametric statistical tests. Results show that all foot and footprint measurements were significantly (P < 0.01-0.001) correlated with stature and estimation models were formulated with a prediction accuracy of ± 4.673 cm to ± 6.926 cm. Left foot length was the most accurate single variable in the simple linear regressions (males: ± 5.065 cm; females: ± 4.777 cm). This study provides viable alternatives for estimating stature in a Western Australian population that are equivalent to established standards developed from foot bones. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
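    A hedged sketch of the statistical machinery behind such stature models: a simple linear regression of stature on foot length, with the standard error of estimate (SEE) that underlies "± cm" accuracy figures of the kind quoted above. All numbers are invented for illustration, not the study's data.

```python
# Sketch: simple linear regression of stature on foot length, reporting the
# standard error of estimate (SEE). Data are hypothetical placeholders.
import math

def fit_with_see(x, y):
    """Ordinary least squares y = a + b*x plus the standard error of
    estimate, sqrt(SSE / (n - 2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    see = math.sqrt(sum((yi - (a + b * xi)) ** 2
                        for xi, yi in zip(x, y)) / (n - 2))
    return a, b, see

foot_cm = [24.0, 25.0, 26.0, 27.0, 28.0]        # foot lengths (cm)
stature_cm = [160.0, 166.0, 170.0, 177.0, 181.0]  # statures (cm)
a, b, see = fit_with_see(foot_cm, stature_cm)
estimate = a + b * 26.5        # predicted stature for a 26.5 cm foot
```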

  8. High Resolution, Large Deformation 3D Traction Force Microscopy

    PubMed Central

    López-Fagundo, Cristina; Reichner, Jonathan; Hoffman-Kim, Diane; Franck, Christian

    2014-01-01

    Traction Force Microscopy (TFM) is a powerful approach for quantifying cell-material interactions that over the last two decades has contributed significantly to our understanding of cellular mechanosensing and mechanotransduction. In addition, recent advances in three-dimensional (3D) imaging and traction force analysis (3D TFM) have highlighted the significance of the third dimension in influencing various cellular processes. Yet irrespective of dimensionality, almost all TFM approaches have relied on a linear elastic theory framework to calculate cell surface tractions. Here we present a new high resolution 3D TFM algorithm which utilizes a large deformation formulation to quantify cellular displacement fields with unprecedented resolution. The results feature some of the first experimental evidence that cells are indeed capable of exerting large material deformations, which require the formulation of a new theoretical TFM framework to accurately calculate the traction forces. Based on our previous 3D TFM technique, we reformulate our approach to accurately account for large material deformation and quantitatively contrast and compare both linear and large deformation frameworks as a function of the applied cell deformation. Particular attention is paid in estimating the accuracy penalty associated with utilizing a traditional linear elastic approach in the presence of large deformation gradients. PMID:24740435
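    The accuracy penalty of a linear framework under large deformation can be seen already in one dimension: for a stretch ratio s, the small-strain measure is s − 1, while the Green-Lagrange measure used in finite-deformation theory is (s² − 1)/2, and the gap between them grows quadratically with the deformation. A minimal sketch:

```python
# One-dimensional contrast between the linear (small-strain) and the
# large-deformation (Green-Lagrange) strain measures. For a stretch ratio
# s, the gap between the two is 0.5*(s - 1)^2, negligible for small
# deformations but substantial for the large deformations cells can exert.

def linear_strain(stretch):
    return stretch - 1.0

def green_lagrange_strain(stretch):
    return 0.5 * (stretch ** 2 - 1.0)

small = 1.01   # 1% stretch: the two measures nearly agree
large = 1.40   # 40% stretch: the measures diverge noticeably
gap_small = green_lagrange_strain(small) - linear_strain(small)
gap_large = green_lagrange_strain(large) - linear_strain(large)
```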

  9. Daily Suspended Sediment Discharge Prediction Using Multiple Linear Regression and Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Uca; Toriman, Ekhwan; Jaafar, Othman; Maru, Rosmini; Arfan, Amal; Saleh Ahmar, Ansari

    2018-01-01

    Prediction of suspended sediment discharge in a catchment area is important because it can be used to evaluate erosion hazard and to support the management of water resources, water quality, hydrological projects (dams, reservoirs, and irrigation), and the assessment of damage within the catchment. Multiple linear regression analysis and artificial neural networks can be used to predict daily suspended sediment discharge. The regression analysis used the least-squares method, whereas the artificial neural networks used a Radial Basis Function (RBF) network and a feedforward multilayer perceptron with three learning algorithms, namely Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG) and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) Quasi-Newton method. The number of neurons in the hidden layer ranged from three to sixteen, while the output layer contained a single neuron because there was only one output target. Among the multiple linear regression (MLRg) models, Model 2 (with 6 independent input variables) had the lowest mean absolute error (MAE) and root mean square error (RMSE) (0.0000002 and 13.6039) and the highest coefficient of determination (R2) and coefficient of efficiency (CE) (0.9971 and 0.9971). When LM, SCG, RBF and BFGS were compared, the BFGS model with structure 3-7-1 was the more accurate for predicting suspended sediment discharge in the Jenderam catchment: in testing, its MAE and RMSE (13.5769 and 17.9011) were the smallest, while its R2 and CE (0.9999 and 0.9998) were the highest among the BFGS Quasi-Newton models considered (6-3-1, 9-10-1 and 12-12-1). Based on these performance statistics, the MLRg, LM, SCG, BFGS and RBF models are suitable and accurate for prediction by modelling the non-linear, complex behaviour of suspended sediment responses to rainfall, water depth and discharge. In the comparison between the artificial neural networks (ANN) and MLRg, MLRg Model 2 also accurately predicted suspended sediment discharge (kg/day) in the Jenderam catchment area.
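    The four skill scores used to rank the models above can be computed as sketched below (with illustrative observed/predicted values, not the study's data). Here CE is taken to be the Nash-Sutcliffe coefficient of efficiency and R² the squared Pearson correlation, consistent with common hydrological usage.

```python
# Sketch of the four model-skill scores: mean absolute error (MAE), root
# mean square error (RMSE), coefficient of determination R^2 (squared
# Pearson correlation) and Nash-Sutcliffe coefficient of efficiency (CE).
import math

def metrics(observed, predicted):
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    err = [o - p for o, p in zip(observed, predicted)]
    mae = sum(abs(e) for e in err) / n
    rmse = math.sqrt(sum(e * e for e in err) / n)
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    r2 = cov ** 2 / (sum((o - mo) ** 2 for o in observed)
                     * sum((p - mp) ** 2 for p in predicted))
    ce = 1.0 - sum(e * e for e in err) / sum((o - mo) ** 2 for o in observed)
    return mae, rmse, r2, ce

obs = [120.0, 95.0, 140.0, 110.0, 160.0]   # observed sediment load (kg/day)
pred = [118.0, 99.0, 137.0, 114.0, 155.0]  # model predictions
mae, rmse, r2, ce = metrics(obs, pred)
```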

  10. Monitoring diesel particulate matter and calculating diesel particulate densities using Grimm model 1.109 real-time aerosol monitors in underground mines.

    PubMed

    Kimbal, Kyle C; Pahler, Leon; Larson, Rodney; VanDerslice, Jim

    2012-01-01

    Currently, there is no Mine Safety and Health Administration (MSHA)-approved sampling method that provides real-time results for ambient concentrations of diesel particulates. This study investigated whether a commercially available aerosol spectrometer, the Grimm Portable Aerosol Spectrometer Model 1.109, could be used during underground mine operations to provide accurate real-time diesel particulate data relative to MSHA-approved cassette-based sampling methods. A secondary aim was to estimate size-specific diesel particle densities to potentially improve the diesel particulate concentration estimates from the aerosol monitor. Concurrent sampling was conducted during underground metal mine operations using six duplicate diesel particulate cassettes, according to the MSHA-approved method, and two identical Grimm Model 1.109 instruments. Linear regression was used to develop adjustment factors relating the Grimm results to the average of the cassette results. Statistical models using the Grimm data produced predicted diesel particulate concentrations that correlated highly with the time-weighted average cassette results (R(2) = 0.86, 0.88). Size-specific diesel particulate densities were not constant over the range of particle diameters observed. The variation of the calculated diesel particulate densities with particle diameter supports the current understanding that diesel emissions are a mixture of particulate aerosols and a complex host of gases and vapors not limited to elemental and organic carbon. Finally, diesel particulate concentrations measured by the Grimm Model 1.109 can be adjusted to provide sufficiently accurate real-time air monitoring data for an underground mining environment.

  11. Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Gao, Junfeng; Liao, Wenzhi; Nuyttens, David; Lootens, Peter; Vangeyte, Jürgen; Pižurica, Aleksandra; He, Yong; Pieters, Jan G.

    2018-05-01

    The developments in the use of unmanned aerial vehicles (UAVs) and advanced imaging sensors provide new opportunities for ultra-high resolution (e.g., less than a 10 cm ground sampling distance (GSD)) crop field monitoring and mapping in precision agriculture applications. In this study, we developed a strategy for inter- and intra-row weed detection in early season maize fields from aerial visual imagery. More specifically, the Hough transform algorithm (HT) was applied to the orthomosaicked images for inter-row weed detection. A semi-automatic Object-Based Image Analysis (OBIA) procedure was developed with Random Forests (RF) combined with feature selection techniques to classify soil, weeds and maize. Furthermore, the two binary weed masks generated from HT and OBIA were fused to produce an accurate binary weed image. The developed RF classifier was evaluated by 5-fold cross validation, and it obtained an overall accuracy of 0.945 and a Kappa value of 0.912. Finally, the relationship between detected weeds and their ground truth densities was quantified by a fitted linear model with a coefficient of determination of 0.895 and a root mean square error of 0.026. In addition, the importance of the input features was evaluated, and the ratio of vegetation length to width was found to be the most significant feature for the classification model. Overall, our approach can yield a satisfactory weed map, and we expect that the accurate and timely weed maps obtained from UAV imagery will be applicable to site-specific weed management (SSWM) in early season crop fields, reducing the spraying of non-selective herbicides and associated costs.
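    For reference, overall accuracy and the Kappa value follow from a confusion matrix as sketched below; the 3×3 counts are invented (the paper reports 0.945 accuracy and 0.912 Kappa for its soil/weed/maize classifier).

```python
# Sketch: overall accuracy and Cohen's kappa from a confusion matrix.
# kappa corrects the observed agreement for the agreement expected by
# chance, computed from the row and column marginals.

def accuracy_and_kappa(cm):
    """cm[i][j] = number of samples of true class i predicted as class j."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    # chance agreement: product of marginal proportions, summed over classes
    expected = sum(sum(cm[i]) * sum(row[i] for row in cm)
                   for i in range(len(cm))) / n ** 2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

cm = [[50, 2, 1],    # true soil
      [3, 45, 4],    # true weed
      [1, 2, 52]]    # true maize
acc, kappa = accuracy_and_kappa(cm)
```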

  12. Loss of life expectancy derived from a standardized mortality ratio in Denmark, Finland, Norway and Sweden.

    PubMed

    Skriver, Mette Vinther; Væth, Michael; Støvring, Henrik

    2018-01-01

    The standardized mortality ratio (SMR) is a widely used measure. A recent methodological study provided an accurate approximate relationship between an SMR and the difference in life expectancies. This study examines the usefulness of the theoretical relationship when comparing historical mortality data in four Scandinavian populations. For Denmark, Finland, Norway and Sweden, data on mortality every fifth year in the period 1950 to 2010 were obtained. Using 1980 as the reference year, SMRs and differences in life expectancy were calculated. The assumptions behind the theoretical relationship were examined graphically. The theoretical relationship predicts a linear association, with a slope [Formula: see text], between log(SMR) and the difference in life expectancies, and the theoretical predictions were compared with the calculated differences in life expectancy. We examined the linear association both for life expectancy at birth and at age 30. All analyses were done for females, males and the total population. The approximate relationship provided accurate predictions of the actual differences in life expectancy. The accuracy of the predictions was better when age was restricted to above 30, and improved when the change in mortality rates was close to proportional. Slopes of the linear relationship were generally around 9 for females and 10 for males. The theoretically derived relationship between the SMR and the difference in life expectancies provides an accurate prediction for comparing populations with approximately proportional differences in mortality, and is relatively robust. The relationship may provide a useful prediction of differences in life expectancy that can be more readily communicated and understood.
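    The prediction itself is a one-line computation. The sketch below assumes the approximate form ΔLE ≈ −slope·log(SMR), with slopes of the magnitude reported above (about 9 for females, 10 for males); the SMR values are invented.

```python
# Sketch of the approximate SMR-to-life-expectancy relationship validated
# in the paper: the life-expectancy difference between a study population
# and its reference is roughly a constant times log(SMR). Sign convention
# here: SMR > 1 (excess mortality) gives a negative difference (a loss).
import math

def predicted_le_difference(smr, slope):
    """Approximate life-expectancy difference in years: -slope * ln(SMR)."""
    return -slope * math.log(smr)

# A population with 30% excess mortality relative to the reference:
diff_males = predicted_le_difference(1.30, slope=10.0)    # loss of ~2.6 years
diff_females = predicted_le_difference(0.90, slope=9.0)   # gain of ~0.9 years
```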

  13. A Sub-Millimetric 3-DOF Force Sensing Instrument with Integrated Fiber Bragg Grating for Retinal Microsurgery

    PubMed Central

    He, Xingchi; Handa, James; Gehlbach, Peter; Taylor, Russell; Iordachita, Iulian

    2013-01-01

    Vitreoretinal surgery requires very fine motor control to perform precise manipulation of the delicate tissue in the interior of the eye. Besides physiological hand tremor, fatigue, poor kinesthetic feedback, and patient movement, the absence of force sensing is one of the main technical challenges. Previous two degrees of freedom (DOF) force sensing instruments have demonstrated robust force measuring performance. The main design challenge is to incorporate high-sensitivity axial force sensing. This paper reports the development of a sub-millimetric 3-DOF force sensing pick instrument based on fiber Bragg grating (FBG) sensors. The configuration of the four FBG sensors is arranged to maximize the decoupling between axial and transverse force sensing. A super-elastic nitinol flexure is designed to achieve high axial force sensitivity. An automated calibration system was developed for repeatability testing, calibration, and validation. Experimental results demonstrate an FBG sensor repeatability of 1.3 pm. The linear model for calculating the transverse forces provides an accurate global estimate. While the linear model for axial force is only locally accurate within a conical region with a 30° vertex angle, a second-order polynomial model can provide a useful global estimate for axial force. Combining the linear model for transverse forces and the nonlinear model for axial force, the 3-DOF force sensing instrument provides sub-millinewton resolution for axial force and quarter-millinewton resolution for transverse forces. Validation with random samples shows that the force sensor can provide consistent and accurate measurement of three-dimensional forces. PMID:24108455
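
    The two calibration models described above can be sketched with ordinary least squares. The sensitivity matrix and the non-linear axial response below are synthetic assumptions, not the instrument's actual calibration data.

```python
# A minimal sketch of the two calibration models: a linear least-squares
# map from the four FBG wavelength shifts to the transverse forces, and a
# second-order polynomial for the axial force, as the abstract describes.
# Sensor geometry and coefficients here are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
shifts = rng.normal(size=(n, 4))            # four FBG wavelength shifts (pm)
K_true = rng.normal(size=(4, 2))            # assumed linear sensitivity matrix
F_trans = shifts @ K_true                   # transverse forces (Fx, Fy)

# Linear calibration: least-squares solution of shifts @ K = F_trans.
K_fit, *_ = np.linalg.lstsq(shifts, F_trans, rcond=None)

# Axial force: assume the common-mode shift responds non-linearly and fit
# a second-order polynomial for a global estimate.
common = shifts.mean(axis=1)
Fz = 0.5 * common**2 + 2.0 * common         # assumed non-linear response
coeffs = np.polynomial.polynomial.polyfit(common, Fz, 2)  # [c0, c1, c2]
print(np.allclose(K_fit, K_true), np.round(coeffs, 6))
```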

  14. Evaluation of immunity to varicella zoster virus with a novel double antigen sandwich enzyme-linked immunosorbent assay.

    PubMed

    Liu, Jian; Chen, Chunye; Zhu, Rui; Ye, Xiangzhong; Jia, Jizong; Yang, Lianwei; Wang, Yongmei; Wang, Wei; Ye, Jianghui; Li, Yimin; Zhu, Hua; Zhao, Qinjian; Zhang, Jun; Cheng, Tong; Xia, Ningshao

    2016-11-01

    Varicella is a highly contagious disease caused by primary infection of Varicella zoster virus (VZV). Varicella can be severe or even lethal in susceptible adults, immunocompromised patients and neonates. Determination of the status of immunity to VZV is recommended for these high-risk populations. Furthermore, measurement of population immunity to VZV can help in developing proper varicella vaccination programmes. VZV glycoprotein E (gE) is an antigen that has been demonstrated to be a highly accurate indicator of VZV-specific immunity. In this study, recombinant gE (rgE) was used to establish a double antigen sandwich enzyme-linked immunosorbent assay (ELISA). The established sandwich ELISA showed high specificity and sensitivity in the detection of human sera, and it could detect VZV-specific antibodies at a concentration of 11.25 mIU/mL with a detection linearity interval of 11.25 to 360 mIU/mL (R² = 0.9985). The double gE antigen sandwich ELISA showed a sensitivity of 95.08% and specificity of 100% compared to the fluorescent-antibody-to-membrane-antigen (FAMA) test, and it showed a sensitivity of 100% and a specificity of 94.74% compared to a commercial neutralizing antibody detection kit. Thus, the established double antigen sandwich ELISA can be used as a sensitive and specific quantitative method to evaluate immunity to VZV.
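
    The sensitivity and specificity figures quoted above are computed from paired qualitative calls against a reference test (here, FAMA). The counts below are hypothetical, not the study's data; they only show the arithmetic.

```python
# How sensitivity and specificity against a reference test are computed
# from paired positive/negative calls. The eight paired results below are
# hypothetical stand-ins, not the study's serum panel.
def sensitivity_specificity(new, ref):
    tp = sum(1 for a, b in zip(new, ref) if a and b)          # true positives
    tn = sum(1 for a, b in zip(new, ref) if not a and not b)  # true negatives
    fp = sum(1 for a, b in zip(new, ref) if a and not b)
    fn = sum(1 for a, b in zip(new, ref) if not a and b)
    return tp / (tp + fn), tn / (tn + fp)

ref = [1, 1, 1, 1, 0, 0, 0, 0]    # reference-test calls (1 = positive)
new = [1, 1, 1, 0, 0, 0, 0, 0]    # new-assay calls
sens, spec = sensitivity_specificity(new, ref)
print(sens, spec)   # 0.75 1.0
```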

  15. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    NASA Astrophysics Data System (ADS)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring, both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance. Currently, evaluation is specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains; error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear. This model performance evaluation leads to error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data alone.

  16. Parameterizing sorption isotherms using a hybrid global-local fitting procedure.

    PubMed

    Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J

    2017-05-01

    Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of the state of the art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative-free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions, where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
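
    The hybrid global-local strategy can be sketched on a simple Freundlich isotherm with synthetic data. Note the substitutions: SciPy's differential evolution stands in for particle swarm optimization as the global stage, and a derivative-based least-squares refinement stands in for the Powell/Gauss-Marquardt-Levenberg stages; WSSR is the fitness function, as in the study.

```python
# A sketch of the hybrid global-local fitting strategy on a Freundlich
# isotherm q = Kf * C**n (the study also treats Polanyi-type models).
# Data, weights, and bounds are synthetic assumptions.
import numpy as np
from scipy.optimize import differential_evolution, least_squares

C = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])       # aqueous concentration
q_obs = 2.0 * C**0.7 + np.array([0.02, -0.03, 0.05, -0.04, 0.06, -0.05])
wts = 1.0 / q_obs                                    # residual weights

def wssr(p):
    """Weighted sum of squared residuals for parameters (Kf, n)."""
    return float(np.sum((wts * (q_obs - p[0] * C**p[1]))**2))

# Stage 1: bounded global search.
glob = differential_evolution(wssr, bounds=[(0.01, 10.0), (0.1, 1.5)], seed=0)

# Stage 2: local refinement of the weighted residual vector.
loc = least_squares(lambda p: wts * (q_obs - p[0] * C**p[1]), glob.x)
Kf, n_exp = loc.x
print(f"Kf={Kf:.3f}, n={n_exp:.3f}, WSSR={wssr(loc.x):.5f}")
```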

  17. A simplified competition data analysis for radioligand specific activity determination.

    PubMed

    Venturino, A; Rivera, E S; Bergoc, R M; Caro, R A

    1990-01-01

    Non-linear regression and two-step linear fit methods were developed to determine the actual specific activity of 125I-ovine prolactin by radioreceptor self-displacement analysis. The experimental results obtained by the different methods are superimposable. The non-linear regression method is considered to be the most adequate procedure to calculate the specific activity, but if its software is not available, the other described methods are also suitable.

  18. A fully non-linear multi-species Fokker–Planck–Landau collision operator for simulation of fusion plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov

    2016-06-15

    Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior is shown.

  19. A fully non-linear multi-species Fokker–Planck–Landau collision operator for simulation of fusion plasma

    DOE PAGES

    Hager, Robert; Yoon, E. S.; Ku, S.; ...

    2016-04-04

    Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. The non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. Moreover, the finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. As a result, the collision operator's good weak and strong scaling behavior is shown.

  20. Practical Methodology for the Inclusion of Nonlinear Slosh Damping in the Stability Analysis of Liquid-Propelled Space Vehicles

    NASA Technical Reports Server (NTRS)

    Ottander, John A.; Hall, Robert A.; Powers, J. F.

    2018-01-01

    A method is presented that allows for the prediction of the magnitude of limit cycles due to adverse control-slosh interaction in liquid-propelled space vehicles using non-linear slosh damping. Such a method is an alternative to the industry practice of assuming linear damping and relying on mechanical slosh baffles to achieve desired stability margins, accepting minimal slosh stability margins, or using time-domain non-linear analysis to accept time periods of poor stability. Sinusoidal-input describing function analysis is used to develop a relationship between the non-linear slosh damping and an equivalent linear damping at a given slosh amplitude. In addition, a more accurate analytical prediction of the danger zone for slosh mass locations in a vehicle under proportional and derivative attitude control is presented. This method is used in the control-slosh stability analysis of the NASA Space Launch System.
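
    The describing-function step, relating a non-linear damping law to an equivalent linear damping at a given amplitude, can be checked numerically. The quadratic damping law and the parameter values below are illustrative assumptions, not vehicle data.

```python
# A numerical check of the describing-function idea, assuming (for
# illustration) quadratic damping F = c_q*|v|*v. For x = A*sin(w*t),
# matching the fundamental Fourier component of F to that of a linear
# damper gives the classic result c_eq = (8/(3*pi)) * c_q * w * A.
import numpy as np

c_q, A, w = 1.3, 0.2, 2 * np.pi          # damping coeff., amplitude, frequency
T = 2 * np.pi / w                        # one oscillation period
t = np.linspace(0.0, T, 20001)
v = A * w * np.cos(w * t)                # velocity of x = A*sin(w*t)
F = c_q * np.abs(v) * v                  # non-linear damping force

# Fundamental Fourier component of F in phase with the velocity
# (periodic integrand, so a simple Riemann sum over one period suffices):
dt = t[1] - t[0]
b1 = (2.0 / T) * np.sum(F[:-1] * np.cos(w * t[:-1])) * dt
c_eq_numeric = b1 / (A * w)              # equivalent linear damping
c_eq_analytic = 8.0 / (3.0 * np.pi) * c_q * w * A
print(c_eq_numeric, c_eq_analytic)
```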

  1. Robot Arm with Tendon Connector Plate and Linear Actuator

    NASA Technical Reports Server (NTRS)

    Bridgwater, Lyndon (Inventor); Millerman, Alexander (Inventor); Ihrke, Chris A. (Inventor); Diftler, Myron A. (Inventor); Nguyen, Vienny (Inventor)

    2014-01-01

    A robotic system includes a tendon-driven end effector, a linear actuator assembly, a flexible tendon, and a plate assembly. The linear actuator assembly has a servo motor and a drive mechanism, the latter of which translates linearly with respect to a drive axis of the servo motor in response to output torque from the servo motor. The tendon connects to the end effector and the drive mechanism. The plate assembly is disposed between the linear actuator assembly and the tendon-driven end effector and includes first and second plates. The first plate has a first side that defines a boss with a center opening. The second plate defines an arcuate through-slot having tendon guide channels. The first plate defines a through passage for the tendon between the center opening and a second side of the first plate. A looped end of the flexible tendon is received within the tendon guide channels.

  2. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model, established for faster calculation, is commonly used. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function, but it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks), a deep learning technique. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  3. Piecewise linear emulator of the nonlinear Schrödinger equation and the resulting analytic solutions for Bose-Einstein condensates.

    PubMed

    Theodorakis, Stavros

    2003-06-01

    We emulate the cubic term Psi(3) in the nonlinear Schrödinger equation by a piecewise linear term, thus reducing the problem to a set of uncoupled linear inhomogeneous differential equations. The resulting analytic expressions constitute an excellent approximation to the exact solutions, as is explicitly shown in the case of the kink, the vortex, and a delta function trap. Such a piecewise linear emulation can be used for any differential equation where the only nonlinearity is a Psi(3) one. In particular, it can be used for the nonlinear Schrödinger equation in the presence of harmonic traps, giving analytic Bose-Einstein condensate solutions that reproduce very accurately the numerically calculated ones in one, two, and three dimensions.
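
    The emulation idea above can be illustrated directly: replace Psi(3) by a piecewise linear interpolant (within each piece the equation then becomes a linear inhomogeneous ODE) and check how closely the interpolant tracks the cubic. The number and placement of breakpoints below are arbitrary choices for illustration.

```python
# Piecewise linear emulation of the cubic nonlinearity psi**3, as in the
# abstract's approach. Four linear pieces on [0, 1] already track the
# cubic to within a few percent; breakpoint placement is an assumption.
import numpy as np

breaks = np.linspace(0.0, 1.0, 5)         # 4 linear pieces on [0, 1]
vals = breaks ** 3                        # match the cubic at the breakpoints
psi = np.linspace(0.0, 1.0, 1001)
emulated = np.interp(psi, breaks, vals)   # piecewise linear emulator of psi**3
max_err = np.max(np.abs(emulated - psi ** 3))
print(f"max |emulator - psi^3| = {max_err:.4f}")
```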

  4. Efficient hybrid-symbolic methods for quantum mechanical calculations

    NASA Astrophysics Data System (ADS)

    Scott, T. C.; Zhang, Wenxing

    2015-06-01

    We present hybrid symbolic-numerical tools to generate optimized numerical code for rapid prototyping and fast numerical computation, starting from a computer algebra system (CAS) and tailored to any given quantum mechanical problem. Although a major focus concerns the quantum chemistry methods of H. Nakatsuji, which have yielded successful and very accurate eigensolutions for small atoms and molecules, the tools are general and may be applied to any basis set calculation with a variational principle applied to its linear and non-linear parameters.

  5. MIKON 94. International Microwave Conference (10th) Held in Ksiaz, Poland on May 30-June 2, 1994. Volume 3. Invited Papers

    DTIC Science & Technology

    1994-01-01

    linear non-differential equations in series. This makes it easier to control the result, and an exact and accurate solution is obtained without...battery operated and controlled by an industry standard computer [16]. The HF unit contains a step-recovery diode transmitter and two quasi-TEM antennas...16]. All of these procedures can take advantage of exact non-linear analysis or experimental power characterization and are therefore "full non

  6. Nonferromagnetic linear variable differential transformer

    DOEpatents

    Ellis, James F.; Walstrom, Peter L.

    1977-06-14

    A nonferromagnetic linear variable differential transformer for accurately measuring mechanical displacements in the presence of high magnetic fields is provided. The device utilizes a movable primary coil inside a fixed secondary coil that consists of two series-opposed windings. Operation is such that the secondary output voltage is maintained in phase (depending on polarity) with the primary voltage. The transducer is well-suited to long cable runs and is useful for measuring small displacements in the presence of high or alternating magnetic fields.

  7. Attitude estimation of earth orbiting satellites by decomposed linear recursive filters

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1975-01-01

    Attitude estimation of earth orbiting satellites (including the Large Space Telescope) subjected to environmental disturbances and noise was investigated. Modern control and estimation theory is used as a tool to design an efficient estimator for attitude estimation. Decomposed linear recursive filters for both continuous-time systems and discrete-time systems are derived. By using this accurate estimate of spacecraft attitude, a state-variable feedback controller may be designed to satisfy demanding system performance requirements.

  8. Conformational free energy of melts of ring-linear polymer blends.

    PubMed

    Subramanian, Gopinath; Shanbhag, Sachin

    2009-10-01

    The conformational free energy of ring polymers in a blend of ring and linear polymers is investigated using the bond-fluctuation model. Previously established scaling relationships for the free energy of a ring polymer are shown to be valid only in the mean-field sense, and alternative functional forms are investigated. It is shown that it may be difficult to accurately express the total free energy of a ring polymer by a simple scaling argument, or in closed form.

  9. Experimental Analysis of the Vorticity and Turbulent Flow Dynamics of a Pitching Airfoil at Realistic Flight Conditions

    DTIC Science & Technology

    2007-08-31

    Element type: Hex, independent meshing, Linear 3D stress (English units were used in ABAQUS). The NACA...Flow Freestream Condition Instrumentation: Test section conditions were measured using a Druck DPI 203 digital pressure gage and an Omega Model 199...temperature gage. The Druck pressure gage measures the set dynamic pressure to within ±0.08% of full scale, and the Omega thermometer is accurate to

  10. Numerical calculations of two dimensional, unsteady transonic flows with circulation

    NASA Technical Reports Server (NTRS)

    Beam, R. M.; Warming, R. F.

    1974-01-01

    The feasibility of obtaining two-dimensional, unsteady transonic aerodynamic data by numerically integrating the Euler equations is investigated. An explicit, third-order-accurate, noncentered, finite-difference scheme is used to compute unsteady flows about airfoils. Solutions for lifting and nonlifting airfoils are presented and compared with subsonic linear theory. The applicability and efficiency of the numerical indicial function method are outlined. Numerically computed subsonic and transonic oscillatory aerodynamic coefficients are presented and compared with those obtained from subsonic linear theory and transonic wind-tunnel data.

  11. A novel simple QSAR model for the prediction of anti-HIV activity using multiple linear regression analysis.

    PubMed

    Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga

    2006-08-01

    A quantitative structure-activity relationship was obtained by applying Multiple Linear Regression Analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and predictive stages.
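
    The core MLR-QSAR workflow can be sketched as follows: fit a multiple linear regression on molecular descriptors and report both the training R² and a cross-validated R². The descriptors and activities here are synthetic stand-ins, not the HEPT dataset, and the descriptor count is an assumption.

```python
# A sketch of the MLR-QSAR workflow: multiple linear regression on
# molecular descriptors, scored by training R^2 and cross-validated R^2
# (the abstract reports R^2(CV) = 0.8160). All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 5))                 # 80 compounds, 5 descriptors
true_coef = np.array([1.2, -0.8, 0.5, 0.0, 0.3])
y = X @ true_coef + rng.normal(scale=0.3, size=80)   # simulated activity

mlr = LinearRegression().fit(X, y)
r2_train = mlr.score(X, y)
r2_cv = cross_val_score(mlr, X, y, cv=5, scoring="r2").mean()
print(f"R2 = {r2_train:.3f}, R2(CV) = {r2_cv:.3f}")
```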

  12. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the topics of greatest concern in soil science, and selecting more appropriate predictors is a crucial factor in PTFs' development. Group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. Therefore, the current research aimed to apply GMDH in comparison with multivariate linear regression (MLR) and artificial neural network (ANN) approaches to develop several PTFs to predict soil cumulative infiltration on a point basis at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field saturated (θfs) water contents were measured at 134 different points in Lighvan watershed, northwest of Iran. Then, applying GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, the PTFs developed by GMDH and MLR using all soil RACs including Ks gave more accurate (with E values of 0.673-0.963) and reliable (with CV values lower than 11 percent) predictions of cumulative infiltration at different specific time steps. In contrast, the ANN procedure had lower accuracy (with E values of 0.356-0.890) and reliability (with CV values up to 50 percent) compared to GMDH and MLR. The results also revealed that excluding Ks from the input variable list caused around a 30 percent decrease in PTF accuracy for all applied procedures. However, excluding Ks resulted in more practical PTFs, especially in the case of the GMDH network, since the remaining input variables are less time consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimum set of input variables (2-4 input variables) and can be a promising strategy for modeling soil infiltration, combining the advantages of the ANN and MLR methodologies.

  13. Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim

    2014-11-01

    Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical results and experimental results is often hampered for several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g. determining the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our on-going, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling as well as experiments. The major focus is on wave regimes, wave height and wave celerity as functions of the Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences between numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).

  14. Determination of dasatinib in the tablet dosage form by ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis.

    PubMed

    Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel

    2017-01-01

    Dasatinib is a novel oral prescription drug proposed for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra high performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection with a detection wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm. The total analysis time was 2.5 min. The ultra high performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Simulating flow in karst aquifers at laboratory and sub-regional scales using MODFLOW-CFP

    NASA Astrophysics Data System (ADS)

    Gallegos, Josue Jacob; Hu, Bill X.; Davis, Hal

    2013-12-01

    Groundwater flow in a well-developed karst aquifer occurs predominantly through bedding planes, fractures, conduits, and caves created by and/or enlarged by dissolution. Conventional groundwater modeling methods assume that groundwater flow is described by Darcian principles where primary porosity (i.e. matrix porosity) and laminar flow are dominant. However, in well-developed karst aquifers, the assumption of Darcian flow can be questionable. While Darcian flow generally occurs in the matrix portion of the karst aquifer, flow through conduits can be non-laminar, where the relation between specific discharge and hydraulic gradient is non-linear. MODFLOW-CFP is a relatively new modeling program that accounts for non-laminar and laminar flow in pipes, like karst caves, within an aquifer. In this study, results from MODFLOW-CFP are compared to those from MODFLOW-2000/2005, a numerical code based on Darcy's law, to evaluate the accuracy that CFP can achieve when modeling flows in karst aquifers at laboratory and sub-regional (Woodville Karst Plain, Florida, USA) scales. In comparison with laboratory experiments, simulation results from MODFLOW-CFP are more accurate than those from MODFLOW-2005. At the sub-regional scale, MODFLOW-CFP was more accurate than MODFLOW-2000 for simulating field measurements of peak flow at one spring and total discharges at two springs for an observed storm event.

  16. Fine-temporal forecasting of outbreak probability and severity: Ross River virus in Western Australia.

    PubMed

    Koolhof, I S; Bettiol, S; Carver, S

    2017-10-01

    Health warnings of mosquito-borne disease risk require forecasts that are accurate at fine-temporal resolutions (weekly scales); however, most forecasting is coarse (monthly). We use environmental and Ross River virus (RRV) surveillance to predict weekly outbreak probabilities and incidence spanning tropical, semi-arid, and Mediterranean regions of Western Australia (1991-2014). Hurdle and linear models were used to predict outbreak probabilities and incidence respectively, using time-lagged environmental variables. Forecast accuracy was assessed by model fit and cross-validation. Residual RRV notification data were also examined against mitigation expenditure for one site, Mandurah 2007-2014. Models were predictive of RRV activity, except at one site (Capel). Minimum temperature was an important predictor of RRV outbreaks and incidence at all predicted sites. Precipitation was more likely to cause outbreaks and greater incidence among tropical and semi-arid sites. While variable, mitigation expenditure coincided positively with increased RRV incidence (r² = 0.21). Our research demonstrates capacity to accurately predict mosquito-borne disease outbreaks and incidence at fine-temporal resolutions. We apply our findings, developing a user-friendly tool enabling managers to easily adopt this research to forecast region-specific RRV outbreaks and incidence. Approaches here may be of value to fine-scale forecasting of RRV in other areas of Australia, and other mosquito-borne diseases.
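
    The hurdle structure described above can be sketched with two simple components: a logistic model for whether any cases occur in a week, and a linear model for incidence in weeks with cases. All data below are synthetic, with assumed temperature and rainfall effects; none of it is the WA surveillance data.

```python
# A minimal hurdle-model sketch: part 1 models outbreak occurrence
# (logistic), part 2 models incidence given occurrence (linear), both
# driven by lagged environmental covariates. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 400
temp = rng.normal(20.0, 5.0, n)              # lagged minimum temperature
rain = rng.gamma(2.0, 10.0, n)               # lagged precipitation
X = np.column_stack([temp, rain])

# Simulated truth: warmer/wetter weeks are more likely to see cases.
occur = (0.3 * (temp - 20) + 0.05 * (rain - 20) + rng.logistic(size=n)) > 0
incidence = np.where(occur, 0.2 * temp + 0.1 * rain + rng.normal(0, 2, n), 0.0)

occ_model = LogisticRegression().fit(X, occur)                  # hurdle part 1
inc_model = LinearRegression().fit(X[occur], incidence[occur])  # hurdle part 2

p_outbreak = occ_model.predict_proba(X)[:, 1]        # weekly outbreak prob.
expected_incidence = p_outbreak * inc_model.predict(X)  # unconditional mean
print(expected_incidence[:3])
```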

  17. Optimisation of an idealised primitive equation ocean model using stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Cooper, Fenwick C.

    2017-05-01

    Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10^12 m^4 s^-1, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10^10 m^4 s^-1, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.

  18. Software requirements specification for the GIS-T/ISTEA pooled fund study phase C linear referencing engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amai, W.; Espinoza, J. Jr.; Fletcher, D.R.

    1997-06-01

    This Software Requirements Specification (SRS) describes the features to be provided by the software for the GIS-T/ISTEA Pooled Fund Study Phase C Linear Referencing Engine project. This document conforms to the recommendations of IEEE Standard 830-1984, IEEE Guide to Software Requirements Specification (Institute of Electrical and Electronics Engineers, Inc., 1984). The software specified in this SRS is a proof-of-concept implementation of the Linear Referencing Engine as described in the GIS-T/ISTEA Pooled Fund Study Phase B Summary, specifically Sheet 13 of the Phase B object model. The software allows an operator to convert between two linear referencing methods and a datum network.

  19. Biochemical Characterization of the Lactobacillus reuteri Glycoside Hydrolase Family 70 GTFB Type of 4,6-α-Glucanotransferase Enzymes That Synthesize Soluble Dietary Starch Fibers.

    PubMed

    Bai, Yuxiang; van der Kaaij, Rachel Maria; Leemhuis, Hans; Pijning, Tjaard; van Leeuwen, Sander Sebastiaan; Jin, Zhengyu; Dijkhuizen, Lubbert

    2015-10-01

    4,6-α-Glucanotransferase (4,6-α-GTase) enzymes, such as GTFB and GTFW of Lactobacillus reuteri strains, constitute a new reaction specificity in glycoside hydrolase family 70 (GH70) and are novel enzymes that convert starch or starch hydrolysates into isomalto/maltopolysaccharides (IMMPs). These IMMPs still have linear chains with some α1→4 linkages but mostly (relatively long) linear chains with α1→6 linkages and are soluble dietary starch fibers. 4,6-α-GTase enzymes and their products have significant potential for industrial applications. Here we report that an N-terminal truncation (amino acids 1 to 733) strongly enhances the soluble expression level of fully active GTFB-ΔN (approximately 75-fold compared to full-length wild type GTFB) in Escherichia coli. In addition, quantitative assays based on amylose V as the substrate are described; these assays allow accurate determination of both hydrolysis (minor) activity (glucose release, reducing power) and total activity (iodine staining) and calculation of the transferase (major) activity of these 4,6-α-GTase enzymes. The data show that GTFB-ΔN is clearly less hydrolytic than GTFW, which is also supported by nuclear magnetic resonance (NMR) analysis of their final products. From these assays, the biochemical properties of GTFB-ΔN were characterized in detail, including determination of kinetic parameters and acceptor substrate specificity. The GTFB enzyme displayed high conversion yields at relatively high substrate concentrations, a promising feature for industrial application. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  20. Validation of high throughput screening of human sera for detection of anti-PA IgG by Enzyme-Linked Immunosorbent Assay (ELISA) as an emergency response to an anthrax incident

    PubMed Central

    Semenova, Vera A.; Steward-Clark, Evelene; Maniatis, Panagiotis; Epperson, Monica; Sabnis, Amit; Schiffer, Jarad

    2017-01-01

    To improve surge testing capability for a response to a release of Bacillus anthracis, the CDC anti-Protective Antigen (PA) IgG Enzyme-Linked Immunosorbent Assay (ELISA) was re-designed into a high throughput screening format. The following assay performance parameters were evaluated: goodness of fit (measured as the mean reference standard r²), accuracy (measured as percent error), precision (measured as coefficient of variation (CV)), lower limit of detection (LLOD), lower limit of quantification (LLOQ), dilutional linearity, diagnostic sensitivity (DSN) and diagnostic specificity (DSP). The paired sets of data for each sample were evaluated by Concordance Correlation Coefficient (CCC) analysis. The goodness of fit was 0.999; percent error between the expected and observed concentration for each sample ranged from −4.6% to 14.4%. The coefficient of variation ranged from 9.0% to 21.2%. The assay LLOQ was 2.6 μg/mL. The regression analysis results for dilutional linearity data were r² = 0.952, slope = 1.02 and intercept = −0.03. CCC between assays was 0.974 for the median concentration of serum samples. The accuracy and precision components of CCC were 0.997 and 0.977, respectively. This high throughput screening assay is precise, accurate, sensitive and specific. Anti-PA IgG concentrations determined using two different assays showed high levels of agreement. The method will improve surge testing capability 18-fold from 4 to 72 sera per assay plate. PMID:27814939
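
The core validation statistics named above — percent error for accuracy, CV for precision, and a regression for dilutional linearity — reduce to a few lines of arithmetic. The replicate and dilution values below are invented for illustration, not data from the study.

```python
# Sketch of the assay-validation statistics (hypothetical replicate data).
import numpy as np

expected = 10.0                              # nominal anti-PA IgG, ug/mL
replicates = np.array([9.4, 10.2, 10.8, 9.9, 10.5])

# Accuracy: percent error of the mean against the nominal value.
pct_error = 100 * (replicates.mean() - expected) / expected

# Precision: coefficient of variation (CV, %).
cv = 100 * replicates.std(ddof=1) / replicates.mean()

# Dilutional linearity: regress observed on expected concentrations.
exp_conc = np.array([2.5, 5.0, 10.0, 20.0, 40.0])
obs_conc = np.array([2.4, 5.1, 9.8, 20.6, 40.9])
slope, intercept = np.polyfit(exp_conc, obs_conc, 1)
r = np.corrcoef(exp_conc, obs_conc)[0, 1]
print(round(pct_error, 2), round(cv, 2), round(slope, 3), round(r**2, 4))
```

A slope near 1 and intercept near 0 in the dilutional-linearity fit indicate that diluted samples read back proportionally, as reported in the abstract.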

  1. SU-F-T-262: Commissioning Varian Portal Dosimetry for EPID-Based Patient Specific QA in a Non-Aria Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, M; Knutson, N; University of Rhode Island, Kingston, RI

    2016-06-15

    Purpose: Development of an in-house program facilitates a workflow that allows Electronic Portal Imaging Device (EPID) patient-specific quality assurance (QA) measurements to be acquired and analyzed in the Portal Dosimetry Application (Varian Medical Systems, Palo Alto, CA) using a non-Aria Record and Verify (R&V) system (MOSAIQ, Elekta, Crawley, UK) to deliver beams in standard clinical treatment mode. Methods: Initial calibration of an in-house software tool includes characterization of EPID dosimetry parameters by importing DICOM images of varying delivered MUs to determine linear mapping factors in order to convert image pixel values to Varian-defined Calibrated Units (CU). Using this information, the Portal Dose Image Prediction (PDIP) algorithm was commissioned by converting images of various field sizes to output factors using the Eclipse Scripting Application Programming Interface (ESAPI) and converting a delivered configuration fluence to absolute dose units. To verify the algorithm configuration, an integrated image was acquired, exported directly from the R&V client, automatically converted to a compatible, calibrated dosimetric image, and compared to a PDIP calculated image using Varian’s Portal Dosimetry Application. Results: For two C-Series and one TrueBeam Varian linear accelerators, gamma comparisons (global 3%/3 mm) of PDIP algorithm predicted dosimetric images and images converted via the in-house system demonstrated agreement for ≥99% of all pixels, exceeding vendor-recommended commissioning guidelines. Conclusion: Combinations of a programmatic image conversion tool and ESAPI allow for an efficient and accurate method of patient IMRT QA incorporating a third-party R&V system.
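
A global 3%/3 mm gamma comparison of the kind used for this commissioning check can be sketched as a brute-force search over the evaluated dose plane. This toy version on a synthetic dose distribution illustrates the criterion only; it is not Varian's implementation.

```python
# Toy global gamma analysis (3% dose difference / 3 mm distance-to-agreement)
# between a reference and an evaluated 2D dose plane, by brute-force search.
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dd=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global criterion)."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    norm = dd * ref.max()               # global dose-difference normalisation
    passed = 0
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            dose2 = (ev - ref[iy, ix]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + dose2 / norm ** 2
            if gamma2.min() <= 1.0:     # best match anywhere on the ev plane
                passed += 1
    return passed / (nx * ny)

ref = np.outer(np.hanning(20), np.hanning(20))   # smooth synthetic "dose"
ev = ref * 1.02                                   # 2% global scaling error
print(gamma_pass_rate(ref, ev, spacing_mm=1.0))   # → 1.0
```

A 2% scaling error passes everywhere under a 3% global criterion; gross errors push the pass rate below the ≥99% acceptance level quoted above.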

  2. Validation of high throughput screening of human sera for detection of anti-PA IgG by Enzyme-Linked Immunosorbent Assay (ELISA) as an emergency response to an anthrax incident.

    PubMed

    Semenova, Vera A; Steward-Clark, Evelene; Maniatis, Panagiotis; Epperson, Monica; Sabnis, Amit; Schiffer, Jarad

    2017-01-01

    To improve surge testing capability for a response to a release of Bacillus anthracis, the CDC anti-Protective Antigen (PA) IgG Enzyme-Linked Immunosorbent Assay (ELISA) was re-designed into a high throughput screening format. The following assay performance parameters were evaluated: goodness of fit (measured as the mean reference standard r²), accuracy (measured as percent error), precision (measured as coefficient of variation (CV)), lower limit of detection (LLOD), lower limit of quantification (LLOQ), dilutional linearity, diagnostic sensitivity (DSN) and diagnostic specificity (DSP). The paired sets of data for each sample were evaluated by Concordance Correlation Coefficient (CCC) analysis. The goodness of fit was 0.999; percent error between the expected and observed concentration for each sample ranged from -4.6% to 14.4%. The coefficient of variation ranged from 9.0% to 21.2%. The assay LLOQ was 2.6 μg/mL. The regression analysis results for dilutional linearity data were r² = 0.952, slope = 1.02 and intercept = -0.03. CCC between assays was 0.974 for the median concentration of serum samples. The accuracy and precision components of CCC were 0.997 and 0.977, respectively. This high throughput screening assay is precise, accurate, sensitive and specific. Anti-PA IgG concentrations determined using two different assays showed high levels of agreement. The method will improve surge testing capability 18-fold from 4 to 72 sera per assay plate. Published by Elsevier Ltd.

  3. Muscle Synergies Facilitate Computational Prediction of Subject-Specific Walking Motions

    PubMed Central

    Meyer, Andrew J.; Eskinazi, Ilan; Jackson, Jennifer N.; Rao, Anil V.; Patten, Carolynn; Fregly, Benjamin J.

    2016-01-01

    Researchers have explored a variety of neurorehabilitation approaches to restore normal walking function following a stroke. However, there is currently no objective means for prescribing and implementing treatments that are likely to maximize recovery of walking function for any particular patient. As a first step toward optimizing neurorehabilitation effectiveness, this study develops and evaluates a patient-specific synergy-controlled neuromusculoskeletal simulation framework that can predict walking motions for an individual post-stroke. The main question we addressed was whether driving a subject-specific neuromusculoskeletal model with muscle synergy controls (5 per leg) facilitates generation of accurate walking predictions compared to a model driven by muscle activation controls (35 per leg) or joint torque controls (5 per leg). To explore this question, we developed a subject-specific neuromusculoskeletal model of a single high-functioning hemiparetic subject using instrumented treadmill walking data collected at the subject’s self-selected speed of 0.5 m/s. The model included subject-specific representations of lower-body kinematic structure, foot–ground contact behavior, electromyography-driven muscle force generation, and neural control limitations and remaining capabilities. Using direct collocation optimal control and the subject-specific model, we evaluated the ability of the three control approaches to predict the subject’s walking kinematics and kinetics at two speeds (0.5 and 0.8 m/s) for which experimental data were available from the subject. We also evaluated whether synergy controls could predict a physically realistic gait period at one speed (1.1 m/s) for which no experimental data were available. All three control approaches predicted the subject’s walking kinematics and kinetics (including ground reaction forces) well for the model calibration speed of 0.5 m/s. 
However, only activation and synergy controls could predict the subject’s walking kinematics and kinetics well for the faster non-calibration speed of 0.8 m/s, with synergy controls predicting the new gait period the most accurately. When used to predict how the subject would walk at 1.1 m/s, synergy controls predicted a gait period close to that estimated from the linear relationship between gait speed and stride length. These findings suggest that our neuromusculoskeletal simulation framework may be able to bridge the gap between patient-specific muscle synergy information and resulting functional capabilities and limitations. PMID:27790612

  4. Disease gene prioritization by integrating tissue-specific molecular networks using a robust multi-network model.

    PubMed

    Ni, Jingchao; Koyuturk, Mehmet; Tong, Hanghang; Haines, Jonathan; Xu, Rong; Zhang, Xiang

    2016-11-10

    Accurately prioritizing candidate disease genes is an important and challenging problem. Various network-based methods have been developed to predict potential disease genes by utilizing the disease similarity network and molecular networks such as protein interaction or gene co-expression networks. Although successful, a common limitation of the existing methods is that they assume all diseases share the same molecular network and a single generic molecular network is used to predict candidate genes for all diseases. However, different diseases tend to manifest in different tissues, and the molecular networks in different tissues are usually different. An ideal method should be able to incorporate tissue-specific molecular networks for different diseases. In this paper, we develop a robust and flexible method to integrate tissue-specific molecular networks for disease gene prioritization. Our method allows each disease to have its own tissue-specific network(s). We formulate the problem of candidate gene prioritization as an optimization problem based on network propagation. When there are multiple tissue-specific networks available for a disease, our method can automatically infer the relative importance of each tissue-specific network. Thus it is robust to the noisy and incomplete network data. To solve the optimization problem, we develop fast algorithms which have linear time complexities in the number of nodes in the molecular networks. We also provide rigorous theoretical foundations for our algorithms in terms of their optimality and convergence properties. Extensive experimental results show that our method can significantly improve the accuracy of candidate gene prioritization compared with the state-of-the-art methods. In our experiments, we compare our methods with 7 popular network-based disease gene prioritization algorithms on diseases from Online Mendelian Inheritance in Man (OMIM) database. 
The experimental results demonstrate that our methods recover true associations more accurately than other methods in terms of AUC values, and the performance differences are significant (with paired t-test p-values less than 0.05). This validates the importance of integrating tissue-specific molecular networks for disease gene prioritization and shows the superiority of our network models and ranking algorithms for this purpose. The source code and datasets are available at http://nijingchao.github.io/CRstar/ .
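
Network propagation of the kind described is commonly implemented as a random walk with restart, in which each iteration costs one sparse matrix-vector product — linear in the number of network edges, in the spirit of the linear-time claim above. A toy sketch with an invented five-gene network and restart probability:

```python
# Random-walk-with-restart propagation on a toy gene network.
# Network, seed gene, and restart probability are all hypothetical.
import numpy as np
from scipy import sparse

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 4)]   # toy network
n = 5
rows, cols = zip(*(edges + [(j, i) for i, j in edges]))    # undirected
A = sparse.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
deg = np.asarray(A.sum(axis=0)).ravel()
W = A @ sparse.diags(1.0 / deg)          # column-stochastic transition matrix

p0 = np.zeros(n); p0[0] = 1.0            # seed: known disease gene 0
alpha, p = 0.5, p0.copy()                # alpha = restart probability
for _ in range(100):                     # iterate to convergence
    p_next = (1 - alpha) * (W @ p) + alpha * p0
    if np.abs(p_next - p).sum() < 1e-10:
        p = p_next
        break
    p = p_next
print(np.round(p, 4))                    # candidate-gene ranking scores
```

Genes are then ranked by their stationary scores; extending this to several tissue-specific networks amounts to propagating on each network and weighting the results.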

  5. Validation of the Filovirus Plaque Assay for Use in Preclinical Studies

    PubMed Central

    Shurtleff, Amy C.; Bloomfield, Holly A.; Mort, Shannon; Orr, Steven A.; Audet, Brian; Whitaker, Thomas; Richards, Michelle J.; Bavari, Sina

    2016-01-01

    A plaque assay for quantitating filoviruses in virus stocks, prepared viral challenge inocula and samples from research animals has recently been fully characterized and standardized for use across multiple institutions performing Biosafety Level 4 (BSL-4) studies. After standardization studies were completed, Good Laboratory Practices (GLP)-compliant plaque assay method validation studies to demonstrate suitability for reliable and reproducible measurement of the Marburg Virus Angola (MARV) variant and Ebola Virus Kikwit (EBOV) variant commenced at the United States Army Medical Research Institute of Infectious Diseases (USAMRIID). The validation parameters tested included accuracy, precision, linearity, robustness, stability of the virus stocks and system suitability. The MARV and EBOV assays were confirmed to be accurate to ±0.5 log10 PFU/mL. Repeatability precision, intermediate precision and reproducibility precision were sufficient to return viral titers with a coefficient of variation (%CV) of ≤30%, deemed acceptable variation for a cell-based bioassay. Intraclass correlation statistical techniques for the evaluation of the assay’s precision when the same plaques were quantitated by two analysts returned values passing the acceptance criteria, indicating high agreement between analysts. The assay was shown to be accurate and specific when run on Nonhuman Primates (NHP) serum and plasma samples diluted in plaque assay medium, with negligible matrix effects. Virus stocks demonstrated stability for freeze-thaw cycles typical of normal usage during assay retests. The results demonstrated that the EBOV and MARV plaque assays are accurate, precise and robust for filovirus titration in samples associated with the performance of GLP animal model studies. PMID:27110807

  6. Development and Validation of Liquid Chromatographic Method for Estimation of Naringin in Nanoformulation

    PubMed Central

    Musmade, Kranti P.; Trilok, M.; Dengale, Swapnil J.; Bhat, Krishnamurthy; Reddy, M. S.; Musmade, Prashant B.; Udupa, N.

    2014-01-01

    A simple, precise, accurate, rapid, and sensitive reverse phase high performance liquid chromatography (RP-HPLC) method with UV detection has been developed and validated for quantification of naringin (NAR) in a novel pharmaceutical formulation. NAR is a polyphenolic flavonoid present in most citrus plants and has a variety of pharmacological activities. Method optimization was carried out by considering various parameters, such as the effect of pH and column type. The analyte was separated by employing a C18 (250.0 × 4.6 mm, 5 μm) column at ambient temperature under isocratic conditions using phosphate buffer (pH 3.5):acetonitrile (75:25% v/v) as the mobile phase, pumped at a flow rate of 1.0 mL/min. UV detection was carried out at 282 nm. The developed method was validated according to ICH guidelines Q2(R1). The method was found to be precise and accurate on statistical evaluation with a linearity range of 0.1 to 20.0 μg/mL for NAR. The intra- and interday precision studies showed good reproducibility with coefficients of variation (CV) less than 1.0%. The mean recovery of NAR was found to be 99.33 ± 0.16%. The proposed method was found to be highly accurate, sensitive, and robust. The proposed liquid chromatographic method was successfully employed for the routine analysis of the compound in the developed novel nanopharmaceuticals. The presence of excipients did not show any interference with the determination of NAR, indicating method specificity. PMID:26556205

  7. An extinction scale-expansion unit for the Beckman DK2 spectrophotometer

    PubMed Central

    Dixon, M.

    1967-01-01

    The paper describes a simple but accurate unit for the Beckman DK2 recording spectrophotometer, whereby any 0·1 section of the extinction (`absorbance') scale may be expanded tenfold, while preserving complete linearity in extinction. PMID:6048800

  8. Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties

    PubMed Central

    Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon

    2014-01-01

    Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency during anterior-posterior stretching. Method: Three materially linear and three materially nonlinear models were created and stretched up to 10 mm in 1 mm increments. Phonation onset pressure (Pon) and fundamental frequency (F0) at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1 mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Results: Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Conclusions: Nonlinear synthetic models appear to more accurately represent the human vocal folds than linear models, especially with respect to F0 response. PMID:22271874

  9. Frequency response of synthetic vocal fold models with linear and nonlinear material properties.

    PubMed

    Shaw, Stephanie M; Thomson, Scott L; Dromey, Christopher; Smith, Simeon

    2012-10-01

    The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Three materially linear and 3 materially nonlinear models were created and stretched up to 10 mm in 1-mm increments. Phonation onset pressure (Pon) and F0 at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1-mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Nonlinear synthetic models appear to more accurately represent the human vocal folds than do linear models, especially with respect to F0 response.

  10. Mandibular canine: A tool for sex identification in forensic odontology.

    PubMed

    Kumawat, Ramniwas M; Dindgire, Sarika L; Gadhari, Mangesh; Khobragade, Pratima G; Kadoo, Priyanka S; Yadav, Pradeep

    2017-01-01

    The aim of this study was to investigate the accuracy of the mandibular canine index (MCI) and mandibular mesiodistal odontometrics for sex identification in the age group of 17-25 years in a central Indian population. The study sample comprised 300 individuals (150 males and 150 females) aged 17 to 25 years from a central Indian population. The maximum mesiodistal diameter of the mandibular canines and the linear distance between the tips of the mandibular canines were measured on the study models using a digital vernier caliper. Overall, sex could be predicted accurately in 79.66% (81.33% of males and 78% of females) of the population by MCI. When mandibular canine width alone was considered for sex identification, the overall accuracy observed was 75% for the right mandibular canine and 73% for the left. Sexual dimorphism of the canine is population specific, and among the Indian population, MCI and the mesiodistal dimension of the mandibular canine can aid in sex determination.
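
As a hypothetical illustration only: taking the MCI as mesiodistal canine width divided by intercanine distance and applying a population-derived cut-off (the 0.274 value below is invented, as are the measurements) gives a one-line classifier of the kind the accuracy figures above describe.

```python
# Hypothetical MCI cut-off classifier; the index definition is assumed
# (canine width / intercanine distance) and the cut-off is invented.
def mci(canine_width_mm, intercanine_mm):
    """Mandibular canine index from two caliper measurements."""
    return canine_width_mm / intercanine_mm

def predict_sex(canine_width_mm, intercanine_mm, cutoff=0.274):
    """Males tend to have a relatively wider canine, hence a larger MCI."""
    return "male" if mci(canine_width_mm, intercanine_mm) > cutoff else "female"

print(predict_sex(7.4, 26.0))   # relatively wide canine
print(predict_sex(6.6, 26.0))   # relatively narrow canine
```

In practice the cut-off is derived from the male and female MCI distributions of the reference population, which is why the abstract stresses that canine dimorphism is population specific.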

  11. On the Predictability of Future Impact in Science

    PubMed Central

    Penner, Orion; Pan, Raj K.; Petersen, Alexander M.; Kaski, Kimmo; Fortunato, Santo

    2013-01-01

    Correctly assessing a scientist's past research impact and potential for future impact is key in recruitment decisions and other evaluation processes. While a candidate's future impact is the main concern for these decisions, most measures only quantify the impact of previous work. Recently, it has been argued that linear regression models are capable of predicting a scientist's future impact. By applying that future impact model to 762 careers drawn from three disciplines: physics, biology, and mathematics, we identify a number of subtle, but critical, flaws in current models. Specifically, cumulative non-decreasing measures like the h-index contain intrinsic autocorrelation, resulting in significant overestimation of their “predictive power”. Moreover, the predictive power of these models depends heavily upon a scientist's career age, producing the least accurate estimates for young researchers. Our results place in doubt the suitability of such models, and indicate that further investigation is required before they can be used in recruiting decisions. PMID:24165898

  12. volBrain: An Online MRI Brain Volumetry System

    PubMed Central

    Manjón, José V.; Coupé, Pierrick

    2016-01-01

    The amount of medical image data produced in clinical and research settings is rapidly growing, resulting in vast amounts of data to analyze. Automatic and reliable quantitative analysis tools, including segmentation, make it possible to analyze brain development and to understand specific patterns of many neurological diseases. This field has recently experienced many advances with successful techniques based on non-linear warping and label fusion. In this work we present a novel and fully automatic pipeline for volumetric brain analysis based on multi-atlas label fusion technology that is able to provide accurate volumetric information at different levels of detail in a short time. This method is available through the volBrain online web interface (http://volbrain.upv.es), which is publicly and freely accessible to the scientific community. Our new framework has been compared with current state-of-the-art methods showing very competitive results. PMID:27512372

  13. Smart phone: a popular device supports amylase activity assay in fisheries research.

    PubMed

    Thongprajukaew, Karun; Choodum, Aree; Sa-E, Barunee; Hayee, Ummah

    2014-11-15

    Colourimetric determinations of amylase activity were developed based on a standard dinitrosalicylic acid (DNS) staining method, using maltose as the analyte. Intensities and absorbances of red, green and blue (RGB) were obtained with iPhone imaging and Adobe Photoshop image analysis. The correlation between green-channel intensity and analyte concentration was highly significant, and the analytical accuracy of the developed method was excellent. The common iPhone has sufficient imaging ability for accurate quantification of maltose concentrations. Detection limits, sensitivity and linearity were comparable to those of a spectrophotometric method, with better inter-day precision. In quantifying amylase specific activity from a commercial source (P>0.02) and fish samples (P>0.05), differences compared with spectrophotometric measurements were not significant. We have demonstrated that iPhone imaging with image analysis in Adobe Photoshop has potential for field and laboratory studies of amylase. Copyright © 2014 Elsevier Ltd. All rights reserved.
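
The calibration step described above reduces to fitting green-channel intensity against maltose standards and inverting the line for an unknown. The intensities below are made-up numbers for illustration, not values from the paper.

```python
# Sketch of a green-channel calibration curve for DNS/maltose colourimetry.
# Intensity values are invented; the image darkens as the DNS colour develops.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])           # maltose standards, mg/mL
green = np.array([200.0, 176.0, 151.0, 103.0, 8.0])  # mean green intensity

slope, intercept = np.polyfit(conc, green, 1)        # linear calibration fit
r2 = np.corrcoef(conc, green)[0, 1] ** 2             # goodness of fit

unknown_green = 130.0                                # sample measurement
unknown_conc = (unknown_green - intercept) / slope   # invert the line
print(round(slope, 2), round(r2, 3), round(unknown_conc, 3))
```

A detection limit can then be estimated in the usual way (for example 3.3 times the blank standard deviation divided by the absolute slope), which is how colourimetric and spectrophotometric methods are compared.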

  14. Arthroplasty Utilization in the United States is Predicted by Age-Specific Population Groups.

    PubMed

    Bashinskaya, Bronislava; Zimmerman, Ryan M; Walcott, Brian P; Antoci, Valentin

    2012-01-01

    Osteoarthritis is a common indication for hip and knee arthroplasty. An accurate assessment of current trends in healthcare utilization as they relate to arthroplasty may predict the needs of a growing elderly population in the United States. First, incidence data were queried from the United States Nationwide Inpatient Sample from 1993 to 2009. Patients undergoing total knee and hip arthroplasty were identified. Then, the United States Census Bureau was queried for population data from the same study period as well as to provide future projections. Arthroplasty incidence followed linear regression models based on the population group >64 years for both hip and knee procedures. Projections for procedure incidence in the year 2050 based on these models were calculated to be 1,859,553 cases (hip) and 4,174,554 cases (knee). The need for hip and knee arthroplasty is expected to grow significantly in the upcoming years, given population growth predictions.

  15. Density determination of nail polishes and paint chips using magnetic levitation

    NASA Astrophysics Data System (ADS)

    Huang, Peggy P.

    Trace evidence is often small, easily overlooked, and difficult to analyze. This study describes a nondestructive method to separate and accurately determine the density of trace evidence samples, specifically nail polishes and paint chips, using magnetic levitation (MagLev). By determining the levitation height of each sample in the MagLev device, the density of the sample is back-extrapolated using a standard density bead linear regression line. The results show that MagLev distinguishes among eight clear nail polishes, including samples from the same manufacturer; separates select colored nail polishes from the same manufacturer; can determine the density range of household paint chips; and shows limited levitation for unknown paint chips. MagLev provides a simple, affordable, and nondestructive means of determining density. The addition of co-solutes to the paramagnetic solution to expand the density range may result in greater discriminatory power and separation and lead to further applications of this technique.
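
The back-extrapolation step can be sketched as fitting a standard-bead calibration line (levitation height versus known density) and reading a sample's density off the fitted line. The bead and sample values below are invented for the sketch.

```python
# MagLev density back-extrapolation sketch: levitation height varies
# (approximately) linearly with density, so known-density beads define
# a calibration line. All numbers here are invented.
import numpy as np

bead_density = np.array([1.02, 1.06, 1.10, 1.14])   # g/cm^3 standards
bead_height = np.array([38.1, 28.4, 18.2, 8.5])     # levitation height, mm

# Fit density as a linear function of height (denser samples sit lower).
slope, intercept = np.polyfit(bead_height, bead_density, 1)

sample_height = 22.9                                 # e.g. a nail-polish chip
sample_density = slope * sample_height + intercept
print(round(sample_density, 3))
```

Because the readout is just a levitation height, the sample is recovered unchanged afterwards, which is what makes the method nondestructive.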

  16. volBrain: An Online MRI Brain Volumetry System.

    PubMed

    Manjón, José V; Coupé, Pierrick

    2016-01-01

    The amount of medical image data produced in clinical and research settings is rapidly growing, resulting in vast amounts of data to analyze. Automatic and reliable quantitative analysis tools, including segmentation, make it possible to analyze brain development and to understand specific patterns of many neurological diseases. This field has recently experienced many advances with successful techniques based on non-linear warping and label fusion. In this work we present a novel and fully automatic pipeline for volumetric brain analysis based on multi-atlas label fusion technology that is able to provide accurate volumetric information at different levels of detail in a short time. This method is available through the volBrain online web interface (http://volbrain.upv.es), which is publicly and freely accessible to the scientific community. Our new framework has been compared with current state-of-the-art methods showing very competitive results.

  17. Nonlinear Analyte Concentration Gradients for One-Step Kinetic Analysis Employing Optical Microring Resonators

    PubMed Central

    Marty, Michael T.; Kuhnline Sloan, Courtney D.; Bailey, Ryan C.; Sligar, Stephen G.

    2012-01-01

    Conventional methods to probe the binding kinetics of macromolecules at biosensor surfaces employ a stepwise titration of analyte concentrations and measure the association and dissociation to the immobilized ligand at each concentration level. It has previously been shown that kinetic rates can be measured in a single step by monitoring binding as the analyte concentration increases over time in a linear gradient. We report here the application of nonlinear analyte concentration gradients for determining kinetic rates and equilibrium binding affinities in a single experiment. A versatile nonlinear gradient maker is presented, which is easily applied to microfluidic systems. Simulations validate that accurate kinetic rates can be extracted for a wide range of association and dissociation rates, gradient slopes and curvatures, and with models for mass transport. The nonlinear analyte gradient method is demonstrated with a silicon photonic microring resonator platform to measure prostate specific antigen-antibody binding kinetics. PMID:22686186

  18. Nonlinear analyte concentration gradients for one-step kinetic analysis employing optical microring resonators.

    PubMed

    Marty, Michael T; Sloan, Courtney D Kuhnline; Bailey, Ryan C; Sligar, Stephen G

    2012-07-03

    Conventional methods to probe the binding kinetics of macromolecules at biosensor surfaces employ a stepwise titration of analyte concentrations and measure the association and dissociation to the immobilized ligand at each concentration level. It has previously been shown that kinetic rates can be measured in a single step by monitoring binding as the analyte concentration increases over time in a linear gradient. We report here the application of nonlinear analyte concentration gradients for determining kinetic rates and equilibrium binding affinities in a single experiment. A versatile nonlinear gradient maker is presented, which is easily applied to microfluidic systems. Simulations validate that accurate kinetic rates can be extracted for a wide range of association and dissociation rates, gradient slopes, and curvatures, and with models for mass transport. The nonlinear analyte gradient method is demonstrated with a silicon photonic microring resonator platform to measure prostate specific antigen-antibody binding kinetics.
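
The binding response under a time-varying analyte concentration follows the standard 1:1 interaction model dB/dt = ka·C(t)·(Bmax − B) − kd·B; fitting B(t) recorded during a single gradient yields ka and kd in one experiment. A simple Euler integration under an assumed quadratic ramp (the rate constants are illustrative, not the paper's PSA-antibody values):

```python
# Euler-integrated 1:1 binding under a nonlinear analyte gradient C(t).
# Rate constants and gradient shape are illustrative assumptions only.
import numpy as np

ka, kd, bmax = 1e5, 1e-3, 1.0        # 1/(M s), 1/s, arbitrary response units
dt, t_end = 0.1, 600.0
t = np.arange(0.0, t_end, dt)
conc = 1e-8 * (t / t_end) ** 2       # nonlinear (quadratic) ramp up to 10 nM

b = np.zeros_like(t)                 # bound-complex response over time
for i in range(1, len(t)):
    db = ka * conc[i - 1] * (bmax - b[i - 1]) - kd * b[i - 1]
    b[i] = b[i - 1] + db * dt

print(round(b[-1], 4))               # response at the end of the ramp
```

In the one-step method, this forward model (with C(t) set to the delivered gradient) is fitted to the measured sensor trace to extract ka and kd simultaneously.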

  19. Advances in Analysis of Human Milk Oligosaccharides

    PubMed Central

    Ruhaak, L. Renee; Lebrilla, Carlito B.

    2012-01-01

    Oligosaccharides in human milk strongly influence the composition of the gut microflora of neonates. Because it is now clear that the microflora play important roles in the development of the infant immune system, human milk oligosaccharides (HMO) are studied frequently. Milk samples contain complex mixtures of HMO, usually comprising several isomeric structures that can be either linear or branched. Traditionally, HMO profiling was performed using HPLC with fluorescence or UV detection. By using porous graphitic carbon liquid chromatography MS, it is now possible to separate and identify most of the isomers, facilitating linkage-specific analysis. Matrix-assisted laser desorption ionization time-of-flight analysis allows fast profiling, but does not allow isomer separation. Novel MS fragmentation techniques have facilitated structural characterization of HMO that are present at lower concentrations. These techniques now facilitate more accurate studies of HMO consumption as well as Lewis blood group determinations. PMID:22585919

  20. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide.

    PubMed

    Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include the Derivative Ratio Zero-Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated over the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
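
The PCR step mentioned above can be sketched on synthetic data: project mean-centered "spectra" onto a few leading principal components and regress the analyte concentration on the resulting scores. The spectra, band shape, and noise level below are invented for illustration, not the authors' data.

```python
import numpy as np

# Minimal principal component regression (PCR) sketch on synthetic spectra.
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 40, 200
conc = rng.uniform(2, 32, n_samples)               # analyte concentrations
peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 100) / 10.0) ** 2)
X = np.outer(conc, peak) + 0.01 * rng.standard_normal((n_samples, n_wavelengths))

# PCR: project centered spectra onto leading principal components,
# then do ordinary least squares in the reduced score space.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = Xc @ Vt[:k].T                             # sample scores on k PCs
design = np.column_stack([scores, np.ones(n_samples)])
coef, *_ = np.linalg.lstsq(design, conc, rcond=None)
pred = design @ coef
```

PLS differs in that the components are chosen to maximize covariance with the concentrations rather than spectral variance alone.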

  1. Three different spectrophotometric methods manipulating ratio spectra for determination of binary mixture of Amlodipine and Atorvastatin

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeiny, Badr A.

    2011-12-01

    Three simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra are developed for the simultaneous determination of Amlodipine besylate (AM) and Atorvastatin calcium (AT) in tablet dosage forms. The first method is first derivative of the ratio spectra (1DD), the second is ratio subtraction and the third is mean centering of ratio spectra. The calibration curves are linear over the concentration ranges of 3-40 and 8-32 μg/ml for AM and AT, respectively. These methods are tested by analyzing synthetic mixtures of the above drugs and are applied to a commercial pharmaceutical preparation of the subject drugs. The standard deviation is <1.5 in the assay of raw materials and tablets. The methods are validated as per ICH guidelines, and accuracy, precision, repeatability and robustness are found to be within the acceptable limits.
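
The 1DD principle can be demonstrated on synthetic spectra: dividing a binary-mixture spectrum by the pure spectrum of one component turns that component's contribution into a constant, which differentiation removes, leaving a signal proportional to the other component's concentration. The Gaussian "absorption bands" and concentrations below are purely illustrative.

```python
import numpy as np

# Sketch of the first-derivative-of-ratio-spectra (1DD) idea for a binary
# mixture M(wl) = cA*A(wl) + cB*B(wl).
wl = np.linspace(200, 400, 1000)
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
spec_A, spec_B = band(260, 15), band(320, 20)       # pure-component spectra

cA, cB = 0.7, 1.3
mixture = cA * spec_A + cB * spec_B

# Divide the mixture by the spectrum of B: the B term becomes the constant
# cB, and differentiation w.r.t. wavelength removes it, isolating A.
ratio = mixture / spec_B
deriv = np.gradient(ratio, wl)
deriv_A = np.gradient(spec_A / spec_B, wl)

# The derivative ratio spectrum equals cA times that of pure A, so reading
# off their ratio at any wavelength recovers cA (index 400 ~ 280 nm here).
est_cA = deriv[400] / deriv_A[400]
```

In practice a calibration curve of derivative amplitude versus concentration is built at a wavelength where the signal is maximal.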

  2. Three different methods for determination of binary mixture of Amlodipine and Atorvastatin using dual wavelength spectrophotometry

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-03-01

    Three simple, specific, accurate and precise spectrophotometric methods depending on the proper selection of two wavelengths are developed for the simultaneous determination of Amlodipine besylate (AML) and Atorvastatin calcium (ATV) in tablet dosage forms. The first method is the new Ratio Difference method, the second is the Bivariate method and the third is the Absorbance Ratio method. The calibration curves are linear over the concentration ranges of 4-40 and 8-32 μg/mL for AML and ATV, respectively. These methods are tested by analyzing synthetic mixtures of the above drugs and are applied to a commercial pharmaceutical preparation of the subject drugs. The methods are validated according to the ICH guidelines, and accuracy, precision, repeatability and robustness are found to be within the acceptable limits. The mathematical explanation of the procedures is illustrated.

  3. Use of LANDSAT-1 data for the detection and mapping of saline seeps in Montana

    NASA Technical Reports Server (NTRS)

    May, G. A. (Principal Investigator); Petersen, G. W.

    1976-01-01

    The author has identified the following significant results. April, May, and August are the best times to detect saline seeps. Specific times within these months would be dependent upon weather, phenology, and growth conditions. Saline seeps can be efficiently and accurately mapped, within resolution capabilities, from merged May and August LANDSAT 1 data. Seeps were mapped by detecting salt crusts in the spring and indicator plants in the fall. These indicator plants were kochia, inkweed, and foxtail barley. The total hectares of the mapped saline seeps were calculated and tabulated. Saline seeps less than two hectares in size or that have linear configurations less than 200 meters in width were not mapped using the LANDSAT 1 data. Saline seep signatures developed in the Coffee Creek test site were extended to map saline seeps located outside this area.

  4. A Self-Calibrating Radar Sensor System for Measuring Vital Signs.

    PubMed

    Huang, Ming-Chun; Liu, Jason J; Xu, Wenyao; Gu, Changzhan; Li, Changzhi; Sarrafzadeh, Majid

    2016-04-01

    Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.

  5. Non-Linear System Identification for Aeroelastic Systems with Application to Experimental Data

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2008-01-01

    Representation and identification of a non-linear aeroelastic pitch-plunge system as a model of the NARMAX class is considered. A non-linear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics, and their properties are demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (i) the outputs of the NARMAX model match closely those generated using continuous-time methods and (ii) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
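
The parameter-estimation step for a NARMAX-type model can be sketched as follows: simulate a known nonlinear difference equation, build a regressor matrix from lagged inputs, outputs, and a nonlinear term, and recover the coefficients by least squares. The toy model and coefficients below are invented for illustration and are far simpler than the pitch-plunge dynamics in the paper.

```python
import numpy as np

# Toy NARX-style identification: simulate y[k] = a*y[k-1] + b*u[k-1]
# + c*y[k-1]^2, then recover (a, b, c) by linear least squares.
rng = np.random.default_rng(1)
N = 500
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.5 * y[k - 1] + 0.3 * u[k - 1] - 0.1 * y[k - 1] ** 2

# Regressor matrix of lagged outputs/inputs plus one polynomial term.
Phi = np.column_stack([y[:-1], u[:-1], y[:-1] ** 2])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
```

With noiseless data the estimates are exact; real NARMAX identification also selects which regressor terms to include and models the noise process.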

  6. Study on power grid characteristics in summer based on Linear regression analysis

    NASA Astrophysics Data System (ADS)

    Tang, Jin-hui; Liu, You-fei; Liu, Juan; Liu, Qiang; Liu, Zhuan; Xu, Xi

    2018-05-01

    Correlation analysis of power load and temperature is a precondition for accurate load prediction, and it has been studied extensively. This paper constructs a linear correlation model between temperature and power load and then examines the correlation of fault maintenance work orders with the load. Data from Jiangxi province in the summer of 2017, including temperature, power load, and fault maintenance work orders, were used for the analysis and mining. The linear regression models established here can support electricity load growth forecasting, fault repair work order review, and analysis of distribution network operating weaknesses.
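
The core load-temperature step can be sketched as an ordinary least-squares fit of load against temperature. The numbers below are synthetic stand-ins, not the Jiangxi data.

```python
import numpy as np

# Fit load = slope*T + intercept and report the correlation strength.
rng = np.random.default_rng(2)
temp = rng.uniform(25, 40, 60)                    # daily peak temperature, degC
load = 120.0 + 8.5 * temp + rng.normal(0, 5, 60)  # assumed load response, MW

slope, intercept = np.polyfit(temp, load, 1)      # least-squares line
r = np.corrcoef(temp, load)[0, 1]                 # Pearson correlation
```

The fitted slope is the "MW per degree" sensitivity that feeds a temperature-driven load forecast.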

  7. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models.

    PubMed

    Shah, A A; Xing, W W; Triantafyllidis, V

    2017-04-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
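
The POD step in the abstract above can be sketched with a snapshot SVD: collect solution snapshots as columns, take the SVD, and keep the leading left singular vectors as the reduced basis. The synthetic travelling-wave "solutions" below stand in for a real PDE solve.

```python
import numpy as np

# Snapshot matrix: each column is the solution at one time instant.
x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 1.0, 50)
snapshots = np.column_stack([np.sin(2 * np.pi * (x - t)) for t in times])

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]      # a travelling sine is exactly rank 2 (sin/cos split)

# Project snapshots onto the basis and measure the reconstruction error.
recon = basis @ (basis.T @ snapshots)
rel_err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
```

The singular-value decay tells how many modes the reduced-order model needs; the paper's contribution is emulating how this basis changes with the PDE parameters.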

  8. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models

    PubMed Central

    Xing, W. W.; Triantafyllidis, V.

    2017-01-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach. PMID:28484327

  9. An efficient direct solver for rarefied gas flows with arbitrary statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, Manuel A., E-mail: f99543083@ntu.edu.tw; Yang, Jaw-Yen, E-mail: yangjy@iam.ntu.edu.tw; Center of Advanced Study in Theoretical Science, National Taiwan University, Taipei 10167, Taiwan

    2016-01-15

    A new numerical methodology with a unified treatment is presented to solve the Boltzmann–BGK equation of gas dynamics for classical and quantum gases described by the Bose–Einstein and Fermi–Dirac statistics. Utilizing a class of globally stiffly accurate implicit–explicit Runge–Kutta schemes for the temporal evolution, combined with the discrete ordinate method for the quadratures in momentum space and the weighted essentially non-oscillatory method for the spatial discretization, the proposed scheme is asymptotic-preserving and requires neither a non-linear solver nor knowledge of the fugacity and temperature to capture the flow structures in the hydrodynamic (Euler) limit. The proposed treatment overcomes the limitations found in the work by Yang and Muljadi (2011) [33] due to the non-linear nature of the quantum relations, and can be applied to studying the dynamics of a gas with internal degrees of freedom with correct values of the ratio of specific heats across flow regimes, for all Knudsen numbers and energy wavelengths. The present methodology is numerically validated with the unified treatment on the one-dimensional shock tube problem and the two-dimensional Riemann problems for gases of arbitrary statistics. Descriptions of ideal quantum gases including rotational degrees of freedom have been successfully achieved under the proposed methodology.

  10. Part mutual information for quantifying direct associations in networks.

    PubMed

    Zhao, Juan; Zhou, Yiwei; Zhang, Xiujun; Chen, Luonan

    2016-05-03

    Quantitatively identifying direct dependencies between variables is an important task in data analysis, in particular for reconstructing various types of networks and causal relations in science and engineering. One of the most widely used criteria is partial correlation, but it can measure only linear direct associations and misses nonlinear ones. Conditional mutual information (CMI), based on conditional independence, can quantify nonlinear direct relationships among variables from observed data and is superior to linear measures, but it suffers from serious underestimation, in particular for variables with tight associations in a network, which severely limits its applications. In this work, we propose a new concept, "partial independence," with a new measure, "part mutual information" (PMI), which not only overcomes the underestimation problem of CMI but also retains the quantification properties of both mutual information (MI) and CMI. Specifically, we first define PMI to measure nonlinear direct dependencies between variables and then derive its relations with MI and CMI. Finally, we use a number of simulated datasets as benchmark examples to numerically demonstrate PMI features, and real gene expression data from Escherichia coli and yeast to reconstruct gene regulatory networks, all of which validate the advantages of PMI for accurately quantifying nonlinear direct associations in networks.
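
The linear baseline that PMI and CMI generalize, partial correlation, can be sketched directly: for a chain X → Z → Y, the marginal correlation of X and Y is large, but after regressing out Z it should vanish, correctly indicating no direct X → Y edge. The synthetic data below are for illustration only.

```python
import numpy as np

# Chain network X -> Z -> Y with Gaussian noise.
rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)
z = 0.9 * x + 0.4 * rng.standard_normal(n)
y = 0.8 * z + 0.4 * rng.standard_normal(n)

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c from both."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

direct = np.corrcoef(x, y)[0, 1]      # sizeable marginal correlation
partial = partial_corr(x, y, z)       # ~0: no direct X -> Y edge
```

PMI plays the same role but detects nonlinear direct dependencies, which this linear residual trick would miss.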

  11. Unique attributes of cyanobacterial metabolism revealed by improved genome-scale metabolic modeling and essential gene analysis

    DOE PAGES

    Broddrick, Jared T.; Rubin, Benjamin E.; Welkie, David G.; ...

    2016-12-20

    The model cyanobacterium, Synechococcus elongatus PCC 7942, is a genetically tractable obligate phototroph that is being developed for the bioproduction of high-value chemicals. Genome-scale models (GEMs) have been successfully used to assess and engineer cellular metabolism; however, GEMs of phototrophic metabolism have been limited by the lack of experimental datasets for model validation and the challenges of incorporating photon uptake. In this paper, we develop a GEM of metabolism in S. elongatus using random barcode transposon site sequencing (RB-TnSeq) essential gene and physiological data specific to photoautotrophic metabolism. The model explicitly describes photon absorption and accounts for shading, resulting in the characteristic linear growth curve of photoautotrophs. GEM predictions of gene essentiality were compared with data obtained from recent dense-transposon mutagenesis experiments. This dataset allowed major improvements to the accuracy of the model. Furthermore, discrepancies between GEM predictions and the in vivo dataset revealed biological characteristics, such as the importance of a truncated, linear TCA pathway, low flux toward amino acid synthesis from photorespiration, and knowledge gaps within nucleotide metabolism. Finally, coupling of strong experimental support and photoautotrophic modeling methods thus resulted in a highly accurate model of S. elongatus metabolism that highlights previously unknown areas of S. elongatus biology.
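
The core computation behind GEM analyses like this one is flux balance analysis: maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds, solved as a linear program. The three-reaction toy network below is invented for illustration and has nothing to do with the S. elongatus reconstruction itself.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance analysis (FBA) sketch.
#            R1: -> A      R2: A -> B     R3: B -> (biomass)
S = np.array([[1.0, -1.0,  0.0],   # metabolite A balance
              [0.0,  1.0, -1.0]])  # metabolite B balance
bounds = [(0, 10), (0, None), (0, None)]   # uptake R1 capped at 10 units

# linprog minimizes, so negate the biomass flux v3 to maximize it.
res = linprog(c=[0, 0, -1.0], A_eq=S, b_eq=np.zeros(2),
              bounds=bounds, method="highs")
growth = -res.fun            # optimal biomass flux, limited by uptake
```

Gene essentiality predictions come from re-solving such LPs with the reactions of a knocked-out gene constrained to zero and checking whether growth collapses.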

  12. Unique attributes of cyanobacterial metabolism revealed by improved genome-scale metabolic modeling and essential gene analysis

    PubMed Central

    Broddrick, Jared T.; Rubin, Benjamin E.; Welkie, David G.; Du, Niu; Mih, Nathan; Diamond, Spencer; Lee, Jenny J.; Golden, Susan S.; Palsson, Bernhard O.

    2016-01-01

    The model cyanobacterium, Synechococcus elongatus PCC 7942, is a genetically tractable obligate phototroph that is being developed for the bioproduction of high-value chemicals. Genome-scale models (GEMs) have been successfully used to assess and engineer cellular metabolism; however, GEMs of phototrophic metabolism have been limited by the lack of experimental datasets for model validation and the challenges of incorporating photon uptake. Here, we develop a GEM of metabolism in S. elongatus using random barcode transposon site sequencing (RB-TnSeq) essential gene and physiological data specific to photoautotrophic metabolism. The model explicitly describes photon absorption and accounts for shading, resulting in the characteristic linear growth curve of photoautotrophs. GEM predictions of gene essentiality were compared with data obtained from recent dense-transposon mutagenesis experiments. This dataset allowed major improvements to the accuracy of the model. Furthermore, discrepancies between GEM predictions and the in vivo dataset revealed biological characteristics, such as the importance of a truncated, linear TCA pathway, low flux toward amino acid synthesis from photorespiration, and knowledge gaps within nucleotide metabolism. Coupling of strong experimental support and photoautotrophic modeling methods thus resulted in a highly accurate model of S. elongatus metabolism that highlights previously unknown areas of S. elongatus biology. PMID:27911809

  13. Inflammatory activity in Crohn disease: ultrasound findings.

    PubMed

    Migaleddu, Vincenzo; Quaia, Emilio; Scano, Domenico; Virgilio, Giuseppe

    2008-01-01

    Recent years have seen the introduction of new technologies for the ultrasound examination of bowel disease: high-frequency probes (US), highly sensitive color and power Doppler units (CD-US), and non-linear techniques that optimize the detection of contrast agents. Contrast-enhanced ultrasound (CE-US) in particular improves the sonographic evaluation of inflammatory activity in Crohn disease. CE-US has become a routine imaging modality in clinical practice for the evaluation of parenchymal organs thanks to new-generation microbubble contrast agents, which persist in the bloodstream for several minutes after intravenous injection. High-frequency, contrast-specific US techniques provide accurate depiction of small bowel wall perfusion owing to the extremely high sensitivity to the non-linear signals produced by microbubble insonation. In Crohn's disease, CE-US may characterize bowel wall thickening by differentiating fibrosis from edema and may grade inflammatory disease activity by assessing the presence and distribution of vascularity within the layers of the bowel wall (submucosa alone or the entire bowel wall). Peri-intestinal inflammatory involvement can also be characterized. CE-US can provide prognostic data concerning clinical recurrence of the inflammatory disease and can evaluate the efficacy of drug treatments.

  14. Development and validation of a reversed-phase HPLC method for simultaneous estimation of ambroxol hydrochloride and azithromycin in tablet dosage form.

    PubMed

    Shaikh, K A; Patil, S D; Devkhile, A B

    2008-12-15

    A simple, precise and accurate reversed-phase liquid chromatographic method has been developed for the simultaneous estimation of ambroxol hydrochloride and azithromycin in tablet formulations. The chromatographic separation was achieved on an Xterra RP18 (250 mm x 4.6 mm, 5 μm) analytical column. A mixture of acetonitrile and 30 mM dipotassium phosphate (50:50, v/v, pH 9.0) was used as the mobile phase at a flow rate of 1.7 ml/min, with detection at 215 nm. The retention times of ambroxol and azithromycin were found to be 5.0 and 11.5 min, respectively. The proposed method was validated for specificity, linearity, accuracy, precision, limit of detection, limit of quantitation and robustness. The linear dynamic ranges were 30-180 and 250-1500 μg/ml for ambroxol hydrochloride and azithromycin, respectively. The percentage recoveries obtained for ambroxol hydrochloride and azithromycin were 99.40 and 99.90%, respectively. The limits of detection and quantitation were 0.8 and 2.3 μg/ml for azithromycin and 0.004 and 0.01 μg/ml for ambroxol hydrochloride, respectively. The developed method can be used for routine quality control analysis of the titled drugs in combination in tablet formulations.
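
The linearity and detection-limit figures in such validations typically come from a calibration line, with LOD = 3.3σ/S and LOQ = 10σ/S (slope S, residual standard deviation σ), following the ICH convention. The calibration data below are synthetic, chosen only to mirror the ambroxol range.

```python
import numpy as np

# ICH-style LOD/LOQ sketch from a synthetic calibration line.
rng = np.random.default_rng(4)
conc = np.array([30, 60, 90, 120, 150, 180], dtype=float)   # ug/ml levels
area = 12.0 * conc + 5.0 + rng.normal(0, 8.0, conc.size)    # peak areas

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = np.sqrt(np.sum(resid**2) / (conc.size - 2))         # residual SD

lod = 3.3 * sigma / slope      # limit of detection, ug/ml
loq = 10.0 * sigma / slope     # limit of quantitation, ug/ml
```

Recovery studies then spike known amounts into placebo and compare found versus added concentrations on this same line.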

  15. Multispectral photoacoustic tomography for detection of small tumors inside biological tissues

    NASA Astrophysics Data System (ADS)

    Hirasawa, Takeshi; Okawa, Shinpei; Tsujita, Kazuhiro; Kushibiki, Toshihiro; Fujita, Masanori; Urano, Yasuteru; Ishihara, Miya

    2018-02-01

    Visualization of small tumors inside biological tissue is important in cancer treatment because it promotes accurate surgical resection and enables monitoring of therapeutic effect. For sensitive detection of tumors, we have been developing photoacoustic (PA) imaging techniques to visualize tumor-specific contrast agents, and have already succeeded in imaging a subcutaneous tumor of a mouse using these contrast agents. To image tumors inside biological tissues, the imaging depth and the sensitivity had to be improved. In this study, to extend imaging depth, we developed a PA tomography (PAT) system that can image an entire cross section of a mouse. To improve sensitivity, we examined the use of a P(VDF-TrFE) linear-array acoustic sensor that can detect PA signals over a wide range of frequencies. Because the PA signals produced by low-absorbance optical absorbers shift to lower frequencies, we hypothesized that detecting low-frequency PA signals improves sensitivity to low-absorbance optical absorbers. We built a PAT system with both a PZT linear-array acoustic sensor and the P(VDF-TrFE) sensor, and performed experiments using tissue-mimicking phantoms to evaluate the lower detection limits of absorbance. As a result, PAT images calculated from the low-frequency components of PA signals detected by the P(VDF-TrFE) sensor could visualize optical absorbers with lower absorbance.

  16. Unique attributes of cyanobacterial metabolism revealed by improved genome-scale metabolic modeling and essential gene analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broddrick, Jared T.; Rubin, Benjamin E.; Welkie, David G.

    The model cyanobacterium, Synechococcus elongatus PCC 7942, is a genetically tractable obligate phototroph that is being developed for the bioproduction of high-value chemicals. Genome-scale models (GEMs) have been successfully used to assess and engineer cellular metabolism; however, GEMs of phototrophic metabolism have been limited by the lack of experimental datasets for model validation and the challenges of incorporating photon uptake. In this paper, we develop a GEM of metabolism in S. elongatus using random barcode transposon site sequencing (RB-TnSeq) essential gene and physiological data specific to photoautotrophic metabolism. The model explicitly describes photon absorption and accounts for shading, resulting in the characteristic linear growth curve of photoautotrophs. GEM predictions of gene essentiality were compared with data obtained from recent dense-transposon mutagenesis experiments. This dataset allowed major improvements to the accuracy of the model. Furthermore, discrepancies between GEM predictions and the in vivo dataset revealed biological characteristics, such as the importance of a truncated, linear TCA pathway, low flux toward amino acid synthesis from photorespiration, and knowledge gaps within nucleotide metabolism. Finally, coupling of strong experimental support and photoautotrophic modeling methods thus resulted in a highly accurate model of S. elongatus metabolism that highlights previously unknown areas of S. elongatus biology.

  17. Comparison of salivary collection and processing methods for quantitative HHV-8 detection.

    PubMed

    Speicher, D J; Johnson, N W

    2014-10-01

    Saliva is a proven diagnostic fluid for the qualitative detection of infectious agents, but the accuracy of viral load determinations is unknown. Stabilising fluids impede nucleic acid degradation, compared with collection onto ice and then freezing, and we have shown that the DNA Genotek P-021 prototype kit (P-021) can produce high-quality DNA after 14 months of storage at room temperature. Here we evaluate the quantitative capability of 10 collection/processing methods. Unstimulated whole-mouth fluid was spiked with a mixture of HHV-8 cloned constructs, 10-fold serial dilutions were produced, and samples were extracted and then examined with quantitative PCR (qPCR). Calibration curves were compared by linear regression and qPCR dynamics. All methods extracted with commercial spin columns produced linear calibration curves with a large dynamic range and gave accurate viral loads. Ethanol precipitation of the P-021 does not produce a linear standard curve, and virus is lost in the cell pellet. DNA extractions from the P-021 using commercial spin columns produced linear standard curves with a wide dynamic range and an excellent limit of detection. When extracted with spin columns, the P-021 enables accurate viral loads down to 23 copies μl⁻¹ of DNA. The quantitative and long-term storage capability of this system makes it ideal for the study of salivary DNA viruses in resource-poor settings. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
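
The calibration-curve comparison above rests on standard qPCR math: Ct values are linear in log10 of input copies, the slope gives the amplification efficiency E = 10^(−1/slope) − 1 (slope ≈ −3.32 at 100% efficiency), and unknowns are quantified by inverting the line. The Ct values below are idealized for illustration, not the study's data.

```python
import numpy as np

# Ideal dilution series: perfect doubling per cycle (100% efficiency).
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2, 1e1])
ct = 38.0 - 3.3219 * (np.log10(copies) - 1.0)   # -3.3219 = -1/log10(2)

# Standard curve: regress Ct on log10(copies).
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0        # ~1.0 means doubling/cycle

# Quantify an unknown sample from its Ct by inverting the calibration line.
unknown_ct = 30.0
est_copies = 10.0 ** ((unknown_ct - intercept) / slope)
```

A collection method that degrades DNA shows up here as a flattened or non-linear standard curve and a biased `est_copies`.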

  18. Nonaxisymmetric Dynamic Instabilities of Rotating Polytropes. II. Torques, Bars, and Mode Saturation with Applications to Protostars and Fizzlers

    NASA Astrophysics Data System (ADS)

    Imamura, James N.; Durisen, Richard H.; Pickett, Brian K.

    2000-01-01

    Dynamic nonaxisymmetric instabilities in rapidly rotating stars and protostars have a range of potential applications in astrophysics, including implications for binary formation during protostellar cloud collapse and for the possibility of aborted collapse to neutron star densities at late stages of stellar evolution ("fizzlers"). We have recently presented detailed linear analyses for polytropes of the most dynamically unstable global modes, the barlike modes. These produce bar distortions in the regions near the rotation axis but have trailing spiral arms toward the equator. In this paper, we use our linear eigenfunctions to predict the early nonlinear behavior of the dynamic instability and compare these "quasi-linear" predictions with several fully nonlinear hydrodynamics simulations. The comparisons demonstrate that the nonlinear saturation of the barlike instability is due to the self-interaction gravitational torques between the growing central bar and the spiral arms, where angular momentum is transferred outward from bar to arms. We also find a previously unsuspected resonance condition that accurately predicts the mass of the bar regions in our own simulations and in those published by other researchers. The quasi-linear theory makes other accurate predictions about consequences of instability, including properties of possible end-state bars and increases in central density, which can be large under some conditions. We discuss in some detail the application of our results to binary formation during protostellar collapse and to the formation of massive rotating black holes.

  19. Effect of removing the common mode errors on linear regression analysis of noise amplitudes in position time series of a regional GPS network & a case study of GPS stations in Southern California

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Ma, Jun; Li, Zhao; Zhou, Xiaohui; Zhou, Boye

    2018-05-01

    The analysis of the correlations between the noise in different components of GPS stations has positive significance for those trying to obtain more accurate uncertainties of station velocities. Previous research into noise in GPS position time series focused mainly on single-component evaluation, which affects the acquisition of precise station positions, the velocity field, and its uncertainty. In this study, before and after removing the common-mode error (CME), we performed one-dimensional linear regression analysis of the noise amplitude vectors in different components of 126 GPS stations in Southern California, modelling the noise as a combination of white noise, flicker noise, and random walk noise. The results show that, on the one hand, there are above-moderate degrees of correlation between the white noise amplitude vectors in all components of the stations before and after removal of the CME, while the correlations between flicker noise amplitude vectors in horizontal and vertical components are enhanced from uncorrelated to moderately correlated by removing the CME. On the other hand, the significance tests show that all of the obtained linear regression equations, each expressing the noise amplitude in one component as a function of that in another, are of practical value after removing the CME. According to the noise amplitude estimates in two components and the linear regression equations, more accurate noise amplitudes can be acquired in the two components.
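
A common way to remove the CME from a regional network, which the paper's preprocessing presupposes, is simple "stacking": estimate the CME at each epoch as the mean residual across stations and subtract it. The synthetic series below stand in for real GPS residuals; the random-walk CME and noise levels are assumptions for illustration.

```python
import numpy as np

# Synthetic network: every station sees its own noise plus a shared CME.
rng = np.random.default_rng(5)
n_epochs, n_stations = 1000, 20
cme = np.cumsum(rng.normal(0, 0.2, n_epochs))          # shared correlated error
local = rng.normal(0, 1.0, (n_stations, n_epochs))     # station-specific noise
series = local + cme

# Stacking: epoch-wise mean over the network estimates the CME.
cme_hat = series.mean(axis=0)
cleaned = series - cme_hat

rms_before = series.std()
rms_after = cleaned.std()      # scatter drops once the CME is removed
```

Noise amplitudes (white, flicker, random walk) would then be re-estimated on `cleaned` and compared component by component, as in the abstract.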

  20. Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs

    NASA Astrophysics Data System (ADS)

    Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.

    2013-07-01

    Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc⁻¹, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc⁻¹.

  1. Discovery of the linear region of Near Infrared Diffuse Reflectance spectra using the Kubelka-Munk theory

    NASA Astrophysics Data System (ADS)

    Dai, Shengyun; Pan, Xiaoning; Ma, Lijuan; Huang, Xingguo; Du, Chenzhao; Qiao, Yanjiang; Wu, Zhisheng

    2018-05-01

    Particle size is of great importance for quantitative modelling of NIR diffuse reflectance. In this paper, the effect of sample particle size on the measurement of harpagoside in Radix Scrophulariae powder by near-infrared (NIR) diffuse reflectance spectroscopy was explored. High-performance liquid chromatography (HPLC) was employed as the reference method. Several spectral preprocessing methods were compared for establishing partial least-squares (PLS) models of harpagoside across particle size fractions. The data showed that the 125-150 μm particle size fraction of Radix Scrophulariae exhibited the best prediction ability, with R²pre = 0.9513, RMSEP = 0.1029 mg·g⁻¹, and RPD = 4.78. For the hybrid-granularity calibration model, the 90-180 μm fraction exhibited the best prediction ability, with R²pre = 0.8919, RMSEP = 0.1632 mg·g⁻¹, and RPD = 3.09. Furthermore, the Kubelka-Munk theory was used to relate the absorption coefficient k (concentration-dependent) and the scatter coefficient s (particle size-dependent). The scatter coefficient s was calculated based on the Kubelka-Munk theory to study how s changes after the spectra are mathematically preprocessed. A linear relationship was observed between k/s and the absorbance A within a certain range, where the value of k/s was greater than 4. According to this relationship, the model was more accurately constructed for the 90-180 μm fraction when s was kept constant or within a small linear region. This region provides a good reference for linear modelling of diffuse reflectance spectroscopy. To establish a diffuse-reflectance NIR model, an accurate assessment of this linear region should be obtained in advance for a precise linear model.
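
The Kubelka-Munk relation used above is the remission function: for an optically thick diffuse reflector, F(R∞) = (1 − R∞)²/(2R∞) = k/s. A small sketch shows how k/s is computed from reflectance and which samples satisfy the paper's k/s > 4 criterion; the reflectance values are illustrative.

```python
import numpy as np

# Kubelka-Munk remission function: k/s from diffuse reflectance R_inf
# of an infinitely thick layer.
def km_remission(r_inf):
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

r = np.array([0.9, 0.5, 0.2, 0.1])    # illustrative reflectance values
k_over_s = km_remission(r)

# The paper's k/s > 4 criterion picks out only quite dark samples.
dark = r[k_over_s > 4.0]
```

Since k tracks analyte concentration and s tracks particle size, holding s nearly constant (a narrow size fraction) is what keeps the k/s versus absorbance relation in its linear region.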

  2. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    PubMed

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at a 0.3-mm voxel size in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.

  3. Accuracy of analytic energy level formulas applied to hadronic spectroscopy of heavy mesons

    NASA Technical Reports Server (NTRS)

    Badavi, Forooz F.; Norbury, John W.; Wilson, John W.; Townsend, Lawrence W.

    1988-01-01

    Linear and harmonic potential models are used in the nonrelativistic Schroedinger equation to obtain particle mass spectra for mesons as bound states of quarks. The main emphasis is on the linear potential, for which exact solutions for the S-state eigenvalues and eigenfunctions and the asymptotic solution for the higher order partial waves are obtained. A study of the accuracy of two analytical energy level formulas as applied to heavy mesons is also included. Cornwall's formula is found to be particularly accurate and useful as a predictor of heavy quarkonium states. Exact solutions for all partial waves of the eigenvalues and eigenfunctions for a harmonic potential are also obtained and compared with the calculated discrete spectra of the linear potential. Detailed derivations of the eigenvalues and eigenfunctions of the linear and harmonic potentials are presented in appendixes.
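
    The exact S-state eigenvalues of the linear potential mentioned above are the magnitudes of the Airy-function zeros; a quick numerical check of that fact (the dimensionless form and grid choices below are illustrative, not the paper's units):

```python
import numpy as np

# Dimensionless radial Schroedinger equation for the linear potential V(r) = r:
#   -u''(r) + r u(r) = E u(r),   u(0) = 0,  u(r) -> 0 for large r.
# Its S-state eigenvalues are the magnitudes of the Airy-function zeros
# (2.3381, 4.0879, 5.5206, ...).  Finite-difference check:
N, L = 1000, 12.0                    # interior grid points, box size
h = L / (N + 1)
r = h * np.arange(1, N + 1)

H = np.zeros((N, N))
i = np.arange(N)
H[i, i] = 2.0 / h**2 + r             # kinetic diagonal + linear potential
H[i[:-1], i[:-1] + 1] = -1.0 / h**2  # off-diagonals of the -u'' stencil
H[i[:-1] + 1, i[:-1]] = -1.0 / h**2

E = np.linalg.eigvalsh(H)[:3]
print(E)  # close to the Airy zeros 2.3381, 4.0879, 5.5206
```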

  4. Influence of tungsten fiber’s slow drift on the measurement of G with angular acceleration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Jie; Wu, Wei-Huang; Zhan, Wen-Ze

    In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm.

  5. Influence of tungsten fiber's slow drift on the measurement of G with angular acceleration method.

    PubMed

    Luo, Jie; Wu, Wei-Huang; Xue, Chao; Shao, Cheng-Gang; Zhan, Wen-Ze; Wu, Jun-Fei; Milyukov, Vadim

    2016-08-01

    In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm.

  6. Influence of tungsten fiber's slow drift on the measurement of G with angular acceleration method

    NASA Astrophysics Data System (ADS)

    Luo, Jie; Wu, Wei-Huang; Xue, Chao; Shao, Cheng-Gang; Zhan, Wen-Ze; Wu, Jun-Fei; Milyukov, Vadim

    2016-08-01

    In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm.

  7. Series elastic actuation of an elbow rehabilitation exoskeleton with axis misalignment adaptation.

    PubMed

    Wu, Kuan-Yi; Su, Yin-Yu; Yu, Ying-Lung; Lin, Kuei-You; Lan, Chao-Chieh

    2017-07-01

    Powered exoskeletons can facilitate rehabilitation of patients with upper limb disabilities. Designs using rotary motors usually result in bulky exoskeletons to reduce the problem of moving inertia. This paper presents a new linearly actuated elbow exoskeleton that consists of a slider crank mechanism and a linear motor. The linear motor is placed beside the upper arm, closer to the shoulder joint. Thus better inertia properties can be achieved while the design remains lightweight and compact. A passive joint is introduced to compensate for exoskeleton-elbow misalignment and inter-subject size variation. A linear series elastic actuator (SEA) is proposed to obtain accurate force and impedance control at the exoskeleton-elbow interface. Bidirectional actuation between the exoskeleton and the forearm is verified, which is required for various rehabilitation processes. We expect that this exoskeleton can provide a means of robot-aided elbow rehabilitation.

  8. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
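
    For readers unfamiliar with the LP formulation the package solves, a generic linear program looks like the following tiny example (solved here with SciPy's general-purpose solver, not fastclime's parametric simplex; the problem data are illustrative):

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0,
# written in scipy's minimization form (minimize -(x + 2y)).
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 0.0]]
b_ub = [4.0, 2.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum at (0, 4) with objective value 8
```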

  9. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate of the three methods at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
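
    The decorrelation idea behind GRAMMAR can be sketched in a toy form: when observations share a known covariance structure (here, relatedness within families), whitening both sides of the regression with the inverse Cholesky factor of that covariance restores independent errors, after which ordinary least squares gives the generalized-least-squares estimate. (All sizes and parameter values below are illustrative, not from the article.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_fam, fam_size = 100, 5
n = n_fam * fam_size
beta_true = 2.0

# Block-diagonal covariance: within-family correlation 0.5, unit variance
block = 0.5 + 0.5 * np.eye(fam_size)
V = np.kron(np.eye(n_fam), block)

X = rng.normal(size=(n, 1))
e = np.linalg.cholesky(V) @ rng.normal(size=n)   # errors with covariance V
y = X[:, 0] * beta_true + e

# Whiten with L^{-1}, where V = L L^T, then run plain OLS
Lc = np.linalg.cholesky(V)
Xw = np.linalg.solve(Lc, X)
yw = np.linalg.solve(Lc, y)
beta_hat = np.linalg.lstsq(Xw, yw, rcond=None)[0][0]
print(round(beta_hat, 2))  # close to the true effect size 2.0
```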

  10. Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data

    NASA Astrophysics Data System (ADS)

    Singh, Vishwajit

    2016-04-01

    This paper describes the challenges in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Several approaches are employed to build an interactive algorithm that picks interval velocities for 1000-5000 contiguous normal-moveout (NMO) corrected gathers, replacing the interpreter's effort of manually picking coherent reflections. The detailed steps and pitfalls of picking interval velocities from seismic reflection time measurements are described for each approach. The key ingredients these approaches use at the velocity analysis stage are a semblance grid and a starting model of interval velocity. Basin-hopping optimization is employed to drive the misfit function toward its minima, and a SLiding-Overlapping Window (SLOW) algorithm is designed to mitigate the non-linearity and ill-posedness of the root-mean-square velocity inversion. Synthetic case studies assess the performance of the velocity picker, generating models that fit the semblance peaks. The similar linear relationship between average depth and reflection time for the synthetic and estimated models suggests using the picked interval velocities as the starting model for full waveform inversion, to recover a more accurate velocity structure of the subsurface. The remaining challenges are (1) building an accurate starting model for projecting a more accurate velocity structure of the subsurface, and (2) reducing the computational cost of the algorithm by pre-calculating the semblance grid to make auto-picking more feasible.
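
    The semblance scan at the heart of such automated picking can be sketched as follows: a synthetic common-midpoint gather contains one hyperbolic reflection t(x) = sqrt(t0^2 + x^2/v^2), and scanning trial velocities for the most coherent NMO-corrected stack recovers the velocity. (The geometry, wavelet, and grids are illustrative choices, not the paper's data.)

```python
import numpy as np

dt = 0.004
t = np.arange(0.0, 2.0, dt)                     # two-way time, s
offsets = np.arange(100.0, 2100.0, 100.0)       # m
t0, v_true = 1.0, 2000.0                        # s, m/s

def ricker(tau, f=25.0):
    a = (np.pi * f * tau) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# One Ricker wavelet per trace, placed on the reflection hyperbola
gather = np.array([ricker(t - np.sqrt(t0**2 + (x / v_true) ** 2))
                   for x in offsets])

def semblance(v):
    """Coherence of the NMO-corrected gather in a window around t0."""
    corrected = np.array([np.interp(np.sqrt(t**2 + (x / v) ** 2), t, trace)
                          for x, trace in zip(offsets, gather)])
    win = np.abs(t - t0) < 0.05
    num = np.sum(corrected[:, win].sum(axis=0) ** 2)
    den = len(offsets) * np.sum(corrected[:, win] ** 2)
    return num / den

v_scan = np.arange(1500.0, 2550.0, 50.0)
v_pick = v_scan[np.argmax([semblance(v) for v in v_scan])]
print(v_pick)  # picks the true velocity, 2000.0
```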

  11. Frame Shift/warp Compensation for the ARID Robot System

    NASA Technical Reports Server (NTRS)

    Latino, Carl D.

    1991-01-01

    The Automatic Radiator Inspection Device (ARID) is a system aimed at automating the tedious task of inspecting orbiter radiator panels. The ARID must be able to aim a camera accurately at the desired inspection points, of which there are on the order of 13,000. The ideal inspection points are known; however, the panel may be relocated due to inaccurate parking and warpage. A method of determining the mathematical description of a translated as well as a warped surface by accurate measurement of only a few points on this surface is developed here. The method uses a linear warp model whose effect is superimposed on the rigid body translation. Due to the small angles involved, small angle approximations are possible, which greatly reduces the computational complexity. Given an accurate linear warp model, all the desired translation and warp parameters can be obtained from knowledge of the ideal locations of four fiducial points and the corresponding measurements of these points on the actual radiator surface. The method uses three of the fiducials to define a plane and the fourth to define the warp. Given this information, it is possible to determine a transformation that will enable the ARID system to translate any desired inspection point on the ideal surface to its corresponding value on the actual surface.
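
    The four-fiducial idea above can be sketched numerically: three measured fiducials define the translated plane of the panel, and the out-of-plane deviation of the fourth defines a single linear warp coefficient superimposed on the rigid-body fit. (The point coordinates, the pure-translation case, and the bilinear warp term are illustrative assumptions, not the ARID implementation.)

```python
import numpy as np

# Four fiducials on the ideal (flat) panel, unit spacing
ideal = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0]])

# "Measured" fiducials: panel shifted 5 mm in z, plus a warp that lifts
# the far corner in proportion to x*y
shift, warp_true = 5.0, 0.2
measured = ideal + [0.0, 0.0, shift]
measured[3, 2] += warp_true * ideal[3, 0] * ideal[3, 1]

p0, p1, p2, p3 = measured
normal = np.cross(p1 - p0, p2 - p0)      # plane from the first three points
normal /= np.linalg.norm(normal)

# Warp coefficient from the fourth fiducial's out-of-plane deviation
warp_est = np.dot(p3 - p0, normal) / (ideal[3, 0] * ideal[3, 1])

def map_point(x, y):
    """Ideal (x, y) -> actual surface point: plane shift + bilinear warp."""
    return np.array([x, y, shift]) + normal * warp_est * x * y

print(round(warp_est, 3))  # 0.2, the warp we built in
```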

  12. 76 FR 53691 - Notice of Submission of Proposed Information Collection to OMB Section 8 Random Digit Dialing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-29

    ... specific areas in a relatively fast and accurate way that may be used to estimate and update Section 8 Fair... survey methodologies to collect gross rent data for specific areas in a relatively fast and accurate way...

  13. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.

  14. Interventional multispectral photoacoustic imaging with a clinical linear array ultrasound probe for guiding nerve blocks

    NASA Astrophysics Data System (ADS)

    Xia, Wenfeng; West, Simeon J.; Nikitichev, Daniil I.; Ourselin, Sebastien; Beard, Paul C.; Desjardins, Adrien E.

    2016-03-01

    Accurate identification of tissue structures such as nerves and blood vessels is critically important for interventional procedures such as nerve blocks. Ultrasound imaging is widely used as a guidance modality to visualize anatomical structures in real-time. However, identification of nerves and small blood vessels can be very challenging, and accidental intra-neural or intra-vascular injections can result in significant complications. Multi-spectral photoacoustic imaging can provide high sensitivity and specificity for discriminating hemoglobin- and lipid-rich tissues. However, conventional surface-illumination-based photoacoustic systems suffer from limited sensitivity at large depths. In this study, for the first time, an interventional multispectral photoacoustic imaging (IMPA) system was used to image nerves in a swine model in vivo. Pulsed excitation light with wavelengths in the ranges of 750 - 900 nm and 1150 - 1300 nm was delivered inside the body through an optical fiber positioned within the cannula of an injection needle. Ultrasound waves were received at the tissue surface using a clinical linear array imaging probe. Co-registered B-mode ultrasound images were acquired using the same imaging probe. Nerve identification was performed using a combination of B-mode ultrasound imaging and electrical stimulation. Using a linear model, spectral-unmixing of the photoacoustic data was performed to provide image contrast for oxygenated and de-oxygenated hemoglobin, water and lipids. Good correspondence between a known nerve location and a lipid-rich region in the photoacoustic images was observed. The results indicate that IMPA is a promising modality for guiding nerve blocks and other interventional procedures. Challenges involved with clinical translation are discussed.

  15. A Novel Kalman Filter for Human Motion Tracking With an Inertial-Based Dynamic Inclinometer.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo M

    2015-08-01

    Design and development of a linear Kalman filter to create an inertial-based inclinometer targeted at dynamic conditions of motion. The estimation of the body attitude (i.e., the inclination with respect to the vertical) was treated as a source separation problem to discriminate the gravity and the body acceleration from the specific force measured by a triaxial accelerometer. The sensor fusion between triaxial gyroscope and triaxial accelerometer data was performed using a linear Kalman filter. Wrist-worn inertial measurement unit data from ten participants were acquired while performing two dynamic tasks: a 60-s sequence of seven manual activities and 90 s of walking at natural speed. Stereophotogrammetric data were used as a reference. A statistical analysis was performed to assess the significance of the accuracy improvement over state-of-the-art approaches. The proposed method achieved, on average, root mean square attitude errors of 3.6° and 1.8° in the manual activities and locomotion tasks, respectively. The statistical analysis showed that, when compared to a few competing methods, the proposed method improved the attitude estimation accuracy. A novel Kalman filter for inertial-based attitude estimation was presented in this study. A significant accuracy improvement was achieved over state-of-the-art approaches, due to a filter design that better matches the basic optimality assumptions of Kalman filtering. Human motion tracking is the main application field of the proposed method. Accurately discriminating the two components present in the triaxial accelerometer signal is well suited for studying both the rotational and the linear body kinematics.
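
    The core fusion idea can be sketched as a scalar linear Kalman filter: integrate the gyroscope rate in the predict step and correct the resulting drift with an accelerometer-derived inclination angle in the update step. (The 1-D simplification and all noise levels are illustrative assumptions, not the filter from the paper.)

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 2000
true_rate = 0.5                                  # rad/s
true_angle = true_rate * dt * np.arange(n)

gyro = true_rate + rng.normal(0.0, 0.05, n)          # noisy rate measurement
accel_angle = true_angle + rng.normal(0.0, 0.10, n)  # noisy angle measurement

Q, R = (0.05 * dt) ** 2, 0.10 ** 2               # process / measurement variances
x, P = 0.0, 1.0
for k in range(n):
    # Predict: integrate the gyro rate
    x += gyro[k] * dt
    P += Q
    # Update: correct with the accelerometer-derived angle
    K = P / (P + R)
    x += K * (accel_angle[k] - x)
    P *= 1.0 - K

print(round(x, 2), round(true_angle[-1], 2))     # estimate tracks the truth
```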

  16. Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach

    NASA Astrophysics Data System (ADS)

    Liu, Wenyang; Sawant, Amit; Ruan, Dan

    2016-07-01

    The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared-error. Our proposed method achieved consistent higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
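
    A minimal kernel PCA, the dimensionality-reduction step the framework above builds on, can be written directly with numpy: form an RBF kernel matrix, center it in feature space, and project onto its leading eigenvectors. (The circle-shaped toy data, kernel width, and component count are illustrative choices, not the paper's setup.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Points near a circle: a 1-D manifold embedded non-linearly in 2-D
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (200, 2))

def rbf_kernel(A, B, gamma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space

vals, vecs = np.linalg.eigh(Kc)
vals, vecs = vals[::-1], vecs[:, ::-1]       # sort eigenvalues descending
alphas = vecs[:, :2] / np.sqrt(vals[:2])     # normalized coefficients
Z = Kc @ alphas                              # 2-D feature-manifold coordinates

print(Z.shape)  # (200, 2)
```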

  17. A mathematical approach to beam matching

    PubMed Central

    Manikandan, A; Nandy, M; Gossman, M S; Sureka, C S; Ray, A; Sujatha, N

    2013-01-01

    Objective: This report provides the mathematical commissioning instructions for the evaluation of beam matching between two different linear accelerators. Methods: Test packages were first obtained, including an open beam profile, a wedge beam profile and a depth–dose curve, each from a 10×10 cm2 beam. From these plots, a spatial error (SE) and a percentage dose error were introduced to form new plots. These three test package curves and the associated error curves were then differentiated with respect to position, taking first and second derivatives to determine the slope and curvature of each data set. The derivatives, also known as bandwidths, were analysed to determine the level of acceptability for the beam matching test described in this study. Results: The open and wedged beam profiles and the depth–dose curve in the build-up region were determined to match within 1% dose error and 1-mm SE for 71.4% and 70.8% of all points, respectively. For the depth–dose analysis specifically, beam matching was achieved for 96.8% of all points at 1%/1 mm beyond the depth of maximum dose. Conclusion: To quantify the beam matching procedure in any clinic, the user needs merely to generate test packages from their reference linear accelerator. It then follows that if the bandwidths are smooth and continuous across the profile and depth, there is a greater likelihood of beam matching. Differentiated spatial and percentage variation analysis is appropriate, ideal and accurate for this commissioning process. Advances in knowledge: We report a mathematically rigorous formulation for the quantitative evaluation of beam matching between linear accelerators. PMID:23995874
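
    The bandwidth (derivative) comparison described above can be sketched numerically: two beam profiles on the same spatial grid are differentiated and compared point by point for dose and slope agreement. (The sigmoid-edge profiles and the tolerances below are illustrative, not the paper's test packages.)

```python
import numpy as np

x = np.linspace(-100.0, 100.0, 401)               # off-axis position, mm

def profile(x, edge=50.0, penumbra=5.0):
    """Idealized flat beam profile with sigmoid field edges."""
    return 1.0 / (1.0 + np.exp((np.abs(x) - edge) / penumbra))

ref = profile(x)                                   # reference linac
test = profile(x, edge=50.3)                       # candidate matched linac

slope_ref = np.gradient(ref, x)                    # first derivative (slope);
slope_test = np.gradient(test, x)                  # differentiate again for curvature

dose_pass = np.mean(np.abs(test - ref) <= 0.01)    # 1% dose criterion
slope_pass = np.mean(np.abs(slope_test - slope_ref) <= 0.002)
print(round(float(dose_pass), 2), round(float(slope_pass), 2))
```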

  18. Childhood stunting: a global perspective

    PubMed Central

    Branca, Francesco

    2016-01-01

    Abstract Childhood stunting is the best overall indicator of children's well‐being and an accurate reflection of social inequalities. Stunting is the most prevalent form of child malnutrition with an estimated 161 million children worldwide in 2013 falling below −2 SD from the length‐for‐age/height‐for‐age World Health Organization Child Growth Standards median. Many more millions suffer from some degree of growth faltering as the entire length‐for‐age/height‐for‐age z‐score distribution is shifted to the left indicating that all children, and not only those falling below a specific cutoff, are affected. Despite global consensus on how to define and measure it, stunting often goes unrecognized in communities where short stature is the norm as linear growth is not routinely assessed in primary health care settings and it is difficult to visually recognize it. Growth faltering often begins in utero and continues for at least the first 2 years of post‐natal life. Linear growth failure serves as a marker of multiple pathological disorders associated with increased morbidity and mortality, loss of physical growth potential, reduced neurodevelopmental and cognitive function and an elevated risk of chronic disease in adulthood. The severe irreversible physical and neurocognitive damage that accompanies stunted growth poses a major threat to human development. Increased awareness of stunting's magnitude and devastating consequences has resulted in its being identified as a major global health priority and the focus of international attention at the highest levels with global targets set for 2025 and beyond. The challenge is to prevent linear growth failure while keeping child overweight and obesity at bay. PMID:27187907

  19. A New Rapid and Sensitive Stability-Indicating UPLC Assay Method for Tolterodine Tartrate: Application in Pharmaceuticals, Human Plasma and Urine Samples.

    PubMed

    Yanamandra, Ramesh; Vadla, Chandra Sekhar; Puppala, Umamaheshwar; Patro, Balaram; Murthy, Yellajyosula L N; Ramaiah, Parimi Atchuta

    2012-01-01

    A new rapid, simple, sensitive, selective and accurate reversed-phase stability-indicating Ultra Performance Liquid Chromatography (RP-UPLC) technique was developed for the assay of Tolterodine Tartrate in pharmaceutical dosage form, human plasma and urine samples. The developed UPLC method is superior in technology to conventional HPLC with respect to speed, solvent consumption, resolution and cost of analysis. Chromatographic run time was 6 min in reversed-phase mode and ultraviolet detection was carried out at 220 nm for quantification. Efficient separation was achieved for all the degradants of Tolterodine Tartrate on a BEH C18 sub-2-μm Acquity UPLC column using trifluoroacetic acid and acetonitrile as organic solvent in a linear gradient program. The active pharmaceutical ingredient was extracted from the tablet dosage form using a mixture of acetonitrile and water as diluent. The calibration graphs were linear and the method showed excellent recoveries for bulk and tablet dosage form. The test solution was found to be stable for 40 days when stored in the refrigerator between 2 and 8 °C. The developed UPLC method was validated and meets the requirements delineated by the International Conference on Harmonization (ICH) guidelines with respect to linearity, accuracy, precision, specificity and robustness. The intra-day and inter-day variation was found to be less than 1%. The method was reproducible and selective for the estimation of Tolterodine Tartrate. Because the method could effectively separate the drug from its degradation products, it can be employed as a stability-indicating one.

  20. A New Rapid and Sensitive Stability-Indicating UPLC Assay Method for Tolterodine Tartrate: Application in Pharmaceuticals, Human Plasma and Urine Samples

    PubMed Central

    Yanamandra, Ramesh; Vadla, Chandra Sekhar; Puppala, Umamaheshwar; Patro, Balaram; Murthy, Yellajyosula. L. N.; Ramaiah, Parimi Atchuta

    2012-01-01

    A new rapid, simple, sensitive, selective and accurate reversed-phase stability-indicating Ultra Performance Liquid Chromatography (RP-UPLC) technique was developed for the assay of Tolterodine Tartrate in pharmaceutical dosage form, human plasma and urine samples. The developed UPLC method is superior in technology to conventional HPLC with respect to speed, solvent consumption, resolution and cost of analysis. Chromatographic run time was 6 min in reversed-phase mode and ultraviolet detection was carried out at 220 nm for quantification. Efficient separation was achieved for all the degradants of Tolterodine Tartrate on a BEH C18 sub-2-μm Acquity UPLC column using trifluoroacetic acid and acetonitrile as organic solvent in a linear gradient program. The active pharmaceutical ingredient was extracted from the tablet dosage form using a mixture of acetonitrile and water as diluent. The calibration graphs were linear and the method showed excellent recoveries for bulk and tablet dosage form. The test solution was found to be stable for 40 days when stored in the refrigerator between 2 and 8 °C. The developed UPLC method was validated and meets the requirements delineated by the International Conference on Harmonization (ICH) guidelines with respect to linearity, accuracy, precision, specificity and robustness. The intra-day and inter-day variation was found to be less than 1%. The method was reproducible and selective for the estimation of Tolterodine Tartrate. Because the method could effectively separate the drug from its degradation products, it can be employed as a stability-indicating one. PMID:22396907
