Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for quality by design (QbD)-based analytical approaches. The concept of QbD applied to analytical method development is now known as analytical quality by design (AQbD). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within that region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses the different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and process analytical technology (PAT). PMID:25722723
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable-fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization, which optimizes performance while simultaneously maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model: when the solutions fall into the trust region, the analytical model is used instead. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
A study of the temporal robustness of the growing global container-shipping network
Wang, Nuo; Wu, Nuan; Dong, Ling-ling; Yan, Hua-kun; Wu, Di
2016-01-01
Whether a constantly expanding network continues to thrive as it grows must be determined for any such network. However, few studies have focused on this important network feature or on developing quantitative analytical methods for it. Considering the formation and growth of the global container-shipping network, we propose the concept of network temporal robustness and a quantitative method for measuring it. As an example, we collected container liner companies' data at two time points (2004 and 2014) and built a shipping network with ports as nodes and routes as links, thereby obtaining a quantitative value of the temporal robustness. The temporal robustness is a significant network property because, for the first time, we can clearly recognize that the shipping network has become more vulnerable to damage over the last decade: when the node failure scale reached 50% of the entire network, the temporal robustness was approximately -0.51% for random errors and -12.63% for intentional attacks. The proposed concept and analytical method described in this paper are significant for other network studies. PMID:27713549
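The comparison the abstract reports, random node failures versus intentional (degree-targeted) attacks, can be illustrated with a small sketch. This is a minimal illustration on a hypothetical hub-and-spoke graph, not the authors' data or their temporal-robustness metric: robustness here is simply the surviving largest-connected-component fraction after node removal.

```python
import random

def largest_component_fraction(adj, removed):
    """Fraction of all nodes lying in the largest surviving component."""
    alive = set(adj) - removed
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            comp += 1
            for nbr in adj[node]:
                if nbr in alive and nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        best = max(best, comp)
    return best / len(adj)

def attack(adj, fraction, targeted, seed=1):
    """Remove a fraction of nodes, either at random or by descending degree."""
    nodes = sorted(adj)
    if targeted:
        nodes.sort(key=lambda n: -len(adj[n]))
    else:
        random.Random(seed).shuffle(nodes)
    k = int(fraction * len(nodes))
    return largest_component_fraction(adj, set(nodes[:k]))

# Toy hub-and-spoke "shipping network": three interconnected hub ports,
# each serving nine spoke ports.
adj = {i: set() for i in range(30)}
def link(a, b):
    adj[a].add(b); adj[b].add(a)
for a in range(3):
    for b in range(a + 1, 3):
        link(a, b)
for s in range(3, 30):
    link(s, s % 3)

print(attack(adj, 0.1, targeted=False))  # random errors: largely intact
print(attack(adj, 0.1, targeted=True))   # hub attack: network shatters
```

Removing 10% of nodes at random mostly hits spokes and leaves the core connected, while the same budget spent on the highest-degree hubs fragments the graph, the qualitative asymmetry the abstract quantifies.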
An improved 3D MoF method based on analytical partial derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2016-12-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among the various surface reconstruction algorithms. Like other second-order methods, the MoF method must solve an implicit optimization problem to obtain the optimal approximate surface, so the partial derivatives of the objective function have to be evaluated during the iteration for efficiency and accuracy. However, to the best of our knowledge, these derivatives are currently estimated numerically by finite-difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are derived for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency, and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.
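The trade-off the abstract describes, finite-difference derivatives versus exact analytical ones inside an iterative optimizer, can be sketched on a toy objective. This stands in for (and is far simpler than) the MoF objective; the function, step size, and starting point are invented for illustration. Note the numeric gradient costs extra function evaluations per component and carries truncation error, which is exactly what analytical derivatives avoid.

```python
def f(x, y):
    """Toy smooth objective standing in for the MoF mismatch functional."""
    return (x - 1.0) ** 2 + 10.0 * (y + 0.5) ** 2

def grad_analytic(x, y):
    """Exact partial derivatives, available here in closed form."""
    return (2.0 * (x - 1.0), 20.0 * (y + 0.5))

def grad_numeric(x, y, h=1e-6):
    """Central finite differences: 4 extra f-evaluations per call."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

def descend(grad, steps=200, lr=0.04):
    """Plain gradient descent driven by whichever gradient is supplied."""
    x, y = 3.0, 2.0
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * gx, y - lr * gy
    return x, y

xa, ya = descend(grad_analytic)   # no extra f-evaluations, no h-error
xn, yn = descend(grad_numeric)    # works, but costs more and adds noise
print(xa, ya)  # near the minimizer (1.0, -0.5)
```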
The study of nanomaterials in environmental systems requires robust and specific analytical methods. Analytical methods which discriminate based on particle size and molecular composition are not widely available. Asymmetric Flow Field-Flow Fractionation (AF4) is a separation...
Code of Federal Regulations, 2013 CFR
2013-01-01
... robust analytical methods. The Department seeks to use qualitative and quantitative analytical methods... uncertainties will be carried forward in subsequent analyses. The use of quantitative models will be... manufacturers and other interested parties. The use of quantitative models will be supplemented by qualitative...
Robust electroencephalogram phase estimation with applications in brain-computer interface systems.
Seraj, Esmaeil; Sameni, Reza
2017-03-01
In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations previously attributed to the brain response are systematic side effects of the methods used for EEG phase calculation, especially during low-analytic-amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytic form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and to minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps that do not have a cerebral origin. As proof of concept, the proposed method is used to extract EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features with standard K-nearest neighbors and random forest classifiers on a standard BCI dataset. The average performance improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
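The ensemble idea, perturb the narrow-band filter slightly, recompute the analytic-signal phase, and average, can be sketched as follows. This is a simplified stand-in, not the authors' implementation: it uses an FFT-domain Gaussian band-pass with a jittered center frequency rather than perturbed zero-pole loci, and circular averaging of unit phasors; all signal parameters are invented.

```python
import numpy as np

def analytic_signal(x):
    """Analytic form via FFT (one-sided spectrum), like a Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def bandpass(x, fs, f0, bw):
    """Narrow Gaussian band-pass applied in the frequency domain."""
    freqs = np.fft.fftfreq(len(x), 1.0 / fs)
    gain = np.exp(-0.5 * ((np.abs(freqs) - f0) / bw) ** 2)
    return np.real(np.fft.ifft(np.fft.fft(x) * gain))

def ensemble_phase(x, fs, f0=10.0, bw=1.0, n_runs=50, jitter=0.1, seed=0):
    """Circular-average the instantaneous phase over runs with perturbed filters."""
    rng = np.random.default_rng(seed)
    vecs = np.zeros(len(x), dtype=complex)
    for _ in range(n_runs):
        z = analytic_signal(bandpass(x, fs, f0 + jitter * rng.standard_normal(), bw))
        vecs += np.exp(1j * np.angle(z))   # sum of unit phasors
    return np.angle(vecs)

fs = 250.0
t = np.arange(0, 2, 1 / fs)
# Synthetic "EEG": a 10 Hz alpha-band oscillation buried in white noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).standard_normal(len(t))
phase = ensemble_phase(eeg, fs)
```

For a clean 10 Hz component the unwrapped ensemble phase should advance at roughly 2π·10 rad/s, and the averaging suppresses phase estimates that flip with tiny filter changes, the "spurious jumps" of the abstract.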
Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models
2002-03-01
such as the weighted sum method, weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only weighted sum... different groups. They can be termed deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the... weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most
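Of the methods this excerpt names, the weighted sum model is the simplest to sketch. A minimal illustration with invented alternatives, scores, and weights; min-max normalization is one common choice, not something the excerpt mandates.

```python
def weighted_sum(scores, weights):
    """Rank alternatives by the weighted sum of normalized criterion scores."""
    # Normalize each criterion to [0, 1] so differing units do not dominate.
    cols = list(zip(*scores.values()))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    totals = {}
    for name, row in scores.items():
        norm = [(v - l) / (h - l) if h > l else 1.0
                for v, l, h in zip(row, lo, hi)]
        totals[name] = sum(w * v for w, v in zip(weights, norm))
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Hypothetical decision: three designs scored on cost-efficiency, speed,
# and reliability (all values and weights invented).
scores = {"A": [0.8, 0.6, 0.9], "B": [0.5, 0.9, 0.7], "C": [0.9, 0.4, 0.6]}
weights = [0.5, 0.2, 0.3]
ranking = weighted_sum(scores, weights)
print(ranking[0][0])  # top-ranked alternative
```

A sensitivity analysis of the kind the report studies would then perturb `weights` and observe when the top-ranked alternative changes.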
Qiu, Huazhang; Wu, Namei; Zheng, Yanjie; Chen, Min; Weng, Shaohuang; Chen, Yuanzhong; Lin, Xinhua
2015-01-01
A robust and versatile signal-on fluorescence sensing strategy was developed to provide label-free detection of various target analytes. The strategy used SYBR Green I dye and graphene oxide as signal reporter and signal-to-background ratio enhancer, respectively. Multidrug resistance protein 1 (MDR1) gene and mercury ion (Hg2+) were selected as target analytes to investigate the generality of the method. The linear relationship and specificity of the detections showed that the sensitive and selective analyses of target analytes could be achieved by the proposed strategy with low detection limits of 0.5 and 2.2 nM for MDR1 gene and Hg2+, respectively. Moreover, the strategy was used to detect real samples. Analytical results of MDR1 gene in the serum indicated that the developed method is a promising alternative approach for real applications in complex systems. Furthermore, the recovery of the proposed method for Hg2+ detection was acceptable. Thus, the developed label-free signal-on fluorescence sensing strategy exhibited excellent universality, sensitivity, and handling convenience. PMID:25565810
Sokoliess, Torsten; Köller, Gerhard
2005-06-01
A chiral capillary electrophoresis system allowing the determination of the enantiomeric purity of an investigational new drug was developed using a generic method development approach for basic analytes. The method was optimized in terms of type and concentration of both cyclodextrin (CD) and electrolyte, buffer pH, temperature, voltage, and rinsing procedure. Optimal chiral separation of the analyte was obtained using an electrolyte with 2.5% carboxymethyl-beta-CD in 25 mM NaH2PO4 (pH 4.0). Interchanging the inlet and outlet vials after each run improved the method's precision. To assure the method's suitability for the control of enantiomeric impurities in pharmaceutical quality control, its specificity, linearity, precision, accuracy, and robustness were validated according to the requirements of the International Conference on Harmonization. The usefulness of our generic method development approach for the validation of robustness was demonstrated.
Wang, Lu; Qu, Haibin
2016-03-01
A method combining solid-phase extraction, high-performance liquid chromatography, and ultraviolet/evaporative light-scattering detection (SPE-HPLC-UV/ELSD) was developed according to Quality by Design (QbD) principles and used to assay nine bioactive compounds within a botanical drug, Shenqi Fuzheng Injection. Risk assessment and a Plackett-Burman design were utilized to evaluate the impact of 11 factors on the resolutions and signal-to-noise ratios of chromatographic peaks. Multiple regression and Pareto ranking analysis indicated that the sorbent mass, sample volume, flow rate, column temperature, evaporator temperature, and gas flow rate were statistically significant (p < 0.05) in this procedure. Furthermore, a Box-Behnken design combined with response surface analysis was employed to study the relationships between the quality of SPE-HPLC-UV/ELSD analysis and four significant factors, i.e., flow rate, column temperature, evaporator temperature, and gas flow rate. An analytical design space of SPE-HPLC-UV/ELSD was then constructed from Monte Carlo probability calculations. In the presented approach, the operating parameters of sample preparation, chromatographic separation, and compound detection were investigated simultaneously. Eight elements of method validation, i.e., system-suitability tests, method robustness/ruggedness, sensitivity, precision, repeatability, linearity, accuracy, and stability, were completed at a selected working point. These results revealed that QbD principles are suitable for the development of analytical procedures for samples in complex matrices. Meanwhile, the analytical quality and method robustness were validated across the analytical design space. The presented strategy provides a tutorial on the development of a robust QbD-compliant quantitative method for samples in complex matrices.
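The Plackett-Burman screening step described above can be sketched generically. The 12-run design below is the classic cyclic construction for up to 11 two-level factors; the response model and its coefficients are invented for illustration and are unrelated to the actual SPE-HPLC-UV/ELSD factors.

```python
import numpy as np

# Classic generator row for the 12-run Plackett-Burman design.
GEN = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]

def plackett_burman_12():
    """11-factor, 12-run design: cyclic shifts of GEN plus an all -1 row."""
    rows = [GEN[-i:] + GEN[:-i] for i in range(11)]
    rows.append([-1] * 11)
    return np.array(rows)

X = plackett_burman_12()          # orthogonal: X.T @ X = 12 * I

rng = np.random.default_rng(0)
# Hypothetical response: peak resolution driven mainly by factors 0 and 3.
y = 10 + 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(12)

effects = 2 * X.T @ y / 12        # main-effect estimates (high vs low mean)
ranked = np.argsort(-np.abs(effects))
print(ranked[:2])                 # the two dominant factors
```

With only 12 runs the design screens all 11 factors' main effects, which is why it suits the "which of these settings matter at all" stage before a response-surface design such as Box-Behnken refines the few that do.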
Determination of Perfluorinated Compounds in the Upper Mississippi River Basin
Despite ongoing efforts to develop robust analytical methods for the determination of perfluorinated compounds (PFCs) such as perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA) in surface water, comparatively little has been published on method performance, and the...
Microemulsification: an approach for analytical determinations.
Lima, Renato S; Shiroma, Leandro Y; Teixeira, Alvaro V N C; de Toledo, José R; do Couto, Bruno C; de Carvalho, Rogério M; Carrilho, Emanuel; Kubota, Lauro T; Gobbi, Angelo L
2014-09-16
We address a novel method for analytical determinations that combines simplicity, rapidity, low consumption of chemicals, and portability with high analytical performance in terms of parameters such as precision, linearity, robustness, and accuracy. The approach relies on the effect of the analyte content on the Gibbs free energy of dispersions, affecting the thermodynamic stabilization of emulsions or Winsor systems to form microemulsions (MEs). This phenomenon is expressed by the minimum volume fraction of amphiphile required to form a microemulsion (Φ(ME)), which serves as the analytical signal of the method. Measurements can thus be taken by visually monitoring the transition of the dispersions from cloudy to transparent during microemulsification, like a titration, requiring no electric energy. The studies performed were: phase behavior, droplet dimension by dynamic light scattering, analytical curve, and robustness tests. The reliability of the method was evaluated by determining water in ethanol fuels and monoethylene glycol in complex samples of liquefied natural gas. The dispersions were composed of water-chlorobenzene (water analysis) and water-oleic acid (monoethylene glycol analysis) with ethanol as the hydrotrope phase. The mean hydrodynamic diameters of the nanostructures in the droplet-based water-chlorobenzene MEs were in the range of 1 to 11 nm. Microemulsification was conducted by adding ethanol to water-oleic acid (W-O) mixtures with the aid of a micropipette and shaking. The Φ(ME) measurements were performed in a thermostatic water bath at 23 °C by direct visual observation of the media. The experiments to determine water demonstrated that the analytical performance depends on the composition of the ME, showing the flexibility of the developed method. The linear range was fairly broad, with limits of linearity up to 70.00% water in ethanol. For monoethylene glycol in water, in turn, the linear range extended throughout the volume fraction of the analyte. The best limits of detection were 0.32% v/v water in ethanol and 0.30% v/v monoethylene glycol in water. Furthermore, the accuracy was highly satisfactory. The natural gas samples provided by Petrobras exhibited color, particulate material, high ionic strength, and diverse compounds such as metals, carboxylic acids, and anions. These samples had conductivities of up to 2630 μS cm(-1), whereas the conductivity of pure monoethylene glycol was only 0.30 μS cm(-1). Despite these challenges, the method yielded accurate measurements while bypassing steps such as extraction, preconcentration, and dilution of the sample. In addition, the levels of robustness were promising; this parameter was evaluated by investigating the effects of (i) deviations in the volumetric preparation of the dispersions and (ii) changes in temperature on the analyte contents recorded by the method.
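The figures of merit reported above (linearity, limits of detection) are conventionally estimated from a linear calibration fit. A minimal sketch with invented calibration data, standing in for Φ(ME) versus analyte fraction, using the ICH-style estimates LOD = 3.3·σ/slope and LOQ = 10·σ/slope, where σ is the residual standard deviation of the fit:

```python
import numpy as np

# Hypothetical calibration data: analyte content (% v/v) vs measured signal
# (standing in for the minimum amphiphile fraction, phi_ME).
conc = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
signal = np.array([12.1, 18.9, 26.2, 32.8, 40.1, 46.8])

slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
# Residual SD with n - 2 degrees of freedom (two fitted parameters).
residual_sd = np.sqrt(np.sum((signal - pred) ** 2) / (len(conc) - 2))

lod = 3.3 * residual_sd / slope   # ICH Q2-style detection limit
loq = 10.0 * residual_sd / slope  # quantitation limit
r = np.corrcoef(conc, signal)[0, 1]
print(round(lod, 2), round(loq, 2), round(r, 4))
```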
Developing Appropriate Methods for Cost-Effectiveness Analysis of Cluster Randomized Trials
Gomes, Manuel; Ng, Edmond S.-W.; Grieve, Richard; Nixon, Richard; Carpenter, James; Thompson, Simon G.
2012-01-01
Aim. Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Methods. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering—seemingly unrelated regression (SUR) without a robust standard error (SE)—and 4 methods that recognized clustering—SUR and generalized estimating equations (GEEs), both with robust SE, a “2-stage” nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Results. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92–0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. Conclusions. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters. PMID:22016450
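The two-stage nonparametric bootstrap (TSB) evaluated above can be sketched in its basic form: resample clusters with replacement, then individuals within each sampled cluster, and summarize the incremental net benefit (INB = λ·ΔE − ΔC). This simplified version omits the shrinkage correction the article evaluates, and all trial data below are simulated with invented parameters.

```python
import numpy as np

def two_stage_bootstrap_inb(costs, effects, lam=20000.0, n_boot=1000, seed=0):
    """Two-stage bootstrap: clusters first, then individuals within them.

    costs/effects: dict arm -> list of per-cluster arrays of individual values.
    Returns the mean INB and a percentile 95% confidence interval.
    """
    rng = np.random.default_rng(seed)
    inbs = np.empty(n_boot)
    for b in range(n_boot):
        nb = {}
        for arm in costs:
            c_clusters, e_clusters = costs[arm], effects[arm]
            idx = rng.integers(0, len(c_clusters), len(c_clusters))
            c_all, e_all = [], []
            for i in idx:  # stage 2: resample individuals in each cluster
                j = rng.integers(0, len(c_clusters[i]), len(c_clusters[i]))
                c_all.append(c_clusters[i][j])
                e_all.append(e_clusters[i][j])
            c_all, e_all = np.concatenate(c_all), np.concatenate(e_all)
            nb[arm] = lam * e_all.mean() - c_all.mean()
        inbs[b] = nb["treat"] - nb["control"]
    return inbs.mean(), np.percentile(inbs, [2.5, 97.5])

rng = np.random.default_rng(42)
def make_arm(cost_mu, eff_mu, n_clusters=20, size=25):
    costs, effects = [], []
    for _ in range(n_clusters):
        u = rng.normal(0, 200)  # shared cluster effect on cost
        costs.append(cost_mu + u + rng.normal(0, 500, size))
        effects.append(rng.normal(eff_mu, 0.1, size))
    return costs, effects

c_t, e_t = make_arm(5000.0, 0.70)   # true INB = 20000*0.05 - 500 = 500
c_c, e_c = make_arm(4500.0, 0.65)
mean_inb, ci = two_stage_bootstrap_inb({"treat": c_t, "control": c_c},
                                       {"treat": e_t, "control": e_c})
```

Resampling clusters before individuals is what lets the interval reflect between-cluster variation; resampling individuals alone would reproduce the "ignoring clustering" failure the article documents.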
Creating analytically divergence-free velocity fields from grid-based data
NASA Astrophysics Data System (ADS)
Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.
2016-10-01
We present a method, based on B-splines, to calculate a C2-continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method yields qualitatively and quantitatively superior trajectories, resulting in more accurate identification of Lagrangian coherent structures.
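The key identity exploited above, that any velocity field obtained as the curl of a vector potential is exactly divergence-free (div curl A = 0), can be checked on a simple analytic potential. The potential below is an arbitrary example for illustration, not the paper's B-spline construction.

```python
import numpy as np

def velocity_from_potential(x, y, z):
    """v = curl(A) for the analytic potential A = (0, 0, sin(x)*sin(y)).

    curl A = (dAz/dy, -dAz/dx, 0), divergence-free by construction.
    """
    return (np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y), np.zeros_like(x))

# Verify div(v) = 0 numerically with central differences on a small grid.
h = 1e-5
x, y, z = np.meshgrid(*(np.linspace(0.2, 1.0, 5),) * 3, indexing="ij")
du_dx = (velocity_from_potential(x + h, y, z)[0]
         - velocity_from_potential(x - h, y, z)[0]) / (2 * h)
dv_dy = (velocity_from_potential(x, y + h, z)[1]
         - velocity_from_potential(x, y - h, z)[1]) / (2 * h)
div = du_dx + dv_dy  # dw/dz = 0 identically for this potential
print(np.abs(div).max())  # ~0 up to finite-difference error
```

The paper's contribution is fitting a smooth potential to gridded velocity data so that this exactness carries over to interpolated fields; the sketch only demonstrates why the curl construction guarantees incompressibility.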
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). 
Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster than most numerical integration methods.
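The Laguerre iteration named above as the solver behind lambert2 is, in its basic polynomial form, straightforward to sketch. The version below is the textbook method for polynomial roots, without the robustness modifications the abstract alludes to; the test polynomial and starting guess are illustrative only.

```python
import numpy as np

def laguerre(coeffs, x0, tol=1e-12, max_iter=50):
    """Laguerre's method for one root of a polynomial (highest degree first).

    Cubically convergent near simple roots and notably tolerant of mediocre
    starting guesses, properties that make it attractive inside Lambert solvers.
    """
    n = len(coeffs) - 1
    x = complex(x0)
    for _ in range(max_iter):
        p = np.polyval(coeffs, x)
        if abs(p) < tol:
            return x
        dp = np.polyval(np.polyder(coeffs), x)
        ddp = np.polyval(np.polyder(coeffs, 2), x)
        g = dp / p
        h = g * g - ddp / p
        root = np.sqrt((n - 1) * (n * h - g * g))
        d = max(g + root, g - root, key=abs)  # larger denominator for stability
        x -= n / d
    return x

# x^3 - 2x - 5 = 0 has a real root near 2.0945514815.
r = laguerre([1.0, 0.0, -2.0, -5.0], x0=2.0)
print(r.real)  # ≈ 2.09455
```

In a Lambert solver the polynomial would be replaced by the time-of-flight equation in the chosen universal variable; the iteration structure is the same.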
Per- and Polyfluoroalkyl Substances (PFAS): Sampling ...
Per- and polyfluoroalkyl substances (PFAS) are a large group of manufactured compounds used in a variety of industries, such as aerospace, automotive, textiles, and electronics, and are used in some food packaging and firefighting materials. For example, they may be used to make products more resistant to stains, grease, and water. In the environment, some PFAS break down very slowly, if at all, allowing bioaccumulation (concentration) to occur in humans and wildlife. Some have been found to be toxic to laboratory animals, producing reproductive, developmental, and systemic effects in laboratory tests. The U.S. Environmental Protection Agency's (EPA) methods for analyzing PFAS in environmental media are in various stages of development. This technical brief summarizes the work being done to develop robust analytical methods for groundwater, surface water, wastewater, and solids, including soils, sediments, and biosolids.
Automated UHPLC separation of 10 pharmaceutical compounds using software-modeling.
Zöldhegyi, A; Rieger, H-J; Molnár, I; Fekhretdinova, L
2018-03-20
Human mistakes remain one of the main causes underlying regulatory findings and, for compliance with the FDA's Data Integrity requirements and Analytical Quality by Design (AQbD), must be eliminated. To develop smooth, fast, and robust methods that are free of human failures, a state-of-the-art automation is presented. For the scope of this study, a commercial software package (DryLab) and a model mixture of 10 drugs were subjected to testing. Following AQbD principles, the best available working point was selected, and confirmatory experimental runs, i.e., the six worst cases from the robustness calculation, were performed. Simulated results were found to be in excellent agreement with the experimental ones, proving the usefulness and effectiveness of automated, software-assisted analytical method development.
NASA Astrophysics Data System (ADS)
Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy
2017-06-01
Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have this flexibility. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method, and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested on different hypothetical experiments; besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods, together with the R-factor as a consistency measure, turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods in the hypothetical comparisons. The application to the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations, whereas the SWAT2012 model is unstable for the standard daily computation time step. The new method unconditionally produces robust solutions and is therefore a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
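The failure mode described above, a fixed-time-step explicit scheme blowing up where an analytical update stays stable, can be reproduced on the simplest CSTR balance, dC/dt = load − kC. All rate constants below are invented; this sketch shows the phenomenon the paper addresses, not the paper's own scheme.

```python
import math

def euler_bod(c0, k, load, dt, t_end):
    """Explicit Euler for dC/dt = load - k*C; unstable when k*dt > 2."""
    c, t = c0, 0.0
    while t < t_end - 1e-12:
        c += dt * (load - k * c)
        t += dt
    return c

def exact_bod(c0, k, load, t):
    """Closed-form (quasi-analytical) solution of the same linear balance."""
    c_inf = load / k                      # steady-state concentration
    return c_inf + (c0 - c_inf) * math.exp(-k * t)

k, load, c0 = 3.0, 6.0, 10.0              # fast decay: k*dt = 3 for a daily step
print(exact_bod(c0, k, load, 5.0))        # settles near load/k = 2.0
print(euler_bod(c0, k, load, 1.0, 5.0))   # daily Euler step oscillates and diverges
print(euler_bod(c0, k, load, 0.05, 5.0))  # small steps recover stability, at a cost
```

The exact update is unconditionally stable at any step size, which is the property the paper's quasi-analytical solution carries over to simulators locked to a daily time step.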
Robust nonlinear control of vectored thrust aircraft
NASA Technical Reports Server (NTRS)
Doyle, John C.; Murray, Richard; Morris, John
1993-01-01
An interdisciplinary program in robust control for nonlinear systems with applications to a variety of engineering problems is outlined. Major emphasis will be placed on flight control, with both experimental and analytical studies. This program builds on recent new results in control theory for stability, stabilization, robust stability, robust performance, synthesis, and model reduction in a unified framework using Linear Fractional Transformations (LFTs), Linear Matrix Inequalities (LMIs), and the structured singular value μ. Most of these new advances have been accomplished by the Caltech controls group independently or in collaboration with researchers at other institutions. These recent results offer a new and remarkably unified framework for all aspects of robust control; what is particularly important for this program is that they also have important implications for system identification and control of nonlinear systems. This combines well with Caltech's expertise in nonlinear control theory, both in geometric methods and in methods for systems with constraints and saturations.
Kim, SungHwan; Lin, Chien-Wei; Tseng, George C
2016-07-01
Supervised machine learning is widely applied to transcriptomic data to predict disease diagnosis, prognosis, or survival. Robust and interpretable classifiers with high accuracy are usually favored for their clinical and translational potential. The top scoring pair (TSP) algorithm is an example that applies a simple rank-based algorithm to identify rank-altered gene pairs for classifier construction. Although many classification methods perform well in cross-validation of a single expression profile, performance is usually greatly reduced in cross-study validation (i.e., the prediction model is established in the training study and applied to an independent test study) for all machine learning methods, including TSP. The failure of cross-study validation has largely diminished the potential translational and clinical value of the models. The purpose of this article is to develop a meta-analytic top scoring pair (MetaKTSP) framework that combines multiple transcriptomic studies and generates a robust prediction model applicable to independent test studies. We propose two frameworks, averaging TSP scores or combining P-values from individual studies, to select the top gene pairs for model construction. We applied the proposed methods to simulated data sets and three large-scale real applications in breast cancer, idiopathic pulmonary fibrosis, and pan-cancer methylation. The results showed superior cross-study validation accuracy and biomarker selection for the new meta-analytic framework. In conclusion, combining multiple omics data sets in the public domain increases the robustness and accuracy of the classification model, which will ultimately improve disease understanding and clinical treatment decisions to benefit patients. An R package, MetaKTSP, is available online (http://tsenglab.biostat.pitt.edu/software.htm).
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
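The analytical gradient used here follows from first-order perturbation of the singular value decomposition: for a matrix A(θ) with distinct singular values, ∂σᵢ/∂θ = uᵢᵀ (∂A/∂θ) vᵢ. A minimal numerical sketch of that identity (the 2×2 "return difference" matrix and gain below are invented for illustration, not taken from the paper):

```python
import numpy as np

def sigma_min_and_grad(A, dA):
    """Minimum singular value of A and its gradient w.r.t. one design
    variable, using d(sigma_i)/d(theta) = u_i^T (dA/dtheta) v_i."""
    U, s, Vt = np.linalg.svd(A)
    i = np.argmin(s)                      # index of the smallest singular value
    return s[i], U[:, i] @ dA @ Vt[i, :]

# Toy return difference matrix I + theta * G, evaluated at theta = 0.7
G = np.array([[1.0, 0.3], [0.2, 1.5]])
theta = 0.7
A = np.eye(2) + theta * G
sig, grad = sigma_min_and_grad(A, G)      # dA/dtheta = G

# Check the analytic gradient against a central finite difference
eps = 1e-6
s_hi = np.linalg.svd(np.eye(2) + (theta + eps) * G, compute_uv=False).min()
s_lo = np.linalg.svd(np.eye(2) + (theta - eps) * G, compute_uv=False).min()
fd = (s_hi - s_lo) / (2 * eps)
```

A numerical optimizer can then push σ_min upward using `grad` instead of repeated finite-difference sweeps.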
Advances in Adaptive Control Methods
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2009-01-01
This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.
Improving the Analysis of Anthocyanidins from Blueberries Using Response Surface Methodology
USDA-ARS?s Scientific Manuscript database
Background: Recent interest in the health promoting potential of anthocyanins points to the need for robust and reliable analytical methods. It is essential to know that the health promoting chemicals are present in juices and other products processed from whole fruit. Many different methods have be...
Togola, Anne; Coureau, Charlotte; Guezennec, Anne-Gwenaëlle; Touzé, Solène
2015-05-01
The presence of acrylamide in natural systems is of concern from both environmental and health points of view. We developed an accurate and robust analytical procedure (offline solid-phase extraction combined with UPLC/MS/MS) with a limit of quantification (20 ng L⁻¹) compatible with toxicity threshold values. The solid-phase extraction (SPE) step was optimized with respect to the nature of the extraction phase, the sampling volume, and the elution solvent, and was validated according to ISO/IEC 17025 on groundwater, surface water, and industrial process water samples. Acrylamide is highly polar, which induces high variability during the SPE step and therefore requires the use of ¹³C-labeled acrylamide as an internal standard to guarantee the accuracy and robustness of the method (uncertainty about 25% (k = 2) at the limit of quantification). The specificity of the method and the stability of acrylamide were studied for these environmental media, and it was shown that the method is suitable for measuring acrylamide in environmental studies.
Terzić, Jelena; Popović, Igor; Stajić, Ana; Tumpa, Anja; Jančić-Stojanović, Biljana
2016-06-05
This paper deals with the development of a hydrophilic interaction liquid chromatography (HILIC) method for the analysis of bilastine and its degradation impurities following the Analytical Quality by Design approach. It is the first time that a method for bilastine and its impurities has been proposed. The main objective was to identify conditions where an adequate separation could be achieved in minimal analysis time within a robust region. The critical process parameters with the most influence on method performance were defined as the acetonitrile content in the mobile phase, the pH of the aqueous phase, and the ammonium acetate concentration in the aqueous phase. A Box-Behnken design was applied to establish the relationship between critical process parameters and critical quality attributes. The defined mathematical models and Monte Carlo simulations were used to identify the design space. A fractional factorial design was applied for experimental robustness testing, and the method was validated to verify the adequacy of the selected optimal conditions: a Luna® HILIC analytical column (100 mm × 4.6 mm, 5 μm particle size); a mobile phase of acetonitrile-aqueous phase (50 mM ammonium acetate, pH adjusted to 5.3 with glacial acetic acid) (90.5:9.5, v/v); column temperature 30 °C; mobile phase flow rate 1 mL min⁻¹; detection wavelength 275 nm.
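By way of illustration, a three-factor Box-Behnken design and a Monte Carlo scan of the fitted quadratic response surface can be sketched as follows. The response function, its coefficients and the acceptance criterion are invented for the example; they are not the bilastine data:

```python
import itertools
import numpy as np

def box_behnken_3():
    """Three-factor Box-Behnken design in coded units: all (+/-1, +/-1)
    combinations for each pair of factors, third factor at 0, plus a centre point."""
    pts = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product([-1, 1], repeat=2):
            p = [0.0, 0.0, 0.0]
            p[i], p[j] = a, b
            pts.append(p)
    pts.append([0.0, 0.0, 0.0])
    return np.array(pts)

def quadratic_terms(X):
    """Full quadratic model matrix: 1, x_i, x_i^2, x_i*x_j."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1 * x2, x1 * x3, x2 * x3])

X = box_behnken_3()                        # 13 experimental runs
# Hypothetical response, e.g. a resolution between two critical peaks
y = 2.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 0] * X[:, 1] - 0.4 * X[:, 2]**2
beta, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)

# Monte Carlo over the coded design region: fraction of factor settings
# predicted to meet an (assumed) acceptance criterion of response >= 1.5
rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, size=(20000, 3))
ok = quadratic_terms(Z) @ beta >= 1.5
```

In a real AQbD workflow the Monte Carlo step would also propagate model-coefficient uncertainty; here it only scans the point-estimate surface.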
Life cycle management of analytical methods.
Parr, Maria Kristina; Schmidt, Alexander H
2018-01-05
In modern process management, the life cycle concept gains more and more importance. It focuses on the total cost of a process from investment through operation to final retirement. In recent years there has been increasing interest in applying this concept to analytical procedures as well. The life cycle of an analytical method consists of design, development, validation (including instrument qualification, continuous method performance verification, and method transfer) and, finally, retirement of the method. Regulatory bodies have also increased their awareness of life cycle management for analytical methods. Thus, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), as well as the United States Pharmacopeial Forum, are discussing new guidelines that include life cycle management of analytical methods. The US Pharmacopeia (USP) Validation and Verification expert panel has already proposed a new General Chapter 〈1220〉 "The Analytical Procedure Lifecycle" for integration into the USP. A growing interest in life cycle management is also seen in the non-regulated environment. Quality-by-design-based method development results in increased method robustness. This reduces the effort needed for method performance verification and post-approval changes, and minimizes the risk of method-related out-of-specification results, which strongly contributes to reduced costs of the method over its life cycle.
Sheldon, E M; Downar, J B
2000-08-15
Novel approaches to the development of analytical procedures for monitoring incoming starting material in support of chemical/pharmaceutical processes are described. High technology solutions were utilized for timely process development and preparation of high quality clinical supplies. A single robust HPLC method was developed and characterized for the analysis of the key starting material from three suppliers. Each supplier used a different process for the preparation of this material and, therefore, each suppliers' material exhibited a unique impurity profile. The HPLC method utilized standard techniques acceptable for release testing in a QC/manufacturing environment. An automated experimental design protocol was used to characterize the robustness of the HPLC method. The method was evaluated for linearity, limit of quantitation, solution stability, and precision of replicate injections. An LC-MS method that emulated the release HPLC method was developed and the identities of impurities were mapped between the two methods.
Han, Sung-Ho; Farshchi-Heydari, Salman; Hall, David J
2010-01-20
A novel time-domain optical method to reconstruct the relative concentration, lifetime, and depth of a fluorescent inclusion is described. We establish an analytical method for estimating these parameters for a localized fluorescent object directly from simple evaluations of the continuous-wave intensity, exponential decay, and temporal position of the maximum of the fluorescence temporal point-spread function. Since the more complex full inversion process is not involved, this method permits robust and fast processing when exploring the properties of a fluorescent inclusion. The method is confirmed by in vitro and in vivo experiments.
USDA-ARS?s Scientific Manuscript database
Condensed tannins (CTs) consist of oligomers and polymers of flavan-3-ol subunits varying in hydroxylation patterns, cis- and trans-configuration of C-ring substituents, interflavan bond connections, mean degree of polymerization (mDP), and extent of esterification. Robust analytical methods to dete...
Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L
2012-10-01
Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
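The standard-addition idea described above reduces, in its simplest linear form, to extrapolating the spiked calibration line back to the x-axis: the magnitude of the x-intercept estimates the endogenous level. A toy sketch with invented numbers (not the paper's amino acid data):

```python
import numpy as np

# Simulated responses for a plasma sample spiked with known analyte amounts.
# Assumed linear response: signal = k * (endogenous + spike); the response
# factor k = 3.1 and the endogenous level 0.8 units are hypothetical,
# and the data are noise-free for clarity.
spikes = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
signal = 3.1 * (0.8 + spikes)

# Fit signal = slope * spike + intercept; the endogenous concentration
# is intercept / slope (the magnitude of the x-axis intercept).
slope, intercept = np.polyfit(spikes, signal, 1)
endogenous = intercept / slope
```

With real data, departures from linearity over the spiking range are exactly the parallelism failures the abstract warns about.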
Foster, Scott D; Feutry, Pierre; Grewe, Peter M; Berry, Oliver; Hui, Francis K C; Davies, Campbell R
2018-06-26
Delineating naturally occurring and self-sustaining sub-populations (stocks) of a species is an important task, especially for species harvested from the wild. Despite its central importance to natural resource management, analytical methods used to delineate stocks are often, and increasingly, borrowed from superficially similar analytical tasks in human genetics even though models specifically for stock identification have been previously developed. Unfortunately, the analytical tasks in resource management and human genetics are not identical: questions about humans are typically aimed at inferring ancestry (often referred to as 'admixture') rather than delineating breeding stocks. In this article, we argue, and show through simulation experiments and an analysis of yellowfin tuna data, that ancestral analysis methods are not always appropriate for stock delineation. We advocate a variant of a previously introduced and simpler model that identifies stocks directly. We also highlight that the computational aspects of the analysis, irrespective of the model, are difficult. We introduce some alternative computational methods and quantitatively compare these methods to each other and to established methods. We also present a method for quantifying uncertainty in model parameters and in assignment probabilities. In doing so, we demonstrate that point estimates can be misleading. One of the computational strategies presented here, based on an expectation-maximisation algorithm with judiciously chosen starting values, is robust and has a modest computational cost.
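The computational point about expectation-maximisation with judiciously chosen starting values can be illustrated on a deliberately simple model: a one-dimensional, two-component Gaussian mixture with known unit variances. All numbers are invented, and this is far simpler than a genetic stock model:

```python
import numpy as np

def em_gmm_1d(x, mu, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture (known unit variances).
    `mu` holds the starting means; returns fitted means and mixing weight."""
    w = 0.5
    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each observation
        p0 = w * np.exp(-0.5 * (x - mu[0])**2)
        p1 = (1 - w) * np.exp(-0.5 * (x - mu[1])**2)
        r = p0 / (p0 + p1)
        # M-step: responsibility-weighted means and mixing weight
        mu = [np.sum(r * x) / np.sum(r), np.sum((1 - r) * x) / np.sum(1 - r)]
        w = r.mean()
    return mu, w

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 700)])

# "Judicious" starts: spread the initial means over data quantiles
# rather than drawing them at random.
start = [np.quantile(x, 0.25), np.quantile(x, 0.75)]
mu, w = em_gmm_1d(x, start)
```

Quantile-based starts place each component inside a region of data mass, which is one simple way to avoid the poor local optima that random starts can hit.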
Robust estimation for ordinary differential equation models.
Cao, J; Wang, L; Xu, J
2011-12-01
Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data.
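A much-simplified robust-fitting sketch, not the authors' two-level parameter-cascading scheme: when the ODE has a closed-form solution, a Huber loss in scipy's least_squares already illustrates how outliers are downweighted relative to plain least squares. All numbers are invented:

```python
import numpy as np
from scipy.optimize import least_squares

# Simulated data from dx/dt = -theta * x with x(0) = 5 and theta = 0.7,
# plus small noise and two gross outliers.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 40)
y = 5.0 * np.exp(-0.7 * t) + rng.normal(0, 0.05, t.size)
y[[5, 20]] += 3.0                        # gross outliers

def residuals(theta):
    # Closed-form ODE solution, so no numerical integration is needed here
    return 5.0 * np.exp(-theta[0] * t) - y

fit_ls = least_squares(residuals, x0=[0.2]).x[0]                      # plain LS
fit_robust = least_squares(residuals, x0=[0.2],
                           loss="huber", f_scale=0.1).x[0]            # robust
```

The Huber loss grows linearly (rather than quadratically) for residuals beyond `f_scale`, so the two contaminated points pull the robust estimate far less than they pull the ordinary least-squares one.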
Robustness of fit indices to outliers and leverage observations in structural equation modeling.
Yuan, Ke-Hai; Zhong, Xiaoling
2013-06-01
Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices.
An analytic model for footprint dispersions and its application to mission design
NASA Technical Reports Server (NTRS)
Rao, J. R. Jagannatha; Chen, Yi-Chao
1992-01-01
This is the final report on our recent research activities that are complementary to those conducted by our colleagues, Professor Farrokh Mistree and students, in the context of the Taguchi method. We have studied the mathematical model that forms the basis of the Simulation and Optimization of Rocket Trajectories (SORT) program and developed an analytic method for determining mission reliability with a reduced number of flight simulations. This method can be incorporated in a design algorithm to mathematically optimize different performance measures of a mission, thus leading to a robust and easy-to-use methodology for mission planning and design.
Gika, Helen G; Theodoridis, Georgios A; Earll, Mark; Wilson, Ian D
2012-09-01
An approach to the determination of day-to-day analytical robustness of LC-MS-based methods for global metabolic profiling using a pooled QC sample is presented for the evaluation of metabonomic/metabolomic data. A set of 60 urine samples were repeatedly analyzed on five different days and the day-to-day reproducibility of the data obtained was determined. Multivariate statistical analysis was performed with the aim of evaluating variability and selected peaks were assessed and validated in terms of retention time stability, mass accuracy and intensity. The methodology enables the repeatability/reproducibility of extended analytical runs in large-scale studies to be determined, allowing the elimination of analytical (as opposed to biological) variability, in order to discover true patterns and correlations within the data. The day-to-day variability of the data revealed by this process suggested that, for this particular system, 3 days continuous operation was possible without the need for maintenance and cleaning. Variation was generally based on signal intensity changes over the 7-day period of the study, and was mainly a result of source contamination.
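A common way to act on pooled-QC repeatability data of this kind is to compute a per-feature relative standard deviation (RSD) across the repeated QC injections and discard irreproducible features before multivariate analysis. The intensity matrix, noise levels and 30% cut-off below are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical peak-intensity matrix: rows = repeated pooled-QC injections,
# columns = detected features (metabolites), with different noise per feature.
rng = np.random.default_rng(42)
true_levels = np.array([100.0, 250.0, 40.0, 500.0])
qc = true_levels * rng.normal(1.0, [0.05, 0.10, 0.45, 0.08], size=(12, 4))

# Relative standard deviation of each feature across the QC injections
rsd = qc.std(axis=0, ddof=1) / qc.mean(axis=0) * 100

# Keep only features reproducible enough for biological interpretation
# (a 30 % RSD cut-off is a common, though not universal, convention)
stable = rsd < 30.0
```

Filtering on QC RSD removes analytical variability so that patterns surviving in the biological samples are more likely to be real.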
Analytical study of robustness of a negative feedback oscillator by multiparameter sensitivity
2014-01-01
Background One of the distinctive features of biological oscillators such as circadian clocks and cell cycles is robustness, the ability to resume reliable operation in the face of different types of perturbations. In a previous study, we proposed multiparameter sensitivity (MPS) as an intelligible measure of robustness to fluctuations in kinetic parameters. Analytical solutions directly connect the mechanisms and kinetic parameters to dynamic properties such as period, amplitude and their associated MPSs. Although negative feedback loops are known as common structures in biological oscillators, analytical solutions have not been presented for a general model of negative feedback oscillators. Results We present analytical expressions for the period, amplitude and their associated MPSs for a general model of negative feedback oscillators. The analytical solutions are validated by comparison with numerical solutions. They explicitly show how the dynamic properties depend on the kinetic parameters. The ratio of a threshold to the amplitude has a strong impact on the period MPS. As the ratio approaches one, the MPS increases, indicating that the period becomes more sensitive to changes in kinetic parameters. We present the first mathematical proof that the distributed time-delay mechanism contributes to making the oscillation period robust to parameter fluctuations. The MPS decreases with an increase in the feedback loop length (i.e., the number of molecular species constituting the feedback loop). Conclusions Since a general model of negative feedback oscillators was employed, the results shown in this paper are expected to hold for many biological oscillators. This study strongly supports the hypothesis that phosphorylation of clock proteins contributes to the robustness of circadian rhythms. The analytical solutions give synthetic biologists clues for designing gene oscillators with robust, desired periods.
PMID:25605374
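One common form of multiparameter sensitivity (the paper's exact definition may differ) is the root-sum-square of the normalized single-parameter sensitivities Sᵢ = (kᵢ/Q) ∂Q/∂kᵢ, i.e. the logarithmic sensitivities. A finite-difference sketch on an invented toy period law whose log-sensitivities are known exactly:

```python
import numpy as np

def mps(Q, k, h=1e-6):
    """Root-sum-square of normalized sensitivities S_i = (k_i/Q) dQ/dk_i,
    estimated by central finite differences with a relative step h."""
    q0 = Q(k)
    S = []
    for i in range(len(k)):
        kp, km = k.copy(), k.copy()
        kp[i] += h * k[i]
        km[i] -= h * k[i]
        S.append((Q(kp) - Q(km)) / (2 * h * q0))   # equals (k_i/Q) dQ/dk_i
    return np.sqrt(np.sum(np.square(S)))

# Toy "period" law T = c / sqrt(k1 * k2): each log-sensitivity is -1/2,
# so the MPS should be sqrt(0.25 + 0.25) = sqrt(0.5)
T = lambda k: 2.0 / np.sqrt(k[0] * k[1])
val = mps(T, np.array([0.8, 1.3]))
```

The same finite-difference estimator works when the period comes from simulation rather than a formula, which is exactly where the paper's analytical expressions save effort.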
Posch, Tjorben Nils; Pütz, Michael; Martin, Nathalie; Huhn, Carolin
2015-01-01
In this review we introduce the advantages and limitations of electromigrative separation techniques in forensic toxicology. We thus present a summary of illustrative studies and our own experience in the field together with established methods from the German Federal Criminal Police Office rather than a complete survey. We focus on the analytical aspects of analytes' physicochemical characteristics (e.g. polarity, stereoisomers) and analytical challenges including matrix tolerance, separation from compounds present in large excess, sample volumes, and orthogonality. For these aspects we want to reveal the specific advantages over more traditional methods. Both detailed studies and profiling and screening studies are taken into account. Care was taken to nearly exclusively document well-validated methods outstanding for the analytical challenge discussed. Special attention was paid to aspects exclusive to electromigrative separation techniques, including the use of the mobility axis, the potential for on-site instrumentation, and the capillary format for immunoassays. The review concludes with an introductory guide to method development for different separation modes, presenting typical buffer systems as starting points for different analyte classes. The objective of this review is to provide an orientation for users in separation science considering using capillary electrophoresis in their laboratory in the future.
Robust Adaptive Dynamic Programming of Two-Player Zero-Sum Games for Continuous-Time Linear Systems.
Fu, Yue; Fu, Jun; Chai, Tianyou
2015-12-01
In this brief, an online robust adaptive dynamic programming algorithm is proposed for two-player zero-sum games of continuous-time unknown linear systems with matched uncertainties, which are functions of system outputs and states of a completely unknown exosystem. The online algorithm is developed using the policy iteration (PI) scheme with only one iteration loop. A new analytical method is proposed for convergence proof of the PI scheme. The sufficient conditions are given to guarantee globally asymptotic stability and suboptimal property of the closed-loop system. Simulation studies are conducted to illustrate the effectiveness of the proposed method.
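The policy iteration scheme is, in spirit, a relative of Kleinman's classical algorithm for the one-player LQR problem, in which each policy-evaluation step solves a Lyapunov equation and each improvement step updates the gain. A sketch of that simpler, known-model, single-player case (matrices invented; the two-player, unknown-dynamics setting of the brief is substantially harder):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Kleinman-style policy iteration for continuous-time LQR
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # open-loop stable, so K = 0 is admissible
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))                        # initial stabilizing gain
for _ in range(20):
    Ac = A - B @ K
    # Policy evaluation: solve Ac^T P + P Ac = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement: K = R^{-1} B^T P
    K = np.linalg.solve(R, B.T @ P)

# Direct algebraic Riccati solution for comparison
P_are = solve_continuous_are(A, B, Q, R)
```

Each iteration only requires a linear (Lyapunov) solve, which is what makes PI-style schemes attractive as a basis for online, data-driven variants.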
Study of Cyclodextrin-Based Polymers to Extract Patulin from Apple Juice
USDA-ARS?s Scientific Manuscript database
Synthetic sorbents offer a means to develop more robust materials to detect analytes in complex matrices, including methods to detect naturally occurring contaminants in agricultural commodities. Patulin is a mold metabolite associated with rotting apples and poses health risks to humans and animal...
NASA Astrophysics Data System (ADS)
Cao, Lu; Verbeek, Fons J.
2012-03-01
In computer graphics and visualization, reconstruction of a 3D surface from a point cloud is an important research area. As the surface contains information that can be measured, i.e. expressed in features, surface reconstruction is potentially important for applications in bio-imaging. Opportunities in this application area are the motivation for this study. In the past decade, a number of algorithms for surface reconstruction have been proposed. Generally speaking, these methods fall into two categories: explicit representation and implicit approximation. Most of the aforementioned methods are firmly based in theory; however, so far, no analytical evaluation of these methods has been presented. Evaluation has typically been qualitative, by visual inspection. Through evaluation we search for a method that precisely preserves surface characteristics and is robust in the presence of noise. The outcome will be used to improve reliability in surface reconstruction of biological models. We therefore take an analytical approach, selecting features as surface descriptors and measuring these features under varying conditions. We selected surface distance, surface area and surface curvature as three major features for comparing the quality of the surfaces created by the different algorithms. Our starting point has been ground-truth values obtained from analytical shapes such as the sphere and the ellipsoid. In this paper we present four classical surface reconstruction methods from the two categories mentioned above: the Power Crust, the Robust Cocone, the Fourier-based method and the Poisson reconstruction method. The results obtained from our experiments indicate that the Poisson reconstruction method performs best in the presence of noise.
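For instance, the surface-distance feature against an analytic ground truth is trivial to evaluate for the sphere: a point's error is simply the deviation of its radius from the true radius. A small sketch with synthetic radial noise standing in for a reconstructed surface (all numbers invented):

```python
import numpy as np

# Noisy point samples of a unit sphere (stand-in for a reconstructed surface)
rng = np.random.default_rng(7)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)        # exact unit sphere
noisy = pts * (1.0 + rng.normal(0, 0.01, (5000, 1)))     # 1 % radial noise

# Surface-distance feature against the analytic ground truth |r - 1|
dist = np.abs(np.linalg.norm(noisy, axis=1) - 1.0)
mean_err, max_err = dist.mean(), dist.max()
```

Surface area and curvature comparisons follow the same pattern, with the analytic sphere or ellipsoid values as the reference.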
El-Awady, Mohamed; Belal, Fathalla; Pyell, Ute
2013-09-27
The analysis of hydrophobic basic analytes by micellar electrokinetic chromatography (MEKC) is usually challenging because of the tendency of these analytes to be adsorbed onto the inner capillary wall, in addition to the difficulty of separating these compounds as they exhibit extremely high retention factors. A robust and reliable method for the simultaneous determination of loratadine (LOR) and its major metabolite desloratadine (DSL) is developed based on cyclodextrin-modified micellar electrokinetic chromatography (CD-MEKC) with an acidic sample matrix and a basic background electrolyte (BGE). The influence of the sample matrix on the reachable focusing efficiency is studied. It is shown that the application of a low-pH sample solution mitigates problems associated with the low solubility of hydrophobic basic analytes in aqueous solution while having advantages with regard to on-line focusing. Moreover, the use of a basic BGE reduces the adsorption of these analytes in the separation compartment. The separation of the studied analytes is achieved in less than 7 min using a BGE consisting of 10 mmol L⁻¹ disodium tetraborate buffer, pH 9.30, containing 40 mmol L⁻¹ SDS and 20 mmol L⁻¹ hydroxypropyl-β-CD, while the sample solution is composed of 10 mmol L⁻¹ phosphoric acid, pH 2.15. A full validation study of the developed method based on pharmacopeial guidelines is performed. The method is successfully applied to the analysis of the studied drugs in tablets without interference from tablet additives, as well as to the analysis of spiked human urine without any sample pretreatment. Furthermore, DSL can be detected as an impurity in LOR bulk powder at the stated pharmacopeial limit (0.1%, w/w). The selectivity of the developed method allows the analysis of LOR and DSL in combination with the co-formulated drug pseudoephedrine. It is shown that in CD-MEKC with a basic BGE, solute-wall interactions are effectively suppressed, allowing the development of efficient and precise methods for the determination of hydrophobic basic analytes, whereas the use of a low-pH sample solution has a positive impact on the attainable sweeping efficiency without compromising peak shape and resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Ehwang; Gao, Yuqian; Wu, Chaochao
Here, mass spectrometry (MS) based targeted proteomic methods such as selected reaction monitoring (SRM) are becoming the method of choice for preclinical verification of candidate protein biomarkers. The Clinical Proteomic Tumor Analysis Consortium (CPTAC) of the National Cancer Institute has investigated the standardization and analytical validation of SRM assays and demonstrated robust analytical performance on different instruments across different laboratories. An Assay Portal has also been established by CPTAC to provide the research community a resource consisting of a large set of targeted MS-based assays, and a depository to share assays publicly, provided that assays meet the guidelines proposed by CPTAC. Herein, we report 98 SRM assays covering 70 candidate protein biomarkers previously reported as associated with ovarian cancer that have been thoroughly characterized according to the CPTAC Assay Characterization Guidance Document. The experiments, methods and results for characterizing these SRM assays for their MS response, repeatability, selectivity, stability, and reproducible detection of endogenous analytes are described in detail.
Jenke, Dennis; Sadain, Salma; Nunez, Karen; Byrne, Frances
2007-01-01
The performance of an ion chromatographic method for measuring citrate and phosphate in pharmaceutical solutions is evaluated. Performance characteristics examined include accuracy, precision, specificity, response linearity, robustness, and the ability to meet system suitability criteria. In general, the method is found to be robust within reasonable deviations from its specified operating conditions. Analytical accuracy is typically 100 ± 3%, and short-term precision is not more than 1.5% relative standard deviation. The instrument response is linear over a range of 50% to 150% of the standard preparation target concentrations (12 mg/L for phosphate and 20 mg/L for citrate), and the results obtained using a single-point standard versus a calibration curve are essentially equivalent. A small analytical bias is observed and ascribed to the relative purity of the differing salts, used as raw materials in tested finished products and as reference standards in the analytical method. The assay is specific in that no phosphate or citrate peaks are observed in a variety of method-related solutions and matrix blanks (with and without autoclaving). The assay with manual preparation of the eluents is sensitive to the composition of the eluent in the sense that the eluent must be effectively degassed and protected from CO₂ ingress during use. In order for the assay to perform effectively, extensive system equilibration and conditioning is required. However, a properly conditioned and equilibrated system can be used to test a number of samples via chromatographic runs that include many (> 50) injections.
NASA Astrophysics Data System (ADS)
Gupta, Lokesh Kumar
2012-11-01
Seven process-related impurities in the atorvastatin calcium drug substance were identified by LC-MS. The structures of the impurities were confirmed by modern spectroscopic techniques such as ¹H NMR and IR and by physicochemical studies conducted using synthesized authentic reference compounds. The synthesized reference samples of the impurity compounds were used for quantitative HPLC determination. The impurities were detected by a newly developed gradient, reversed-phase high performance liquid chromatographic (HPLC) method. The system suitability of the HPLC analysis established the validity of the separation. The analytical method was validated according to International Conference on Harmonisation (ICH) guidelines with respect to specificity, precision, accuracy, linearity, robustness and stability of analytical solutions, demonstrating the power of the newly developed HPLC method.
Analytical redundancy and the design of robust failure detection systems
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The Failure Detection and Identification (FDI) process is viewed as consisting of two stages: residual generation and decision making. It is argued that a robust FDI system can be achieved by designing a robust residual generation process. Analytical redundancy, the basis for residual generation, is characterized in terms of a parity space. Using the concept of parity relations, residuals can be generated in a number of ways, and the design of a robust residual generation process can be formulated as a minimax optimization problem. An example is included to illustrate this design methodology. Previously announced in STAR as N83-20653.
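In the discrete-time case, a parity relation can be sketched concretely: stack a window of outputs, take a vector in the left null space of the extended observability matrix, and the resulting residual vanishes for the fault-free plant but reacts to a sensor fault. The second-order plant below is invented for illustration (no input, for brevity):

```python
import numpy as np
from scipy.linalg import null_space

# Discrete-time plant x_{k+1} = A x_k, y_k = C x_k
A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])

# Stack 3 output samples: [y_k; y_{k+1}; y_{k+2}] = O x_k
O = np.vstack([C, C @ A, C @ A @ A])        # extended observability matrix
v = null_space(O.T)[:, 0]                   # parity vector: v^T O = 0

# Fault-free simulation: residual r = v^T [y_k, y_{k+1}, y_{k+2}] is ~0
x = np.array([1.0, -0.5])
ys = []
for _ in range(3):
    ys.append((C @ x)[0])
    x = A @ x
r_ok = v @ np.array(ys)

# A sensor fault (bias on the middle sample) drives the residual away from 0
ys_fault = np.array(ys) + np.array([0.0, 0.5, 0.0])
r_fault = v @ ys_fault
```

Different parity vectors emphasize different fault directions, which is where the minimax design freedom discussed in the abstract enters.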
Zhou, Xu; Yang, Long; Tan, Xiaoping; Zhao, Genfu; Xie, Xiaoguang; Du, Guanben
2018-07-30
Prostate-specific antigen (PSA) is the most significant biomarker for the screening of prostate cancer in human serum. However, most methods for the detection of PSA require major laboratories, precise analytical instruments and complicated operations. Currently, the design and development of satisfactory electrochemical biosensors based on biomimetic materials (e.g. synthetic receptors) and nanotechnology is highly desired. Thus, we focused on the combination of molecular recognition and versatile nanomaterials in electrochemical devices for advancing their analytical performance and robustness. Herein, using the prepared multifunctional hydroxyl pillar[5]arene@gold nanoparticles@graphitic carbon nitride (HP5@AuNPs@g-C₃N₄) hybrid nanomaterial as a robust biomimetic element, a high-performance electrochemical immunosensor for the detection of PSA was constructed. The as-prepared immunosensor, with the competitive advantages of low cost, simple preparation and fast detection, exhibited remarkable robustness, ultra-sensitivity, excellent selectivity and reproducibility. The limit of detection (LOD) and linear range were 0.12 pg mL⁻¹ (S/N = 3) and 0.0005-10.00 ng mL⁻¹, respectively. These satisfying results provide a promising approach for the clinical detection of PSA in human serum.
Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill
2012-01-01
In video analytics, robust observation detection is very important as the content of the videos varies a lot, especially for tracking implementation. Contrary to the image processing field, the problems of blurring, moderate deformation, low illumination surroundings, illumination change and homogeneous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD, the deterministic and probabilistic approaches, have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a 2-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches with Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. Due to its heavy processing requirement, this algorithm is best implemented as a complement to other, simpler detection methods. PMID:23202226
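The probabilistic matching step of the abstract above can be sketched in outline. Assuming independent Poisson-distributed histogram bins (a simplification of the RGB/HSV histogram models described in the abstract; the function names are illustrative, not from the paper), maximum-likelihood matching reduces to picking the reference patch whose bin rates best explain the observed counts:

```python
from math import lgamma
import numpy as np

def poisson_log_likelihood(candidate_counts, reference_rates):
    """Log-probability of observed bin counts under independent Poisson bins,
    with the reference patch's bin values taken as the Poisson rates."""
    rates = np.maximum(np.asarray(reference_rates, float), 1e-9)  # avoid log(0)
    k = np.asarray(candidate_counts, float)
    log_fact = np.array([lgamma(v + 1.0) for v in k])  # log(k!)
    return float(np.sum(k * np.log(rates) - rates - log_fact))

def best_match(observed_hist, reference_hists):
    """Maximum-likelihood patch matching: pick the reference histogram
    that best explains the observed histogram."""
    scores = [poisson_log_likelihood(observed_hist, r) for r in reference_hists]
    return int(np.argmax(scores))
```

The paper then smooths the matched patch's position and size, which is omitted here.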
Martinc, Boštjan; Roškar, Robert; Grabnar, Iztok; Vovk, Tomaž
2014-07-01
Therapeutic drug monitoring (TDM) of antiepileptic drugs (AEDs) has been recognized as a useful tool in management of epilepsy. We developed a simple analytical method for simultaneous determination of four second generation AEDs, including gabapentin (GBP), pregabalin (PGB), vigabatrin (VGB), and topiramate (TOP). Analytes were extracted from human plasma using universal solid phase extraction, derivatized with 4-chloro-7-nitrobenzofurazan (NBD-Cl) and analyzed by HPLC with fluorescence detection. Using mass spectrometry we confirmed that NBD-Cl reacts with the sulfamate group of TOP similarly as with the amine group of the other three analytes. The method is linear (r2 > 0.998) across the investigated analytical ranges (0.375-30.0 μg/mL for GBP, PGB, and VGB; 0.50-20.0 μg/mL for TOP). Intraday and interday precision do not exceed 9.40%. The accuracy is from 95.6% to 106%. The recovery is higher than 80.6%, and the lower limit of quantification is at least 0.5 μg/mL. The method is selective and robust. For TOP determination the method was compared to a previously published method and the results obtained by the two methods were in good agreement. The developed method is suitable for routine TDM. Copyright © 2014 Elsevier B.V. All rights reserved.
Yun, Changhong; Dashwood, Wan-Mohaiza; Kwong, Lawrence N; Gao, Song; Yin, Taijun; Ling, Qinglan; Singh, Rashim; Dashwood, Roderick H; Hu, Ming
2018-01-30
An accurate and reliable UPLC-MS/MS method is reported for the quantification of endogenous Prostaglandin E2 (PGE2) in rat colonic mucosa and polyps. This method adopted the "surrogate analyte plus authentic bio-matrix" approach, using two different stable isotopically labeled analogs: PGE2-d9 as the surrogate analyte and PGE2-d4 as the internal standard. A quantitative standard curve was constructed with the surrogate analyte in colonic mucosa homogenate, and the method was successfully validated with the authentic bio-matrix. Concentrations of endogenous PGE2 in both normal and inflammatory tissue homogenates were back-calculated based on the regression equation. Because there is no endogenous interference with the surrogate analyte determination, the specificity was particularly good. By using authentic bio-matrix for validation, the matrix effect and extraction recovery are identical for the quantitative standard curve and the actual samples, which notably increased the assay accuracy. The method is easy, fast, robust and reliable for colon PGE2 determination. This "surrogate analyte" approach was applied to measure PGE2, one of the strong biomarkers of colorectal cancer, in the mucosa and polyps of Pirc rats (an Apc-mutant rat kindred that models human FAP). A similar concept could be applied to endogenous biomarkers in other tissues. Copyright © 2017 Elsevier B.V. All rights reserved.
Developments in Cylindrical Shell Stability Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Starnes, James H., Jr.
1998-01-01
Today high-performance computing systems and new analytical and numerical techniques enable engineers to explore the use of advanced materials for shell design. This paper reviews some of the historical developments of shell buckling analysis and design. The paper concludes by identifying key research directions for reliable and robust methods development in shell stability analysis and design.
Martín-Sabroso, Cristina; Tavares-Fernandes, Daniel Filipe; Espada-García, Juan Ignacio; Torres-Suárez, Ana Isabel
2013-12-15
In this work a protocol is developed to validate analytical procedures for the quantification of drug substances formulated in polymeric systems, covering both drug entrapped in the polymeric matrix (assay:content test) and drug released from the systems (assay:dissolution test). This protocol is applied to the validation of two isocratic HPLC analytical procedures for the analysis of dexamethasone phosphate disodium microparticles for parenteral administration. Preparation of authentic samples and artificially "spiked" and "unspiked" samples is described. Specificity (the ability to quantify dexamethasone phosphate disodium in the presence of constituents of the dissolution medium and other microparticle constituents), linearity, accuracy and precision are evaluated, in the range from 10 to 50 μg mL(-1) in the assay:content test procedure and from 0.25 to 10 μg mL(-1) in the assay:dissolution test procedure. The robustness of the analytical method to extract drug from microparticles is also assessed. The validation protocol developed allows us to conclude that both analytical methods are suitable for their intended purpose, but the lack of proportionality of the assay:dissolution analytical method should be taken into account. The validation protocol designed in this work could be applied to the validation of any analytical procedure for the quantification of drugs formulated in controlled release polymeric microparticles. Copyright © 2013 Elsevier B.V. All rights reserved.
Fallback options for airgap sensor fault of an electromagnetic suspension system
NASA Astrophysics Data System (ADS)
Michail, Konstantinos; Zolotas, Argyrios C.; Goodall, Roger M.
2013-06-01
The paper presents a method to recover the performance of an electromagnetic suspension under an airgap sensor fault. The proposed control scheme is a combination of classical control loops, a Kalman estimator and analytical redundancy (for the airgap signal). In this way redundant airgap sensors are not essential for reliable operation of the system. When the airgap sensor fails, the required signal is recovered using a combination of the Kalman estimator and analytical redundancy. The performance of the suspension is optimised using genetic algorithms, and some preliminary robustness issues related to load and operating airgap variations are discussed. Simulations on a realistic model of this type of suspension illustrate the efficacy of the proposed sensor-tolerant control method.
A general statistical test for correlations in a finite-length time series.
Hanson, Jeffery A; Yang, Haw
2008-06-07
The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
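The two ways of calculating the autocorrelation mentioned in the abstract above can be sketched as follows. Note that when normalized identically the two estimators give the same point estimates; the paper's distinction concerns their uncertainty characteristics, not their values. This is a generic illustration, not the authors' code:

```python
import numpy as np

def acf_moving_average(x, max_lag):
    """Direct (moving-average) autocorrelation estimate at lags 0..max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / np.dot(x, x)
                     for k in range(max_lag + 1)])

def acf_fft(x, max_lag):
    """Fourier-transform autocorrelation estimate via the Wiener-Khinchin
    relation; zero-padding to 2n avoids circular wrap-around."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)
    acov = np.fft.irfft(f * np.conj(f))[:max_lag + 1]
    return acov / acov[0]
```

For long series the FFT route is O(n log n) versus O(n·max_lag) for the direct sum.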
Analysis of gene network robustness based on saturated fixed point attractors
2014-01-01
The analysis of gene network robustness to noise and mutation is important for fundamental and practical reasons. Robustness refers to the stability of the equilibrium expression state of a gene network to variations of the initial expression state and network topology. Numerical simulation of these variations is commonly used for the assessment of robustness. Since there exists a great number of possible gene network topologies and initial states, even millions of simulations may be still too small to give reliable results. When the initial and equilibrium expression states are restricted to being saturated (i.e., their elements can only take values 1 or −1 corresponding to maximum activation and maximum repression of genes), an analytical gene network robustness assessment is possible. We present this analytical treatment based on determination of the saturated fixed point attractors for sigmoidal function models. The analysis can determine (a) for a given network, which and how many saturated equilibrium states exist and which and how many saturated initial states converge to each of these saturated equilibrium states and (b) for a given saturated equilibrium state or a given pair of saturated equilibrium and initial states, which and how many gene networks, referred to as viable, share this saturated equilibrium state or the pair of saturated equilibrium and initial states. We also show that the viable networks sharing a given saturated equilibrium state must follow certain patterns. These capabilities of the analytical treatment make it possible to properly define and accurately determine robustness to noise and mutation for gene networks. Previous network research conclusions drawn from performing millions of simulations follow directly from the results of our analytical treatment. Furthermore, the analytical results provide criteria for the identification of model validity and suggest modified models of gene network dynamics. 
The yeast cell-cycle network is used as an illustration of the practical application of this analytical treatment. PMID:24650364
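In the saturated limit described in the abstract above, the sigmoidal dynamics reduce to a sign rule, so a saturated equilibrium is a state in {−1, +1}^n in which every gene's net input reinforces its own state. A minimal sketch of this fixed-point test (the function and the toy network are illustrative, not taken from the paper):

```python
from itertools import product
import numpy as np

def saturated_fixed_points(W):
    """Enumerate saturated states s in {-1, +1}^n with sign(W s) == s,
    i.e. fixed points of the sigmoidal dynamics in its saturated limit."""
    n = W.shape[0]
    fixed = []
    for s in product([-1, 1], repeat=n):
        s = np.array(s)
        h = W @ s
        if np.all(h * s > 0):  # each gene's input reinforces its own state
            fixed.append(tuple(s))
    return fixed

# Toy mutual-repression switch: each gene represses the other,
# giving two stable saturated equilibria (a bistable switch).
W = np.array([[0, -1],
              [-1, 0]])
```

Exhaustive enumeration is only feasible for small n; the paper's analytical treatment is what removes the need for such brute-force simulation.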
Peraman, R.; Bhadraya, K.; Reddy, Y. Padmanabha; Reddy, C. Surayaprakash; Lokesh, T.
2015-01-01
By considering the current regulatory requirements for analytical method development, a reversed-phase high performance liquid chromatographic method for routine analysis of etofenamate in dosage form has been optimized using the analytical quality by design approach. Unlike the routine approach, the present study was initiated with an understanding of the quality target product profile, the analytical target profile and a risk assessment for the method variables that affect the method response. A liquid chromatography system equipped with a C18 column (250×4.6 mm, 5 μ), a binary pump and a photodiode array detector was used in this work. The experiments were planned by central composite design, which could save time, reagents and other resources. Sigma Tech software was used to plan and analyse the experimental observations and to obtain a quadratic process model. The process model was used to predict the retention time. The predictions from the contour diagram for retention time were verified experimentally and agreed with the observed data. The optimized method used a flow rate of 1.2 ml/min and a mobile phase of methanol and 0.2% triethylamine in water at 85:15, % v/v, with the pH adjusted to 6.5. The method was validated and verified for targeted method performance, robustness and system suitability during method transfer. PMID:26997704
Snee, Lawrence W.
2002-01-01
40Ar/39Ar geochronology is an experimentally robust and versatile method for constraining time and temperature in geologic processes. The argon method is the most broadly applied in mineral-deposit studies. Standard analytical methods and formulations exist, making the fundamentals of the method well defined. A variety of graphical representations exist for evaluating argon data. A broad range of minerals found in mineral deposits, alteration zones, and host rocks commonly is analyzed to provide age, temporal duration, and thermal conditions for mineralization events and processes. All are discussed in this report. The usefulness of and evolution of the applicability of the method are demonstrated in studies of the Panasqueira, Portugal, tin-tungsten deposit; the Cornubian batholith and associated mineral deposits, southwest England; the Red Mountain intrusive system and associated Urad-Henderson molybdenum deposits; and the Eastern Goldfields Province, Western Australia.
NASA Astrophysics Data System (ADS)
Jabbari, Ali
2018-01-01
Surface inset permanent magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique for mitigating pulsating torque components in permanent magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary in order to calculate machine performance. An exact analytical method for magnetic vector potential calculation in surface inset permanent magnet machines considering magnet segmentation is proposed in this paper. The analytical method is based on the resolution of the Laplace and Poisson equations, as well as Maxwell's equations, in polar coordinates using the sub-domain method. One of the main contributions of the paper is to derive an expression for the magnetic vector potential in the segmented PM region by using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface inset segmented magnet motors under open-circuit and on-load conditions. The results of these models are validated against the FEM method.
Exploration of robust operating conditions in inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Tromp, John W.; Pomares, Mario; Alvarez-Prieto, Manuel; Cole, Amanda; Ying, Hai; Salin, Eric D.
2003-11-01
'Robust' conditions, as defined by Mermet and co-workers for inductively coupled plasma (ICP)-atomic emission spectrometry, minimize matrix effects on analyte signals, and are obtained by increasing power and reducing nebulizer gas flow. In ICP-mass spectrometry (MS), it is known that reduced nebulizer gas flow usually leads to more robust conditions such that matrix effects are reduced. In this work, robust conditions for ICP-MS have been determined by optimizing for accuracy in the determination of analytes in a multi-element solution with various interferents (Al, Ba, Cs, K, Na), by varying power, nebulizer gas flow, sample introduction rate and ion lens voltage. The goal of the work was to determine which operating parameters were the most important in reducing matrix effects, and whether different interferents yielded the same robust conditions. Reduction in nebulizer gas flow and in sample input rate led to a significantly decreased interference, while an increase in power seemed to have a lesser effect. Once the other parameters had been adjusted to their robust values, there was no additional improvement in accuracy attainable by adjusting the ion lens voltage. The robust conditions were universal, since, for all the interferents and analytes studied, the optimum was found at the same operating conditions. One drawback to the use of robust conditions was the slightly reduced sensitivity; however, in the context of 'intelligent' instruments, the concept of 'robust conditions' is useful in many cases.
Applications of Raman Spectroscopy in Biopharmaceutical Manufacturing: A Short Review.
Buckley, Kevin; Ryder, Alan G
2017-06-01
The production of active pharmaceutical ingredients (APIs) is currently undergoing its biggest transformation in a century. The changes are based on the rapid and dramatic introduction of protein- and macromolecule-based drugs (collectively known as biopharmaceuticals) and can be traced back to the huge investment in biomedical science (in particular in genomics and proteomics) that has been ongoing since the 1970s. Biopharmaceuticals (or biologics) are manufactured using biological-expression systems (such as mammalian, bacterial, insect cells, etc.) and have spawned a large (>€35 billion sales annually in Europe) and growing biopharmaceutical industry (BioPharma). The structural and chemical complexity of biologics, combined with the intricacy of cell-based manufacturing, imposes a huge analytical burden to correctly characterize and quantify both processes (upstream) and products (downstream). In small molecule manufacturing, advances in analytical and computational methods have been extensively exploited to generate process analytical technologies (PAT) that are now used for routine process control, leading to more efficient processes and safer medicines. In the analytical domain, biologic manufacturing is considerably behind and there is both a huge scope and need to produce relevant PAT tools with which to better control processes, and better characterize product macromolecules. Raman spectroscopy, a vibrational spectroscopy with a number of useful properties (nondestructive, non-contact, robustness) has significant potential advantages in BioPharma. Key among them are intrinsically high molecular specificity, the ability to measure in water, the requirement for minimal (or no) sample pre-treatment, the flexibility of sampling configurations, and suitability for automation. Here, we review and discuss a representative selection of the more important Raman applications in BioPharma (with particular emphasis on mammalian cell culture). 
The review shows that the properties of Raman have been successfully exploited to deliver unique and useful analytical solutions, particularly for online process monitoring. However, it also shows that its inherent susceptibility to fluorescence interference and the weakness of the Raman effect mean that it can never be a panacea. In particular, Raman-based methods are intrinsically limited by the chemical complexity and wide analyte-concentration-profiles of cell culture media/bioprocessing broths which limit their use for quantitative analysis. Nevertheless, with appropriate foreknowledge of these limitations and good experimental design, robust analytical methods can be produced. In addition, new technological developments such as time-resolved detectors, advanced lasers, and plasmonics offer potential of new Raman-based methods to resolve existing limitations and/or provide new analytical insights.
Song, Ehwang; Gao, Yuqian; Wu, Chaochao; ...
2017-07-19
Here, mass spectrometry (MS) based targeted proteomic methods such as selected reaction monitoring (SRM) are becoming the method of choice for preclinical verification of candidate protein biomarkers. The Clinical Proteomic Tumor Analysis Consortium (CPTAC) of the National Cancer Institute has investigated the standardization and analytical validation of the SRM assays and demonstrated robust analytical performance on different instruments across different laboratories. An Assay Portal has also been established by CPTAC to provide the research community a resource consisting of a large set of targeted MS-based assays, and a depository to share assays publicly, provided that the assays meet the guidelines proposed by CPTAC. Herein, we report 98 SRM assays covering 70 candidate protein biomarkers previously reported as associated with ovarian cancer that have been thoroughly characterized according to the CPTAC Assay Characterization Guidance Document. The experiments, methods and results for characterizing these SRM assays for their MS response, repeatability, selectivity, stability, and reproducible detection of endogenous analytes are described in detail.
Non-negative Tensor Factorization for Robust Exploratory Big-Data Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian; Vesselinov, Velimir Valentinov; Djidjev, Hristo Nikolov
Currently, large multidimensional datasets are being accumulated in almost every field. Data are: (1) collected by distributed sensor networks in real-time all over the globe, (2) produced by large-scale experimental measurements or engineering activities, (3) generated by high-performance simulations, and (4) gathered by electronic communications and social-network activities, etc. Simultaneous analysis of these ultra-large heterogeneous multidimensional datasets is often critical for scientific discoveries, decision-making, emergency response, and national and global security. The importance of such analyses mandates the development of the next generation of robust machine learning (ML) methods and tools for big-data exploratory analysis.
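As a concrete illustration of the family of methods named in the title above: the two-dimensional special case of non-negative tensor factorization is non-negative matrix factorization, which can be sketched with the classic Lee-Seung multiplicative updates. This is a generic sketch under that simplification, not the project's actual software:

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F subject to
    W, H >= 0; the matrix (2-way) special case of non-negative tensor
    factorization. Returns the non-negative factors W (m x r) and H (r x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The multiplicative form keeps the factors non-negative by construction, which is what makes the extracted components interpretable in exploratory analytics.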
Chaos and Robustness in a Single Family of Genetic Oscillatory Networks
Fu, Daniel; Tan, Patrick; Kuznetsov, Alexey; Molkov, Yaroslav I.
2014-01-01
Genetic oscillatory networks can be mathematically modeled with delay differential equations (DDEs). Interpreting genetic networks with DDEs gives a more intuitive understanding from a biological standpoint. However, it presents a problem mathematically, for DDEs are by construction infinite-dimensional and thus cannot be analyzed using methods common for systems of ordinary differential equations (ODEs). In our study, we address this problem by developing a method for reducing infinite-dimensional DDEs to two- and three-dimensional systems of ODEs. We find that the three-dimensional reductions provide qualitative improvements over the two-dimensional reductions. We find that the reducibility of a DDE corresponds to its robustness. For non-robust DDEs that exhibit high-dimensional dynamics, we calculate analytic dimension lines to predict the dependence of the DDEs' correlation dimension on parameters. From these lines, we deduce that the correlation dimension of non-robust DDEs grows linearly with the delay. On the other hand, for robust DDEs, we find that the period of oscillation grows linearly with delay. We find that DDEs with exclusively negative feedback are robust, whereas DDEs with feedback that changes its sign are not robust. We find that non-saturable degradation damps oscillations and narrows the range of parameter values for which oscillations exist. Finally, we deduce that natural genetic oscillators with highly-regular periods likely have solely negative feedback. PMID:24667178
SSD for R: A Comprehensive Statistical Package to Analyze Single-System Data
ERIC Educational Resources Information Center
Auerbach, Charles; Schudrich, Wendy Zeitlin
2013-01-01
The need for statistical analysis in single-subject designs presents a challenge, as analytical methods that are applied to group comparison studies are often not appropriate in single-subject research. "SSD for R" is a robust set of statistical functions with wide applicability to single-subject research. It is a comprehensive package…
Robust quantum control using smooth pulses and topological winding
NASA Astrophysics Data System (ADS)
Barnes, Edwin; Wang, Xin
2015-03-01
Perhaps the greatest challenge in achieving control of microscopic quantum systems is the decoherence induced by the environment, a problem which pervades experimental quantum physics and is particularly severe in the context of solid state quantum computing and nanoscale quantum devices because of the inherently strong coupling to the surrounding material. We present an analytical approach to constructing intrinsically robust driving fields which automatically cancel the leading-order noise-induced errors in a qubit's evolution exactly. We address two of the most common types of non-Markovian noise that arise in qubits: slow fluctuations of the qubit energy splitting and fluctuations in the driving field itself. We demonstrate our method by constructing robust quantum gates for several types of spin qubits, including phosphorous donors in silicon and nitrogen-vacancy centers in diamond. Our results constitute an important step toward achieving robust generic control of quantum systems, bringing their novel applications closer to realization. Work supported by LPS-CMTC.
Petruzziello, Filomena; Grand-Guillaume Perrenoud, Alexandre; Thorimbert, Anita; Fogwill, Michael; Rezzi, Serge
2017-07-18
Analytical solutions enabling the quantification of circulating levels of liposoluble micronutrients such as vitamins and carotenoids are currently limited to either a single analyte or a reduced panel of analytes. The requirement to use multiple approaches hampers the investigation of the biological variability on a large number of samples in a time- and cost-efficient manner. With the goal to develop high-throughput and robust quantitative methods for the profiling of micronutrients in human plasma, we introduce a novel, validated workflow for the determination of 14 fat-soluble vitamins and carotenoids in a single run. Automated supported liquid extraction was optimized and implemented to simultaneously parallelize 48 samples in 1 h, and the analytes were measured using ultrahigh-performance supercritical fluid chromatography coupled to tandem mass spectrometry in less than 8 min. An improved mass spectrometry interface hardware was built to minimize the post-decompression volume and to allow better control of the chromatographic effluent density on its route toward and into the ion source. In addition, a specific make-up solvent condition was developed to ensure the solubility of both analytes and matrix constituents after mobile phase decompression. The optimized interface resulted in improved spray plume stability and conserved matrix compound solubility, leading to enhanced hyphenation robustness while ensuring suitable analytical repeatability and improved detection sensitivity. The overall developed methodology gives recoveries within 85-115%, as well as within- and between-day coefficients of variation of 2 and 14%, respectively.
Tran, Ngoc Han; Chen, Hongjie; Do, Thanh Van; Reinhard, Martin; Ngo, Huu Hao; He, Yiliang; Gin, Karina Yew-Hoong
2016-10-01
A robust and sensitive analytical method was developed for the simultaneous analysis of 21 target antimicrobials in different environmental water samples. Both single SPE and tandem SPE cartridge systems were investigated to simultaneously extract multiple classes of antimicrobials. Experimental results showed that good extraction efficiencies (84.5-105.6%) were observed for the vast majority of the target analytes when extraction was performed using the tandem SPE cartridge (SB+HR-X) system under an extraction pH of 3.0. HPLC-MS/MS parameters were optimized for simultaneous analysis of all the target analytes in a single injection. Quantification of target antimicrobials in water samples was accomplished using 15 isotopically labeled internal standards (ILISs), which allowed the efficient compensation of the losses of target analytes during sample preparation and correction of matrix effects during UHPLC-MS/MS as well as instrument fluctuations in MS/MS signal intensity. The method quantification limit (MQL) for most target analytes based on SPE was below 5 ng/L for surface waters, 10 ng/L for treated wastewater effluents, and 15 ng/L for raw wastewater. The method was successfully applied to detect and quantify the occurrence of the target analytes in raw influent, treated effluent and surface water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta
2017-02-01
The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for the analysis of a large number of samples. A systematic Design of Experiments approach was applied to optimize ESI source parameters and to evaluate method robustness, yielding a rapid, stable and cost-effective assay. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/ml, and the assay was linear over the examined concentration range (5-2500 pg/ml; R2 > 0.98). The accuracies and intra- and interday precisions were within 15%. The stability data confirmed reliable behavior of DEX under the tested conditions. The Design of Experiments approach allowed fast and efficient analytical method development and validation, as well as reduced usage of the chemicals necessary for regular method optimization. The proposed technique was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.
An accurate computational method for the diffusion regime verification
NASA Astrophysics Data System (ADS)
Zhokh, Alexey A.; Strizhak, Peter E.
2018-04-01
The diffusion regime (sub-diffusive, standard, or super-diffusive) is defined by the order of the derivative in the corresponding transport equation. We develop an accurate computational method for the direct estimation of the diffusion regime. The method is based on the derivative order estimation using the asymptotic analytic solutions of the diffusion equation with the integer order and the time-fractional derivatives. The robustness and the computational cheapness of the proposed method are verified using the experimental methane and methyl alcohol transport kinetics through the catalyst pellet.
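A simpler, related diagnostic for the diffusion regime, shown here purely as an illustration and not as the authors' asymptotic-solution method, is the log-log slope of a power-law observable such as mean-squared displacement versus time: a slope below 1 indicates sub-diffusion, near 1 standard diffusion, and above 1 super-diffusion.

```python
import numpy as np

def diffusion_exponent(t, msd):
    """Least-squares slope of log(MSD) vs log(t). For MSD ~ t**alpha:
    alpha < 1 is sub-diffusive, alpha ~ 1 standard, alpha > 1 super-diffusive."""
    slope, _ = np.polyfit(np.log(t), np.log(msd), 1)
    return slope
```

The paper's method instead fits asymptotic analytic solutions of the integer-order and time-fractional diffusion equations to transport kinetics, which is more robust when only flux data (rather than trajectories) are available.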
Adjustment of pesticide concentrations for temporal changes in analytical recovery, 1992–2010
Martin, Jeffrey D.; Eberle, Michael
2011-01-01
Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ("spiked" QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as a percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in apparent environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report presents data and models related to the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as "pesticides") that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 through 2010 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Models of recovery, based on robust, locally weighted scatterplot smooths (lowess smooths) of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
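The adjustment described above can be sketched as follows, with a centered moving-median smooth standing in for the report's lowess models of recovery over time (the function names and window choice are illustrative):

```python
import numpy as np

def smooth_recovery(recoveries, window=5):
    """Centered moving-median stand-in for a lowess smooth of spiked-sample
    recoveries (in percent) ordered by sample date."""
    r = np.asarray(recoveries, float)
    half = window // 2
    return np.array([np.median(r[max(0, i - half):i + half + 1])
                     for i in range(len(r))])

def adjust_to_full_recovery(measured_conc, recovery_percent):
    """Adjust a measured environmental concentration to 100% recovery,
    compensating for the analytical method's bias at that point in time."""
    return measured_conc * 100.0 / recovery_percent
```

For example, a concentration measured when the modeled recovery was 80% would be scaled up by a factor of 100/80 before trend analysis.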
NASA Astrophysics Data System (ADS)
Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.
We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind by studying an important example for the quantum materials community, the analytical continuation problem of quantum many-body physics. It involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions which are projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138 and LFA by the Columbia University IDS-ROADS project, UR009033-05, which also provided partial support to RN and LH.
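A toy version of this database-driven inversion, assuming a synthetic Gaussian smoothing kernel and a linear ridge-regression learner in place of the authors' full machine-learning pipeline: the forward map is stable, so many input-output pairs can be generated and a regularized inverse map learned from them, which resists noise far better than direct inversion of the ill-conditioned kernel.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
x = np.linspace(0, 1, n)

# Ill-conditioned Fredholm kernel of the first kind: a Gaussian smoother.
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.05**2))
K /= K.sum(axis=1, keepdims=True)

# Database of input-output pairs: random smooth "spectra" f and data y = K f.
def random_f():
    c = rng.normal(size=4)
    return sum(ci * np.sin((i + 1) * np.pi * x) for i, ci in enumerate(c))

F = np.array([random_f() for _ in range(2000)])
Y = F @ K.T + rng.normal(0, 1e-3, (2000, n))    # train at the expected noise level

# Learn the inverse map y -> f by ridge regression (a linear learner).
lam = 1e-3
W = np.linalg.solve(Y.T @ Y + lam * np.eye(n), Y.T @ F)

# Test on a new function with noisy data; compare against naive direct inversion.
f_true = random_f()
y_obs = K @ f_true + rng.normal(0, 1e-3, n)
f_ml = y_obs @ W
f_naive = np.linalg.lstsq(K, y_obs, rcond=None)[0]

err_ml = np.linalg.norm(f_ml - f_true) / np.linalg.norm(f_true)
err_naive = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
print(err_ml < err_naive)
```

The naive least-squares inverse amplifies the small noise through the kernel's tiny singular values, while the learned map stays accurate; this mirrors the regularization effect described in the abstract.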
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 – PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
Xiong, Wei; Tao, Xiaoqiu; Pang, Su; Yang, Xue; Tang, GangLing; Bian, Zhaoyang
2014-01-01
A method for the determination of three acidic herbicides, dicamba, 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), in tobacco and soil has been developed based on the use of liquid-liquid extraction and dispersive solid-phase extraction (dispersive-SPE) followed by UPLC-MS/MS. Two percent (v/v) formic acid in acetonitrile as the extraction solvent helped partition the analytes into the acetonitrile phase. The extract was then cleaned up by dispersive-SPE using primary secondary amine as a selective sorbent. Quantitative analysis was done in the multiple-reaction monitoring mode using stable isotope-labeled internal standards for each compound. A separate internal standard for each analyte is required to minimize sample matrix effects on each analyte, which can lead to poor analyte recoveries and decreases in method accuracy and precision. The total analysis time was <4 min. The linear range of the method was from 1 to 100 ng mL(-1), with limits of detection for the herbicides ranging from 0.012 to 0.126 ng g(-1). The proposed method is faster, more sensitive and selective than the traditional methods, and more accurate and robust than the published LC-MS/MS methods. © The Author [2013]. Published by Oxford University Press. All rights reserved.
Davis, Mark D; Wade, Erin L; Restrepo, Paula R; Roman-Esteva, William; Bravo, Roberto; Kuklenyik, Peter; Calafat, Antonia M
2013-06-15
Organophosphate and pyrethroid insecticides and phenoxyacetic acid herbicides represent important classes of pesticides applied in commercial and residential settings. Interest in assessing the extent of human exposure to these pesticides exists because of their widespread use and their potential adverse health effects. An analytical method for measuring 12 biomarkers of several of these pesticides in urine has been developed. The target analytes were extracted from one milliliter of urine by a semi-automated solid phase extraction technique, separated from each other and from other urinary biomolecules by reversed-phase high performance liquid chromatography, and detected using tandem mass spectrometry with isotope dilution quantitation. This method can be used to measure all the target analytes in one injection, with repeatability and detection limits similar to those of previous methods that required more than one injection. Each step of the procedure was optimized to produce a robust, reproducible, accurate, precise and efficient method. The required selectivity and sensitivity for trace-level analysis (e.g., limits of detection below 0.5 ng/mL) were achieved using a narrow-diameter analytical column, higher than unit mass resolution for certain analytes, and stable isotope-labeled internal standards. The method was applied to the analysis of 55 samples collected from adult anonymous donors with no known exposure to the target pesticides. This efficient and cost-effective method is adequate to handle the large number of samples required for national biomonitoring surveys. Published by Elsevier B.V.
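Isotope dilution quantitation, the calibration strategy named above, reduces to ratioing the analyte signal to its stable-isotope-labeled internal standard and converting that ratio to a concentration via a calibration line. A sketch with made-up peak-area ratios and concentrations (not values from the study):

```python
import numpy as np

# Calibration standards: known concentrations (ng/mL) with a fixed amount of
# labeled internal standard; the measured quantity is the analyte/IS area ratio.
conc_std = np.array([0.5, 1, 2, 5, 10, 20])
area_ratio_std = 0.12 * conc_std + 0.005 + np.random.default_rng(2).normal(0, 0.002, 6)

# Linear calibration of response ratio vs concentration.
slope, intercept = np.polyfit(conc_std, area_ratio_std, 1)

# Unknown urine sample: measured analyte/IS area ratio -> concentration.
ratio_unknown = 0.61
conc_unknown = (ratio_unknown - intercept) / slope
print(round(conc_unknown, 2))
```

Because the internal standard experiences the same extraction losses and matrix effects as the analyte, the ratio (and hence the result) is largely insensitive to recovery variations between samples.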
Analytical design of modified Smith predictor for unstable second-order processes with time delay
NASA Astrophysics Data System (ADS)
Ajmeri, Moina; Ali, Ahmad
2017-06-01
In this paper, a modified Smith predictor using three controllers, namely, a stabilising controller (Gc), a set-point tracking controller (Gc1), and a load disturbance rejection controller (Gc2), is proposed for second-order unstable processes with time delay. The controllers of the proposed structure are tuned using a direct synthesis approach, as this method enables the user to achieve a trade-off between performance and robustness by adjusting a single design parameter. Furthermore, suitable values of the tuning parameters are recommended after studying their effect on the closed-loop performance and robustness. This is the main advantage of the proposed work over other recently published manuscripts, where the authors provide only suitable ranges for the tuning parameters rather than specific recommended values. Simulation studies show that the proposed method results in satisfactory performance and improved robustness compared to recently reported control schemes. It is also observed that the proposed scheme is able to work in noisy environments.
Validation of Rapid Radiochemical Method for Californium ...
Technical Brief: In the event of radiological/nuclear contamination, the response community would need tools and methodologies to rapidly assess the nature and extent of the contamination. To characterize a radiologically contaminated outdoor area and to inform risk assessment, large numbers of environmental samples would be collected and analyzed over a short period of time. To address the challenge of quickly providing analytical results to the field, the U.S. EPA developed a robust analytical method. This method allows response officials to characterize contaminated areas and to assess the effectiveness of remediation efforts, both rapidly and accurately, in the intermediate and late phases of environmental cleanup. Improvements in sample processing and analysis lead to increased laboratory capacity to handle the analysis of a large number of samples following the intentional or unintentional release of a radiological/nuclear contaminant.
Jessen, Torben E; Höskuldsson, Agnar T; Bjerrum, Poul J; Verder, Henrik; Sørensen, Lars; Bratholm, Palle S; Christensen, Bo; Jensen, Lene S; Jensen, Maria A B
2014-09-01
Direct measurement of chemical constituents in complex biologic matrices without the use of analyte-specific reagents could be a step toward the simplification of clinical biochemistry. Problems related to reagents, such as production errors, improper handling, and lot-to-lot variations, would be eliminated, as would errors occurring during assay execution. We describe and validate a reagent-free method for direct measurement of six analytes in human plasma based on Fourier-transform infrared spectroscopy (FTIR). Blood plasma is analyzed without any sample preparation. The FTIR spectrum of the raw plasma is recorded in a sampling cuvette specially designed for measurement of aqueous solutions. For each analyte, a mathematical calibration process is performed by a stepwise selection of wavelengths giving the optimal least-squares correlation between the measured FTIR signal and the analyte concentration measured by conventional clinical reference methods. The developed calibration algorithms are subsequently evaluated for their capability to predict the concentration of the six analytes in blinded patient samples. The correlations between the six FTIR methods and the corresponding reference methods were 0.87
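The stepwise wavelength-selection idea can be sketched as greedy forward selection on synthetic spectra: at each step, add the wavelength most correlated with the current least-squares residual. The matrix size, the informative wavelength indices (10, 55, 120), and the noise level are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "spectra": 100 samples x 200 wavelengths; the analyte concentration
# depends on a few informative wavelengths plus measurement noise.
n, p = 100, 200
X = rng.normal(size=(n, p))
conc = 2.0 * X[:, 10] - 1.0 * X[:, 55] + 0.5 * X[:, 120] + rng.normal(0, 0.1, n)

# Forward stepwise selection: greedily add the wavelength that most improves
# the least-squares fit against the reference concentrations.
selected = []
residual = conc - conc.mean()
for _ in range(3):
    corrs = np.abs(X.T @ residual)
    corrs[selected] = -np.inf            # never reselect a chosen wavelength
    selected.append(int(np.argmax(corrs)))
    A = np.column_stack([X[:, selected], np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, conc, rcond=None)
    residual = conc - A @ coef
print(sorted(selected))
```

On this synthetic example the procedure recovers exactly the three informative wavelengths, mirroring how a calibration can be built from a small wavelength subset rather than the full spectrum.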
Shang, Tanya Q; Saati, Andrew; Toler, Kelly N; Mo, Jianming; Li, Heyi; Matlosz, Tonya; Lin, Xi; Schenk, Jennifer; Ng, Chee-Keng; Duffy, Toni; Porter, Thomas J; Rouse, Jason C
2014-07-01
A highly robust hydrophilic interaction liquid chromatography (HILIC) method that involves both fluorescence and mass spectrometric detection was developed for profiling and characterizing enzymatically released and 2-aminobenzamide (2-AB)-derivatized mAb N-glycans. Online HILIC/mass spectrometry (MS) with a quadrupole time-of-flight mass spectrometer provides accurate mass identifications of the separated, 2-AB-labeled N-glycans. The method features a high-resolution, low-shedding HILIC column with acetonitrile and water-based mobile phases containing trifluoroacetic acid (TFA) as a modifier. This column and solvent system ensures the combination of robust chromatographic performance and full compatibility and sensitivity with online MS, in addition to the baseline separation of all typical mAb N-glycans. The use of TFA provided distinct advantages over conventional ammonium formate as a mobile phase additive, such as optimal elution order for sialylated N-glycans, reproducible chromatographic profiles, and matching total ion current chromatograms, as well as minimal signal splitting, analyte adduction, and fragmentation during HILIC/MS, maximizing sensitivity for trace-level species. The robustness and selectivity of HILIC for N-glycan analyses allowed for method qualification. The method is suitable for bioprocess development activities, heightened characterization, and clinical drug substance release. Application of this HILIC/MS method to the detailed characterization of a marketed therapeutic mAb, Rituxan(®), is described. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Llorente-Mirandes, Toni; Rubio, Roser; López-Sánchez, José Fermín
2017-01-01
Here we review recent developments in analytical proposals for the assessment of inorganic arsenic (iAs) content in food products. Interest in the determination of iAs in products for human consumption such as food commodities, wine, and seaweed, among others, is fueled by the wide recognition of its toxic effects on humans, even at low concentrations. Currently, the need for robust and reliable analytical methods is recognized by various international safety and health agencies, and by organizations in charge of establishing acceptable tolerance levels of iAs in food. This review summarizes the state of the art of analytical methods while highlighting tools for quality assessment of the results, such as the production and evaluation of certified reference materials (CRMs) and the availability of specific proficiency testing (PT) programmes. Because the number of studies dedicated to the subject of this review has increased considerably over recent years, the sources consulted and cited here are limited to those from 2010 to the end of 2015.
Chebrolu, Kranthi K; Yousef, Gad G; Park, Ryan; Tanimura, Yoshinori; Brown, Allan F
2015-09-15
A high-throughput, robust and reliable method for the simultaneous analysis of five carotenoids, four chlorophylls and one tocopherol was developed for rapidly screening large sample populations to facilitate molecular biology and plant breeding. Separation was achieved for 10 known analytes and four unknown carotenoids in a significantly reduced run time of 10 min. The identity of the 10 analytes was confirmed by their UV-Vis absorption spectra. Quantification of tocopherol, carotenoids and chlorophylls was performed at 290 nm, 460 nm and 650 nm, respectively. In this report, two sub-two-micron particle core-shell columns, Kinetex from Phenomenex (1.7 μm particle size, 12% carbon load) and Cortecs from Waters (1.6 μm particle size, 6.6% carbon load), were investigated and their separation efficiencies were evaluated. The peak resolutions were >1.5 for all analytes except chlorophyll-a' with the Cortecs column. The ruggedness of this method was evaluated on two identical but separate instruments, which produced CVs < 2 for the peak retention times of nine of the 10 analytes separated. Copyright © 2015 Elsevier B.V. All rights reserved.
Approximate analytical relationships for linear optimal aeroelastic flight control laws
NASA Astrophysics Data System (ADS)
Kassem, Ayman Hamdy
1998-09-01
This dissertation introduces new methods to uncover functional relationships between the design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to obtain from numerically based sensitivity analysis.
Workers' compensation costs among construction workers: a robust regression analysis.
Friedman, Lee S; Forst, Linda S
2009-11-01
Workers' compensation data are an important source for evaluating costs associated with construction injuries. We describe the characteristics of injured construction workers filing claims in Illinois between 2000 and 2005 and the factors associated with compensation costs using a robust regression model. In the final multivariable model, the cumulative percent temporary and permanent disability (measures of the severity of injury) explained 38.7% of the variance of cost. Attorney costs explained only 0.3% of the variance of the dependent variable. The model used in this study clearly indicated that percent disability was the most important determinant of cost, although the method and uniformity of percent impairment allocation could be better elucidated. There is a need to integrate analytical methods that are suitable for skewed data when analyzing claim costs.
Lewis, Nathan S
2004-09-01
Arrays of broadly cross-reactive vapor sensors provide a man-made implementation of an olfactory system, in which an analyte elicits a response from many receptors and each receptor responds to a variety of analytes. Pattern recognition methods are then used to detect analytes based on the collective response of the sensor array. With the use of this architecture, arrays of chemically sensitive resistors made from composites of conductors and insulating organic polymers have been shown to robustly classify, identify, and quantify a diverse collection of organic vapors, even though no individual sensor responds selectively to a particular analyte. The properties and functioning of these arrays are inspired by advances in the understanding of biological olfaction, and in turn, evaluation of the performance of the man-made array provides suggestions regarding some of the fundamental odor detection principles of the mammalian olfactory system.
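The array-plus-pattern-recognition architecture can be illustrated with a toy nearest-pattern classifier: no single sensor is selective, but the normalized response pattern across the array identifies the analyte. The sensor count, analyte names, and response patterns are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 8-sensor array: each analyte evokes a response from every
# sensor, and identity lies in the overall pattern, not any one channel.
patterns = {
    "ethanol": np.array([0.9, 0.2, 0.5, 0.1, 0.7, 0.3, 0.6, 0.4]),
    "toluene": np.array([0.1, 0.8, 0.3, 0.9, 0.2, 0.7, 0.4, 0.5]),
    "acetone": np.array([0.5, 0.5, 0.9, 0.2, 0.4, 0.1, 0.8, 0.3]),
}

def classify(response):
    # Normalize away concentration, then match the nearest stored pattern.
    r = response / np.linalg.norm(response)
    def dist(name):
        p = patterns[name]
        return np.linalg.norm(r - p / np.linalg.norm(p))
    return min(patterns, key=dist)

# A noisy ethanol exposure at an arbitrary concentration:
obs = 3.7 * patterns["ethanol"] + rng.normal(0, 0.1, 8)
print(classify(obs))
```

Normalizing the response vector separates identification (the pattern's direction) from quantification (its magnitude), which is the same division of labor the abstract attributes to the sensor array.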
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
Tsoi, Yeuk-Ki; Leung, Kelvin Sze-Yin
2011-04-22
This paper describes a novel application of tetrabutylammonium hydroxide-modified activated carbon (AC-TBAH) to the speciation of ultra-trace Se(IV) and Se(VI) using LC-ICP-DRC-MS. The anion-exchange functionality immobilized onto the AC surface enables selective preconcentration of inorganic Se anions over a wide range of working pHs. Simultaneous retention and elution of both analytes, followed by subsequent analysis with LC-ICP-DRC-MS, allows speciation analysis to be accomplished in natural samples without complicated redox pre-treatment. The laboratory-made column of immobilized AC (0.4 g of sorbent packed in a 6 mL syringe barrel) achieved analyte enrichment factors of 76 and 93 for Se(IV) and Se(VI), respectively, thus proving its superior preconcentration efficiency and selectivity over common AC. The considerable enhancement in sensitivity achieved by using the preconcentration column improved the method's detection limits to 1.9-2.2 ng L(-1), a 100-fold improvement compared with direct injection. The analyte recoveries from a heavily polluted river matrix were between 95.3 and 107.7%, with less than 5.0% RSD. The robustness of the preconcentration and speciation method was validated by analysis of natural waters collected from rivers and reservoirs in Hong Kong. The modified AC material is hence presented as a low-cost yet robust substitute for conventional anion-exchange resins for routine applications. Copyright © 2011 Elsevier B.V. All rights reserved.
Liquid chromatography method to determine polyamines in thermosetting polymers.
Dopico-García, M S; López-Vilariño, J M; Fernández-Martínez, G; González-Rodríguez, M V
2010-05-14
A simple, robust and sensitive analytical method to determine three polyamines commonly used as hardeners in epoxy resin systems and in the manufacture of polyurethane is reported. The studied polyamines are one tetramine, TETA (triethylenetetramine), and two diamines, IPDA (isophorone diamine) and TCD-diamine (4,7-methano-1H-indene-5,?-dimethanamine, octahydro-). The latter has an incompletely defined structure and, as far as we know, has not previously been determined by other methods. All three polyamines contain primary amines; TETA also contains secondary amines. The analytical method involves derivatization with 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate, used for the first time for these compounds, followed by high performance liquid chromatography (HPLC) analysis with a fluorescence (FL) detector (excitation 248 nm, emission 395 nm). HPLC-DAD-LTQ Orbitrap MS was used to provide structural information about the derivatized compounds. The hybrid linear ion trap LTQ Orbitrap mass spectrometer has been introduced in recent years and provides high mass accuracy. The structures of the derivatized analytes were identified from the protonated molecular ions [M+H](+) and corresponded to the fully labelled analytes. The following analytical parameters were determined for the HPLC-FL method: linearity, precision (2.5-10%), intraday (0.8-1.5%) and interday (2.9-6.3%) instrumental precision, and detection limits (0.02-0.14 mg L(-1)). The stability of stock solutions and derivatized compounds was also investigated. The method was applied to determine the free amine content in epoxy resin dust collected in workplaces. Copyright 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Carraro, F.; Valiani, A.; Caleffi, V.
2018-03-01
Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.
Fokker-Planck Equations of Stochastic Acceleration: A Study of Numerical Methods
NASA Astrophysics Data System (ADS)
Park, Brian T.; Petrosian, Vahe
1996-03-01
Stochastic wave-particle acceleration may be responsible for producing suprathermal particles in many astrophysical situations. The process can be described as a diffusion process through the Fokker-Planck equation. If the acceleration region is homogeneous and the scattering mean free path is much smaller than both the energy change mean free path and the size of the acceleration region, then the Fokker-Planck equation reduces to a simple form involving only the time and energy variables. In an earlier paper (Park & Petrosian 1995, hereafter Paper I), we studied the analytic properties of the Fokker-Planck equation and found analytic solutions for some simple cases. In this paper, we study the numerical methods which must be used to solve more general forms of the equation. Two classes of numerical methods are finite difference methods and Monte Carlo simulations. We examine six finite difference methods, three fully implicit and three semi-implicit, and a stochastic simulation method which uses the exact correspondence between the Fokker-Planck equation and its stochastic differential equation. As discussed in Paper I, Fokker-Planck equations derived under the above approximations are singular, causing problems with boundary conditions and numerical overflow and underflow. We evaluate each method using three sample equations to test its stability, accuracy, efficiency, and robustness for both time-dependent and steady-state solutions. We conclude that the most robust finite difference method is the fully implicit Chang-Cooper method, with minor extensions to account for the escape and injection terms. Other methods suffer from stability and accuracy problems when dealing with some Fokker-Planck equations. The stochastic simulation method, although simple to implement, is susceptible to Poisson noise when insufficient test particles are used and is computationally very expensive compared to the finite difference methods.
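A minimal sketch of the fully implicit idea: backward Euler applied to a 1-D diffusion-type equation with zero-flux boundaries, standing in for the Chang-Cooper scheme discussed above (which adds weighted differencing for advection terms). The grid, diffusion coefficient, and time step are illustrative; the point is that the implicit step remains stable, non-negative, and conservative at time steps far beyond the explicit limit.

```python
import numpy as np

def implicit_step(f, D, dE, dt):
    # One backward-Euler step for df/dt = d/dE (D df/dE), zero-flux boundaries.
    n = f.size
    r = D * dt / dE**2
    A = np.zeros((n, n))
    for i in range(n):
        lo = r if i > 0 else 0.0
        hi = r if i < n - 1 else 0.0
        A[i, i] = 1.0 + lo + hi
        if i > 0:
            A[i, i - 1] = -lo
        if i < n - 1:
            A[i, i + 1] = -hi
    return np.linalg.solve(A, f)

E = np.linspace(0.0, 1.0, 101)
f = np.exp(-((E - 0.5) / 0.05) ** 2)     # narrow initial particle distribution
total0 = f.sum()

# Time step far above the explicit stability limit (dt >> dE^2 / 2D):
for _ in range(10):
    f = implicit_step(f, D=1.0, dE=E[1] - E[0], dt=0.01)

print(f.min() >= 0.0, abs(f.sum() - total0) < 1e-8)
```

The system matrix is an M-matrix whose columns sum to one, which is what guarantees positivity and particle-number conservation here; the same structural properties motivate the Chang-Cooper discretization.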
A robust interrupted time series model for analyzing complex health care intervention data.
Cruz, Maricela; Bender, Miriam; Ombao, Hernando
2017-12-20
Current health policy calls for greater use of evidence-based care delivery services to improve patient quality and safety outcomes. Care delivery is complex, with interacting and interdependent components that challenge traditional statistical analytic techniques, in particular, when modeling a time series of outcomes data that might be "interrupted" by a change in a particular method of health care delivery. Interrupted time series (ITS) is a robust quasi-experimental design with the ability to infer the effectiveness of an intervention that accounts for data dependency. Current standardized methods for analyzing ITS data do not model changes in variation and correlation following the intervention. This is a key limitation since it is plausible for data variability and dependency to change because of the intervention. Moreover, present methodology either assumes a prespecified interruption time point with an instantaneous effect or removes data for which the effect of intervention is not fully realized. In this paper, we describe and develop a novel robust interrupted time series (robust-ITS) model that overcomes these omissions and limitations. The robust-ITS model formally performs inference on (1) identifying the change point; (2) differences in preintervention and postintervention correlation; (3) differences in the outcome variance preintervention and postintervention; and (4) differences in the mean preintervention and postintervention. We illustrate the proposed method by analyzing patient satisfaction data from a hospital that implemented and evaluated a new nursing care delivery model as the intervention of interest. The robust-ITS model is implemented in an R Shiny toolbox, which is freely available to the community. Copyright © 2017 John Wiley & Sons, Ltd.
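The change-point component of such a model can be sketched as a grid search: fit a two-segment constant-mean model at every candidate interruption time and keep the split that minimizes total squared error, then inspect pre/post differences in mean and variance. This toy version on simulated monthly outcomes is far simpler than the authors' robust-ITS (no correlation modeling, no formal inference) but shows the mechanics.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated outcome series with an intervention at t = 60 that shifts the
# mean upward and increases the variance.
pre = rng.normal(70, 2, 60)
post = rng.normal(78, 4, 60)
y = np.concatenate([pre, post])

# Estimate the change point by minimizing the total sum of squared errors of
# a two-segment constant-mean model over all candidate split times.
def sse(seg):
    return ((seg - seg.mean()) ** 2).sum()

candidates = range(10, len(y) - 10)
tau = min(candidates, key=lambda t: sse(y[:t]) + sse(y[t:]))

mean_shift = y[tau:].mean() - y[:tau].mean()
var_ratio = y[tau:].var() / y[:tau].var()
print(tau, round(mean_shift, 1), round(var_ratio, 1))
```

On this series the estimated change point lands near the true interruption, and the post/pre variance ratio exceeds one, the kind of variance change that standard ITS models ignore but robust-ITS tests formally.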
Goldberg, Tony L; Gillespie, Thomas R; Singer, Randall S
2006-09-01
Repetitive-element PCR (rep-PCR) is a method for genotyping bacteria based on the selective amplification of repetitive genetic elements dispersed throughout bacterial chromosomes. The method has great potential for large-scale epidemiological studies because of its speed and simplicity; however, objective guidelines for inferring relationships among bacterial isolates from rep-PCR data are lacking. We used multilocus sequence typing (MLST) as a "gold standard" to optimize the analytical parameters for inferring relationships among Escherichia coli isolates from rep-PCR data. We chose 12 isolates from a large database to represent a wide range of pairwise genetic distances, based on the initial evaluation of their rep-PCR fingerprints. We conducted MLST with these same isolates and systematically varied the analytical parameters to maximize the correspondence between the relationships inferred from rep-PCR and those inferred from MLST. Methods that compared the shapes of densitometric profiles ("curve-based" methods) yielded consistently higher correspondence values between data types than did methods that calculated indices of similarity based on shared and different bands (maximum correspondences of 84.5% and 80.3%, respectively). Curve-based methods were also markedly more robust in accommodating variations in user-specified analytical parameter values than were "band-sharing coefficient" methods, and they enhanced the reproducibility of rep-PCR. Phylogenetic analyses of rep-PCR data yielded trees with high topological correspondence to trees based on MLST and high statistical support for major clades. These results indicate that rep-PCR yields accurate information for inferring relationships among E. coli isolates and that accuracy can be enhanced with the use of analytical methods that consider the shapes of densitometric profiles.
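The curve-based versus band-sharing contrast above can be sketched numerically: the curve-based approach correlates whole densitometric profiles, while a band-sharing (Dice) coefficient counts called bands within a position tolerance, so a band just outside tolerance drops out entirely. The profiles, band positions, and tolerance below are synthetic illustrations, not rep-PCR data.

```python
import numpy as np

x = np.linspace(0, 10, 500)                      # gel position axis

def profile(band_positions):
    # Densitometric profile: one narrow Gaussian peak per band.
    return sum(np.exp(-((x - b) / 0.08) ** 2) for b in band_positions)

bands_a = [1.0, 3.0, 5.0, 7.0]
bands_b = [1.02, 3.01, 5.0, 8.5]                 # small run-length shifts + one new band
a, b = profile(bands_a), profile(bands_b)

# Curve-based: Pearson correlation of the full densitometric profiles.
curve_sim = np.corrcoef(a, b)[0, 1]

# Band-based: Dice coefficient on called bands with a fixed position tolerance.
shared = sum(any(abs(p - q) < 0.05 for q in bands_b) for p in bands_a)
dice = 2 * shared / (len(bands_a) + len(bands_b))

print(round(curve_sim, 2), round(dice, 2))
```

The curve-based similarity degrades smoothly with small peak shifts, whereas the band-sharing coefficient changes in discrete jumps as bands cross the tolerance threshold; that smoothness is one plausible reason the curve-based methods were more robust to analytical parameter choices.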
Numerical realization of the variational method for generating self-trapped beams
NASA Astrophysics Data System (ADS)
Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.
2018-03-01
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
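The core idea, replacing analytical Lagrangian integration with quadrature plus a numerical optimizer, can be sketched in one dimension for the focusing nonlinear Schrödinger energy functional with a Gaussian trial beam. For this ansatz at unit norm the optimal width is analytically sqrt(2*pi), so the numerical procedure can be checked against it; the 2-D multi-parameter case in the paper works the same way with more trial parameters.

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.linspace(-30.0, 30.0, 4001)
dx = x[1] - x[0]

def energy(a):
    # Gaussian trial beam of width a, normalized to unit power numerically.
    psi = np.exp(-x**2 / (2.0 * a**2))
    psi /= np.sqrt((psi**2).sum() * dx)
    dpsi = np.gradient(psi, dx)
    # 1-D focusing NLS energy, integrated by quadrature instead of analytically:
    # E = integral( 0.5 |psi'|^2 - 0.5 |psi|^4 ) dx
    return ((0.5 * dpsi**2 - 0.5 * psi**4).sum()) * dx

res = minimize_scalar(energy, bounds=(0.5, 10.0), method="bounded")
a_opt = res.x
print(round(a_opt, 2))   # analytical optimum for this ansatz: sqrt(2*pi) ~ 2.51
```

Because the integrals are done numerically, the same loop works unchanged for trial profiles (vortices, multipoles) whose Lagrangian integrals have no closed form, which is precisely the limitation the numerical Rayleigh-Ritz method removes.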
Zhang, Shu-Xin; Chai, Xin-Sheng; Huang, Bo-Xi; Mai, Xiao-Xia
2015-08-07
Alkylphenol polyethoxylates (APEO), surfactants used in the production of textiles, have the potential to migrate from the fabric to the skin of the person wearing the clothes, posing an inherent risk of adverse health consequences. Therefore, the textile industry needs a fast, robust method for determining aqueous-extractable APEO in fabrics. The currently favored HPLC methods are limited by the presence of a mixture of analytes (due to the molecular weight distribution) and a lack of analytical standards for quantifying results. As a result, it has not been possible to reach consensus on a standard method for the determination of APEO in textiles. This paper addresses these limitations through the use of reaction-based headspace gas chromatography (HS-GC). Specifically, water is used to simulate body sweat and extract the APEO. HI is then used to depolymerize the ethoxylate chains into iodoethane, which is quantified by HS-GC, providing an estimate of the average amount of APEO in the clothing. Data are presented to justify the optimal operating conditions, i.e., water extraction at 60°C for 1 h and reaction with a specified amount of HI in the headspace vial at 135°C for 4 h. The results show that the HS-GC method has good precision (RSD < 10%) and good accuracy (recoveries from 95 to 106%) for the quantification of APEO content in textile and related materials. As such, the method should be a strong candidate to become a standard method for such determinations. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Noda, Isao
2014-07-01
Noteworthy experimental practices that are advancing the frontiers of two-dimensional (2D) correlation spectroscopy are reviewed, with a focus on the various perturbation methods currently used to induce spectral changes, pertinent examples of applications in various fields, and the types of analytical probes employed. The perturbation methods found in the published literature are very diverse, encompassing both dynamic and static effects. Although a sizable portion of publications report the use of dynamic perturbations, a much greater number of studies employ static effects, especially temperature. The fields of application covered by the literature are also very broad, ranging from fundamental research to practical applications in a number of physical, chemical and biological systems, such as synthetic polymers, composites and biomolecules. Aside from IR spectroscopy, which is the most commonly used tool, many other analytical probes are used in 2D correlation analysis. The ever-expanding depth, breadth and versatility of 2D correlation spectroscopy techniques and their broad applications all point to the robust and healthy state of the field.
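The core computation of generalized 2D correlation analysis, the synchronous and asynchronous spectra obtained from a set of perturbation-dependent spectra via the Hilbert-Noda transformation matrix, can be sketched as follows (array names are illustrative):

```python
import numpy as np

def two_d_correlation(Y):
    """Generalized 2D correlation spectra from perturbation-dependent
    spectra Y (rows: m perturbation steps, columns: spectral channels)."""
    m = Y.shape[0]
    Yd = Y - Y.mean(axis=0)              # dynamic (mean-centered) spectra
    sync = Yd.T @ Yd / (m - 1)           # synchronous correlation intensity
    # Hilbert-Noda transformation matrix for the asynchronous spectrum
    idx = np.arange(m)
    diff = idx[None, :] - idx[:, None]   # k - j
    with np.errstate(divide='ignore'):
        N = np.where(diff == 0, 0.0, 1.0 / (np.pi * diff))
    asyn = Yd.T @ N @ Yd / (m - 1)       # asynchronous correlation intensity
    return sync, asyn
```

The synchronous map is symmetric and the asynchronous map antisymmetric by construction, which is the basis of Noda's rules for ordering sequential spectral events.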
Scheven, U M
2013-12-01
This paper describes a new variant of established stimulated echo pulse sequences, and an analytical method for determining diffusion or dispersion coefficients for Gaussian or non-Gaussian displacement distributions. The unipolar displacement encoding PFGSTE sequence uses trapezoidal gradient pulses of equal amplitude g and equal ramp rates throughout while sampling positive and negative halves of q-space. Usefully, the equal gradient amplitudes and gradient ramp rates help to reduce the impact of experimental artefacts caused by residual amplifier transients, eddy currents, or ferromagnetic hysteresis in components of the NMR magnet. The pulse sequence was validated with measurements of diffusion in water and of dispersion in flow through a packing of spheres. The analytical method introduced here permits the robust determination of the variance of non-Gaussian, dispersive displacement distributions. The noise sensitivity of the analytical method is shown to be negligible, using a demonstration experiment with a non-Gaussian longitudinal displacement distribution, measured on flow through a packing of mono-sized spheres. Copyright © 2013 Elsevier Inc. All rights reserved.
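The analytical route to the variance of a possibly non-Gaussian displacement distribution rests on the cumulant relation ln E(q) ≈ -(1/2) var(Z) q² at low q, where E(q) is the measured echo attenuation. A sketch with synthetic data (the mixture distribution and all values below are assumptions for illustration):

```python
import numpy as np

# synthetic non-Gaussian displacement distribution: a two-Gaussian mixture,
# standing in for dispersive flow displacements (values are illustrative)
z = np.linspace(-60e-6, 60e-6, 4001)                  # displacement [m]
dz = z[1] - z[0]
P = (0.7 * np.exp(-z**2 / (2 * (5e-6)**2))
     + 0.3 * np.exp(-(z - 15e-6)**2 / (2 * (8e-6)**2)))
P /= P.sum() * dz                                     # normalize to unit area

mean = (z * P).sum() * dz
var_true = (z**2 * P).sum() * dz - mean**2

# echo attenuation E(q) = |int P(Z) exp(iqZ) dZ| sampled in the low-q regime
q = np.linspace(0.0, 2e4, 15)                         # [rad/m]
E = np.abs((P[None, :] * np.exp(1j * np.outer(q, z))).sum(axis=1) * dz)

# second-cumulant relation: ln E(q) ~ -(1/2) var(Z) q^2 at low q,
# valid for Gaussian and non-Gaussian distributions alike
slope, _ = np.polyfit(q**2, np.log(E), 1)
var_est = -2.0 * slope
```

Because only the low-q curvature of ln E is used, the estimate does not presuppose a Gaussian propagator, which is the property the paper exploits for dispersive flow.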
Janda, Joachim; Nödler, Karsten; Brauch, Heinz-Jürgen; Zwiener, Christian; Lange, Frank T
2018-03-19
A simple and robust analytical method for the determination of perfluorinated carboxylic acids (PFCAs) with C2 to C8 chains, based on solid-phase extraction (SPE) and liquid chromatography-tandem mass spectrometry (LC-MS/MS), was developed, validated and applied to tap water, groundwater and surface water. Two stationary phases for LC (Obelisc N and Kinetex C18) and two materials with weak anion-exchange properties for SPE (Strata X-AW and Oasis WAX) were evaluated. Robust separation and retention were achieved with the reversed-phase column and an acidic eluent. Quantitative extraction recoveries were generally achieved for PFCAs with more than three carbons, but extraction efficiencies differed for the two shortest-chained analytes: 36 to 114% of perfluoropropanoate (PFPrA) and 14 to 99% of trifluoroacetate (TFA) were recovered with Strata X-AW, while 93 to 103% of PFPrA and 40 to 103% of TFA were recovered with Oasis WAX. The sample pH was identified as a key parameter in the extraction process. A one-step elution-filtration was introduced into the workflow in order to remove sorbent particles and minimise sample preparation steps. Validation resulted in limits of quantification between 0.6 and 26 ng/L for all PFCAs. Precision was between 0.7 and 15% and mean recoveries ranged from 83 to 107%. In groundwater samples from sites impacted by per- and polyfluoroalkyl substances (PFASs), PFCA concentrations ranged from 0.056 to 2.2 μg/L. TFA and perfluorooctanoate were the predominant analytes. TFA, however, showed a more ubiquitous occurrence and was found at concentrations between 0.045 and 17 μg/L in drinking water, groundwater and surface water that were not impacted by PFASs.
On the line-shape analysis of Compton profiles and its application to neutron scattering
NASA Astrophysics Data System (ADS)
Romanelli, G.; Krzystyniak, M.
2016-05-01
Analytical properties of Compton profiles are used to simplify the analysis of neutron Compton scattering experiments. In particular, the possibility of fitting the difference of Compton profiles is discussed as a way to greatly decrease the complexity of the data treatment, making the analysis easier, faster and more robust. In the context of the proposed method, two mathematical models describing the shapes of differenced Compton profiles are discussed: the simple Gaussian approximation for a harmonic and isotropic local potential, and an analytical Gauss-Hermite expansion for an anharmonic or anisotropic potential. The method is applied to data collected with the VESUVIO spectrometer at the ISIS pulsed neutron and muon source (UK) on copper and aluminium samples at ambient and low temperatures.
Ghambarian, Mahnaz; Behbahani, Mohammad; Esrafili, Ali; Sobhi, Hamid Reza
2017-09-01
Herein, an amino-based silica-coated nanomagnetic sorbent was applied for the effective extraction of two chlorophenoxyacetic acids (2-methyl-4-chlorophenoxyacetic acid and 2,4-dichlorophenoxyacetic acid) from various water samples. The sorbent was successfully synthesized and subsequently characterized by scanning electron microscopy, X-ray diffraction, and Fourier-transform infrared spectroscopy. The analytes were extracted by the sorbent mainly through ionic interactions. Once the extraction of analytes was completed, they were desorbed from the sorbent and detected by high-performance liquid chromatography with ultraviolet detection. A number of factors affecting the extraction and desorption of the analytes were investigated in detail and the optimum conditions were established. Under the optimum conditions, the calibration curves were linear over the concentration range of 1-250, and based on a signal-to-noise ratio of 3, the method detection limits were determined to be 0.5 μg/L for both analytes. Additionally, a preconcentration factor of 314 was achieved for the analytes. The average relative recoveries obtained from the fortified water samples varied in the range of 91-108% with relative standard deviations of 2.9-8.3%. Finally, the method was determined to be robust and effective for environmental water analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Wirges, M; Funke, A; Serno, P; Knop, K; Kleinebudde, P
2013-05-05
Incorporation of an active pharmaceutical ingredient (API) into the coating layer of film-coated tablets is a method mainly used to formulate fixed-dose combinations. Uniform and precise spray-coating of an API represents a substantial challenge, which can be overcome by applying Raman spectroscopy as a process analytical tool. In the pharmaceutical industry, Raman spectroscopy is still mainly used as a benchtop laboratory analytical method and is usually not implemented in the production process. Many scientific approaches stop at the level of feasibility studies and never make the step to production-scale process applications. The present work focuses on the scale-up of an active coating process, a step of the highest importance during pharmaceutical development. Active coating experiments were performed at lab and production scale. Using partial least squares (PLS), a multivariate model was constructed by correlating in-line measured Raman spectral data with the coated amount of API. By transferring this model, implemented for a lab-scale process, to a production-scale process, the robustness of this analytical method, and thus its applicability as a Process Analytical Technology (PAT) tool for correct endpoint determination in pharmaceutical manufacturing, could be shown. Finally, the method was validated according to the European Medicines Agency (EMA) guideline, with respect to the special requirements of the applied in-line model development strategy. Copyright © 2013 Elsevier B.V. All rights reserved.
Fault Detection of Rotating Machinery using the Spectral Distribution Function
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1997-01-01
The spectral distribution function is introduced to characterize the process leading to faults in rotating machinery. It is shown to be a more robust indicator than conventional power spectral density estimates, but requires only slightly more computational effort. The method is illustrated with examples from seeded gearbox transmission faults and an analytical model of a defective bearing. Procedures are suggested for implementation in realistic environments.
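A minimal sketch of a spectral distribution function, here taken as the normalized cumulative PSD, and of how a seeded fault shifts it; the signal frequencies and fault sidebands are invented for illustration:

```python
import numpy as np

def spectral_distribution(x, fs):
    """Normalized cumulative power spectral density (spectral distribution
    function) of signal x sampled at rate fs."""
    psd = np.abs(np.fft.rfft(x))**2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, np.cumsum(psd) / psd.sum()

fs, n = 2000, 4096
t = np.arange(n) / fs
rng = np.random.default_rng(1)

healthy = np.sin(2 * np.pi * 60 * t) + 0.1 * rng.standard_normal(n)
# a seeded fault adds modulation sidebands around the 60 Hz line
faulty = (healthy + 0.3 * np.sin(2 * np.pi * 55 * t)
          + 0.3 * np.sin(2 * np.pi * 65 * t))

f, sdf_h = spectral_distribution(healthy, fs)
_, sdf_f = spectral_distribution(faulty, fs)
# Kolmogorov-Smirnov-style distance between the two distribution functions
fault_metric = np.max(np.abs(sdf_h - sdf_f))
```

Because the distribution function integrates the spectrum, it is far less sensitive to bin-level estimation noise than the raw PSD, which is the robustness property the abstract highlights.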
Budnik, Lygia T; Fahrenholtz, Svea; Kloth, Stefan; Baur, Xaver
2010-04-01
Protection against infestation of a container cargo by alien species is achieved by mandatory fumigation with pesticides. Most of the effective fumigants are methyl and ethyl halide gases that are highly toxic and are a risk to both human health and the environment. There is a worldwide need for a reliable and robust analytical screening procedure for these volatile chemicals in a multitude of health and environmental scenarios. We have established a highly sensitive broad spectrum mass spectrometry method combined with thermal desorption gas chromatography to detect, identify and quantify volatile pesticide residues. Using this method, 1201 random ambient air samples taken from freight containers arriving at the biggest European ports of Hamburg and Rotterdam were analyzed over a period of two and a half years. This analytical procedure is a valuable strategy to measure air pollution from these hazardous chemicals, to help in the identification of pesticides in the new mixtures/formulations that are being adopted globally and to analyze expired breath samples after suspected intoxication in biomonitoring.
A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques from the theory of analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived non-iterative algorithm is a regularization method. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
Screening of 23 β-lactams in foodstuffs by LC-MS/MS using an alkaline QuEChERS-like extraction.
Bessaire, Thomas; Mujahid, Claudia; Beck, Andrea; Tarres, Adrienne; Savoy, Marie-Claude; Woo, Pei-Mun; Mottier, Pascal; Desmarchelier, Aurélien
2018-04-01
A fast and robust high-performance LC-MS/MS screening method was developed for the analysis of β-lactam antibiotics in foods of animal origin: eggs, raw milk, processed dairy ingredients, infant formula, and meat- and fish-based products including baby foods. A QuEChERS extraction with some adaptations enabled 23 drugs to be monitored simultaneously. Screening target concentrations were set at levels adequate to ensure compliance with current European, Chinese, US and Canadian regulations. The method was fully validated according to the European Community Reference Laboratories Residues Guidelines using 93 food samples of different composition. False-negative and false-positive rates were below 5% for all analytes. The method is suitable for use in high-throughput routine laboratories. A one-year study was additionally conducted to assess the stability of the 23 analytes in the working standard solution.
2013-01-01
Influenza virus-like particle (VLP) vaccines are one of the most promising ways to respond to the threat of future influenza pandemics. VLPs are composed of viral antigens but lack nucleic acids, making them non-infectious and limiting the risk of recombination with wild-type strains. By taking advantage of advances in cell culture technologies, the process from strain identification to manufacturing has the potential to be completed rapidly and easily at large scale. A close review of current research on influenza VLPs makes it evident that the development of quantification methods has been consistently overlooked. VLP quantification at all stages of the production process has relied on existing influenza quantification methods (i.e., the hemagglutination assay (HA), single radial immunodiffusion assay (SRID), NA enzymatic activity assays, Western blot, and electron microscopy). These are analytical methods developed decades ago for influenza virions and final bulk influenza vaccines. Although these methods are time-consuming and cumbersome, they have been sufficient for the characterization of final purified material. Nevertheless, they are impractical for in-line process monitoring because VLP concentrations in crude samples generally fall outside their range of detection. This consequently impedes the development of robust influenza-VLP production and purification processes. Thus, functional process analytical techniques, applicable at every stage of production and compatible with different production platforms, are greatly needed to assess, optimize and exploit the full potential of novel manufacturing platforms. PMID:23642219
Shurbaji, Maher; Abu Al Rub, Mohamad H; Saket, Munib M; Qaisi, Ali M; Salim, Maher L; Abu-Nameh, Eyad S M
2010-01-01
A rapid, simple, and sensitive RP-HPLC analytical method was developed for the simultaneous determination of triclabendazole and ivermectin in combination using a C18 RP column. The mobile phase was acetonitrile-methanol-water-acetic acid (56 + 36 + 7.5 + 0.5, v/v/v/v) at a pH of 4.35 and flow rate of 1.0 mL/min. A 245 nm UV detection wavelength was used. Complete validation, including linearity, accuracy, recovery, LOD, LOQ, precision, robustness, stability, and peak purity, was performed. The calibration curve was linear over the range 50.09-150.26 microg/mL for triclabendazole with r = 0.9999 and 27.01-81.02 microg/mL for ivermectin with r = 0.9999. Calculated LOD and LOQ for triclabendazole were 0.03 and 0.08 microg/mL, respectively, and for ivermectin 0.07 and 0.20 microg/mL, respectively. The intraday precision obtained was 98.71% with RSD of 0.87% for triclabendazole and 100.79% with RSD 0.73% for ivermectin. The interday precision obtained was 99.51% with RSD of 0.35% for triclabendazole and 100.55% with RSD of 0.59% for ivermectin. Robustness was also studied, and there was no significant variation of the system suitability of the analytical method with small changes in experimental parameters.
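The LOD and LOQ figures reported in such validations typically follow the ICH Q2 calibration-curve approach, LOD = 3.3 σ/S and LOQ = 10 σ/S, with σ the residual standard deviation and S the slope. A sketch with invented calibration data (not the paper's values):

```python
import numpy as np

# illustrative calibration data: concentration (ug/mL) vs peak area (a.u.)
conc = np.array([50.0, 75.0, 100.0, 125.0, 150.0])
area = np.array([1021.0, 1528.0, 2053.0, 2541.0, 3062.0])

# least-squares calibration line: area = slope*conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
s_y = np.sqrt((resid**2).sum() / (conc.size - 2))  # residual std deviation

# ICH Q2-style estimates from the calibration curve
lod = 3.3 * s_y / slope
loq = 10.0 * s_y / slope
r = np.corrcoef(conc, area)[0, 1]                  # correlation coefficient
```

The same regression output supports the linearity claim (r close to 1) and fixes the LOQ/LOD ratio at 10/3.3 by construction.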
Recovering Galaxy Properties Using Gaussian Process SED Fitting
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Awan, Humna
2018-01-01
Information about physical quantities such as stellar masses, star formation rates, and ages of distant galaxies is contained in their spectral energy distributions (SEDs), obtained through photometric surveys such as SDSS, CANDELS, and LSST. However, noise in the photometric observations is often a problem, and using naive machine learning methods to estimate physical quantities can result in overfitting the noise or converging on solutions that lie outside the physical regime of parameter space. We use Gaussian Process regression trained on a sample of SEDs corresponding to galaxies from a Semi-Analytic model (Somerville+15a) to estimate their stellar masses, and compare its performance to a variety of other methods, including simple linear regression, Random Forests, and k-Nearest Neighbours. We find that the Gaussian Process method is robust to noise and predicts not only stellar masses but also their uncertainties. The method remains robust when the distribution of the training data is not identical to that of the target data, which can be extremely useful when generalizing to more subtle galaxy properties.
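The key property exploited here, that a Gaussian Process returns a predictive uncertainty alongside each estimate, can be sketched with scikit-learn on a toy stand-in for photometric data (the mapping from mass to "bands" below is a deliberately crude assumption):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# toy stand-in for survey photometry: five noisy bands generated from a
# latent log stellar mass (real SEDs are far more complex)
log_mass = rng.uniform(8.0, 12.0, 200)
band_coeff = np.array([0.5, 1.0, 1.5, 2.0, 2.5]) / 10.0
X = (log_mass[:, None] * band_coeff[None, :]
     + 0.05 * rng.standard_normal((200, 5)))

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X[:150], log_mass[:150])

# the GP returns both the point estimate and its uncertainty
pred, sigma = gp.predict(X[150:], return_std=True)
rmse = np.sqrt(np.mean((pred - log_mass[150:])**2))
```

The `WhiteKernel` term lets the fitted noise level absorb photometric scatter instead of overfitting it, which is the behavior the abstract contrasts with naive regressors.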
Speciated arsenic in air: measurement methodology and risk assessment considerations.
Lewis, Ari S; Reid, Kim R; Pollock, Margaret C; Campleman, Sharan L
2012-01-01
Accurate measurement of arsenic (As) in air is critical to providing a more robust understanding of arsenic exposures and associated human health risks. Although there is extensive information available on total arsenic in air, less is known about the relative contribution of each arsenic species. To address this data gap, the authors conducted an in-depth review of available information on speciated arsenic in air. The evaluation included the type of species measured and their relative abundance, as well as an analysis of the limitations of current analytical methods. Despite inherent differences in the procedures, most techniques effectively separated arsenic species in the air samples. Common analytical techniques such as inductively coupled plasma mass spectrometry (ICP-MS) and/or hydride generation (HG)- or graphite furnace (GF)-atomic absorption spectrometry (AAS) were used for arsenic measurement in the extracts, and provided some of the most sensitive detection limits. The current analysis demonstrated that, despite limited comparability among studies due to differences in seasonal factors, study duration, sample collection methods, and analytical methods, research conducted to date is adequate to show that arsenic in air is mainly in the inorganic form. Reported average concentrations of As(III) and As(V) ranged up to 7.4 and 10.4 ng/m³, respectively, with As(V) being more prevalent than As(III) in most studies. Concentrations of the organic methylated arsenic compounds are negligible (in the pg/m³ range). However, because of the variability in study methods and measurement methodology, the authors were unable to determine the variation in arsenic composition as a function of source or particulate matter (PM) fraction. In this work, the authors also consider the implications of arsenic speciation in air for potential exposure and risks.
The authors conclude that it is important to synchronize sample collection, preparation, and analytical techniques in order to generate data more useful for arsenic inhalation risk assessment, and a more robust documentation of quality assurance/quality control (QA/QC) protocols is necessary to ensure accuracy, precision, representativeness, and comparability.
Flow Cytometry: Evolution of Microbiological Methods for Probiotics Enumeration.
Pane, Marco; Allesina, Serena; Amoruso, Angela; Nicola, Stefania; Deidda, Francesca; Mogna, Luca
2018-05-14
The purpose of this trial was to verify that the analytical method ISO 19344:2015 (E)-IDF 232:2015 (E) is valid and reliable for quantifying the concentration of the probiotic Lactobacillus rhamnosus GG (ATCC 53103) in a finished product formulation. Flow cytometry is emerging as an alternative rapid method for microbial detection, enumeration, and population profiling. Its use permits not only the determination of viable cell counts but also the enumeration of damaged and dead cell subpopulations. Results are expressed as TFU (Total Fluorescent Units) and AFU (Active Fluorescent Units). In December 2015, the International Standard ISO 19344-IDF 232 "Milk and milk products-Starter cultures, probiotics and fermented products-Quantification of lactic acid bacteria by flow cytometry" was published. This ISO can be applied universally, regardless of the species of interest. Analytical method validation was conducted on three different industrial batches of L. rhamnosus GG according to USP 39 <1225> and ICH Q2(R1) in terms of accuracy, precision (repeatability), intermediate precision (ruggedness), specificity, limit of quantification, linearity, range, and robustness. The data obtained on the three batches of finished product clearly demonstrated the validity and robustness of the cytofluorimetric analysis. On the basis of the results obtained, ISO 19344:2015 (E)-IDF 232:2015 (E) "Quantification of lactic acid bacteria by flow cytometry" can be used for the enumeration of L. rhamnosus GG in a finished product formulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.
1990-08-01
This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been baseline corrected. This ratio chromatogram provides a robust means for the identification and quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and used to calculate a ratio value equal to the ratio of the analyte concentrations in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and of an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
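The described workflow, a point-by-point ratio of baseline-corrected chromatograms followed by a variance-weighted average over the pure-elution region, can be sketched as follows; the weighting shown is one standard error-propagation choice (assuming equal noise in both runs) and the peak parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 1000)   # retention time axis
noise_sd = 0.01

def peak(center, width, height):
    return height * np.exp(-(t - center)**2 / (2 * width**2))

# two sequential, baseline-corrected chromatograms; the analyte eluting at
# t = 4 changes concentration by a factor of 1.5 between injections
c1 = peak(4.0, 0.2, 1.0) + noise_sd * rng.standard_normal(t.size)
c2 = peak(4.0, 0.2, 1.5) + noise_sd * rng.standard_normal(t.size)

ratio = c2 / c1                    # point-by-point ratio chromatogram

# variance-weighted average over the pure-elution window; weights are
# proportional to 1/Var(ratio) from first-order error propagation
win = (t > 3.6) & (t < 4.4)
w = c1[win]**2 / (1.0 + ratio[win]**2)
ratio_value = np.sum(w * ratio[win]) / np.sum(w)
```

Points near the peak apex, where the signal-to-noise ratio is highest, dominate the weighted average, which is the compensation the abstract describes.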
NASA Astrophysics Data System (ADS)
Imai, Takashi; Ota, Kaiichiro; Aoyagi, Toshio
2017-02-01
Phase reduction has been extensively used to study rhythmic phenomena. As a result of phase reduction, the rhythm dynamics of a given system can be described using the phase response curve. Measuring this characteristic curve is an important step toward understanding a system's behavior. Recently, a basic idea for a new measurement method (called the multicycle weighted spike-triggered average method) was proposed. This paper confirms the validity of this method by providing an analytical proof and demonstrates its effectiveness in actual experimental systems by applying the method to an oscillating electric circuit. Some practical tips to use the method are also presented.
Intelligent failure-tolerant control
NASA Technical Reports Server (NTRS)
Stengel, Robert F.
1991-01-01
An overview of failure-tolerant control is presented, beginning with robust control, progressing through parallel and analytical redundancy, and ending with rule-based systems and artificial neural networks. By design or implementation, failure-tolerant control systems are 'intelligent' systems. All failure-tolerant systems require some degree of robustness to protect against catastrophic failure; failure tolerance often can be improved by adaptivity in decision-making and control, as well as by redundancy in measurement and actuation. Reliability, maintainability, and survivability can be enhanced by failure tolerance, although each objective poses different goals for control system design. Artificial intelligence concepts are helpful for integrating and codifying failure-tolerant control systems, not as alternatives but as adjuncts to conventional design methods.
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of the preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation steps. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed from the calibration set. Using a decision tree and two OPLS-DA models, the sample preparation methods of the test set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions was achieved for the test set. Copyright © 2018 Elsevier B.V. All rights reserved.
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data, together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function towards systematically shallower inclinations. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as the precision parameter increases, leading to numerical instabilities. In this study, we succeeded in analytically cancelling the exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with the desired accuracy and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and its mean inclination estimates are the least biased towards shallow values.
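The analytical cancellation described, removing exponentially growing factors from the marginal Fisher log-likelihood, can be sketched using exponentially scaled Bessel functions. The density below is the standard marginal Fisher distribution for colatitude; the synthetic data, starting values, and optimizer choice are illustrative assumptions:

```python
import numpy as np
from scipy.special import i0e
from scipy.optimize import minimize

def neg_log_likelihood(params, inc):
    """Negative log-likelihood of inclination-only data (radians) under the
    marginal Fisher distribution. Exponentially large factors are cancelled
    analytically: log(2 sinh k) = k + log1p(-exp(-2k)) and
    log I0(x) = |x| + log(i0e(x)), so evaluation is stable for large kappa."""
    inc0, kappa = params
    if kappa <= 0:
        return np.inf
    th, th0 = np.pi / 2 - inc, np.pi / 2 - inc0      # colatitudes
    x = kappa * np.sin(th) * np.sin(th0)
    logf = (np.log(kappa) - kappa - np.log1p(-np.exp(-2.0 * kappa))
            + np.log(np.sin(th)) + kappa * np.cos(th) * np.cos(th0)
            + np.abs(x) + np.log(i0e(x)))
    return -logf.sum()

# synthetic steep, dispersed inclination data (illustrative, in radians)
rng = np.random.default_rng(4)
inc = np.deg2rad(np.clip(rng.normal(75.0, 8.0, 100), 1.0, 89.0))

res = minimize(neg_log_likelihood, x0=np.array([np.deg2rad(60.0), 10.0]),
               args=(inc,), method='Nelder-Mead')
inc_ml, kappa_ml = res.x        # ML mean inclination and precision parameter
```

Because `i0e` and `log1p` keep every term of order unity, the likelihood can be evaluated even for very large precision parameters, where naive evaluation of sinh and I0 overflows.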
Galli, C
2001-07-01
It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, the slope of the molecular extinction coefficient, and the analyte concentration. The theoretical calculations are compared with recent experimental results, and a parameter is introduced that can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors of up to approximately 4%, even at absorbance levels generally considered 'safe' (i.e., absorbance < 1). Careful consideration of the instrumental spectral width, analyte concentration, and slope of the molecular extinction coefficient is therefore required to ensure robust analytical methods.
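The systematic error can be reproduced numerically by averaging transmitted intensity, rather than absorbance, over the spectral bandwidth; the linear extinction-coefficient variation below is the simplest assumption:

```python
import numpy as np

def apparent_absorbance(A_center, slope_frac, npts=201):
    """Apparent absorbance measured with polychromatic radiation when the
    molecular extinction coefficient varies linearly across the spectral
    bandwidth. A_center is the true absorbance at the band center;
    slope_frac is the fractional change of absorbance over the full width."""
    w = np.linspace(-0.5, 0.5, npts)        # normalized wavelength offset
    A_lambda = A_center * (1.0 + slope_frac * w)
    T = np.mean(10.0 ** (-A_lambda))        # the detector averages intensity
    return -np.log10(T)

# the (negative) deviation grows with absorbance: the Beer-Lambert excursion
errors = {A: 100.0 * (apparent_absorbance(A, 0.5) - A) / A
          for A in (0.2, 0.5, 1.0, 2.0)}
```

Since 10^(-A) is convex in A, averaging transmittance always yields an apparent absorbance below the bandwidth-averaged true value, and the shortfall grows with concentration, matching the Note's few-percent error estimates.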
Developing Uncertainty Models for Robust Flutter Analysis Using Ground Vibration Test Data
NASA Technical Reports Server (NTRS)
Potter, Starr; Lind, Rick; Kehoe, Michael W. (Technical Monitor)
2001-01-01
A ground vibration test can be used to obtain information about structural dynamics that is important for flutter analysis. Traditionally, this information, such as the natural frequencies of modes, is used to update analytical models used to predict flutter speeds. The ground vibration test can also be used to obtain uncertainty models, such as natural frequencies and their associated variations, that can update analytical models for the purpose of predicting robust flutter speeds. Analyzing test data using the ∞-norm, rather than the traditional 2-norm, is shown to lead to a minimum-size uncertainty description and, consequently, a least-conservative robust flutter speed. This approach is demonstrated using ground vibration test data for the Aerostructures Test Wing. Different norms are used to formulate uncertainty models and their associated robust flutter speeds in order to evaluate which norm is least conservative.
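A toy numerical illustration of why the ∞-norm gives a minimum-size uncertainty description; the paper's actual robust flutter machinery (structured uncertainty and μ-analysis) is not reproduced here, and the deviation values are invented:

```python
import numpy as np

# deviations of a measured modal frequency (Hz) from the nominal analytical
# model across repeated ground vibration test runs (numbers are illustrative)
dev = np.array([0.12, -0.08, 0.05, 0.20, -0.15, 0.03])

# radius of the uncertainty description needed to cover the observations
radius_inf = np.max(np.abs(dev))   # infinity-norm: worst single deviation
radius_2 = np.linalg.norm(dev)     # 2-norm aggregates all deviations

# the infinity-norm radius is never larger than the 2-norm radius, so the
# uncertainty model is smaller and the robust speed less conservative
smaller = radius_inf <= radius_2
```

The ∞-norm bound covers every observed deviation exactly, while the 2-norm accumulates contributions from all runs and therefore overstates the required uncertainty radius.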
New robust bilinear least squares method for the analysis of spectral-pH matrix data.
Goicoechea, Héctor C; Olivieri, Alejandro C
2005-07-01
A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model that achieves the second-order advantage and handles multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the multivariate curve resolution-alternating least squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter provides the best analytical results with regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The algorithm discussed here is useful for such systems.
Olofsson, Madelen A; Bylund, Dan
2015-10-01
A liquid chromatography with electrospray ionization mass spectrometry method was developed to quantitatively and qualitatively analyze 13 hydroxamate siderophores (ferrichrome, ferrirubin, ferrirhodin, ferrichrysin, ferricrocin, ferrioxamine B, D1, E and G, neocoprogen I and II, coprogen and triacetylfusarinine C). Samples were preconcentrated on-line by a switch-valve setup prior to analyte separation on a Kinetex C18 column. Gradient elution was performed using a mixture of an ammonium formate buffer and acetonitrile. Total analysis time including column conditioning was 20.5 min. Analytes were fragmented by applying collision-induced dissociation, enabling structural identification by tandem mass spectrometry. Limit of detection values for the selected ion monitoring method ranged from 71 pM to 1.5 nM with corresponding values of two to nine times higher for the multiple reaction monitoring method. The liquid chromatography with mass spectrometry method resulted in a robust and sensitive quantification of hydroxamate siderophores as indicated by retention time stability, linearity, sensitivity, precision and recovery. The analytical error of the methods, assessed through random-order, duplicate analysis of soil samples extracted with a mixture of 10 mM phosphate buffer and methanol, appears negligible in relation to between-sample variations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Simplex-stochastic collocation method with improved scalability
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Dwight, R. P.; Cinnella, P.
2016-04-01
The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimension higher than five. The main purpose of this paper is to identify the bottlenecks and to improve upon this poor scalability. To do so, we propose an alternative interpolation-stencil technique based on the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly distributed simplex sampling.
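The abstract does not reproduce the authors' analytical map, but a standard construction with the same purpose maps sorted uniform variates to barycentric coordinates via their spacings, yielding points uniformly distributed over the standard simplex. A minimal sketch, assuming this classical map rather than the paper's specific one:

```python
import numpy as np

def uniform_simplex_samples(n, d, rng):
    """Map n uniform points in [0,1]^d to the standard d-simplex via
    sorted-coordinate spacings (an analytical, measure-preserving map)."""
    u = np.sort(rng.random((n, d)), axis=1)
    z = np.hstack([np.zeros((n, 1)), u, np.ones((n, 1))])
    return np.diff(z, axis=1)  # (n, d+1) barycentric coordinates, each row sums to 1

rng = np.random.default_rng(1)
x = uniform_simplex_samples(1000, 3, rng)  # 1000 uniform points on the 3-simplex
```

Such maps avoid rejection sampling entirely, which matters when many simplex sub-elements must be sampled in higher dimensions.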
Jeong, Yeong Ran; Kim, Sun Young; Park, Young Sam; Lee, Gyun Min
2018-03-21
N-glycans of therapeutic glycoproteins are critical quality attributes that should be monitored throughout all stages of biopharmaceutical development. To reduce both the time for sample preparation and the variations in analytical results, we have developed an N-glycan analysis method that includes improved 2-aminobenzoic acid (2-AA) labeling to easily remove deglycosylated proteins. Using this analytical method, 15 major 2-AA-labeled N-glycans of Enbrel ® were separated into single peaks in hydrophilic interaction chromatography mode and therefore could be quantitated. 2-AA-labeled N-glycans were also highly compatible with in-line quadrupole time-of-flight mass spectrometry (MS) for structural identification. The structures of 15 major and 18 minor N-glycans were identified from their mass values determined by quadrupole time-of-flight MS. Furthermore, the structures of 14 major N-glycans were confirmed by interpreting the MS/MS data of each N-glycan. This analytical method was also successfully applied to neutral N-glycans of Humira ® and highly sialylated N-glycans of NESP ® . Furthermore, the analysis data of Enbrel ® that were accumulated for 2.5 years demonstrated the high-level consistency of this analytical method. Taken together, the results show that a wide repertoire of N-glycans of therapeutic glycoproteins can be analyzed with high efficiency and consistency using the improved 2-AA labeling-based N-glycan analysis method. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Zhang, Zhiyong; Zhao, Dishun; Xu, Baoyun
2013-01-01
A simple and rapid method is described for the analysis of glyoxal and related substances by high-performance liquid chromatography with a refractive index detector. The following chromatographic conditions were adopted: Aminex HPX-87H column, mobile phase consisting of 0.01N H2SO4, flow rate of 0.8 mL/min and temperature of 65°C. The application of the analytical technique developed in this study demonstrated that the aqueous reaction mixture produced by the oxidation of acetaldehyde with HNO3 was composed of glyoxal, acetaldehyde, acetic acid, formic acid, glyoxylic acid, oxalic acid, butanedione and glycolic acid. The method was validated by evaluating analytical parameters such as linearity, limits of detection and quantification, precision, recovery and robustness. The proposed methodology was successfully applied to the production of glyoxal.
The induction of mycotoxins by trichothecene producing Fusarium species.
Lowe, Rohan; Jubault, Mélanie; Canning, Gail; Urban, Martin; Hammond-Kosack, Kim E
2012-01-01
In recent years, many Fusarium species have emerged which now threaten the productivity and safety of small grain cereal crops worldwide. During floral infection and post-harvest on stored grains the Fusarium hyphae produce various types of harmful mycotoxins which subsequently contaminate food and feed products. This article focuses specifically on the induction and production of the type B sesquiterpenoid trichothecene mycotoxins. Methods are described which permit in liquid culture the small or large scale production and detection of deoxynivalenol (DON) and its various acetylated derivatives. A wheat (Triticum aestivum L.) ear inoculation assay is also explained which allows the direct comparison of mycotoxin production by species, chemotypes and strains with different growth rates and/or disease-causing abilities. Each of these methods is robust and can be used for either detailed time-course studies or end-point analyses. Various analytical methods are available to quantify the levels of DON, 3A-DON and 15A-DON. Some criteria to be considered when making selections between the different analytical methods available are briefly discussed.
Approximating natural connectivity of scale-free networks based on largest eigenvalue
NASA Astrophysics Data System (ADS)
Tan, S.-Y.; Wu, J.; Li, M.-J.; Lu, X.
2016-06-01
It has been recently proposed that natural connectivity can be used to efficiently characterize the robustness of complex networks. The natural connectivity has an intuitive physical meaning and a simple mathematical formulation, corresponding to an average eigenvalue calculated from the graph spectrum. However, for scale-free networks, a model close to many widely occurring real-world systems, the spectrum is difficult to obtain analytically. In this article, we investigate the approximation of natural connectivity based on the largest eigenvalue in both random and correlated scale-free networks. It is demonstrated that the natural connectivity of scale-free networks can be dominated by the largest eigenvalue, which can be expressed asymptotically and analytically to approximate natural connectivity with small errors. Then we show that the natural connectivity of random scale-free networks increases linearly with the average degree given the scaling exponent and decreases monotonically with the scaling exponent given the average degree. Moreover, it is found that, given the degree distribution, the more assortative a scale-free network is, the more robust it is. Experiments in real networks validate our methods and results.
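The natural connectivity is the logarithm of the average eigenvalue exponential, λ̄ = ln((1/N) Σᵢ exp(λᵢ)), so whenever the largest eigenvalue dominates the spectrum, λ̄ ≈ λ_max − ln N. A minimal numerical check of this approximation on a generic dense random graph (an illustration of the dominance argument, not the paper's scale-free ensembles):

```python
import numpy as np

def natural_connectivity(A):
    """Exact natural connectivity from the full adjacency spectrum."""
    lam = np.linalg.eigvalsh(A)
    m = lam.max()  # log-sum-exp for numerical stability
    return m + np.log(np.exp(lam - m).sum()) - np.log(len(lam))

def natural_connectivity_approx(A):
    """Largest-eigenvalue approximation: lambda_max - ln N."""
    return np.linalg.eigvalsh(A)[-1] - np.log(A.shape[0])

# Random undirected graph whose largest eigenvalue dominates the spectrum
rng = np.random.default_rng(2)
N = 200
A = (rng.random((N, N)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T
exact = natural_connectivity(A)
approx = natural_connectivity_approx(A)
```

For this graph λ_max ≈ Np ≈ 20 while the bulk eigenvalues stay near ±2√(Np), so the exponential sum is dominated by the top term and the two values agree closely.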
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekechukwu, A
Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO), and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.
Anumol, Tarun; Lehotay, Steven J; Stevens, Joan; Zweigenbaum, Jerry
2017-04-01
Veterinary drug residues in animal-derived foods must be monitored to ensure food safety, verify proper veterinary practices, enforce legal limits in domestic and imported foods, and for other purposes. A common goal in drug residue analysis in foods is to achieve acceptable monitoring results for as many analytes as possible, with higher priority given to the drugs of most concern, in an efficient and robust manner. The U.S. Department of Agriculture has implemented a multiclass, multi-residue method based on sample preparation using dispersive solid phase extraction (d-SPE) for cleanup and ultrahigh-performance liquid chromatography-tandem quadrupole mass spectrometry (UHPLC-QQQ) for analysis of >120 drugs at regulatory levels of concern in animal tissues. Recently, a new cleanup product called "enhanced matrix removal for lipids" (EMR-L) was commercially introduced that used a unique chemical mechanism to remove lipids from extracts. Furthermore, high-resolution quadrupole-time-of-flight (Q/TOF) for (U)HPLC detection often yields higher selectivity than targeted QQQ analyzers while allowing retroactive processing of samples for other contaminants. In this study, the use of both d-SPE and EMR-L sample preparation and UHPLC-QQQ and UHPLC-Q/TOF analysis methods for shared spiked samples of bovine muscle, kidney, and liver was compared. The results showed that the EMR-L method provided cleaner extracts overall and improved results for several anthelmintics and tranquilizers compared to the d-SPE method, but the EMR-L method gave lower recoveries for certain β-lactam antibiotics. QQQ vs. Q/TOF detection showed similar mixed performance advantages depending on analytes and matrix interferences, with an advantage to Q/TOF for greater possible analytical scope and non-targeted data collection. 
Either combination of approaches may be used to meet monitoring purposes, with an edge in efficiency to d-SPE, but greater instrument robustness and fewer matrix effects when analyzing EMR-L extracts. Graphical abstract: Comparison of cleanup methods in the analysis of veterinary drug residues in bovine tissues.
Hanson, Jeffery A; Yang, Haw
2008-11-06
The statistical properties of the cross correlation between two time series have been studied. An analytical expression for the variance of the cross correlation function has been derived. On the basis of these results, a statistically robust method has been proposed to detect the existence and determine the direction of cross correlation between two time series. The proposed method has been characterized by computer simulations. Applications to single-molecule fluorescence spectroscopy are discussed. The results may also find immediate applications in fluorescence correlation spectroscopy (FCS) and its variants.
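A hedged sketch of the underlying idea: for two independent white-noise series of length N, the sample cross-correlation has variance of order 1/N, which yields a simple threshold for deciding whether a measured correlation is statistically meaningful. The paper derives the exact variance expression; the 3/√N rule below is only the standard large-N approximation:

```python
import numpy as np

def cross_corr(x, y, lag=0):
    """Normalized sample cross-correlation of two series at a given lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.mean(x * y)

rng = np.random.default_rng(3)
N = 5000
a = rng.normal(size=N)
b = 0.5 * a + rng.normal(size=N)   # genuinely correlated with a
c = rng.normal(size=N)             # independent of a

# Under the null (independent white series) the sample cross-correlation
# has variance ~ 1/N, giving a 3-sigma detection threshold:
threshold = 3.0 / np.sqrt(N)
```

Scanning `cross_corr` over positive and negative lags and locating the first significant peak is one way to read off the direction of the correlation, in the spirit of the method described above.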
Numerical realization of the variational method for generating self-trapped beams.
Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A
2018-03-19
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
General method for extracting the quantum efficiency of dispersive qubit readout in circuit QED
NASA Astrophysics Data System (ADS)
Bultink, C. C.; Tarasinski, B.; Haandbæk, N.; Poletto, S.; Haider, N.; Michalak, D. J.; Bruno, A.; DiCarlo, L.
2018-02-01
We present and demonstrate a general three-step method for extracting the quantum efficiency of dispersive qubit readout in circuit QED. We use active depletion of post-measurement photons and optimal integration weight functions on two quadratures to maximize the signal-to-noise ratio of the non-steady-state homodyne measurement. We derive analytically and demonstrate experimentally that the method robustly extracts the quantum efficiency for arbitrary readout conditions in the linear regime. We use the proven method to optimally bias a Josephson traveling-wave parametric amplifier and to quantify different noise contributions in the readout amplification chain.
Optimization and automation of quantitative NMR data extraction.
Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos
2013-06-18
NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well-known, but relatively little attention has been applied to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and determine analyte concentration using all accepted methods. The algorithm is based on the complete deconvolution of the spectrum which makes it tolerant of cases where signals are very close to one another and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplets assignments. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and applicability to NMR data acquired for very large sample sets.
Suppressing spectral diffusion of emitted photons with optical pulses
Fotso, H. F.; Feiguin, A. E.; Awschalom, D. D.; ...
2016-01-22
In many quantum architectures the solid-state qubits, such as quantum dots or color centers, are interfaced via emitted photons. However, the frequency of photons emitted by solid-state systems exhibits slow uncontrollable fluctuations over time (spectral diffusion), creating a serious problem for implementation of the photon-mediated protocols. Here we show that a sequence of optical pulses applied to the solid-state emitter can stabilize the emission line at the desired frequency. We demonstrate efficiency, robustness, and feasibility of the method analytically and numerically. Taking the nitrogen-vacancy center in diamond as an example, we show that only several pulses, with a width of 1 ns, separated by a few ns (which is not difficult to achieve), can suppress spectral diffusion. As a result, our method provides a simple and robust way to greatly improve the efficiency of photon-mediated entanglement and/or coupling to photonic cavities for solid-state qubits.
Day, Jason A; Montes-Bayón, María; Vonderheide, Anne P; Caruso, Joseph A
2002-08-01
Regulating arsenic species in drinking waters is a reasonable objective, since the various species have different toxicological impacts. However, developing robust and sensitive speciation methods is mandatory prior to any such regulations. Numerous arsenic speciation publications exist, but the question of robustness or ruggedness for a regulatory method has not been fully explored. The present work illustrates the use of anion exchange chromatography coupled to ICP-MS with a commercially available "speciation kit" option. The mobile phase containing 2 mM NaH(2)PO(4) and 0.2 mM EDTA at pH 6 allowed adequate separation of four As species (As(III), As(V), MMAA, DMAA) in less than 10 min. The analytical performance characteristics studied, including method detection limits (lower than 100 ng L(-1) for all the species evaluated), proved the suitability of the method to fulfill the current regulation. Other parameters evaluated such as laboratory fortified blanks, spiked recoveries, and reproducibility over a certain period of time produced adequate results. The samples analyzed were taken from water utilities in different areas of the United States and were provided by the U.S. EPA. The data suggests the speciation setup performs to U.S. EPA specifications but sample treatment and chemistry are also important factors for achieving good recoveries for samples spiked with As(III) as arsenite and As(V) as arsenate.
Comparison of beam position calculation methods for application in digital acquisition systems
NASA Astrophysics Data System (ADS)
Reiter, A.; Singh, R.
2018-05-01
Different approaches to the data analysis of beam position monitors in hadron accelerators are compared adopting the perspective of an analog-to-digital converter in a sampling acquisition system. Special emphasis is given to position uncertainty and robustness against bias and interference that may be encountered in an accelerator environment. In a time-domain analysis of data in the presence of statistical noise, the position calculation based on the difference-over-sum method with algorithms like signal integral or power can be interpreted as a least-squares analysis of a corresponding fit function. This link to the least-squares method is exploited in the evaluation of analysis properties and in the calculation of position uncertainty. In an analytical model and experimental evaluations the positions derived from a straight line fit or equivalently the standard deviation are found to be the most robust and to offer the least variance. The measured position uncertainty is consistent with the model prediction in our experiment, and the results of tune measurements improve significantly.
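The difference-over-sum position estimate discussed above can be sketched in a few lines. The monitor constant `k` and the synthetic pulse shape are illustrative assumptions, not values from the paper:

```python
import numpy as np

def beam_position(left, right, k=1.0):
    """Difference-over-sum position estimate from two pickup electrodes,
    using the signal integrals (sums over ADC samples)."""
    L, R = np.sum(left), np.sum(right)
    return k * (R - L) / (R + L)

# Synthetic pickup waveforms: a common pulse shape, with the two electrode
# amplitudes set by the (normalized) beam displacement
t = np.linspace(0, 1, 200)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
x_true = 0.2                     # hypothetical displacement; k = 1 assumed
left = (1 - x_true) * pulse
right = (1 + x_true) * pulse
x_est = beam_position(left, right)
```

The paper's point is that replacing the plain integral with, e.g., the signal's standard deviation (equivalently a straight-line fit in the least-squares picture) changes the effective weighting of samples and hence the position variance; the function above corresponds to the simplest integral algorithm.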
Fair, Justin D.; Bailey, William F.; Felty, Robert A.; Gifford, Amy E.; Shultes, Benjamin; Volles, Leslie H.
2010-01-01
Development of a robust, reliable technique that permits the rapid quantitation of volatile organic chemicals is an important first step in remediation associated with vapor intrusion. This paper describes the development of an analytical method that allows rapid and precise identification and quantitation of halogenated and nonhalogenated contaminants commonly found at the ppbv level at sites where vapor intrusion is a concern. PMID:20885969
NASA Astrophysics Data System (ADS)
Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling
2017-11-01
Identifying whole blood as human or nonhuman is an important responsibility of import-export ports and inspection and quarantine departments. Analytical and DNA testing methods are usually destructive. Previous studies demonstrated that visible diffuse reflectance spectroscopy can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is critical for a robust quantitative model. In this paper, the Random Selection (RS) and Kennard-Stone (KS) methods were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. Partial Least Squares Discriminant Analysis (PLSDA) is commonly used to identify blood species from spectroscopic data, while the Least Squares Support Vector Machine (LSSVM) has proved well suited to discrimination analysis. In this research, both PLSDA and LSSVM were used for human blood discrimination. Compared with PLSDA, LSSVM enhanced the performance of the identification models. The overall results confirmed that LSSVM was more feasible for discriminating human and animal blood species, demonstrating that it is a reliable, robust, and more accurate method for human blood identification.
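The LSSVM classifier referenced above reduces training to a single linear system in the dual variables, one reason it is attractive for spectral discrimination. A self-contained sketch on toy two-class data standing in for the blood spectra; the kernel width and regularization values are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM: equality constraints turn the dual into one linear system."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma   # regularized kernel block
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]              # bias b, dual coefficients alpha

def lssvm_predict(X, Xtr, b, alpha, sigma=1.0):
    return np.sign(rbf_kernel(X, Xtr, sigma) @ alpha + b)

# Toy two-class problem with labels -1 / +1
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-1, 0.3, (30, 2)), rng.normal(1, 0.3, (30, 2))])
y = np.concatenate([-np.ones(30), np.ones(30)])
b, alpha = lssvm_train(X, y)
acc = (lssvm_predict(X, X, b, alpha) == y).mean()
```

Solving one dense linear system instead of a quadratic program is what makes LSSVM fast to retrain when the calibration set changes (e.g., RS vs. KS selection).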
SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal
Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.
2017-01-01
Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758
The Influence of Judgment Calls on Meta-Analytic Findings.
Tarrahi, Farid; Eisend, Martin
2016-01-01
Previous research has suggested that judgment calls (i.e., methodological choices made in the process of conducting a meta-analysis) have a strong influence on meta-analytic findings and question their robustness. However, prior research applies case study comparison or reanalysis of a few meta-analyses with a focus on a few selected judgment calls. These studies neglect the fact that different judgment calls are related to each other and simultaneously influence the outcomes of a meta-analysis, and that meta-analytic findings can vary due to non-judgment call differences between meta-analyses (e.g., variations of effects over time). The current study analyzes the influence of 13 judgment calls in 176 meta-analyses in marketing research by applying a multivariate, multilevel meta-meta-analysis. The analysis considers simultaneous influences from different judgment calls on meta-analytic effect sizes and controls for alternative explanations based on non-judgment call differences between meta-analyses. The findings suggest that judgment calls have only a minor influence on meta-analytic findings, whereas non-judgment call differences between meta-analyses are more likely to explain differences in meta-analytic findings. The findings support the robustness of meta-analytic results and conclusions.
Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2010-01-01
How to estimate the diffusion Ensemble Average Propagator (EAP) from DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe fiber directions. However, the ODF is just one feature of the EAP. Compared with the ODF, the EAP contains the full information about the diffusion process, which reflects the complex tissue micro-structure. Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods to estimate the EAP from the signal. However, DOT is based on a mono-exponential assumption, and DSI needs many samples and very large b-values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, robust, analytical EAP reconstruction method, which requires almost no assumptions about the data and does not need many samples. SPFI naturally combines DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic data, phantom data, and real data. It works well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.
Passman, Dina B.
2013-01-01
Objective The objective of this demonstration is to show conference attendees how they can integrate, analyze, and visualize diverse data types from across a variety of systems by leveraging an off-the-shelf enterprise business intelligence (EBI) solution to support decision-making in disasters. Introduction Fusion Analytics is the data integration system developed by the Fusion Cell at the U.S. Department of Health and Human Services (HHS), Office of the Assistant Secretary for Preparedness and Response (ASPR). Fusion Analytics meaningfully augments traditional public and population health surveillance reporting by providing web-based data analysis and visualization tools. Methods Fusion Analytics serves as a one-stop-shop for the web-based data visualizations of multiple real-time data sources within ASPR. The 24-7 web availability makes it an ideal analytic tool for situational awareness and response, allowing stakeholders to access the portal from any internet-enabled device without installing any software. The Fusion Analytics data integration system was built using off-the-shelf EBI software. Fusion Analytics leverages the full power of statistical analysis software and delivers reports to users in a secure web-based environment. Fusion Analytics provides an example of how public health staff can develop and deploy a robust public health informatics solution using an off-the-shelf product and with limited development funding. It also provides the unique example of a public health information system that combines patient data for traditional disease surveillance with manpower and resource data to provide overall decision support for federal public health and medical disaster response operations. Conclusions We are currently in a unique position within public health. On the one hand, we have been gaining greater and greater access to electronic data of all kinds over the last few years. 
On the other, we are working in a time of reduced government spending to support leveraging this data for decision support with robust analytics and visualizations. Fusion Analytics provides an opportunity for attendees to see how various types of data are integrated into a single application for population health decision support. It also can provide them with ideas of how they can use their own staff to create analyses and reports that support their public health activities.
Robust regression for large-scale neuroimaging studies.
Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand
2015-05-01
Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
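Robust regression in this setting typically means an M-estimator fitted by iteratively reweighted least squares. A minimal sketch with Huber weights and a MAD scale estimate, which is a generic M-estimator and not the study's exact estimator or neuroimaging pipeline:

```python
import numpy as np

def huber_regression(X, y, delta=1.345, n_iter=50):
    """Iteratively reweighted least squares with Huber weights
    (generic M-estimator; illustrative only)."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])            # intercept + covariate design
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]     # start from OLS
    for _ in range(n_iter):
        r = y - Xd @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust MAD scale
        u = np.abs(r) / s
        w = np.where(u <= delta, 1.0, delta / u)     # Huber weights downweight outliers
        sw = np.sqrt(w)[:, None]
        beta = np.linalg.lstsq(Xd * sw, y * sw.ravel(), rcond=None)[0]
    return beta

# Synthetic cohort: y = 2 + 3x with ten gross outliers
rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 100)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, 100)
y[:10] += 50.0
beta_ols = np.linalg.lstsq(np.column_stack([np.ones(100), x]), y, rcond=None)[0]
beta_rob = huber_regression(x, y)
```

The robust fit stays near the true coefficients while OLS is pulled toward the outliers, which is the behavior the study exploits at the scale of hundreds of subjects and many voxels.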
Panetta, Robert J; Jahren, A Hope
2011-05-30
Gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS) is increasingly applied to food and metabolic studies for stable isotope analysis (δ(13) C), with the quantification of analyte concentration often obtained via a second alternative method. We describe a rapid direct transesterification of triacylglycerides (TAGs) for fatty acid methyl ester (FAME) analysis by GC-C-IRMS demonstrating robust simultaneous quantification of amount of analyte (mean r(2) =0.99, accuracy ±2% for 37 FAMEs) and δ(13) C (±0.13‰) in a single analytical run. The maximum FAME yield and optimal δ(13) C values are obtained by derivatizing with 10% (v/v) acetyl chloride in methanol for 1 h, while lower levels of acetyl chloride and shorter reaction times skewed the δ(13) C values by as much as 0.80‰. A Bland-Altman evaluation of the GC-C-IRMS measurements resulted in excellent agreement for pure oils (±0.08‰) and oils extracted from French fries (±0.49‰), demonstrating reliable simultaneous quantification of FAME concentration and δ(13) C values. Thus, we conclude that for studies requiring both the quantification of analyte and δ(13) C data, such as authentication or metabolic flux studies, GC-C-IRMS can be used as the sole analytical method. Copyright © 2011 John Wiley & Sons, Ltd.
Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.
Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J
2017-12-01
Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
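The fixed-effect and random-effects estimators under discussion can be made concrete in a few lines. The sketch below implements inverse-variance fixed-effect pooling and the classical DerSimonian-Laird random-effects estimate (one of the estimators the authors critique); the study effects and variances are made-up illustrative numbers:

```python
import numpy as np

def fixed_effect(effects, variances):
    """Inverse-variance fixed-effect pooled estimate and its variance."""
    w = 1.0 / variances
    return np.sum(w * effects) / np.sum(w), 1.0 / np.sum(w)

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimate."""
    w = 1.0 / variances
    fe, _ = fixed_effect(effects, variances)
    Q = np.sum(w * (effects - fe) ** 2)              # Cochran's heterogeneity statistic
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(effects) - 1)) / C)    # between-study variance
    w_star = 1.0 / (variances + tau2)
    return np.sum(w_star * effects) / np.sum(w_star), 1.0 / np.sum(w_star), tau2

effects = np.array([0.10, 0.30, 0.35, 0.65, 0.45])
variances = np.array([0.015, 0.020, 0.010, 0.025, 0.012])
fe, v_fe = fixed_effect(effects, variances)
re, v_re, tau2 = dersimonian_laird(effects, variances)
```

With heterogeneous effects, tau² comes out positive and the random-effects variance exceeds the fixed-effect one; the authors' argument is that this correction still misestimates the error, motivating alternative estimators within the fixed-effect framework.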
2018-01-01
A simple, sensitive, accurate, robust headspace gas chromatographic method was developed for the quantitative determination of acetone and isopropyl alcohol in tartaric acid-based pellets of dipyridamole modified release capsules. The residual solvents acetone and isopropyl alcohol were used in the manufacturing process of the tartaric acid-based pellets of dipyridamole modified release capsules by considering the solubility of the dipyridamole and excipients in the different manufacturing stages. The method was developed and optimized by using fused silica DB-624 (30 m × 0.32 mm × 1.8 µm) column with the flame ionization detector. The method validation was carried out with regard to the guidelines for validation of analytical procedures Q2 demanded by the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). All the validation characteristics were meeting the acceptance criteria. Hence, the developed and validated method can be applied for the intended routine analysis. PMID:29686931
Valavala, Sriram; Seelam, Nareshvarma; Tondepu, Subbaiah; Jagarlapudi, V Shanmukha Kumar; Sundarmurthy, Vivekanandan
2018-01-01
Advances in Molecular Rotational Spectroscopy for Applied Science
NASA Astrophysics Data System (ADS)
Harris, Brent; Fields, Shelby S.; Pulliam, Robin; Muckle, Matt; Neill, Justin L.
2017-06-01
Advances in chemical sensitivity and robust, solid-state designs for microwave/millimeter-wave instrumentation compel the expansion of molecular rotational spectroscopy as a research tool into applied science. It is familiar to consider molecular rotational spectroscopy for air analysis. Those techniques are included in our presentation of a broader application space for materials analysis using Fourier Transform Molecular Rotational Resonance (FT-MRR) spectrometers. There are potentially transformative advantages for direct gas analysis of complex mixtures, determination of unknown evolved gases with parts-per-trillion detection limits in solid materials, and unambiguous chiral determination. The introduction of FT-MRR as an alternative detection principle for analytical chemistry has created a ripe research space for the development of new analytical methods and sampling equipment to fully enable FT-MRR. We present the current state of purpose-built FT-MRR instrumentation and the latest application measurements that make use of new sampling methods.
Nurhuda, M; Rouf, A
2017-09-01
The paper presents a method for simultaneous computation of the eigenfunctions and eigenvalues of the stationary Schrödinger equation on a grid, without imposing a boundary-value condition. The method is based on a filter operator, which selects an eigenfunction from a wave packet at a rate comparable to that of a δ function. The efficacy and reliability of the method are demonstrated by comparing the simulation results with analytical or numerical solutions obtained using other methods for various boundary-value conditions. The method is found to be robust, accurate, and reliable. Further prospects of the filter method for simulation of the Schrödinger equation in higher-dimensional spaces are also highlighted.
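The paper's specific filter operator is not reproduced here, but the general idea can be sketched: apply a Gaussian function of the Hamiltonian to a broadband trial wave packet, damping every eigencomponent except those near a target energy. The sketch below uses the 1-D harmonic oscillator as a stand-in problem; the authors' actual filter construction may differ:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Discretized 1-D Hamiltonian H = -1/2 d^2/dx^2 + x^2/2 (harmonic oscillator)
N, L = 80, 8.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
H = (-0.5 * lap + sp.diags(0.5 * x**2)).tocsc()

# Gaussian filter exp(-tau (H - E)^2): damps every eigencomponent of the
# wave packet except those with eigenvalue near the target energy E
E_target, tau = 0.4, 5.0
shifted = (H - E_target * sp.identity(N)).tocsc()
psi = expm_multiply(-tau * (shifted @ shifted), np.ones(N))
psi /= np.linalg.norm(psi)

# Rayleigh quotient recovers the eigenvalue nearest E_target
# (continuum ground-state energy is 0.5; the grid adds O(dx^2) error)
E = psi @ (H @ psi)
```

One application of the filter already suppresses the neighbouring eigencomponents by many orders of magnitude; repeating it, or narrowing the filter, sharpens the selection further.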
Analysis of short-chain fatty acids in human feces: A scoping review.
Primec, Maša; Mičetić-Turk, Dušanka; Langerholc, Tomaž
2017-06-01
Short-chain fatty acids (SCFAs) play a crucial role in maintaining homeostasis in humans, therefore the importance of a good and reliable SCFAs analytical detection has raised a lot in the past few years. The aim of this scoping review is to show the trends in the development of different methods of SCFAs analysis in feces, based on the literature published in the last eleven years in all major indexing databases. The search criteria included analytical quantification techniques of SCFAs in different human clinical and in vivo studies. SCFAs analysis is still predominantly performed using gas chromatography (GC), followed by high performance liquid chromatography (HPLC), nuclear magnetic resonance (NMR) and capillary electrophoresis (CE). Performances, drawbacks and advantages of these methods are discussed, especially in the light of choosing a proper pretreatment, as feces is a complex biological material. Further optimization to develop a simple, cost effective and robust method for routine use is needed. Copyright © 2017 Elsevier Inc. All rights reserved.
D'Alessandro, Angelo; Gevi, Federica; Zolla, Lello
2011-04-01
Recent advancements in the field of omics sciences have paved the way for further expansion of metabolomics. Originally tied to NMR spectroscopy, metabolomic disciplines increasingly involve HPLC and mass spectrometry (MS)-based analytical strategies and, in this context, we propose a robust and efficient extraction protocol for metabolites from four different biological sources, which are subsequently analysed, identified and quantified through high-resolution reversed-phase fast HPLC and mass spectrometry. To this end, we demonstrate the elevated intra- and inter-day technical reproducibility and ease of an MRM-based MS method, allowing simultaneous detection of up to 10 distinct features, and the robustness of multiple metabolite detection and quantification in four different biological samples. This strategy might become routinely applicable to various samples/biological matrices, especially low-availability ones. In parallel, we compare the present strategy for targeted detection of a representative metabolite, L-glutamic acid, with our previously proposed chemical derivatization through dansyl chloride. A direct comparison of the present method against spectrophotometric assays is proposed as well. An application of the proposed method is also introduced, using the SAOS-2 cell line, either induced or non-induced to express the TAp63 isoform of the p63 gene, as a model for determination of variations of glutamate concentrations.
Quantitative Examination of Corrosion Damage by Means of Thermal Response Measurements
NASA Technical Reports Server (NTRS)
Rajic, Nik
1998-01-01
Two computational methods are presented that enable a characterization of corrosion damage to be performed from thermal response measurements derived from a standard flash thermographic inspection. The first is based upon a one dimensional analytical solution to the heat diffusion equation and presumes the lateral extent of damage is large compared to the residual structural thickness, such that lateral heat diffusion effects can be considered insignificant. The second proposed method, based on a finite element optimization scheme, addresses the more general case where these conditions are not met. Results from an experimental application are given to illustrate the precision, robustness and practical efficacy of both methods.
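The one-dimensional analytical route can be sketched with the classical series solution for the rear-face temperature rise after an instantaneous flash, inverted for thickness through Parker's half-rise-time relation. The material values are illustrative and the ideal-flash, no-lateral-diffusion assumptions are exactly the simplifications the paper's first method makes:

```python
import numpy as np

# Series solution for the normalized rear-face temperature rise of a plate
# after an ideal instantaneous flash on the front face (1-D diffusion)
def rear_face_rise(t, alpha, L, n_terms=200):
    n = np.arange(1, n_terms + 1)
    terms = (-1.0) ** n * np.exp(-(n**2) * np.pi**2 * alpha * t[:, None] / L**2)
    return 1 + 2 * terms.sum(axis=1)

alpha = 1.0e-5          # thermal diffusivity, m^2/s (illustrative)
L_true = 0.005          # residual thickness, m
t = np.linspace(1e-3, 2.0, 20000)
theta = rear_face_rise(t, alpha, L_true)

# Parker's half-rise relation alpha = 1.38 L^2 / (pi^2 t_half),
# inverted here to estimate the residual thickness from t_half
t_half = t[np.searchsorted(theta, 0.5)]
L_est = np.sqrt(np.pi**2 * alpha * t_half / 1.38)
```

With a known diffusivity, the recovered thickness agrees with the true value to well under a percent; when the damage footprint is small, this 1-D inversion breaks down and the finite-element approach of the second method is needed.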
Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...
2016-10-18
In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] samples run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities. We must emphasize the importance of the training and good analytical procedures needed to generate this data. As a result, when combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.
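Separating an analyst effect from total analytical variability, as described above, can be sketched on simulated data as a balanced one-way random-effects ANOVA; the sample sizes and variance magnitudes below are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated balanced design: several analysts, many replicate analyses,
# with a between-analyst bias on top of within-analyst noise
analysts, reps = 7, 17
sd_between, sd_within = 0.30, 0.50          # true SDs, arbitrary units
bias = rng.normal(0.0, sd_between, analysts)
data = 42.0 + bias[:, None] + rng.normal(0.0, sd_within, (analysts, reps))

# One-way random-effects ANOVA: mean squares between and within analysts
grand = data.mean()
ms_between = reps * ((data.mean(axis=1) - grand) ** 2).sum() / (analysts - 1)
ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (analysts * (reps - 1))

# Variance components: E[MSB] = reps * var_b + var_w, E[MSW] = var_w
var_within = ms_within
var_between = max(0.0, (ms_between - ms_within) / reps)
```

The within-analyst component is estimated tightly; the between-analyst component carries far fewer degrees of freedom, which is why long-term, multi-analyst datasets like the one reported here are needed to pin it down.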
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adaptive Control for Autonomous Navigation of Mobile Robots Considering Time Delay and Uncertainty
NASA Astrophysics Data System (ADS)
Armah, Stephen Kofi
Autonomous control of mobile robots has attracted considerable attention of researchers in the areas of robotics and autonomous systems during the past decades. One of the goals in the field of mobile robotics is the development of platforms that robustly operate in given, partially unknown, or unpredictable environments and offer desired services to humans. Autonomous mobile robots need to be equipped with effective, robust, and/or adaptive navigation control systems. In spite of the enormous reported work on autonomous navigation control systems for mobile robots, achieving the goal above is still an open problem. Robustness and reliability of the controlled system can always be improved. The fundamental issues affecting the stability of the control systems include the undesired nonlinear effects introduced by actuator saturation, time delay in the controlled system, and uncertainty in the model. This research work develops robustly stabilizing control systems by investigating and addressing such nonlinear effects through analysis, simulations, and experiments. The control systems are designed to meet specified transient and steady-state specifications. The systems used for this research are ground (Dr Robot X80SV) and aerial (Parrot AR.Drone 2.0) mobile robots. Firstly, an effective autonomous navigation control system is developed for the X80SV using logic control by combining 'go-to-goal', 'avoid-obstacle', and 'follow-wall' controllers. A MATLAB robot simulator is developed to implement this control algorithm and experiments are conducted in a typical office environment. The next stage of the research develops autonomous position (x, y, and z) and attitude (roll, pitch, and yaw) controllers for a quadrotor, and PD-feedback control is used to achieve stabilization. The quadrotor's nonlinear dynamics and kinematics are implemented using a MATLAB S-function to generate the state output.
Secondly, white-box and black-box approaches are used to obtain linearized second-order altitude models for the quadrotor, the AR.Drone 2.0. Proportional (P), pole placement or proportional plus velocity (PV), linear quadratic regulator (LQR), and model reference adaptive control (MRAC) controllers are designed and validated through simulations using MATLAB/Simulink. Control input saturation and time delay in the controlled systems are also studied. MATLAB graphical user interface (GUI) and Simulink programs are developed to implement the controllers on the drone. Thirdly, the time delay in the drone's control system is estimated using analytical and experimental methods. In the experimental approach, the transient properties of the experimental altitude responses are compared to those of simulated responses. The analytical approach makes use of the Lambert W function to obtain analytical solutions of scalar first-order delay differential equations (DDEs). A time-delayed P-feedback control system (retarded type) is used in estimating the time delay. Then an improved system performance is obtained by incorporating the estimated time delay in the design of the PV control system (neutral type) and the PV-MRAC control system. Furthermore, the stability of a parametrically perturbed linear time-invariant (LTI) retarded-type system is studied. This is done by analytically calculating the stability radius of the system. Simulation of the control system is conducted to confirm the stability. This robust control design and uncertainty analysis are conducted for first-order and second-order quadrotor models. Lastly, the robustly designed PV and PV-MRAC control systems are used to autonomously track multiple waypoints. Also, the robustness of the PV-MRAC controller is tested against a baseline PV controller using the payload capability of the drone. It is shown that the PV-MRAC offers several benefits over the fixed-gain approach of the PV controller.
The adaptive control is found to offer enhanced robustness to the payload fluctuations.
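The Lambert W route to DDE characteristic roots mentioned above can be sketched for a scalar first-order delay system; the coefficients below are illustrative placeholders, not the drone's identified model:

```python
import numpy as np
from scipy.special import lambertw

# Scalar first-order DDE  x'(t) = a x(t) + b x(t - h).
# Its characteristic equation lambda = a + b exp(-lambda h) is solved by
# lambda_k = a + W_k(b h exp(-a h)) / h over the branches of the Lambert W
# function; the principal branch W_0 yields the rightmost root here,
# so it alone decides stability
a, b, h = -1.0, -0.3, 0.5      # illustrative coefficients, not a fitted model
lam = a + lambertw(b * h * np.exp(-a * h), k=0) / h

residual = lam - (a + b * np.exp(-lam * h))   # verify it solves the equation
stable = lam.real < 0
```

Sweeping a feedback gain through b and checking the sign of the rightmost root's real part is exactly how a delay margin can be read off analytically instead of by simulation.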
Orlandini, Serena; Pasquini, Benedetta; Caprini, Claudia; Del Bubba, Massimo; Pinzauti, Sergio; Furlanetto, Sandra
2015-11-01
A fast and selective CE method for the determination of zolmitriptan (ZOL) and its five potential impurities has been developed applying the analytical Quality by Design principles. Voltage, temperature, buffer concentration, and pH were investigated as critical process parameters that can influence the critical quality attributes, represented by critical resolution values between peak pairs, analysis time, and peak efficiency of ZOL-dimer. A symmetric screening matrix was employed for investigating the knowledge space, and a Box-Behnken design was used to evaluate the main, interaction, and quadratic effects of the critical process parameters on the critical quality attributes. Contour plots were drawn highlighting important interactions between buffer concentration and pH, and the gained information was merged into the sweet spot plots. Design space (DS) was established by the combined use of response surface methodology and Monte Carlo simulations, introducing a probability concept and thus allowing the quality of the analytical performances to be assured in a defined domain. The working conditions (with the interval defining the DS) were as follows: BGE, 138 mM (115-150 mM) phosphate buffer pH 2.74 (2.54-2.94); temperature, 25°C (24-25°C); voltage, 30 kV. A control strategy was planned based on method robustness and system suitability criteria. The main advantages of applying the Quality by Design concept consisted of a great increase of knowledge of the analytical system, obtained throughout multivariate techniques, and of the achievement of analytical assurance of quality, derived by probability-based definition of DS. The developed method was finally validated and applied to the analysis of ZOL tablets. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
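The probability-based design space definition described above can be sketched as a Monte Carlo sweep over a fitted response surface. The quadratic model, noise level, and grid below are hypothetical stand-ins, not the paper's fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic response surface for a critical resolution Rs as a
# function of buffer concentration (mM) and pH, written in coded units
def resolution(conc, pH):
    c, p = (conc - 130.0) / 20.0, (pH - 2.7) / 0.2
    return 1.9 + 0.3 * c - 0.2 * p - 0.15 * c * p - 0.1 * p**2

# Monte Carlo design space: at each grid point, the probability that the
# resolution criterion Rs >= 1.5 is met, given residual model error
conc_grid = np.linspace(100, 160, 31)
pH_grid = np.linspace(2.4, 3.0, 31)
prob = np.empty((31, 31))
for i, c in enumerate(conc_grid):
    for j, p in enumerate(pH_grid):
        draws = resolution(c, p) + rng.normal(0.0, 0.1, 5000)
        prob[i, j] = np.mean(draws >= 1.5)

design_space = prob >= 0.95   # region where quality is assured with 95% probability
```

The resulting boolean map is the probability-based design space: unlike a contour of the mean response, it shrinks as model uncertainty grows, which is the "analytical assurance of quality" the abstract emphasizes.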
Modeling heading and path perception from optic flow in the case of independently moving objects
Raudies, Florian; Neumann, Heiko
2013-01-01
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589
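A linear least squares heading estimate of the kind evaluated above can be sketched for a purely translational flow field; this toy version ignores rotation and IMO segmentation, which is precisely why such an estimator is biased when a moving object contaminates the flow:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic translational flow: every vector points away from the focus of
# expansion (FOE), scaled by the inverse depth of the scene point
foe = np.array([0.2, -0.1])
pts = rng.uniform(-1.0, 1.0, (200, 2))
inv_depth = rng.uniform(0.5, 2.0, 200)
flow = inv_depth[:, None] * (pts - foe)

# Each vector constrains the FOE to lie on its line of action:
# (p - f) x v = 0  =>  v_y f_x - v_x f_y = v_y p_x - v_x p_y
A = np.column_stack([flow[:, 1], -flow[:, 0]])
b = flow[:, 1] * pts[:, 0] - flow[:, 0] * pts[:, 1]
foe_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

On rigid-scene flow the FOE is recovered exactly; adding vectors from an independently moving object pulls the least squares solution away from the true FOE, motivating the segmentation cues derived in the paper.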
Insa, S; Anticó, E; Ferreira, V
2005-09-30
A reliable solid-phase extraction (SPE) method for the simultaneous determination of 2,4,6-trichloroanisole (TCA) and 2,4,6-tribromoanisole (TBA) in wines has been developed. In the proposed procedure, 50 mL of wine are extracted in a 1 mL cartridge filled with 50 mg of LiChrolut EN resins. Most wine volatiles are washed off with 12.5 mL of a water:methanol solution (70%, v/v) containing 1% of NaHCO3. Analytes are further eluted with 0.6 mL of dichloromethane. A 40 microL aliquot of this extract is directly injected into a PTV injector operated in the solvent split mode, and analysed by gas chromatography (GC)-ion trap mass spectrometry using the selected ion storage mode. The solid-phase extraction, including sample volume and rinsing and elution solvents, and the large volume GC injection have been carefully evaluated and optimized. The resulting method is precise (RSD (%) < 6% at 100 ng L(-1)), sensitive (LODs were 0.2 and 0.4 ng L(-1) for TCA and TBA, respectively), robust (the absolute recoveries of both analytes are higher than 80% and consistent wine to wine) and friendly to the GC-MS system (the extract is clean, simple and free from non-volatiles).
Gerace, E; Salomone, A; Abbadessa, G; Racca, S; Vincenti, M
2012-02-01
A fast screening protocol was developed for the simultaneous determination of nine anti-estrogenic agents (aminoglutethimide, anastrozole, clomiphene, drostanolone, formestane, letrozole, mesterolone, tamoxifen, testolactone) plus five of their metabolites in human urine. After an enzymatic hydrolysis, these compounds can be extracted simultaneously from urine with a simple liquid-liquid extraction at alkaline conditions. The analytes were subsequently analyzed by fast-gas chromatography/mass spectrometry (fast-GC/MS) after derivatization. The use of a short column, high-flow carrier gas velocity and fast temperature ramping produced an efficient separation of all analytes in about 4 min, allowing a processing rate of 10 samples/h. The present analytical method was validated according to UNI EN ISO/IEC 17025 guidelines for qualitative methods. The range of investigated parameters included the limit of detection, selectivity, linearity, repeatability, robustness and extraction efficiency. High MS-sampling rate, using a benchtop quadrupole mass analyzer, resulted in accurate peak shape definition under both scan and selected ion monitoring modes, and high sensitivity in the latter mode. Therefore, the performances of the method are comparable to the ones obtainable from traditional GC/MS analysis. The method was successfully tested on real samples arising from clinical treatments of hospitalized patients and could profitably be used for clinical studies on anti-estrogenic drug administration.
A rapid and sensitive analytical method for the determination of 14 pyrethroids in water samples.
Feo, M L; Eljarrat, E; Barceló, D
2010-04-09
A simple, efficient and environmentally friendly analytical methodology is proposed for extracting and preconcentrating pyrethroids from water samples prior to gas chromatography-negative ion chemical ionization mass spectrometry (GC-NCI-MS) analysis. Fourteen pyrethroids were selected for this work: bifenthrin, cyfluthrin, lambda-cyhalothrin, cypermethrin, deltamethrin, esfenvalerate, fenvalerate, fenpropathrin, tau-fluvalinate, permethrin, phenothrin, resmethrin, tetramethrin and tralomethrin. The method is based on ultrasound-assisted emulsification-extraction (UAEE) of a water-immiscible solvent in an aqueous medium. Chloroform was used as the extraction solvent in the UAEE technique. Target analytes were quantitatively extracted, achieving an enrichment factor of 200 when a 20 mL aliquot of pure water spiked with pyrethroid standards was extracted. The method was also evaluated with tap water and river water samples. Method detection limits (MDLs) ranged from 0.03 to 35.8 ng L(-1) with RSD values ≤3-25% (n=5). The coefficients of estimation of the calibration curves obtained following the proposed methodology were ≥0.998. Recovery values were in the range of 45-106%, showing satisfactory robustness of the method for analyzing pyrethroids in water samples. The proposed methodology was applied for the analysis of river water samples. Cypermethrin was detected at concentration levels ranging from 4.94 to 30.5 ng L(-1). Copyright 2010 Elsevier B.V. All rights reserved.
Aptamer-Based Biosensors for Antibiotic Detection: A Review.
Mehlhorn, Asol; Rahimi, Parvaneh; Joseph, Yvonne
2018-06-11
Antibiotic resistance and, accordingly, antibiotic pollution caused by uncontrolled usage have emerged as serious problems in recent years. Hence, there is an increased demand to develop robust, easy, and sensitive methods for rapid evaluation of antibiotics and their residues. Among different analytical methods, aptamer-based biosensors (aptasensors) have attracted considerable attention because of their good selectivity, specificity, and sensitivity. This review gives an overview of recently developed aptasensors for antibiotic detection. The use of various aptamer assays to determine different groups of antibiotics, like β-lactams, aminoglycosides, anthracyclines, chloramphenicol, (fluoro)quinolones, lincosamide, tetracyclines, and sulfonamides, is presented in this paper.
Yu, Yi-Kuo
2003-08-15
The exact analytical result for a class of integrals involving (associated) Legendre polynomials of complicated argument is presented. The method employed can in principle be generalized to integrals involving other special functions. This class of integrals also proves useful in electrostatic problems in which dielectric spheres are involved, which is of importance in modeling the dynamics of biological macromolecules. In fact, with this solution, a more robust foundation is laid for the Generalized Born method in modeling the dynamics of biomolecules. ©2003 Elsevier B.V. All rights reserved.
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian-blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
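The ACLS idea, augmenting the concentration matrix with a stand-in for the unknown component information, can be sketched on simulated spectra. Here the augmentation uses the leading principal component of the CLS residuals, a common ACLS variant that may differ in detail from the authors' signal-selection scheme:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 60, 25                        # spectral channels, calibration samples

# Pure-component spectra: two modeled analytes plus one unmodeled interferent
S_true = rng.random((3, m))
C_known = rng.random((n, 2))         # known analyte concentrations
c_hidden = 0.5 * C_known[:, 0] + rng.random(n)   # unknown, confounded component
X = C_known @ S_true[:2] + np.outer(c_hidden, S_true[2])

# Classical CLS: spectra estimated from the known concentrations only (biased)
S_cls = np.linalg.lstsq(C_known, X, rcond=None)[0]

# ACLS: augment the concentration matrix with the leading score of the CLS
# residuals, which stands in for the missing component information
R = X - C_known @ S_cls
t = np.linalg.svd(R, full_matrices=False)[0][:, 0]
C_aug = np.column_stack([C_known, t])
S_acls = np.linalg.lstsq(C_aug, X, rcond=None)[0]

# Predict a new sample that contains all three components
c_new = np.array([1.0, 1.0, 0.8])
x_new = c_new @ S_true
pred_cls = np.linalg.lstsq(S_cls.T, x_new, rcond=None)[0]
pred_acls = np.linalg.lstsq(S_acls.T, x_new, rcond=None)[0][:2]
```

Because the augmented column completes the missing subspace, the ACLS prediction recovers the two analyte concentrations essentially exactly, while plain CLS remains biased by the hidden component.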
Hill, Ryan C; Oman, Trent J; Wang, Xiujuan; Shan, Guomin; Schafer, Barry; Herman, Rod A; Tobias, Rowel; Shippar, Jeff; Malayappan, Bhaskar; Sheng, Li; Xu, Austin; Bradshaw, Jason
2017-07-12
As part of the regulatory approval process in Europe, comparison of endogenous soybean allergen levels between genetically engineered (GE) and non-GE plants has been requested. A quantitative multiplex analytical method using tandem mass spectrometry was developed and validated to measure 10 potential soybean allergens from soybean seed. The analytical method was implemented at six laboratories to demonstrate the robustness of the method and further applied to three soybean field studies across multiple growing seasons (including 21 non-GE soybean varieties) to assess the natural variation of allergen levels. The results show environmental factors contribute more than genetic factors to the large variation in allergen abundance (2- to 50-fold between environmental replicates), as well as a large contribution of Gly m 5 and Gly m 6 to the total allergen profile, calling into question the scientific rationale for measurement of endogenous allergen levels between GE and non-GE varieties in the safety assessment.
Analytical methods for determination of mycotoxins: An update (2009-2014).
Turner, Nicholas W; Bramhmbhatt, Heli; Szabo-Vezse, Monika; Poma, Alessandro; Coker, Raymond; Piletsky, Sergey A
2015-12-11
Mycotoxins are a problematic and toxic group of small organic molecules that are produced as secondary metabolites by several fungal species that colonise crops. They lead to contamination at both the field and postharvest stages of food production, with a considerable range of foodstuffs affected, from coffee and cereals to dried fruit and spices. Given the wide-ranging structural diversity of mycotoxins, the severe toxic effects caused by these molecules, and their high chemical stability, the requirement for robust and effective detection methods is clear. This paper builds on our previous review and summarises the most recent advances in this field, in the years 2009-2014 inclusive. This review summarises traditional methods such as chromatographic and immunochemical techniques, as well as newer approaches such as biosensors and optical techniques, which are becoming more prevalent. A section on sampling and sample treatment has been prepared to highlight the importance of this step in the analytical methods. We close with a look at emerging technologies that will bring effective and rapid analysis out of the laboratory and into the field. Copyright © 2015 Elsevier B.V. All rights reserved.
Zacharis, Constantinos K; Vastardi, Elli
2018-02-20
In the research presented, we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles, a graphical decision-making tool, were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg/g in sample) for both methyl and isopropyl p-toluenesulfonate. As proof of concept, the validated method was successfully applied in the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
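The desirability-function step mentioned above can be sketched numerically. The sketch below uses Derringer-Suich-style individual desirabilities combined through a geometric mean; the attribute values and limits are hypothetical, not taken from the paper.

```python
import numpy as np

def d_larger_is_better(y, low, high, s=1.0):
    """Desirability for an attribute to maximize (e.g., resolution):
    0 below `low`, 1 above `high`, power-scaled ramp in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** s

def d_smaller_is_better(y, low, high, s=1.0):
    """Desirability for an attribute to minimize (e.g., analysis time)."""
    if y >= high:
        return 0.0
    if y <= low:
        return 1.0
    return ((high - y) / (high - low)) ** s

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Hypothetical CMAs: resolution 2.4 (acceptable >= 1.5, ideal 3.0)
# and run time 8 min (ideal 5 min, unacceptable above 12 min).
d_res = d_larger_is_better(2.4, low=1.5, high=3.0)
d_time = d_smaller_is_better(8.0, low=5.0, high=12.0)
D = overall_desirability([d_res, d_time])
```

Optimization then searches the method parameters for the settings that maximize D; because D is a geometric mean, any single unacceptable attribute (desirability 0) drives the overall score to 0.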
Talluri, Murali V N Kumar; Kalariya, Pradipbhai D; Dharavath, Shireesha; Shaikh, Naeem; Garg, Prabha; Ramisetti, Nageswara Rao; Ragampeta, Srinivas
2016-09-01
A novel ultra-high-performance liquid chromatography method development strategy was improved by applying a quality by design approach. The developed systematic approach was divided into five steps: (i) analytical target profile, (ii) critical quality attributes, (iii) risk assessment of critical parameters using design of experiments (screening and optimization phases), (iv) generation of the design space, and (v) process capability analysis (Cp) for the robustness study using Monte Carlo simulation. The complete quality-by-design-based method development was automated and expedited by employing a sub-2 μm particle column with an ultra-high-performance liquid chromatography system. Successful chromatographic separation of Coenzyme Q10 from its biotechnological process-related impurities was achieved on a Waters Acquity phenyl hexyl (100 mm × 2.1 mm, 1.7 μm) column with gradient elution of 10 mM ammonium acetate buffer (pH 4.0) and a mixture of acetonitrile/2-propanol (1:1) as the mobile phase. Through this study, a fast and organized method development workflow was established, and the robustness of the method was demonstrated. The method was validated for specificity, linearity, accuracy, precision, and robustness in compliance with the International Conference on Harmonization Q2(R1) guidelines. The impurities were identified by the atmospheric pressure chemical ionization mass spectrometry technique. Further, the in silico toxicity of the impurities was analyzed using TOPKAT and DEREK software. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
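The Monte Carlo robustness step (v) can be illustrated with a minimal sketch. The response model, set points, variation levels, and specification limits below are assumptions chosen for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(42)

def resolution(flow, ph):
    """Toy response-surface model: resolution as a function of flow rate
    (mL/min) and buffer pH. Coefficients are assumed, for illustration."""
    return 2.0 - 1.5 * (flow - 0.4) + 0.3 * (ph - 4.0)

# Simulate day-to-day fluctuation of the method parameters around
# their set points.
flow = rng.normal(0.4, 0.01, 100_000)   # set point 0.4, sd 0.01 mL/min
ph = rng.normal(4.0, 0.05, 100_000)     # set point 4.0, sd 0.05
res = resolution(flow, ph)

# Process capability against illustrative resolution specs.
lsl, usl = 1.5, 2.5
cp = (usl - lsl) / (6 * res.std(ddof=1))
fraction_in_spec = np.mean((res >= lsl) & (res <= usl))
```

A Cp well above 1.33 indicates that normal parameter fluctuation keeps the critical quality attribute comfortably inside its specification, which is the sense in which the simulation demonstrates robustness.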
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the LOD score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust LOD score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
A facile fluorescent "turn-off" method for sensing paraquat based on pyranine-paraquat interaction
NASA Astrophysics Data System (ADS)
Zhao, Zuzhi; Zhang, Fengwei; Zhang, Zipin
2018-06-01
Development of a technically simple yet effective method for paraquat (PQ) detection is of great importance due to its high clinical and environmental relevance. In this study, we developed a pyranine-based fluorescent "turn-off" method for PQ sensing based on the pyranine-PQ interaction. We investigated how the analytical performance of this method depends on the experimental conditions, such as ionic strength and medium pH. Under the optimized conditions, the method is sensitive and selective, and could be used for PQ detection in real-world samples. This study provides a readily accessible fluorescent system for PQ sensing that is cheap, robust, and technically simple, and it is envisaged to find further clinical and environmental applications.
Schindler, Birgit K; Koslitz, Stephan; Meier, Swetlana; Belov, Vladimir N; Koch, Holger M; Weiss, Tobias; Brüning, Thomas; Käfferlein, Heiko U
2012-04-17
N-Methyl- and N-ethyl-2-pyrrolidone (NMP and NEP) are frequently used industrial solvents and were shown to be embryotoxic in animal experiments. We developed a sensitive, specific, and robust analytical method based on cooled injection system (CIS) gas chromatography and isotope dilution mass spectrometry to analyze 5-hydroxy-N-ethyl-2-pyrrolidone (5-HNEP) and 2-hydroxy-N-ethylsuccinimide (2-HESI), two newly identified presumed metabolites of NEP, and their corresponding methyl counterparts (5-HNMP, 2-HMSI) in human urine. The urine was spiked with deuterium-labeled analogues of these metabolites. The analytes were separated from the urinary matrix by solid-phase extraction and silylated prior to quantification. Validation of this method was carried out using both spiked pooled urine samples and urine samples from 56 individuals of the general population with no known occupational exposure to NMP and NEP. Interday and intraday imprecision was better than 8% for all metabolites, while the limits of detection were between 5 and 20 μg/L depending on the analyte. The high sensitivity of the method enables us to quantify NMP and NEP metabolites at current environmental exposures by human biomonitoring.
Boussès, Christine; Ferey, Ludivine; Vedrines, Elodie; Gaudin, Karen
2015-11-10
An innovative combination of green chemistry and the quality by design (QbD) approach is presented through the development of an UHPLC method for the analysis of the main degradation products of dextromethorphan hydrobromide. The QbD strategy was integrated into the field of green analytical chemistry to improve method understanding while assuring quality and minimizing environmental impact and analyst exposure. This analytical method was thoroughly evaluated by applying risk assessment and multivariate analysis tools. After a scouting phase aimed at selecting a suitable stationary phase and an organic solvent in accordance with green chemistry principles, quality risk assessment tools were applied to determine the critical process parameters (CPPs). The effects of the CPPs on the critical quality attributes (CQAs), i.e., resolutions, efficiencies, and solvent consumption, were further evaluated by means of a screening design. A response surface methodology was then carried out to model the CQAs as a function of the selected CPPs, and the optimal separation conditions were determined through a desirability analysis. The resulting contour plots made it possible to establish the design space (DS) (method operable design region) where all CQAs fulfilled the requirements. An experimental validation of the DS proved that quality within the DS was guaranteed; therefore, no further robustness study was required before validation. Finally, this UHPLC method was validated using the concept of total error and was used to analyze a pharmaceutical drug product. Copyright © 2015 Elsevier B.V. All rights reserved.
Midgley, John E M
2011-02-01
To examine the merits of measuring free analytes by ultrafiltration using either diluted or undiluted serum. Confidence in the accuracy of measurements is affected by problems identified in current systems using semipermeable membranes, by the sensitivity of the system to artefacts, and by comparisons with other imperfect assays. All "gold standard" methods must robustly obey sound physicochemical principles if valid conclusions are to be drawn. Copyright © 2010 Elsevier Inc. All rights reserved.
Schmidt, Kathrin S; Mankertz, Joachim
2018-06-01
A sensitive and robust LC-MS/MS method allowing the rapid screening and confirmation of selective androgen receptor modulators in bovine urine was developed and successfully validated according to Commission Decision 2002/657/EC, chapter 3.1.3 'alternative validation', by applying a matrix-comprehensive in-house validation concept. The confirmation of the analytes in the validation samples was achieved both on the basis of the MRM ion ratios as laid down in Commission Decision 2002/657/EC and by comparison of their enhanced product ion (EPI) spectra with a reference mass spectral library, making use of the QTRAP technology. Here, in addition to the MRM survey scan, EPI spectra were generated in a data-dependent way according to an information-dependent acquisition criterion. Moreover, stability studies according to an isochronous approach proved the stability of the analytes in solution and in matrix for at least the duration of the validation study. To identify factors that have a significant influence on the test method in routine analysis, a factorial effect analysis was performed. To this end, factors considered to be relevant for the method in routine analysis (e.g. operator, storage duration of the extracts before measurement, different cartridge lots and different hydrolysis conditions) were systematically varied on two levels. Examination of the extent to which these factors influence the measurement results of the individual analytes showed that none of the validation factors exerts a significant influence on the measurement results.
UV Spectrophotometric Method for Estimation of Polypeptide-K in Bulk and Tablet Dosage Forms
NASA Astrophysics Data System (ADS)
Kaur, P.; Singh, S. Kumar; Gulati, M.; Vaidya, Y.
2016-01-01
An analytical method for estimation of polypeptide-k using UV spectrophotometry has been developed and validated for bulk as well as tablet dosage form. The developed method was validated for linearity, precision, accuracy, specificity, robustness, detection, and quantitation limits. The method has shown good linearity over the range from 100.0 to 300.0 μg/ml with a correlation coefficient of 0.9943. The percentage recovery of 99.88% showed that the method was highly accurate. The precision demonstrated relative standard deviation of less than 2.0%. The LOD and LOQ of the method were found to be 4.4 and 13.33, respectively. The study established that the proposed method is reliable, specific, reproducible, and cost-effective for the determination of polypeptide-k.
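For context, the LOD and LOQ reported above are consistent with the common ICH Q2(R1) formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the calibration line and S its slope (note 13.33/4.4 ≈ 10/3.3). A minimal sketch with made-up calibration data, not the paper's:

```python
import numpy as np

# Hypothetical UV calibration data (concentration in ug/ml vs. absorbance).
conc = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
absorb = np.array([0.201, 0.302, 0.399, 0.502, 0.598])

slope, intercept = np.polyfit(conc, absorb, 1)
resid = absorb - (slope * conc + intercept)
sigma = resid.std(ddof=2)           # residual standard deviation (n - 2 dof)

lod = 3.3 * sigma / slope           # ICH Q2(R1) detection limit
loq = 10.0 * sigma / slope          # ICH Q2(R1) quantitation limit
```

By construction LOQ/LOD equals 10/3.3 ≈ 3.03, matching the ratio of the values quoted in the abstract.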
Model-Based Method for Sensor Validation
NASA Technical Reports Server (NTRS)
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, and the prediction is subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundant relations (ARRs).
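A toy illustration of the ARR idea follows. The relations, tolerance, and fault signatures are invented for the example; the actual work develops a formal model-based algorithm, but the core logic of matching violated relations against fault signatures is the same.

```python
# Each ARR is a relation over sensor readings that should hold (residual
# near zero) when all involved sensors are healthy.
# r1 involves sensors a, b, c (e.g., a flow balance: c = a + b).
# r2 involves sensors a, b   (e.g., a known ratio:    b = 2a).
SIGNATURES = {          # which relations a fault in each sensor would violate
    "a": (True, True),
    "b": (True, True),
    "c": (True, False),
}

def faulty_sensors(a, b, c, tol=0.1):
    """Return the set of sensors logically consistent with the observed
    pattern of violated relations (single-fault assumption)."""
    r1 = abs(c - (a + b)) > tol
    r2 = abs(b - 2 * a) > tol
    pattern = (r1, r2)
    if pattern == (False, False):
        return set()                     # all relations hold: no fault
    return {s for s, sig in SIGNATURES.items() if sig == pattern}
```

Because sensor c appears only in r1, a violation of r1 alone isolates c, whereas violations of both relations leave a and b as the logical suspects; no prior failure probabilities are needed.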
NASA Technical Reports Server (NTRS)
Song, Q.; Putcha, L.; Harm, D. L. (Principal Investigator)
2001-01-01
A chromatographic method for the quantitation of promethazine (PMZ) and its three metabolites in urine employing on-line solid-phase extraction and column-switching has been developed. The column-switching system described here uses an extraction column for the purification of PMZ and its metabolites from a urine matrix. The extraneous matrix interference was removed by flushing the extraction column with a gradient elution. The analytes of interest were then eluted onto an analytical column for further chromatographic separation using a mobile phase of greater solvent strength. This method is specific and sensitive with a range of 3.75-1400 ng/ml for PMZ and 2.5-1400 ng/ml for the metabolites promethazine sulfoxide, monodesmethyl promethazine sulfoxide and monodesmethyl promethazine. The lower limits of quantitation (LLOQ) were 3.75 ng/ml with less than 6.2% C.V. for PMZ and 2.50 ng/ml with less than 11.5% C.V. for metabolites based on a signal-to-noise ratio of 10:1 or greater. The accuracy and precision were within +/- 11.8% in bias and not greater than 5.5% C.V. in intra- and inter-assay precision for PMZ and metabolites. Method robustness was investigated using a Plackett-Burman experimental design. The applicability of the analytical method for pharmacokinetic studies in humans is illustrated.
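The Plackett-Burman robustness investigation mentioned above can be sketched as follows. The 12-run design uses the standard cyclic generator; the response values are simulated, not the study's data.

```python
import numpy as np

# Standard 12-run Plackett-Burman generator (first row); the remaining
# rows are its cyclic shifts plus a final all-minus row.
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

rows = [np.roll(gen, i) for i in range(11)]
rows.append(-np.ones(11, dtype=gen.dtype))
X = np.array(rows)                      # 12 runs x 11 two-level factors

# Orthogonality check: every pair of factor columns is balanced.
assert np.array_equal(X.T @ X, 12 * np.eye(11, dtype=int))

# Simulated response with a real effect only on factor 0.
rng = np.random.default_rng(0)
y = 10.0 + 2.0 * X[:, 0] + rng.normal(0, 0.1, 12)

# Main effect of each factor: mean at +1 minus mean at -1.
effects = np.array([y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
                    for j in range(11)])
```

With ±1 coding the estimated effect of factor 0 is about twice its model coefficient (≈ 4 here), while the ten inert factors give effects near zero; in a robustness study, effects indistinguishable from noise indicate the method tolerates those perturbations.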
Multi-application controls: Robust nonlinear multivariable aerospace controls applications
NASA Technical Reports Server (NTRS)
Enns, Dale F.; Bugajski, Daniel J.; Carter, John; Antoniewicz, Bob
1994-01-01
This viewgraph presentation describes the general methodology used to apply Honeywell's Multi-Application Control (MACH) and the specific application to the F-18 High Angle-of-Attack Research Vehicle (HARV), including piloted-simulation handling qualities evaluation. The general steps include insertion of modeling data for geometry and mass properties, aerodynamics, propulsion data and assumptions, and requirements and specifications, e.g., definition of control variables, handling qualities, stability margins, and statements for bandwidth, control power, priorities, and position and rate limits. The specific steps include choice of independent variables for least squares fits to aerodynamic and propulsion data; modifications to the management of the controls with regard to integrator windup and actuation limiting and priorities, e.g., pitch priority over roll; and command limiting to prevent departures and/or undesirable inertial coupling or inability to recover to a stable trim condition. The HARV control problem is characterized by significant nonlinearities and multivariable interactions in the low-speed, high-angle-of-attack, high-angular-rate flight regime. Systematic approaches to the control of vehicle motions modeled with coupled nonlinear equations of motion have been developed. This paper discusses the dynamic inversion approach, which explicitly accounts for nonlinearities in the control design. Multiple control effectors (including aerodynamic control surfaces and thrust vectoring control) and sensors are used to control the motions of the vehicles in several degrees of freedom. Several maneuvers are used to illustrate the performance of MACH in the high-angle-of-attack flight regime. Analytical methods for assessing the robust performance of the multivariable control system in the presence of math modeling uncertainty, disturbances, and commands have reached a high level of maturity.
The structured singular value (mu) frequency response methodology is presented as a method for analyzing robust performance, and the mu-synthesis method is presented as a method for synthesizing a robust control system. The paper concludes with the authors' expectations regarding future applications of robust nonlinear multivariable controls.
Sharma, Teenu; Khurana, Rajneet Kaur; Jain, Atul; Katare, O P; Singh, Bhupinder
2018-05-01
The current research work envisages an analytical quality by design-enabled development of a simple, rapid, sensitive, specific, robust and cost-effective stability-indicating reversed-phase high-performance liquid chromatographic method for determining stress-induced forced-degradation products of sorafenib tosylate (SFN). An Ishikawa fishbone diagram was constructed to embark upon the analytical target profile and critical analytical attributes, i.e. peak area, theoretical plates, retention time and peak tailing. Factor screening using Taguchi orthogonal arrays and quality risk assessment studies carried out using failure mode effect analysis aided the selection of critical method parameters, i.e. mobile phase ratio and flow rate, potentially affecting the chosen critical analytical attributes. Systematic optimization of the chosen critical method parameters using response surface methodology was carried out employing a two-factor, three-level, 13-run face-centered cubic design. A method operable design region providing optimum method performance was earmarked using numerical and graphical optimization. The optimum method employed a mobile phase composition of acetonitrile and water (containing orthophosphoric acid, pH 4.1) at 65:35 v/v at a flow rate of 0.8 mL/min with UV detection at 265 nm using a C18 column. Validation studies confirmed good efficiency and sensitivity of the developed method for analysis of SFN in mobile phase as well as in human plasma matrix. The forced degradation studies were conducted under different recommended stress conditions as per ICH Q1A(R2). Mass spectrometry studies showed that SFN degrades under strongly acidic, alkaline and oxidative hydrolytic conditions at elevated temperature, while the drug was per se found to be photostable. Oxidative hydrolysis using 30% H2O2 showed maximum degradation, with products at retention times of 3.35, 3.65, 4.20 and 5.67 min.
The absence of any significant change in the retention time of SFN and of the degradation products formed under different stress conditions confirmed the selectivity and specificity of the systematically developed method. Copyright © 2017 John Wiley & Sons, Ltd.
An Analysis of the Optimal Control Modification Method Applied to Flutter Suppression
NASA Technical Reports Server (NTRS)
Drew, Michael; Nguyen, Nhan T.; Hashemi, Kelley E.; Ting, Eric; Chaparro, Daniel
2017-01-01
Unlike basic Model Reference Adaptive Control (MRAC), Optimal Control Modification (OCM) has been shown to be a promising MRAC modification with robustness and analytical properties not present in other adaptive control methods. This paper presents an analysis of the OCM method and shows how the asymptotic property of OCM is useful for analyzing and tuning the controller. We begin with a Lyapunov stability proof of an OCM controller having two adaptive gain terms; then the less conservative and easily analyzed OCM asymptotic property is presented. Two numerical examples are used to show how this property can accurately predict steady-state stability and quantitative robustness in the presence of time delay, relative to linear plant perturbations, and under nominal Loop Transfer Recovery (LTR) tuning. The asymptotic property of the OCM controller is then used as an aid in tuning the controller applied to a large-scale aeroservoelastic longitudinal aircraft model for flutter suppression. Control with OCM adaptive augmentation is shown to improve performance over that of the nominal non-adaptive controller when significant disparities exist between the controller/observer model and the true plant model.
NASA Astrophysics Data System (ADS)
Lawton, Zachary E.; Traub, Angelica; Fatigante, William L.; Mancias, Jose; O'Leary, Adam E.; Hall, Seth E.; Wieland, Jamie R.; Oberacher, Herbert; Gizzi, Michael C.; Mulligan, Christopher C.
2017-06-01
Forensic evidentiary backlogs are indicative of the growing need for cost-effective, high-throughput instrumental methods. One such emerging technology that shows high promise in meeting this demand while also allowing on-site forensic investigation is portable mass spectrometric (MS) instrumentation, particularly that which enables coupling to ambient ionization techniques. While the benefits of rapid, on-site screening of contraband can be anticipated, the inherent legal implications of field-collected data necessitate that the analytical performance of the technology employed be commensurate with accepted techniques. To this end, comprehensive analytical validation studies are required before broad incorporation by forensic practitioners can be considered, and they are the focus of this work. Pertinent performance characteristics such as throughput, selectivity, accuracy/precision, method robustness, and ruggedness have been investigated. Reliability in the form of false positive/negative response rates is also assessed, examining the effect of variables such as user training and experience level. To provide flexibility toward broad chemical evidence analysis, a suite of rapidly interchangeable ion sources has been developed and characterized through the analysis of common illicit chemicals and emerging threats like substituted phenethylamines.
Machado, Raquel C; Amaral, Clarice D B; Nóbrega, Joaquim A; Araujo Nogueira, Ana Rita
2017-06-14
A microwave-induced plasma optical emission spectrometer with an N2-based plasma was combined with a multimode sample introduction system (MSIS) for hydride generation (HG) and multielemental determination of As, Bi, Ge, Sb, and Sn in samples of forage, bovine liver, powdered milk, agricultural gypsum, rice, and mineral fertilizer, using a single condition of prereduction and reduction. The accuracy of the developed analytical method was evaluated using certified reference materials of water and mineral fertilizer, and recoveries ranged from 95 to 106%. Addition and recovery experiments were carried out, and the recoveries varied from 85 to 117% for all samples evaluated. The limits of detection for As, Bi, Ge, Sb, and Sn were 0.46, 0.09, 0.19, 0.46, and 5.2 μg/L, respectively, for liquid samples, and 0.18, 0.04, 0.08, 0.19, and 2.1 mg/kg, respectively, for solid samples. The proposed method offers a simple, fast, multielemental, and robust alternative for the successful determination of all five analytes in agricultural samples at low operational cost without compromising analytical performance.
Russell, Shane R; Claridge, Shelley A
2016-04-01
Because noncovalent interface functionalization is frequently required in graphene-based devices, biomolecular self-assembly has begun to emerge as a route for controlling substrate electronic structure or binding specificity for soluble analytes. The remarkable diversity of structures that arise in biological self-assembly hints at the possibility of equally diverse and well-controlled surface chemistry at graphene interfaces. However, predicting and analyzing adsorbed monolayer structures at such interfaces raises substantial experimental and theoretical challenges. In contrast with the relatively well-developed monolayer chemistry and characterization methods applied at coinage metal surfaces, monolayers on graphene are both less robust and more structurally complex, levying more stringent requirements on characterization techniques. Theory presents opportunities to understand early binding events that lay the groundwork for full monolayer structure. However, predicting interactions between complex biomolecules, solvent, and substrate is necessitating a suite of new force fields and algorithms to assess likely binding configurations, solvent effects, and modulations to substrate electronic properties. This article briefly discusses emerging analytical and theoretical methods used to develop a rigorous chemical understanding of the self-assembly of peptide-graphene interfaces and prospects for future advances in the field.
Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P
2010-10-22
A multi-residue methodology based on solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine-disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticide analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
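The weighted least squares step can be sketched as follows, using the common 1/x² weighting for data whose variance grows with concentration; the concentrations and responses are invented for illustration.

```python
import numpy as np

# Hypothetical calibration data with variance growing with concentration.
x = np.array([0.5, 1.0, 5.0, 10.0, 50.0, 100.0])    # concentration, ng/ml
y = np.array([0.055, 0.098, 0.52, 0.95, 5.3, 9.4])  # detector response

def wls_line(x, y, w):
    """Weighted least squares fit of y = a*x + b, weights w ~ 1/variance."""
    wsum = np.sum(w)
    xb = np.sum(w * x) / wsum           # weighted means
    yb = np.sum(w * y) / wsum
    a = np.sum(w * (x - xb) * (y - yb)) / np.sum(w * (x - xb) ** 2)
    b = yb - a * xb
    return a, b

# 1/x^2 weighting counteracts the dominance of the high concentrations.
a_wls, b_wls = wls_line(x, y, w=1.0 / x**2)
a_ols, b_ols = wls_line(x, y, w=np.ones_like(x))    # ordinary LS, for contrast
```

The weighted fit pulls the line toward the low-concentration points, shrinking the intercept relative to the ordinary fit, which is precisely what improves back-calculated accuracy at the lower end of the calibration curve.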
In-vivo analysis of ankle joint movement for patient-specific kinematic characterization.
Ferraresi, Carlo; De Benedictis, Carlo; Franco, Walter; Maffiodo, Daniela; Leardini, Alberto
2017-09-01
In this article, a method for the experimental in-vivo characterization of the ankle kinematics is proposed. The method is meant to improve personalization of various ankle joint treatments, such as surgical decision-making or design and application of an orthosis, possibly to increase their effectiveness. This characterization in fact would make the treatments more compatible with the specific patient's joint physiological conditions. This article describes the experimental procedure and the analytical method adopted, based on the instantaneous and mean helical axis theories. The results obtained in this experimental analysis reveal that more accurate techniques are necessary for a robust in-vivo assessment of the tibio-talar axis of rotation.
Oliva, Alexis; Fariña, José B; Llabrés, Matías
2013-10-15
A simple and reproducible UPLC method was developed and validated for the quantitative analysis of finasteride in low-dose drug products. Method validation demonstrated the reliability and consistency of the analytical results. Given the regulatory requirements of pharmaceutical analysis in particular, evaluation of robustness is vital to predict how small variations in operating conditions affect the responses. Response surface methodology was used as an optimization technique to evaluate robustness. For this, a central composite design was implemented around the nominal conditions. Statistical treatment of the responses (retention factor and drug concentrations expressed as percentage of label claim) showed that the methanol content of the mobile phase and the flow rate were the most influential factors. In the optimization process, the compromise decision support problem (cDSP) strategy was used. Construction of the robust domain from response surfaces provided tolerance windows for the factors affecting the effectiveness of the method. The limits specified for the USP uniformity of dosage units assay (98.5-101.5%) and the purely experimental variation based on the repeatability test for center points (repetitions at nominal conditions) were used as criteria to establish the tolerance windows, which allowed the design space (DS) of the analytical method to be defined. Thus, the acceptance value (AV) proposed by the USP uniformity of dosage units assay depends only on the sampling error. If the variation in the responses corresponds to approximately twice the repeatability standard deviation, individual values for the percentage label claim (%LC) response may lie outside the specified limits; this implies that the data are not centered between the specified limits, and that this term, in addition to the sampling error, affects the AV value.
To avoid this, the limits specified by the uniformity of dosage units assay (i.e., 98.5-101.5%) must be taken into consideration when fixing the tolerance windows for each factor. All these results were verified by Monte Carlo simulation. In conclusion, the level of variability for the different factors must be calculated for each case, rather than set arbitrarily, whenever a variation is found to be higher than the repeatability for center points; secondly, the %LC response must lie inside the specified limits, i.e., 98.5-101.5%. If not, the UPLC method must be redeveloped. © 2013 Elsevier B.V. All rights reserved.
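The Monte Carlo verification described above can be sketched as follows. The linear %LC response model, nominal conditions, and tolerance windows are assumptions for illustration, not the paper's fitted response surface.

```python
import numpy as np

rng = np.random.default_rng(1)

def percent_label_claim(methanol, flow):
    """Assumed linear response model around nominal conditions
    (methanol 60 %, flow 0.30 mL/min); coefficients are illustrative."""
    return 100.0 + 0.5 * (methanol - 60.0) - 20.0 * (flow - 0.30)

# Factors varied uniformly inside candidate tolerance windows.
methanol = rng.uniform(59.0, 61.0, 50_000)     # nominal +/- 1 %
flow = rng.uniform(0.295, 0.305, 50_000)       # nominal +/- 0.005 mL/min
lc = percent_label_claim(methanol, flow)

# USP-style criterion: all simulated %LC within 98.5-101.5 %.
within = np.mean((lc >= 98.5) & (lc <= 101.5))
```

If `within` falls below 1 for a candidate window, the window is too wide and must be narrowed, which is the simulation-based check on the tolerance windows the abstract describes.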
A Study on the Requirements for Fast Active Turbine Tip Clearance Control Systems
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan A.; Melcher, Kevin J.
2004-01-01
This paper addresses the requirements of a control system for active turbine tip clearance control in a generic commercial turbofan engine through design and analysis. The control objective is to articulate the shroud in the high pressure turbine section in order to maintain a certain clearance set point given several possible engine transient events. The system must also exhibit reasonable robustness to modeling uncertainties and reasonable noise rejection properties. Two actuators were chosen to fulfill such a requirement, both of which possess different levels of technological readiness: electrohydraulic servovalves and piezoelectric stacks. Identification of design constraints, desired actuator parameters, and actuator limitations are addressed in depth; all of which are intimately tied with the hardware and controller design process. Analytical demonstrations of the performance and robustness characteristics of the two axisymmetric LQG clearance control systems are presented. Takeoff simulation results show that both actuators are capable of maintaining the clearance within acceptable bounds and demonstrate robustness to parameter uncertainty. The present model-based control strategy was employed to demonstrate the tradeoff between performance, control effort, and robustness and to implement optimal state estimation in a noisy engine environment with intent to eliminate ad hoc methods for designing reliable control systems.
The Analytic Hierarchy Process and Participatory Decisionmaking
Daniel L. Schmoldt; Daniel L. Peterson; Robert L. Smith
1995-01-01
Managing natural resource lands requires social, as well as biophysical, considerations. Unfortunately, it is extremely difficult to accurately assess and quantify changing social preferences, and to aggregate conflicting opinions held by diverse social groups. The Analytic Hierarchy Process (AHP) provides a systematic, explicit, rigorous, and robust mechanism for...
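The core AHP computation, priority weights from the principal eigenvector of a pairwise comparison matrix plus Saaty's consistency ratio, can be sketched as follows; the judgments below are invented for the example.

```python
import numpy as np

# Hypothetical pairwise comparisons among three criteria on Saaty's 1-9
# scale: A[i, j] = how much more important criterion i is than criterion j
# (reciprocal entries by construction).
A = np.array([[1.0,       3.0,       5.0],
              [1.0 / 3.0, 1.0,       2.0],
              [1.0 / 5.0, 1.0 / 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights = weights / weights.sum()       # priority vector (sums to 1)

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)         # consistency index
ri = 0.58                               # Saaty's random index for n = 3
cr = ci / ri                            # consistency ratio; < 0.1 acceptable
```

In participatory settings, each stakeholder group can supply its own comparison matrix, and the resulting priority vectors are aggregated, which is how the AHP helps reconcile conflicting preferences.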
Andersson, Maria; Stephanson, Nikolai; Ohman, Inger; Terzuoli, Tommy; Lindh, Jonatan D; Beck, Olof
2014-04-01
Opiates comprise a class of abused drugs that is of primary interest in clinical and forensic urine drug testing. Determination of heroin intake, codeine intake, or multi-drug ingestion is complicated since both heroin and codeine can lead to urinary excretion of free and conjugated morphine. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) offers an advantage over gas chromatography-mass spectrometry by simplifying sample preparation, but it increases the number of analytes. A method based on direct injection of five-fold diluted urine for confirmation of morphine, morphine-3-glucuronide, morphine-6-glucuronide, codeine, codeine-6-glucuronide and 6-acetylmorphine was validated using LC-MS/MS in positive electrospray mode, monitoring two transitions using selected reaction monitoring. The method was applied to the analysis of 3155 unknown urine samples that were positive for opiates in immunochemical screening. A linear response was observed for all compounds, with calibration curves covering more than three orders of magnitude. The cutoff was set to 2 ng/ml for 6-acetylmorphine and 150 ng/ml for the other analytes. 6-Acetylmorphine was found to be effective (sensitivity 82%) in identifying samples reflecting heroin intake. Morphine-3-glucuronide and codeine-6-glucuronide were the predominant components of total morphine and codeine, at 84% and 93%, respectively. The authors have validated a robust LC-MS/MS method for rapid qualitative and quantitative analysis of opiates in urine. 6-Acetylmorphine has been demonstrated to be a sensitive and important parameter for a heroin intake. A possible interpretation strategy to conclude the source of detected analytes was proposed. The method might be further developed by reducing the number of analytes to morphine-3-glucuronide, codeine-6-glucuronide and 6-acetylmorphine without compromising test performance. Copyright © 2013 John Wiley & Sons, Ltd.
Robust volcano plot: identification of differential metabolites in the presence of outliers.
Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro
2018-04-11
The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel weight based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites. Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano .
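A minimal sketch of the kernel-weighting idea above: points far from each group's median (in MAD units) receive near-zero Gaussian-kernel weight, so a single outlier barely moves the fold change or the test statistic. This is an illustrative reconstruction, not the authors' R package; the weight kernel and the normal-approximation p-value are assumptions.

```python
import numpy as np
from math import erfc, sqrt, log10

def kernel_weights(x, c=3.0):
    """Gaussian-kernel weights centred on the median and scaled by the MAD,
    so points far from the bulk of the data get near-zero weight."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-12
    return np.exp(-(((x - med) / (c * mad)) ** 2))

def robust_stats(x):
    w = kernel_weights(x)
    m = np.sum(w * x) / np.sum(w)
    v = np.sum(w * (x - m) ** 2) / np.sum(w)
    n_eff = np.sum(w) ** 2 / np.sum(w ** 2)   # Kish effective sample size
    return m, v, n_eff

def volcano_point(group_a, group_b):
    """log2 fold change and -log10 p-value (normal approximation) for one
    metabolite, computed from outlier-down-weighted group statistics."""
    ma, va, na = robust_stats(np.asarray(group_a, float))
    mb, vb, nb = robust_stats(np.asarray(group_b, float))
    z = (ma - mb) / sqrt(va / na + vb / nb + 1e-12)
    p = erfc(abs(z) / sqrt(2))                # two-sided tail probability
    return np.log2((ma + 1e-12) / (mb + 1e-12)), -log10(max(p, 1e-300))

# A gross outlier (1000) in group A has almost no effect on the result:
lfc, nlp = volcano_point([10.0, 11.0, 10.0, 12.0, 1000.0], [5.0, 5.0, 6.0, 5.0])
```

A full volcano plot would apply `volcano_point` to every metabolite and threshold on both axes; intensities are assumed positive here.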
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
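The down-weighting mechanism described above can be illustrated with a far simpler stand-in: an iteratively reweighted mean curve with Student-t (scale-mixture) weights, which shares the key property that outlying curves and outlying regions of curves contribute little. Everything here (model, parameters, data) is an illustrative assumption, not the authors' wavelet-space Bayesian implementation.

```python
import numpy as np

def robust_mean_curve(curves, nu=4.0, n_iter=20):
    """Pointwise mean curve with Student-t (scale-mixture) weights.

    The update w = (nu + 1) / (nu + r^2) is the EM step for a t-distributed
    error model, so curves and regions with large residuals are automatically
    down-weighted, the same qualitative behaviour the R-FMM achieves in
    wavelet space. Illustrative stand-in only.
    """
    curves = np.asarray(curves, float)          # shape (n_curves, n_points)
    mu = np.median(curves, axis=0)              # robust initial estimate
    sigma = np.std(curves, axis=0) + 1e-9       # fixed pointwise scale
    w = np.ones_like(curves)
    for _ in range(n_iter):
        r = (curves - mu) / sigma               # standardized residuals
        w = (nu + 1.0) / (nu + r ** 2)          # heavy-tail down-weighting
        mu = np.sum(w * curves, axis=0) / np.sum(w, axis=0)
    return mu, w

t = np.linspace(0.0, 1.0, 50)
clean = [np.sin(2 * np.pi * t) + 0.05 * np.random.RandomState(i).randn(50)
         for i in range(9)]
outlier = np.sin(2 * np.pi * t) + 3.0           # one globally outlying curve
mu, w = robust_mean_curve(clean + [outlier])
```

The returned weights can serve as a crude outlier flag: the shifted curve ends up with much smaller weights than the nine clean curves, and the estimated mean stays close to the true sine.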
Vialaret, Jérôme; Picas, Alexia; Delaby, Constance; Bros, Pauline; Lehmann, Sylvain; Hirtz, Christophe
2018-06-01
Hepcidin-25 is a peptide biomarker with considerable clinical potential for diagnosing iron-related diseases. Developing analytical methods for the absolute quantification of hepcidin is still a real challenge, however, due to the sensitivity, specificity and reproducibility issues involved. In this study, we compare and discuss two MS-based assays for quantifying hepcidin, which differ only in the type of liquid chromatography involved (nano LC/MS versus standard LC/MS). The same sample preparation, the same internal standards and the same MS analyzer were used with both approaches. In the field of proteomics, nano LC is generally known to be more sensitive but less robust than standard LC methods. In this study, we established that the performance of the standard LC method is equivalent to that of our previously developed nano LC method. Since the analytical performances were very similar in both cases, the standard-flow platform provides the more suitable alternative for accurately determining hepcidin in clinical settings. Copyright © 2018 Elsevier B.V. All rights reserved.
Analytical Tools to Improve Optimization Procedures for Lateral Flow Assays
Hsieh, Helen V.; Dantzler, Jeffrey L.; Weigl, Bernhard H.
2017-01-01
Immunochromatographic or lateral flow assays (LFAs) are inexpensive, easy to use, point-of-care medical diagnostic tests that are found in arenas ranging from a doctor’s office in Manhattan to a rural medical clinic in low resource settings. The simplicity of the LFA itself belies the complex task of optimization required to make the test sensitive, rapid and easy to use. Currently, manufacturers develop LFAs by empirical optimization of material components (e.g., analytical membranes, conjugate pads and sample pads), biological reagents (e.g., antibodies, blocking reagents and buffers) and the design of delivery geometry. In this paper, we review conventional optimization and then focus on delivery design, outlining analytical tools, such as dynamic light scattering and optical biosensors, as well as methods, such as microfluidic flow design and mechanistic models. We are applying these tools to find non-obvious optima of lateral flow assays for improved sensitivity, specificity and manufacturing robustness. PMID:28555034
NASA Astrophysics Data System (ADS)
Pinho, Ludmila A. G.; Sá-Barreto, Lívia C. L.; Infante, Carlos M. C.; Cunha-Filho, Marcílio S. S.
2016-04-01
The aim of this work was the development of an analytical procedure using spectrophotometry for the simultaneous determination of benznidazole (BNZ) and itraconazole (ITZ) in a medicine used for the treatment of Chagas disease. To achieve this goal, the analysis of mixtures was performed by applying the Lambert-Beer law to the absorbances of BNZ and ITZ at the wavelengths 259 and 321 nm, respectively. Diverse tests were carried out for the development and validation of the method, which proved to be selective, robust, linear, and precise. The low limits of detection and quantification demonstrate its sensitivity for quantifying small amounts of analytes, enabling its application for various analytical purposes, such as dissolution testing and routine assays. In short, the quantification of BNZ and ITZ by analysis of mixtures was shown to be an efficient and cost-effective alternative for the determination of these drugs in a pharmaceutical dosage form.
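The analysis-of-mixtures calculation described above is a two-wavelength Beer-Lambert problem: two absorbance readings, two unknown concentrations, one linear system. A minimal sketch, with hypothetical absorptivity values (in practice these come from single-component calibration curves):

```python
import numpy as np

# Hypothetical molar absorptivities (L mol^-1 cm^-1) of BNZ and ITZ at the
# two analytical wavelengths; illustrative numbers, not measured values.
E = np.array([[9500.0, 1200.0],    # 259 nm: [BNZ, ITZ]
              [ 800.0, 7300.0]])   # 321 nm: [BNZ, ITZ]

def concentrations(a259, a321, path_cm=1.0):
    """Solve the two-wavelength Beer-Lambert system A = (E * b) c."""
    return np.linalg.solve(E * path_cm, np.array([a259, a321]))

# Forward-simulate a mixture, then recover both concentrations (mol/L):
c_true = np.array([2.0e-5, 3.5e-5])
a259, a321 = E @ c_true
c_est = concentrations(a259, a321)    # recovers c_true
```

Because each analyte absorbs at both wavelengths, the off-diagonal terms of `E` correct each reading for the other component's contribution.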
Guidance to Achieve Accurate Aggregate Quantitation in Biopharmaceuticals by SV-AUC.
Arthur, Kelly K; Kendrick, Brent S; Gabrielson, John P
2015-01-01
The levels and types of aggregates present in protein biopharmaceuticals must be assessed during all stages of product development, manufacturing, and storage of the finished product. Routine monitoring of aggregate levels in biopharmaceuticals is typically achieved by size exclusion chromatography (SEC) due to its high precision, speed, robustness, and simplicity to operate. However, SEC is error prone and requires careful method development to ensure accuracy of reported aggregate levels. Sedimentation velocity analytical ultracentrifugation (SV-AUC) is an orthogonal technique that can be used to measure protein aggregation without many of the potential inaccuracies of SEC. In this chapter, we discuss applications of SV-AUC during biopharmaceutical development and how characteristics of the technique make it better suited for some applications than others. We then discuss the elements of a comprehensive analytical control strategy for SV-AUC. Successful implementation of these analytical control elements ensures that SV-AUC provides continued value over the long time frames necessary to bring biopharmaceuticals to market. © 2015 Elsevier Inc. All rights reserved.
Analytical robustness of quantitative NIR chemical imaging for Islamic paper characterization
NASA Astrophysics Data System (ADS)
Mahgoub, Hend; Gilchrist, John R.; Fearn, Thomas; Strlič, Matija
2017-07-01
Recently, spectral imaging techniques such as multispectral imaging (MSI) and hyperspectral imaging (HSI) have gained importance in the field of heritage conservation. This paper explores the analytical robustness of quantitative chemical imaging for Islamic paper characterization by focusing on the effect of different measurement and processing parameters, i.e. acquisition conditions and calibration, on the accuracy of the collected spectral data. This provides a better understanding of a technique that can deliver a measure of change in collections through imaging. For the quantitative model, a special calibration target was devised using 105 samples from a well-characterized reference Islamic paper collection. Two material properties were of interest: starch sizing and cellulose degree of polymerization (DP). Multivariate data analysis methods were used to develop discrimination and regression models, which served as an evaluation methodology for the metrology of quantitative NIR chemical imaging. Spectral data were collected using a pushbroom HSI scanner (Gilden Photonics Ltd) in the 1000-2500 nm range with a spectral resolution of 6.3 nm, using a mirror scanning setup and halogen illumination. Data were acquired at different measurement conditions and acquisition parameters. Preliminary results showed that measurement parameters such as the use of different lenses and different scanning backgrounds may not greatly influence the quantitative results. Moreover, the evaluation methodology allowed for the selection of the best pre-treatment method to be applied to the data.
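The multivariate calibration step in such work can be illustrated with a generic sketch: principal component regression on synthetic "spectra" standing in for a calibration set of 105 samples. The data, component count, and choice of regressor are assumptions for illustration; the authors' actual NIR discrimination and regression models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a calibration set: 105 "spectra" with 240 bands in
# which the property of interest (e.g. degree of polymerization) is carried
# by a few latent spectral components. Purely illustrative data.
n, p = 105, 240
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
y = latent @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

def pcr_fit(X, y, k=3):
    """Principal component regression: least squares on the first k
    principal-component scores of the centred spectra."""
    xm, ym = X.mean(axis=0), y.mean()
    _, _, Vt = np.linalg.svd(X - xm, full_matrices=False)
    V = Vt[:k].T                              # spectral loadings
    b, *_ = np.linalg.lstsq((X - xm) @ V, y - ym, rcond=None)
    return xm, ym, V, b

xm, ym, V, b = pcr_fit(X, y)
pred = ((X - xm) @ V) @ b + ym
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

PLS regression is the more common choice in chemometrics; PCR is shown here only because it needs nothing beyond NumPy and exposes the projection-then-regress structure plainly.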
Characteristics of silver nanoparticles in vehicles for biological applications.
Kejlová, Kristina; Kašpárková, Věra; Krsek, Daniel; Jírová, Dagmar; Kolářová, Hana; Dvořáková, Markéta; Tománková, Kateřina; Mikulcová, Veronika
2015-12-30
Silver nanoparticles (AgNPs) have been used for decades as anti-bacterial agents in various industrial fields such as cosmetics, health industry, food storage, textile coatings and environmental applications, although their toxicity is not fully recognized yet. Antimicrobial and catalytic activity of AgNPs depends on their size as well as structure, shape, size distribution, and physico-chemical environment. The unique properties of AgNPs require novel or modified toxicological methods for evaluation of their toxic potential combined with robust analytical methods for characterization of nanoparticles applied in relevant vehicles, e.g., culture medium with/without serum and phosphate buffered saline. Copyright © 2015 Elsevier B.V. All rights reserved.
Exposure assessment for endocrine disruptors: some considerations in the design of studies.
Rice, Carol; Birnbaum, Linda S; Cogliano, James; Mahaffey, Kathryn; Needham, Larry; Rogan, Walter J; vom Saal, Frederick S
2003-01-01
In studies designed to evaluate exposure-response relationships in children's development from conception through puberty, multiple factors that affect the generation of meaningful exposure metrics must be considered. These factors include multiple routes of exposure; the timing, frequency, and duration of exposure; need for qualitative and quantitative data; sample collection and storage protocols; and the selection and documentation of analytic methods. The methods for exposure data collection and analysis must be sufficiently robust to accommodate the a priori hypotheses to be tested, as well as hypotheses generated from the data. A number of issues that must be considered in study design are summarized here. PMID:14527851
Fuselli, Fabio; Deluca, Anna; Montepeloso, Emanuela A; Ibba, Giulia; Tidona, Flavio; Longo, Lucia; Marianella, Rosa M
2015-10-01
Prevention of food fraud in the dairy field is a difficult issue for researchers, industries and policy makers, both for commercial and health reasons. Currently, no analytical method allows detection of the addition of bovine whey to water buffalo ricotta, so this fraudulent practice cannot be prevented. The authors' aim was to develop such a method. The conditions for extraction and purification of denatured ricotta whey proteins, which are unfolded and coagulated by heating during the production process, were optimized. The optimal composition of the polyacrylamide gel (pH range, type and concentration of chemical separator) was first evaluated, and then the best conditions to perform the separation by isoelectric focusing were established. The performance of the method (precision, selectivity, robustness, sensitivity) was determined. The method was shown to be reliable and robust for detection of the presence of bovine whey added to water buffalo ricotta at percentages above 5% (v/v). The results suggest that the differences observed between bovine and water buffalo electrophoretic profiles are due to bovine β-lactoglobulin isoform A, which is never detected in water buffalo samples. © 2014 Society of Chemical Industry.
Validation of the replica trick for simple models
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-04-01
We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
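At the heart of the replica trick is a single analytic-continuation identity; writing it out clarifies exactly what the paper is validating:

```latex
% Replica identity: the quenched average of ln Z is obtained by computing
% the moments <Z^n> for integer n and continuing analytically to n -> 0.
\langle \ln Z \rangle
  = \lim_{n \to 0} \frac{\langle Z^{n} \rangle - 1}{n}
  = \lim_{n \to 0} \frac{\partial}{\partial n} \ln \langle Z^{n} \rangle
```

Here ⟨Z^n⟩ is evaluated for integer n (as n coupled replicas of the system) and then continued to real n near zero; the validity of that continuation for specific solvable models is precisely what the paper examines.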
Non-imaging ray-tracing for sputtering simulation with apodization
NASA Astrophysics Data System (ADS)
Ou, Chung-Jen
2018-04-01
Although apodization patterns have been adopted for the analysis of sputtering sources, analytical solutions for the film-thickness equations are still limited to simple conditions. Because empirical formulations for thin-film sputtering lack the flexibility to deal with multi-substrate conditions, a suitable cost-effective procedure is required to estimate the film thickness distribution. This study reports a cross-discipline simulation program, based on discrete-particle Monte Carlo methods, that has been successfully applied to a non-imaging design to solve problems associated with sputtering uniformity. The robustness of the present method is first proved by comparison with a typical analytical solution. This report also investigates the overall effects caused by the size of the deposited substrate, so that the determination of the distance between the target surface and the apodization index can be completed. This verifies the capability of the proposed method for solving sputtering film-thickness problems. The benefit is that an optical thin-film engineer can, using the same optical software, design a specific optical component and consider the possible coating qualities with thickness tolerance during the design stage.
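The discrete-particle Monte Carlo idea can be sketched in a few lines: sample emission angles from an apodized (cos^n) distribution, fly each particle in a straight line to the substrate plane, and histogram the arrivals as a thickness proxy. The geometry, the apodization index, and the unit sticking/obliquity simplifications are all illustrative assumptions, not the paper's simulation program.

```python
import numpy as np

def thickness_profile(n_particles=200_000, apod_n=2.0, target_z=10.0,
                      half_width=20.0, n_bins=40, seed=1):
    """Discrete-particle Monte Carlo sketch of sputter deposition.

    Particles leave a point source with a cos^n ("apodized") angular
    distribution and travel in straight lines to a parallel substrate plane
    a distance target_z away; binned arrival counts stand in for thickness.
    """
    rng = np.random.default_rng(seed)
    # Inverse-CDF sampling of p(theta) ~ cos^n(theta) sin(theta):
    theta = np.arccos(rng.random(n_particles) ** (1.0 / (apod_n + 1.0)))
    phi = 2.0 * np.pi * rng.random(n_particles)
    x = target_z * np.tan(theta) * np.cos(phi)   # arrival coordinate
    hist, edges = np.histogram(x, bins=n_bins, range=(-half_width, half_width))
    return hist / hist.max(), edges

profile, edges = thickness_profile()
```

As expected, the normalized profile peaks above the source and falls off toward the substrate edges; sweeping `apod_n` or `target_z` shows how the source characteristics trade off against uniformity.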
NASA Astrophysics Data System (ADS)
Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.
2017-11-01
We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
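The database-then-regress strategy described above can be sketched with the simplest possible regressor. A Laplace-type kernel serves as a standard stand-in for the ill-conditioned analytic-continuation kernel; the grids, the spectrum generator, the ridge penalty, and the use of plain ridge regression (rather than the paper's learned model) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned forward problem G = K A with a Laplace-type kernel.
w = np.linspace(0.01, 5.0, 60)
tau = np.linspace(0.0, 3.0, 30)
K = np.exp(-np.outer(tau, w)) * (w[1] - w[0])

def random_spectrum():
    """Random positive, normalized input: a sum of 1-3 Gaussian peaks."""
    a = np.zeros_like(w)
    for _ in range(rng.integers(1, 4)):
        c, s = rng.uniform(0.5, 4.0), rng.uniform(0.2, 0.6)
        a += rng.uniform(0.5, 1.0) * np.exp(-0.5 * ((w - c) / s) ** 2)
    return a / a.sum()

# Step 1: database of forward solutions for physically meaningful inputs.
A_train = np.array([random_spectrum() for _ in range(2000)])
G_train = A_train @ K.T + 1e-4 * rng.normal(size=(2000, len(tau)))

# Step 2: supervised inverse map from noisy data G back to spectra A,
# here plain ridge regression as the simplest possible regressor.
lam = 1e-3
W = np.linalg.solve(G_train.T @ G_train + lam * np.eye(len(tau)),
                    G_train.T @ A_train)

# Step 3: apply the learned map to a previously unseen input.
a_true = random_spectrum()
a_pred = (K @ a_true + 1e-4 * rng.normal(size=len(tau))) @ W
```

The ridge penalty plays the role of the regularization the paper obtains from its regression function; the paper additionally projects predictions onto the constraint set (positivity, normalization), which is omitted here.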
Trimpin, Sarah; Deinzer, Max L
2007-01-01
A mini ball mill (MBM) solvent-free matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS) method allows for the analysis of bacteriorhodopsin (BR), an integral membrane protein that previously presented special analytical problems. For well-defined signals in the molecular ion region of the analytes, a desalting procedure of the MBM sample directly on the MALDI target plate was used to reduce adduction by sodium and other cations that are normally attendant with hydrophobic peptides and proteins as a result of the sample preparation procedure. Mass analysis of the intact hydrophobic protein and the few hydrophobic and hydrophilic tryptic peptides available in the digest is demonstrated with this robust new approach. MS and MS/MS spectra of BR tryptic peptides and intact protein were generally superior to the traditional solvent-based method using the desalted "dry" MALDI preparation procedure. The solvent-free method expands the range of peptides that can be effectively analyzed by MALDI-MS to those that are hydrophobic and solubility-limited.
Failure detection system design methodology. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chow, E. Y.
1980-01-01
The design of a failure detection and identification (FDI) system consists of designing a robust residual generation process and a high performance decision making process. The design of these two processes is examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
Robustness and fragility in coupled oscillator networks under targeted attacks.
Yuan, Tianyu; Aihara, Kazuyuki; Tanaka, Gouhei
2017-01-01
The dynamical tolerance of coupled oscillator networks against local failures is studied. As the fraction of failed oscillator nodes gradually increases, the mean oscillation amplitude in the entire network decreases and then suddenly vanishes at a critical fraction, as a phase transition. This critical fraction, widely used as a measure of network robustness, has so far been derived analytically for random failures but not for targeted attacks. Here we derive a general formula for the critical fraction that can be applied to both random failures and targeted attacks. We consider the effects of targeting oscillator nodes based on their degrees. First we deal with coupled identical oscillators with homogeneous edge weights. Then our theory is applied to networks with heterogeneous edge weights and to those with nonidentical oscillators. The analytical results are validated by numerical experiments. Our results reveal the key factors governing the robustness and fragility of oscillator networks.
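The amplitude-degradation phenomenon can be reproduced numerically in a toy setting: globally coupled Stuart-Landau oscillators in which "attacked" nodes switch to a damped, non-oscillatory regime. Global (all-to-all) coupling and every parameter value below are illustrative choices, not the paper's network model, so this sketch shows only the qualitative decrease of mean amplitude with the failed fraction.

```python
import numpy as np

def mean_amplitude(n=50, f_failed=0.0, K=1.0, omega=2.0,
                   steps=4000, dt=0.01, seed=0):
    """Mean steady-state amplitude of globally coupled Stuart-Landau
    oscillators when a fraction f_failed of nodes has failed (i.e. been
    switched to a damped regime). Forward-Euler integration for brevity.
    """
    rng = np.random.default_rng(seed)
    alpha = np.full(n, 1.0)                  # active nodes: limit cycle
    alpha[: int(f_failed * n)] = -3.0        # failed nodes: damped
    z = rng.normal(size=n) + 1j * rng.normal(size=n)
    for _ in range(steps):
        dz = (alpha + 1j * omega - np.abs(z) ** 2) * z + K * (z.mean() - z)
        z = z + dt * dz
    return float(np.abs(z).mean())

# Oscillation amplitude degrades as more nodes are attacked:
amps = [mean_amplitude(f_failed=f) for f in (0.0, 0.25, 0.5)]
```

Sweeping `f_failed` more finely traces out the amplitude curve whose vanishing point defines the critical fraction studied in the paper; degree-targeted attacks would additionally require a heterogeneous network rather than global coupling.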
A Bayesian deconvolution strategy for immunoprecipitation-based DNA methylome analysis
Down, Thomas A.; Rakyan, Vardhman K.; Turner, Daniel J.; Flicek, Paul; Li, Heng; Kulesha, Eugene; Gräf, Stefan; Johnson, Nathan; Herrero, Javier; Tomazou, Eleni M.; Thorne, Natalie P.; Bäckdahl, Liselotte; Herberth, Marlis; Howe, Kevin L.; Jackson, David K.; Miretti, Marcos M.; Marioni, John C.; Birney, Ewan; Hubbard, Tim J. P.; Durbin, Richard; Tavaré, Simon; Beck, Stephan
2009-01-01
DNA methylation is an indispensable epigenetic modification of mammalian genomes. Consequently, there is great interest in strategies for genome-wide/whole-genome DNA methylation analysis, and immunoprecipitation-based methods have proven to be a powerful option. Such methods are rapidly shifting the bottleneck from data generation to data analysis, necessitating the development of better analytical tools. Until now, a major analytical difficulty associated with immunoprecipitation-based DNA methylation profiling has been the inability to estimate absolute methylation levels. Here we report the development of a novel cross-platform algorithm – Bayesian Tool for Methylation Analysis (Batman) – for analyzing Methylated DNA Immunoprecipitation (MeDIP) profiles generated using arrays (MeDIP-chip) or next-generation sequencing (MeDIP-seq). The latter is an approach we have developed to elucidate the first high-resolution whole-genome DNA methylation profile (DNA methylome) of any mammalian genome. MeDIP-seq/MeDIP-chip combined with Batman represent robust, quantitative, and cost-effective functional genomic strategies for elucidating the function of DNA methylation. PMID:18612301
Identification of complex stiffness tensor from waveform reconstruction
NASA Astrophysics Data System (ADS)
Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.
2002-03-01
An inverse method is proposed in order to determine the viscoelastic properties of composite-material plates from the plane-wave transmitted acoustic field. Analytical formulations of both the plate transmission coefficient and its first and second derivatives are established, and included in a two-step inversion scheme. Two objective functions to be minimized are then designed by considering the well-known maximum-likelihood principle and by using an analytic signal formulation. Through these innovative objective functions, the robustness of the inversion process against high level of noise in waveforms is improved and the method can be applied to a very thin specimen. The suitability of the inversion process for viscoelastic property identification is demonstrated using simulated data for composite materials with different anisotropy and damping degrees. A study of the effect of the rheologic model choice on the elastic property identification emphasizes the relevance of using a phenomenological description considering viscosity. Experimental characterizations show then the good reliability of the proposed approach. Difficulties arise experimentally for particular anisotropic media.
NASA Technical Reports Server (NTRS)
Andreadis, Dean; Drake, Alan; Garrett, Joseph L.; Gettinger, Christopher D.; Hoxie, Stephen S.
2003-01-01
The development and ground test of a rocket-based combined cycle (RBCC) propulsion system is being conducted as part of the NASA Marshall Space Flight Center (MSFC) Integrated System Test of an Airbreathing Rocket (ISTAR) program. The eventual flight vehicle (X-43B) is designed to support an air-launched self-powered Mach 0.7 to 7.0 demonstration of an RBCC engine through all of its airbreathing propulsion modes - air augmented rocket (AAR), ramjet (RJ), and scramjet (SJ). Through the use of analytical tools, numerical simulations, and experimental tests the ISTAR program is developing and validating a hydrocarbon-fueled RBCC combustor design methodology. This methodology will then be used to design an integrated RBCC propulsion system that produces robust ignition and combustion stability characteristics while maximizing combustion efficiency and minimizing drag losses. First order analytical and numerical methods used to design hydrocarbon-fueled combustors are discussed with emphasis on the methods and determination of requirements necessary to establish engine operability and performance characteristics.
Ueland, Maiken; Blanes, Lucas; Taudte, Regina V; Stuart, Barbara H; Cole, Nerida; Willis, Peter; Roux, Claude; Doble, Philip
2016-03-04
A novel microfluidic paper-based analytical device (μPAD) was designed to filter, extract, and pre-concentrate explosives from soil for direct analysis by a lab on a chip (LOC) device. The explosives were extracted via immersion of wax-printed μPADs directly into methanol soil suspensions for 10 min, whereby dissolved explosives travelled upwards into the μPAD circular sampling reservoir. A chad was punched from the sampling reservoir and inserted into a LOC well containing the separation buffer for direct analysis, avoiding any further extraction step. Eight target explosives were separated and identified by fluorescence quenching. The minimum detectable amounts for all eight explosives were between 1.4 and 5.6 ng, with recoveries ranging from 53-82% from the paper chad, and 12-40% from soil. This method provides a robust and simple extraction method for rapid identification of explosives in complex soil samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Folding-paper-based preconcentrator for low dispersion of preconcentration plug
NASA Astrophysics Data System (ADS)
Lee, Kyungjae; Yoo, Yong Kyoung; Han, Sung Il; Lee, Junwoo; Lee, Dohwan; Kim, Cheonjung; Lee, Jeong Hoon
2017-12-01
Ion concentration polarization (ICP) has been widely studied for collecting target analytes as it is a powerful preconcentrator method employed for charged molecules. Although the method is quite robust, simple, cheap, and yields a high preconcentration factor, a major hurdle to be addressed is extracting the preconcentrated samples without dispersing the plug. This study investigates a 3D folding-paper-based ICP preconcentrator for preconcentrated plug extraction without the dispersion effect. The ICP preconcentrator is printed on a cellulose paper with pre-patterned hydrophobic wax. To extract and isolate the preconcentration plug with minimal dispersion, a 3D pop-up structure is fabricated via water drain, and a preconcentration factor of 300-fold in 10 min is achieved. By optimizing factors such as the electric field, water drain, and sample volume, the technique was enhanced by facilitating sample preconcentration and isolation, thereby providing the possibility for extensive applications in analytical devices such as lateral flow assays and FTA cards.
Lee, L.; Helsel, D.
2007-01-01
Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and computation of related confidence limits. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
Robustness of high-fidelity Rydberg gates with single-site addressability
NASA Astrophysics Data System (ADS)
Goerz, Michael H.; Halperin, Eli J.; Aytac, Jon M.; Koch, Christiane P.; Whaley, K. Birgitta
2014-09-01
Controlled-phase (cphase) gates can be realized with trapped neutral atoms by making use of the Rydberg blockade. Achieving the ultrahigh fidelities required for quantum computation with such Rydberg gates, however, is compromised by experimental inaccuracies in pulse amplitudes and timings, as well as by stray fields that cause fluctuations of the Rydberg levels. We report here a comparative study of analytic and numerical pulse sequences for the Rydberg cphase gate that specifically examines the robustness of the gate fidelity with respect to such experimental perturbations. Analytical pulse sequences of both simultaneous and stimulated Raman adiabatic passage (STIRAP) are found to be at best moderately robust under these perturbations. In contrast, optimal control theory is seen to allow generation of numerical pulses that are inherently robust within a predefined tolerance window. The resulting numerical pulse shapes display simple modulation patterns and can be rationalized in terms of an interference between distinct two-photon Rydberg excitation pathways. Pulses of such low complexity should be experimentally feasible, allowing gate fidelities of order 99.90-99.99% to be achievable under realistic experimental conditions.
Probability-based hazard avoidance guidance for planetary landing
NASA Astrophysics Data System (ADS)
Yuan, Xu; Yu, Zhengshi; Cui, Pingyuan; Xu, Rui; Zhu, Shengying; Cao, Menglong; Luan, Enjie
2018-03-01
Future landing and sample return missions on planets and small bodies will seek landing sites with high scientific value, which may be located in hazardous terrains. Autonomous landing in such hazardous terrains and highly uncertain planetary environments is particularly challenging. Onboard hazard avoidance ability is indispensable, and the algorithms must be robust to uncertainties. In this paper, a novel probability-based hazard avoidance guidance method is developed for landing in hazardous terrains on planets or small bodies. By regarding the lander state as probabilistic, the proposed guidance algorithm exploits information on the uncertainty of lander position and calculates the probability of collision with each hazard. The collision probability serves as an accurate safety index, which quantifies the impact of uncertainties on the lander safety. Based on the collision probability evaluation, the state uncertainty of the lander is explicitly taken into account in the derivation of the hazard avoidance guidance law, which contributes to enhancing the robustness to the uncertain dynamics of planetary landing. The proposed probability-based method derives fully analytic expressions and does not require off-line trajectory generation. Therefore, it is appropriate for real-time implementation. The performance of the probability-based guidance law is investigated via a set of simulations, and the effectiveness and robustness under uncertainties are demonstrated.
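The central quantity above, the probability of collision with a hazard given an uncertain lander position, can be illustrated numerically. The paper derives fully analytic expressions; the Monte Carlo sketch below is only a sanity-check of the idea, with made-up geometry, an isotropic Gaussian position error, and a hypothetical helper name.

```python
# Illustrative sketch (not the paper's analytic formulas): probability that
# an uncertain lander position falls inside a circular hazard region.
import math
import random

def collision_probability(mean, sigma, hazard_center, hazard_radius,
                          n=200_000, seed=1):
    """Monte Carlo estimate of P(|pos - hazard_center| < hazard_radius)
    for pos ~ N(mean, sigma^2 * I) in the landing plane."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(mean[0], sigma)
        y = rng.gauss(mean[1], sigma)
        if math.hypot(x - hazard_center[0], y - hazard_center[1]) < hazard_radius:
            hits += 1
    return hits / n

# Lander believed 10 m from the center of a 5 m-radius hazard,
# with a 3 m position standard deviation:
p = collision_probability((0.0, 0.0), 3.0, (10.0, 0.0), 5.0)
```

Even though the nominal position clears the hazard by twice the radius, the collision probability is a few percent, which is exactly the kind of risk a deterministic safety index would miss.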
Mavel, Sylvie; Lefèvre, Antoine; Bakhos, David; Dufour-Rainfray, Diane; Blasco, Hélène; Emond, Patrick
2018-05-22
Although there is some data from animal studies, the metabolome of inner ear fluid in humans remains unknown. Characterization of the metabolome of the perilymph would allow for better understanding of its role in auditory function and for identification of biomarkers that might allow prediction of response to therapeutics. There is a major technical challenge due to the small sample of perilymph fluid available for analysis (sub-microliter). The objectives of this study were to develop and validate a methodology for analysis of the perilymph metabolome using liquid chromatography-high resolution mass spectrometry (LC-HRMS). Due to the low availability of perilymph fluid, a methodological study was first performed using low volumes (0.8 μL) of cerebrospinal fluid (CSF) to optimize the LC-HRMS parameters using targeted and non-targeted metabolomics approaches. We obtained excellent parameters of reproducibility for about 100 metabolites. This methodology was then used to analyze perilymph fluid using two complementary chromatographic supports: reverse phase (RP-C18) and hydrophilic interaction liquid chromatography (HILIC). Both methods were highly robust and showed their complementarity, thus reinforcing the interest to combine these chromatographic supports. A fingerprinting was obtained from 98 robust metabolites (analytical variability <30%), where amino acids (e.g., asparagine, valine, glutamine, alanine, etc.), carboxylic acids and derivatives (e.g., lactate, carnitine, trigonelline, creatinine, etc.) were observed as first-order signals. This work lays the foundations of a robust analytical workflow for the exploration of the perilymph metabolome dedicated to the research of biomarkers for the diagnosis/prognosis of auditory pathologies. Copyright © 2018 Elsevier B.V. All rights reserved.
Chen, Hsiao-Wei; Wu, Chun-Feng; Chu, Lichieh Julie; Chiang, Wei-Fang; Wu, Chih-Ching; Yu, Jau-Song; Tsai, Cheng-Han; Liang, Kung-Hao; Chang, Yu-Sun; Wu, Maureen; Ou Yang, Wei-Ting
2017-01-01
Multiple (selected) reaction monitoring (MRM/SRM) of peptides is a growing technology for target protein quantification because it is more robust, precise, accurate, high-throughput, and multiplex-capable than antibody-based techniques. The technique has been applied clinically to the large-scale quantification of multiple target proteins in different types of fluids. However, previous MRM-based studies have placed less focus on sample-preparation workflow and analytical performance in the precise quantification of proteins in saliva, a noninvasively sampled body fluid. In this study, we evaluated the analytical performance of a simple and robust multiple reaction monitoring (MRM)-based targeted proteomics approach incorporating liquid chromatography with mass spectrometry detection (LC-MRM/MS). This platform was used to quantitatively assess the biomarker potential of a group of 56 salivary proteins that have previously been associated with human cancers. To further enhance the development of this technology for assay of salivary samples, we optimized the workflow for salivary protein digestion and evaluated quantification performance, robustness and technical limitations in analyzing clinical samples. Using a clinically well-characterized cohort of two independent clinical sample sets (total n = 119), we quantitatively characterized these protein biomarker candidates in saliva specimens from controls and oral squamous cell carcinoma (OSCC) patients. The results clearly showed a significant elevation of most targeted proteins in saliva samples from OSCC patients compared with controls. Overall, this platform was capable of assaying the most highly multiplexed panel of salivary protein biomarkers, highlighting the clinical utility of MRM in oral cancer biomarker research. PMID:28235782
Inertia-gravity wave radiation from the elliptical vortex in the f-plane shallow water system
NASA Astrophysics Data System (ADS)
Sugimoto, Norihiko
2017-04-01
Inertia-gravity wave (IGW) radiation from the elliptical vortex is investigated in the f-plane shallow water system. The far field of IGW is analytically derived for the case of an almost circular Kirchhoff vortex with a small aspect ratio. Cyclone-anticyclone asymmetry appears at finite values of the Rossby number (Ro) caused by the source originating in the Coriolis acceleration. While the intensity of IGWs from the cyclone monotonically decreases as f increases, that from the anticyclone increases as f increases for relatively smaller f and has a local maximum at intermediate f. A numerical experiment is conducted on a model using a spectral method in an unbounded domain. The numerical results agree quite well with the analytical ones for elliptical vortices with small aspect ratios, implying that the derived analytical forms are useful for the verification of the numerical model. For elliptical vortices with larger aspect ratios, however, significant deviation from the analytical estimates appears. The intensity of IGWs radiated in the numerical simulation is larger than that estimated analytically. The reason is that the source of IGWs is amplified during the time evolution because the shape of the vortex changes from an ideal ellipse to an elongated shape with filaments. Nevertheless, cyclone-anticyclone asymmetry similar to the analytical estimate appears over the whole range of aspect ratios, suggesting that this asymmetry is a robust feature.
Heneedak, Hala M; Salama, Ismail; Mostafa, Samia; El-Kady, Ehab; El-Sadek, Mohamed
2015-07-01
The prerequisites for forensic confirmatory analysis by LC/MS/MS with respect to European Union guidelines are chromatographic separation, a minimum number of two MS/MS transitions to obtain the required identification points and predefined thresholds for the variability of the relative intensities of the MS/MS transitions (MRM transitions) in samples and reference standards. In the present study, a fast, sensitive and robust method to quantify tramadol, chlorpheniramine, dextromethorphan and their major metabolites, O-desmethyltramadol, desmethylchlorpheniramine and dextrorphan, respectively, in human plasma using ibuprofen as internal standard (IS) is described. The analytes and the IS were extracted from plasma by a liquid-liquid extraction method using ethyl acetate-diethyl ether (1:1). Extracted samples were analyzed by ultra-high-performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (UHPLC-ESI-MS/MS). Chromatographic separation was performed by pumping the mobile phase containing acetonitrile, water and formic acid (89.2:11.7:0.1) for 2.0 min at a flow rate of 0.25 μL/min into a Hypersil Gold C18 column, 20 × 2.0 mm (1.9 µm) from Thermo Scientific, New York, USA. The calibration curve was linear for the six analytes. The intraday precision (RSD) and accuracy (RE) of the method were 3-9.8% and -1.7 to 4.5%, respectively. The analytical procedure herein described was used to assess the pharmacokinetics of the analytes in 24 healthy volunteers after a single oral dose containing 50 mg of tramadol hydrochloride, 3 mg chlorpheniramine maleate and 15 mg of dextromethorphan hydrobromide. Copyright © 2014 John Wiley & Sons, Ltd.
Steroid hormones in environmental matrices: extraction method comparison.
Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon
2017-11-09
The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE broadens the choice of methods for environmental sample analysis.
On Multifunctional Collaborative Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2001-01-01
Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines, as each method's strengths are utilized.
Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.
Li, Ben; Stenstrom, M K
2014-01-01
One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and usually can be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced a similar solution to the previously developed Method G and Enquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.
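The reliability issue described above, that a naive discretization of the nonlinear settling PDE can converge to a wrong solution, is why Godunov-type solvers matter here. The sketch below is a minimal first-order Godunov finite-volume step for the batch-settling conservation law dC/dt + df(C)/dx = 0 with a Vesilind flux; it is illustrative only (not the YRD or SVT implementations), and the parameter values are assumed.

```python
# Minimal Godunov finite-volume sketch for dC/dt + d f(C)/dx = 0,
# with the Vesilind settling flux f(C) = v0 * exp(-k*C) * C.
import math

V0, K = 6.0, 0.4          # assumed Vesilind parameters (m/h, m3/kg)

def flux(c):
    return V0 * math.exp(-K * c) * c

def godunov_flux(cl, cr, samples=50):
    """Scalar Godunov flux by sampling f over [cl, cr] (handles non-convex f)."""
    cs = [cl + (cr - cl) * i / samples for i in range(samples + 1)]
    vals = [flux(c) for c in cs]
    return min(vals) if cl <= cr else max(vals)

def step(c, dx, dt):
    """One conservative update with zero-flux top and bottom boundaries."""
    F = [0.0] + [godunov_flux(c[i], c[i + 1]) for i in range(len(c) - 1)] + [0.0]
    return [c[i] - dt / dx * (F[i + 1] - F[i]) for i in range(len(c))]

# Uniform initial sludge blanket; dt chosen to satisfy the CFL condition
# (max wave speed |f'(C)| <= v0, so dt*v0/dx = 0.3 < 1).
c = [3.0] * 40                       # kg/m3 over 40 cells, top = index 0
dx, dt = 0.1, 0.005
for _ in range(200):
    c = step(c, dx, dt)
mass0, mass = 3.0 * 40 * dx, sum(c) * dx
```

Because the update is conservative and the scheme is monotone under the CFL restriction, total solids mass is preserved while the profile clears at the top and thickens at the bottom, which is the qualitative behavior a reliable settler model must reproduce.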
Advances in high-resolution mass spectrometry based on metabolomics studies for food--a review.
Rubert, Josep; Zachariasova, Milena; Hajslova, Jana
2015-01-01
Food authenticity has become a necessity for global food policies, since food placed on the market must be authentic. Verifying it has always been a challenge: in the past, minor components, also called markers, were mainly monitored by chromatographic methods in order to authenticate food. Nowadays, however, advanced analytical methods have allowed food fingerprints to be acquired. At the same time, they have also been combined with chemometrics, which uses statistical methods to verify food and to extract maximum information from chemical data. These sophisticated methods, based on different separation techniques or used stand-alone, have recently been coupled to high-resolution mass spectrometry (HRMS) in order to verify the authenticity of food. The new generation of HRMS detectors has seen significant advances in resolving power, sensitivity, robustness, extended dynamic range, easier mass calibration and tandem mass capabilities, making HRMS more attractive and useful to the food metabolomics community and a reliable tool for food authenticity. The purpose of this review is to summarise and describe the most recent metabolomics approaches in the area of food metabolomics, and to discuss the strengths and drawbacks of the HRMS analytical platforms combined with chemometrics.
Mawson, Deborah H; Jeffrey, Keon L; Teale, Philip; Grace, Philip B
2018-06-19
A rapid, accurate and robust method for the determination of catechin (C), epicatechin (EC), gallocatechin (GC), epigallocatechin (EGC), catechin gallate (Cg), epicatechin gallate (ECg), gallocatechin gallate (GCg) and epigallocatechin gallate (EGCg) concentrations in human plasma has been developed. The method utilises protein precipitation following enzyme hydrolysis, with chromatographic separation and detection using reversed-phase liquid chromatography - tandem mass spectrometry (LC-MS/MS). Traditional issues such as lengthy chromatographic run times, sample and extract stability, and lack of suitable internal standards have been addressed. The method has been evaluated using a comprehensive validation procedure, confirming linearity over appropriate concentration ranges, and inter/intra batch precision and accuracies within suitable thresholds (precisions within 13.8% and accuracies within 12.4%). Recoveries of analytes were found to be consistent between different matrix samples, compensated for using suitable internal markers and within the performance of the instrumentation used. Similarly, chromatographic interferences have been corrected using the internal markers selected. Stability of all analytes in matrix is demonstrated over 32 days and throughout extraction conditions. This method is suitable for high throughput sample analysis studies. This article is protected by copyright. All rights reserved.
Fernandes, Ana Josane Dantas; Ferreira, Magda Rhayanny Assunção; Randau, Karina Perrelli; de Souza, Tatiane Pereira; Soares, Luiz Alberto Lira
2012-01-01
The aim of this work was to evaluate a spectrophotometric methodology for determining the total flavonoid content (TFC) in herbal drug and derived products from Bauhinia monandra Kurz. Several analytical parameters of this method, which is based on the complex formed between flavonoids and AlCl₃, were evaluated, such as herbal amount (0.25 to 1.25 g), solvent composition (ethanol 40 to 80%, v/v), reaction time, and AlCl₃ concentration (2 to 9%, w/v). The method was adjusted to aqueous extractives and its performance studied through precision, linearity and preliminary robustness. The results showed an important dependence of the method response on reaction time, AlCl₃ concentration, sample amount, and solvent mixture. After choosing the optimized condition, the method was applied to the matrices (herbal material and extractives), showing precision lower than 5% (for both repeatability and intermediate precision), a coefficient of determination higher than 0.99, and no important influence of slight variations in wavelength or AlCl₃ concentration. Thus, it could be concluded that the evaluated analytical procedure was suitable to quantify the total flavonoid content in raw material and aqueous extractives from leaves of B. monandra.
Measuring salivary analytes from free-ranging monkeys
Higham, James P.; Vitale, Alison; Rivera, Adaris Mas; Ayala, James E.; Maestripieri, Dario
2014-01-01
Studies of large free-ranging mammals have been revolutionized by non-invasive methods for assessing physiology, which usually involve the measurement of fecal or urinary biomarkers. However, such techniques are limited by numerous factors. To expand the range of physiological variables measurable non-invasively from free-ranging primates, we developed techniques for sampling monkey saliva by offering monkeys ropes with oral swabs sewn on the ends. We evaluated different attractants for encouraging individuals to offer samples, and proportions of individuals in different age/sex categories willing to give samples. We tested the saliva samples we obtained in three commercially available assays: cortisol, salivary alpha amylase (SAA), and secretory immunoglobulin A (SIgA). We show that habituated free-ranging rhesus macaques will give saliva samples voluntarily without training, with 100% of infants and over 50% of adults willing to chew on collection devices. Our field methods are robust even for analytes that show poor recovery from cotton, and/or that have concentrations dependent on salivary flow rate. We validated the cortisol and SAA assays for use in rhesus macaques by showing aspects of analytical validation, such as that samples dilute linearly and in parallel to assay standards. We also found that values measured correlated with biologically meaningful characteristics of sampled individuals (age and dominance rank). The SIgA assay tested did not react to samples. Given the wide range of analytes measurable in saliva but not in feces or urine, our methods considerably improve our ability to study physiological aspects of the behavior and ecology of free-ranging primates, and are also potentially adaptable to other mammalian taxa. PMID:20837036
NASA Astrophysics Data System (ADS)
Theis, L. S.; Motzoi, F.; Wilhelm, F. K.
2016-01-01
We present a few-parameter ansatz for pulses to implement a broad set of simultaneous single-qubit rotations in frequency-crowded multilevel systems. Specifically, we consider a system of two qutrits whose working and leakage transitions suffer from spectral crowding (detuned by δ). In order to achieve precise controllability, we make use of two driving fields (each having two quadratures) at two different tones to simultaneously apply arbitrary combinations of rotations about axes in the X-Y plane to both qubits. Expanding the waveforms in terms of Hanning windows, we show how analytic pulses containing smooth and composite-pulse features can easily achieve gate errors less than 10⁻⁴ and considerably outperform known adiabatic techniques. Moreover, we find a generalization of the WAHWAH (Weak AnHarmonicity With Average Hamiltonian) method by Schutjens et al. [R. Schutjens, F. A. Dagga, D. J. Egger, and F. K. Wilhelm, Phys. Rev. A 88, 052330 (2013)], 10.1103/PhysRevA.88.052330 that allows precise separate single-qubit rotations for all gate times beyond a quantum speed limit. We find in all cases a quantum speed limit slightly below 2π/δ for the gate time and show that our pulses are robust against variations in system parameters and filtering due to transfer functions, making them suitable for experimental implementations.
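The Hanning-window expansion mentioned above is attractive because each component vanishes smoothly at both ends of the gate, so the waveform turns on and off without sharp edges. The Python sketch below illustrates the ansatz form Ω(t) = Σₙ cₙ[1 - cos(2πnt/T)]/2 with made-up expansion coefficients; the coefficients and the gate time are hypothetical, not values from the paper.

```python
# Sketch of a few-parameter Hanning-window pulse ansatz: the waveform and
# its derivative vanish at t = 0 and t = T for every component.
import math

def hanning_pulse(t, T, coeffs):
    """Omega(t) = sum_n c_n * (1 - cos(2*pi*n*t/T)) / 2, for t in [0, T]."""
    return sum(
        c * (1.0 - math.cos(2.0 * math.pi * n * t / T)) / 2.0
        for n, c in enumerate(coeffs, start=1)
    )

T = 100.0                        # gate time, arbitrary units (assumed)
coeffs = [1.0, -0.3, 0.05]       # hypothetical expansion coefficients
pulse = [hanning_pulse(t, T, coeffs) for t in (0.0, 25.0, 50.0, T)]
```

With only a handful of coefficients as optimization parameters, such pulses stay smooth and band-limited, which is what makes them robust to filtering by transfer functions in the control lines.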
Verdirame, Maria; Veneziano, Maria; Alfieri, Anna; Di Marco, Annalise; Monteagudo, Edith; Bonelli, Fabio
2010-03-11
Turbulent Flow Chromatography (TFC) is a powerful approach for on-line extraction in bioanalytical studies. It improves sensitivity and reduces sample preparation time, two factors that are of primary importance in drug discovery. In this paper the application of the ARIA system to the analytical support of in vivo pharmacokinetics (PK) and in vitro drug metabolism studies is described, with an emphasis on high-throughput optimization. For PK studies, a comparison between acetonitrile plasma protein precipitation (APPP) and TFC was carried out. Our optimized TFC methodology gave better S/N ratios and a lower limit of quantification (LOQ) than conventional procedures. A robust and high-throughput analytical method to support hepatocyte metabolic stability screening of new chemical entities was developed by hyphenation of TFC with mass spectrometry. An in-loop dilution injection procedure was implemented to overcome one of the main issues when using TFC, that is, the early elution of hydrophilic compounds, which results in low recoveries. A comparison between off-line solid phase extraction (SPE) and TFC was also carried out, and recovery, sensitivity (LOQ), matrix effect and robustness were evaluated. The use of two parallel columns in the configuration of the system provided a further increase in throughput. Copyright 2009 Elsevier B.V. All rights reserved.
Counterfeit drugs: analytical techniques for their identification.
Martino, R; Malet-Martino, M; Gilard, V; Balayssac, S
2010-09-01
In recent years, the number of counterfeit drugs has increased dramatically, including not only "lifestyle" products but also vital medicines. Besides the threat to public health, the financial and reputational damage to pharmaceutical companies is substantial. The lack of robust information on the prevalence of fake drugs is an obstacle in the fight against drug counterfeiting. It is generally accepted that approximately 10% of drugs worldwide could be counterfeit, but it is also well known that this number covers very different situations depending on the country, the places where the drugs are purchased, and the definition of what constitutes a counterfeit drug. The chemical analysis of drugs suspected to be fake is a crucial step as counterfeiters are becoming increasingly sophisticated, rendering visual inspection insufficient to distinguish the genuine products from the counterfeit ones. This article critically reviews the recent analytical methods employed to control the quality of drug formulations, using as an example artemisinin derivatives, medicines particularly targeted by counterfeiters. Indeed, a broad panel of techniques have been reported for their analysis, ranging from simple and cheap in-field ones (colorimetry and thin-layer chromatography) to more advanced laboratory methods (mass spectrometry, nuclear magnetic resonance, and vibrational spectroscopies) through chromatographic methods, which remain the most widely used. The conclusion section of the article highlights the questions to be posed before selecting the most appropriate analytical approach.
Fakayode, Sayo O; Mitchell, Breanna S; Pollard, David A
2014-08-01
Accurate understanding of analyte boiling points (BP) is of critical importance in gas chromatographic (GC) separation and crude oil refinery operation in petrochemical industries. This study reports the first combined use of GC separation and partial least squares (PLS1) multivariate regression analysis (MRA) of petrochemical structure-activity relationships (SAR) for accurate BP determination of two commercially available (D3710 and MA VHP) calibration gas mix samples. The results of the BP determination using PLS1 multivariate regression were further compared with the results of a traditional simulated distillation method of BP determination. The developed PLS1 regression was able to correctly predict analyte BPs in the D3710 and MA VHP calibration gas mix samples, with root-mean-square percent relative errors (RMS%RE) of 6.4% and 10.8%, respectively. In contrast, the overall RMS%RE values of 32.9% and 40.4% obtained for BP determination in D3710 and MA VHP, respectively, using the traditional simulated distillation method were approximately four times larger than the corresponding RMS%RE of BP prediction using MRA, demonstrating the better predictive ability of MRA. The reported method is rapid, robust, and promising, and can potentially be used routinely for fast analysis, pattern recognition, and analyte BP determination in petrochemical industries. Copyright © 2014 Elsevier B.V. All rights reserved.
Mischak, Harald; Vlahou, Antonia; Ioannidis, John P A
2013-04-01
Mass spectrometry platforms have attracted a lot of interest in the last 2 decades as profiling tools for native peptides and proteins with clinical potential. However, limitations associated with reproducibility and analytical robustness, especially pronounced with the initial SELDI systems, hindered the application of such platforms in biomarker qualification and clinical implementation. The scope of this article is to give a short overview on data available on performance and on analytical robustness of the different platforms for peptide profiling. Using the CE-MS platform as a paradigm, data on analytical performance are described including reproducibility (short-term and intermediate repeatability), stability, interference, quantification capabilities (limits of detection), and inter-laboratory variability. We discuss these issues by using as an example our experience with the development of a 273-peptide marker for chronic kidney disease. Finally, we discuss pros and cons and means for improvement and emphasize the need to test in terms of comparative clinical performance and impact, different platforms that pass reasonably well analytical validation tests. Copyright © 2012 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Smith, David; Španěl, Patrik
2016-06-01
This article reflects our observations of recent accomplishments made using selected ion flow tube MS (SIFT-MS). Only brief descriptions are given of SIFT-MS as an analytical method and of the recent extensions to the underpinning analytical ion chemistry required to realize more robust analyses. The challenge of breath analysis is given special attention because, when achieved, it renders analysis of other air media relatively straightforward. Brief overviews are given of recent SIFT-MS breath analyses by leading research groups, noting the desirability of detection and quantification of single volatile biomarkers rather than reliance on statistical analyses, if breath analysis is to be accepted into clinical practice. A 'strengths, weaknesses, opportunities and threats' analysis of SIFT-MS is made, which should help to increase its utility for trace gas analysis.
Analytical validation of a psychiatric pharmacogenomic test.
Jablonski, Michael R; King, Nina; Wang, Yongbao; Winner, Joel G; Watterson, Lucas R; Gunselman, Sandra; Dechairo, Bryan M
2018-05-01
The aim of this study was to validate the analytical performance of a combinatorial pharmacogenomics test designed to aid in the appropriate medication selection for neuropsychiatric conditions. Genomic DNA was isolated from buccal swabs. Twelve genes (65 variants/alleles) associated with psychotropic medication metabolism, side effects, and mechanisms of actions were evaluated by bead array, MALDI-TOF mass spectrometry, and/or capillary electrophoresis methods (GeneSight Psychotropic, Assurex Health, Inc.). The combinatorial pharmacogenomics test has a dynamic range of 2.5-20 ng/μl of input genomic DNA, with comparable performance for all assays included in the test. Both the precision and accuracy of the test were >99.9%, with individual gene components between 99.4 and 100%. This study demonstrates that the combinatorial pharmacogenomics test is robust and reproducible, making it suitable for clinical use.
Gradient Optimization for Analytic conTrols - GOAT
NASA Astrophysics Data System (ADS)
Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
Quantum optimal control has become a necessary step in a number of studies in the quantum realm. Recent experimental advances have shown that superconducting qubits can be controlled with impressive accuracy. However, most standard optimal control algorithms are not designed to deliver such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise-constant approximation of the control pulse used by standard algorithms, which allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g. the absence of backpropagation and a natural route to optimizing the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.
Di Girolamo, Francesco; Masotti, Andrea; Salvatori, Guglielmo; Scapaticci, Margherita; Muraca, Maurizio; Putignani, Lorenza
2014-01-01
She-donkey’s milk (DM) and goat’s milk (GM) are commonly used in newborn and infant feeding because they are less allergenic than other milk types. It is, therefore, mandatory to avoid adulteration and contamination by other milk allergens by developing fast and efficient analytical methods to assess the authenticity of these precious nutrients. In this experimental work, a sensitive and robust matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) profiling workflow was designed to assess the genuineness of DM and GM. This workflow allows the identification of DM and GM adulteration at levels of 0.5%, thus representing a sensitive tool for milk adulteration analysis compared with other laborious and time-consuming analytical procedures. PMID:25110863
Song, Yuelin; Song, Qingqing; Li, Jun; Zheng, Jiao; Li, Chun; Zhang, Yuan; Zhang, Lingling; Jiang, Yong; Tu, Pengfei
2016-07-08
Direct analysis is of great importance for understanding the real chemical profile of a given sample, notably biological materials, because chemical degradation as well as diverse errors and uncertainties can result from sophisticated protocols. In comparison with biofluids, direct analysis of solid biological samples using high performance liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) remains challenging. Herein, a new analytical platform was configured by online hyphenation of pressurized liquid extraction (PLE), turbulent flow chromatography (TFC), and LC-MS/MS. A facile but robust PLE module was constructed based on the phenomenon that noticeable back-pressure can be generated when a fluid passes rapidly through a narrow tube. A TFC column, which is advantageous for extracting low-molecular-mass analytes from a rushing fluid, was connected at the outlet of the PLE module to capture constituents of interest. An electronic 6-port/2-position valve was introduced between the TFC column and LC-MS/MS to divide each measurement into extraction and elution phases, while LC-MS/MS handled analyte separation and monitoring. As a proof of concept, simultaneous determination of 24 endogenous substances in feces, including eighteen steroids, five eicosanoids, and one porphyrin, was carried out in this paper. Method validation assays demonstrated that the analytical platform is qualified for the direct, simultaneous measurement of diverse endogenous analytes in fecal matrices. Application of this integrated platform to homolog-focused profiling of feces is discussed in a companion paper. Copyright © 2016 Elsevier B.V. All rights reserved.
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
Reference standards are critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often immeasurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standards during method development based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, with rosmarinic acid, at about 79.8% of its priority, as the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to comprehensively consider the benefits and risks of the alternatives. It is an effective and practical tool for the selection of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
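The AHP priority calculation mentioned above is conventionally the principal eigenvector of a pairwise comparison matrix. A minimal sketch follows; the 3x3 matrix below is purely illustrative (Saaty's 1-9 preference scale over three hypothetical criteria) and does not reproduce the study's six-criterion model.

```python
import numpy as np

# Toy pairwise comparison matrix for 3 criteria (say accuracy, precision,
# robustness); entry [i, j] states how strongly criterion i is preferred
# over criterion j on the 1-9 scale. Values are invented for illustration.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 3.0],
              [1/5., 1/3., 1.0]])

def ahp_priorities(A):
    """Priority vector = principal eigenvector of A, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(A)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(w)
    return w / w.sum()

w = ahp_priorities(A)
print(np.round(w, 3))
```

Each alternative reference standard is then scored against these criterion weights, and the alternative with the highest aggregate priority is selected.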
2016-01-01
Rifaximin is an oral nonabsorbable antibiotic that acts locally in the gastrointestinal tract with minimal systemic adverse effects. No ecofriendly spectrophotometric method in the ultraviolet region for its determination is described in official compendia or in the literature. The analytical techniques for the determination of rifaximin reported in the literature require a large amount of time to release results and are significantly onerous. Furthermore, they use reagents toxic both to the operator and to the environment and, therefore, cannot be considered environmentally friendly analytical techniques. The objective of this study was to develop and validate an ecofriendly spectrophotometric method in the ultraviolet region to quantify rifaximin in tablets. The method was validated, showing linearity, selectivity, precision, accuracy, and robustness. It was linear over the concentration range of 10–30 mg L−1 with correlation coefficients greater than 0.9999 and limits of detection and quantification of 1.39 and 4.22 mg L−1, respectively. The validated method is useful for the routine quality control of rifaximin, since it is simple, with inexpensive conditions, is fast in the release of results, makes efficient use of analysts and equipment, and uses environmentally friendly solvents; it can thus be considered a green method that harms neither the operator nor the environment. PMID:27429835
Duodu, Godfred Odame; Goonetilleke, Ashantha; Allen, Charlotte; Ayoko, Godwin A
2015-10-22
A wet-milling protocol was employed to produce pressed powder tablets with excellent cohesion and homogeneity suitable for laser ablation (LA) analysis of volatile and refractive elements in sediment. The influence of sample preparation on analytical performance was also investigated, including sample homogeneity, accuracy and limit of detection. Milling in a volatile solvent for 40 min ensured that the sample was well mixed and could reasonably recover both volatile (Hg) and refractive (Zr) elements. With the exception of Cr (-52%) and Nb (+26%), major, minor and trace elements in STSD-1 and MESS-3 could be analysed within ±20% of the certified values. The method was compared with a total digestion method using HF by analysing 10 different sediment samples. The laser method recovers significantly higher amounts of analytes such as Ag, Cd and Sn than the total digestion method, making it a more robust method for elements across the periodic table. LA-ICP-MS also eliminates the interferences from chemical reagents as well as the health and safety risks associated with digestion processes. Therefore, it can be considered an enhanced method for the analysis of heterogeneous matrices such as river sediments. Copyright © 2015 Elsevier B.V. All rights reserved.
White, Helen E; Hedges, John; Bendit, Israel; Branford, Susan; Colomer, Dolors; Hochhaus, Andreas; Hughes, Timothy; Kamel-Reid, Suzanne; Kim, Dong-Wook; Modur, Vijay; Müller, Martin C; Pagnano, Katia B; Pane, Fabrizio; Radich, Jerry; Cross, Nicholas C P; Labourier, Emmanuel
2013-06-01
Current guidelines for managing Philadelphia-positive chronic myeloid leukemia include monitoring the expression of the BCR-ABL1 (breakpoint cluster region/c-abl oncogene 1, non-receptor tyrosine kinase) fusion gene by quantitative reverse-transcription PCR (RT-qPCR). Our goal was to establish and validate reference panels to mitigate the interlaboratory imprecision of quantitative BCR-ABL1 measurements and to facilitate global standardization on the international scale (IS). Four-level secondary reference panels were manufactured under controlled and validated processes with synthetic Armored RNA Quant molecules (Asuragen) calibrated to reference standards from the WHO and the NIST. Performance was evaluated in IS reference laboratories and with non-IS-standardized RT-qPCR methods. For most methods, percent ratios for BCR-ABL1 e13a2 and e14a2 relative to ABL1 or BCR were robust at 4 different levels and linear over 3 logarithms, from 10% to 0.01% on the IS. The intraassay and interassay imprecision was <2-fold overall. Performance was stable across 3 consecutive lots, in multiple laboratories, and over a period of 18 months to date. International field trials demonstrated the commutability of the reagents and their accurate alignment to the IS within the intra- and interlaboratory imprecision of IS-standardized methods. The synthetic calibrator panels are robust, reproducibly manufactured, analytically calibrated to the WHO primary standards, and compatible with most BCR-ABL1 RT-qPCR assay designs. The broad availability of secondary reference reagents will further facilitate interlaboratory comparative studies and independent quality assessment programs, which are of paramount importance for worldwide standardization of BCR-ABL1 monitoring results and the optimization of current and new therapeutic approaches for chronic myeloid leukemia. © 2013 American Association for Clinical Chemistry.
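The alignment to the international scale (IS) described above conventionally rests on multiplying a laboratory's raw BCR-ABL1 percent ratio by a laboratory- and method-specific conversion factor. A minimal sketch follows; the copy numbers and conversion factor are invented for illustration.

```python
def bcr_abl1_is(bcr_abl1_copies, control_copies, conversion_factor):
    """Convert a raw BCR-ABL1 % ratio to the International Scale (IS).

    The raw ratio is BCR-ABL1 transcript copies relative to a control
    gene (e.g. ABL1 or BCR), expressed as a percentage; multiplying by
    a laboratory-specific conversion factor aligns it to the IS.
    """
    raw_ratio = 100.0 * bcr_abl1_copies / control_copies
    return raw_ratio * conversion_factor

# Illustrative numbers: 50 BCR-ABL1 copies against 50,000 ABL1 copies with
# a hypothetical conversion factor of 1.2 gives 0.12% IS (major molecular
# response is conventionally defined as <= 0.1% IS).
print(round(bcr_abl1_is(50, 50000, 1.2), 4))
```

Calibrated secondary reference panels such as those described in this record let a laboratory derive and periodically verify its conversion factor.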
Optical information encryption based on incoherent superposition with the help of the QR code
NASA Astrophysics Data System (ADS)
Qin, Yi; Gong, Qiong
2014-01-01
In this paper, a novel optical information encryption approach is proposed with the help of a QR code. The method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and thereafter the QR code is encrypted analytically into two phase-only masks by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over previous interference-based methods, such as a higher security level, better robustness against noise attack, and a more relaxed working condition. Numerical simulation results and actual smartphone-collected results are shown to validate our proposal.
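The core idea of incoherent superposition is that intensities, rather than complex amplitudes, add. The toy sketch below only illustrates that additivity on a made-up binary "QR code": the target intensity is split into two shares whose sum restores the code. The actual scheme encodes the shares as diffracting phase-only masks, which this simplification deliberately omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "QR code" (1 = bright module, 0 = dark), purely illustrative.
qr = rng.integers(0, 2, size=(8, 8)).astype(float)

# Split the target intensity into two shares with a known constant bias
# so both shares stay non-negative (intensities cannot be negative).
share1 = rng.uniform(0.0, 1.0, qr.shape)
share2 = qr + 1.0 - share1

# Incoherent superposition: intensities add; removing the known bias
# restores the original code exactly.
recovered = share1 + share2 - 1.0
print(np.allclose(recovered, qr))
```

This is a numerical illustration of intensity additivity only, not the cryptographic construction of the paper.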
de Lima Stebbins, Daniela; Docs, Jon; Lowe, Paula; Cohen, Jason; Lei, Hongxia
2016-05-18
The hormones listed in the screening survey list 2 of the Unregulated Contaminant Monitoring Rule 3 (estrone, 17-β-estradiol, 17-α-ethynylestradiol, 16-α-hydroxyestradiol (estriol), equilin, testosterone and 4-androstene-3,17-dione) were analyzed by liquid chromatography electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS). Two analytical methods were compared: EPA method 539 and the isotope dilution method. EPA method 539 was successfully utilized in river and drinking water matrices with fortified recoveries of 98.9 to 108.5%. Samples from the Hillsborough River reflected levels below the method detection limit (MDL) for the majority of the analytes, except estrone (E1), which was detected at very low concentrations (<0.5 to 1 ng L(-1)) in the majority of samples. No hormones were detected in drinking water samples. The isotope dilution method was used to analyze reclaimed and aquifer storage and recovery (ASR) water samples as a result of strong matrix/solid phase extraction (SPE) losses observed in these more complex matrices. Most of the compounds were not detected or were found at relatively low concentrations in the ASR samples. Attenuation of 50 to 99.1% was observed as a result of the ASR recharge/recovery cycles for most of the hormones, except for estriol (E3). Relatively stable concentrations of E3 were found, with only 10% attenuation at one of the sites and no measurable attenuation at another location. These results substantiate that while EPA method 539 works well for most environmental samples, the isotope dilution method is more robust when dealing with complex matrices such as reclaimed and ASR samples.
Salvatierra Virgen, Sara; Ceballos-Magaña, Silvia Guillermina; Salvatierra-Stamp, Vilma Del Carmen; Sumaya-Martínez, Maria Teresa; Martínez-Martínez, Francisco Javier; Muñiz-Valencia, Roberto
2017-12-01
In recent years, there has been increased concern about the presence of toxic compounds derived from the Maillard reaction produced during food cooking at high temperatures. The main toxic compounds derived from this reaction are acrylamide and hydroxymethylfurfural (HMF). The majority of analytical methods require sample treatments using solvents which are highly polluting for the environment. The difficulty of quantifying HMF in complex fried food matrices encourages the development of new analytical methods. This paper provides a rapid, sensitive and environmentally friendly analytical method for the quantification of HMF in corn chips using HPLC-DAD. Chromatographic separation resulted in a baseline separation for HMF in 3.7 min. Sample treatment for corn chip samples first involved a leaching process using water and afterwards a solid-phase extraction (SPE) using HLB-Oasis polymeric cartridges. Sample treatment optimisation was carried out by means of a Box-Behnken fractional factorial design and response surface methodology to examine the effects of four variables (sample weight, pH, sonication time and elution volume) on HMF extraction from corn chips. The SPE-HPLC-DAD method was validated. The limits of detection and quantification were 0.82 and 2.20 mg kg(-1), respectively. Method precision was evaluated in terms of repeatability and reproducibility as relative standard deviations (RSDs) using three concentration levels. For repeatability, RSD values were 6.9, 3.6 and 2.0%; and for reproducibility 18.8, 7.9 and 2.9%. For the ruggedness study, the Youden test was applied, and the results demonstrated that the method is robust. The method was successfully applied to different corn chip samples.
Robustness and structure of complex networks
NASA Astrophysics Data System (ADS)
Shao, Shuai
This dissertation covers the two major parts of my PhD research on statistical physics and complex networks: i) modeling a new type of attack -- localized attack -- and investigating the robustness of complex networks under this type of attack; ii) discovering the clustering structure in complex networks and its influence on the robustness of coupled networks. Complex networks appear in every aspect of our daily life and are widely studied in Physics, Mathematics, Biology, and Computer Science. One important property of complex networks is their robustness under attacks, which depends crucially on the nature of the attacks and the structure of the networks themselves. Previous studies have focused on two types of attack: random attack and targeted attack, which, however, are insufficient to describe many real-world damages. Here we propose a new type of attack -- localized attack -- and study the robustness of complex networks under this type of attack, both analytically and via simulation. On the other hand, we also study the clustering structure in the network, and its influence on the robustness of a complex network system. In the first part, we propose a theoretical framework to study the robustness of complex networks under localized attack based on percolation theory and the generating function method. We investigate the percolation properties, including the critical threshold of the phase transition p_c and the size of the giant component P_infinity. We compare localized attack with random attack and find that while random regular (RR) networks are more robust against localized attack, Erdős-Rényi (ER) networks are equally robust under both types of attack. As for scale-free (SF) networks, their robustness depends crucially on the degree exponent λ. The simulation results show perfect agreement with theoretical predictions.
We also test our model on two real-world networks: a peer-to-peer computer network and an airline network, and find that the real-world networks are much more vulnerable to localized attack than to random attack. In the second part, we extend the tree-like generating function method to incorporate clustering structure in complex networks. We study the robustness of a complex network system, especially a network of networks (NON), with clustering structure in each network. We find that the system becomes less robust as we increase the clustering coefficient of each network. For a partially dependent network system, we also find that the influence of the clustering coefficient on network robustness decreases as we decrease the coupling strength, and that the critical coupling strength q_c, at which the first-order phase transition changes to second order, increases as we increase the clustering coefficient.
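A localized attack, which removes a seed node and its neighborhood layer by layer, can be contrasted with random removal in a small simulation. The sketch below is a generic illustration on an Erdős-Rényi graph, not the author's generating-function framework; sizes and parameters are arbitrary.

```python
import random
from collections import deque

random.seed(42)

def er_graph(n, k_avg):
    """Adjacency sets of an Erdos-Renyi graph with mean degree k_avg."""
    p = k_avg / (n - 1)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def giant_fraction(adj, removed):
    """Largest connected component size after removal, as a fraction of n."""
    n, seen, best = len(adj), set(removed), 0
    for s in range(n):
        if s in seen:
            continue
        comp, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, comp)
    return best / n

def localized_attack(adj, frac):
    """Remove a contiguous 'hole': the first frac*n nodes in BFS order."""
    n, target = len(adj), int(frac * len(adj))
    start = max(range(n), key=lambda i: len(adj[i]))  # seed in the giant component
    order, seen, q = [], {start}, deque([start])
    while q and len(order) < target:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return set(order)

n = 1000
adj = er_graph(n, 4.0)
f_loc = giant_fraction(adj, localized_attack(adj, 0.3))
f_rnd = giant_fraction(adj, set(random.sample(range(n), int(0.3 * n))))
print(round(f_loc, 2), round(f_rnd, 2))
```

For ER networks the dissertation's result is that both attacks leave giant components of comparable size, which this kind of simulation can be used to check empirically.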
Decomposing Multifractal Crossovers
Nagy, Zoltan; Mukli, Peter; Herman, Peter; Eke, Andras
2017-01-01
Physiological processes, such as the brain's resting-state electrical activity or hemodynamic fluctuations, exhibit scale-free temporal structuring. However, impacts common in biological systems, such as noise, multiple signal generators, or filtering by a transport function, result in multimodal scaling that cannot be reliably assessed by standard analytical tools that assume unimodal scaling. Here, we present two methods to identify breakpoints or crossovers in multimodal multifractal scaling functions. These methods incorporate the robust iterative fitting approach of the focus-based multifractal formalism (FMF). The first approach (moment-wise scaling range adaptivity) allows for a breakpoint-based adaptive treatment that analyzes segregated scale-invariant ranges. The second method (the scaling function decomposition method, SFD) is a crossover-based design aimed at decomposing signal constituents from multimodal scaling functions resulting from signal addition or co-sampling, such as contamination by uncorrelated fractals. We demonstrated that these methods can handle multimodal, mono- or multifractal, and exact or empirical signals alike. Their precision was numerically characterized on ideal signals, and robust performance was demonstrated on exemplary empirical signals capturing resting-state brain dynamics by near-infrared spectroscopy (NIRS), electroencephalography (EEG), and blood oxygen level-dependent functional magnetic resonance imaging (fMRI-BOLD). The NIRS and fMRI-BOLD low-frequency fluctuations were dominated by a multifractal component over an underlying biologically relevant random noise, thus forming a bimodal signal. The crossover between the EEG signal components was found at the boundary between the δ and θ bands, suggesting an independent generator for the multifractal δ rhythm.
A robust implementation of the SFD method should be regarded as essential for the seamless processing of large volumes of bimodal fMRI-BOLD imaging data, yielding topologies of multifractal metrics free of the masking effect of the underlying random noise. PMID:28798694
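Crossover (breakpoint) detection in a bimodal scaling function can be sketched generically as a two-segment least-squares scan over candidate breakpoints. This is a simplified stand-in for the FMF-based methods described above, on synthetic data with an assumed crossover at log-scale 2.0.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bimodal log-log scaling function: slope 0.9 below the
# crossover, slope 0.3 above it (true breakpoint at x = 2.0), plus noise.
x = np.linspace(0.0, 4.0, 80)
y = np.where(x < 2.0, 0.9 * x, 0.9 * 2.0 + 0.3 * (x - 2.0))
y += rng.normal(0.0, 0.01, x.size)

def find_crossover(x, y):
    """Scan candidate breakpoints; keep the one minimizing the total SSE
    of two independent straight-line fits (left and right segments)."""
    best_sse, best_x = np.inf, None
    for k in range(5, x.size - 5):        # keep at least 5 points per side
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            sse += np.sum((ys - np.polyval(coef, xs)) ** 2)
        if sse < best_sse:
            best_sse, best_x = sse, x[k]
    return best_x

print(round(find_crossover(x, y), 1))
```

The recovered breakpoint separates the two scale-invariant ranges, which can then be fitted and interpreted independently, in the spirit of the moment-wise scaling range adaptivity described above.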
Elzanfaly, Eman S; Hegazy, Maha A; Saad, Samah S; Salem, Maissa Y; Abd El Fattah, Laila E
2015-03-01
The introduction of sustainable development concepts to analytical laboratories has recently gained interest; however, most conventional high-performance liquid chromatography methods consider neither the environmental effect of the chemicals used nor the amount of waste produced. The aim of this work was to prove that conventional methods can be replaced by greener ones with the same analytical parameters. The suggested methods were designed so that they neither use nor produce harmful chemicals and produce minimum waste, to be used in routine analysis without harming the environment. This was achieved by using green mobile phases and short run times. Four mixtures were chosen as models for this study: clidinium bromide/chlordiazepoxide hydrochloride, phenobarbitone/pipenzolate bromide, mebeverine hydrochloride/sulpiride, and chlorphenoxamine hydrochloride/caffeine/8-chlorotheophylline, either in their bulk powder or in their dosage forms. The methods were validated with respect to linearity, precision, accuracy, system suitability, and robustness. The developed methods were compared to the reported conventional high-performance liquid chromatography methods regarding their greenness profile. The suggested methods were found to be greener and more time- and solvent-saving than the reported ones; hence they can be used for routine analysis of the studied mixtures without harming the environment. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munson, Matthew S.; Karp, Eric M.; Nimlos, Claire T.
2016-09-27
Biomass conversion processes such as pretreatment, liquefaction, and pyrolysis often produce complex mixtures of intermediates that are a substantial challenge to analyze rapidly and reliably. To characterize these streams more comprehensively and efficiently, new techniques are needed to track species through biomass deconstruction and conversion processes. Here, we present the application of an emerging analytical method, gradient elution moving boundary electrophoresis (GEMBE), to quantify a suite of acids in complex, biomass-derived streams from alkaline pretreatment of corn stover. GEMBE offers distinct advantages over common chromatography-spectrometry analytical approaches in terms of analysis time, sample preparation requirements, and cost of equipment. As demonstrated here, GEMBE is able to track 17 distinct compounds (oxalate, formate, succinate, malate, acetate, glycolate, protocatechuate, 3-hydroxypropanoate, lactate, glycerate, 2-hydroxybutanoate, 4-hydroxybenzoate, vanillate, p-coumarate, ferulate, sinapate, and acetovanillone). The lower limit of detection was compound dependent and ranged between 0.9 and 3.5 µmol/L. Results from GEMBE were similar to recent results from an orthogonal method based on GCxGC-TOF/MS. Altogether, GEMBE offers a rapid, robust approach to analyze complex biomass-derived samples, and given the ease and convenience of deployment, may offer an analytical solution for online tracking of multiple types of biomass streams.
4-Nonylphenol (NP) in food-contact materials: analytical methodology and occurrence.
Fernandes, A R; Rose, M; Charlton, C
2008-03-01
Nonylphenol is a recognized environmental contaminant, but it is unclear whether its occurrence in food arises only through environmental pathways or also during the processing or packaging of food, as there are reports that indicate that materials in contact with food such as rubber products and polyvinylchloride wraps can contain nonylphenol. A review of the literature has highlighted the scarcity of robust analytical methodology or data on the occurrence of nonylphenol in packaging materials. This paper describes a methodology for the determination of nonylphenol in a variety of packaging materials, which includes plastics, paper and rubber. The method uses either Soxhlet extraction or dissolution followed by solvent extraction (depending on the material type), followed by purification using adsorption chromatography. Procedures were internally standardized using 13C-labelled nonylphenol and the analytes were measured by gas chromatography-mass spectrometry. The method is validated and data relating to quality parameters such as limits of detection, recovery, precision and linearity of measurement are provided. Analysis of a range of 25 food-contact materials found nonylphenol at concentrations of 64-287 microg g(-1) in some polystyrene and polyvinylchloride samples. Far lower concentrations (<0.03-1.4 microg g(-1)) were detected in the other materials. It is possible that occurrence at the higher levels has the potential for migration to food.
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.
2017-09-01
In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulations of multiphase flows by the level set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method, together with the free energy principle, the FESF model offers an explicit and analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level set method. On the one hand, compared to the conventional continuum surface force (CSF) model in the level set method, the FESF model introduces no regularized delta function, so it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, compared to the phase-field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model performs better than the CSF and PFSF models in terms of accuracy, stability, convergence speed and mass conservation. It is also shown in numerical tests that the FESF model can effectively simulate problems with high density/viscosity ratios, high Reynolds numbers and severe topological interfacial changes.
An analytical method for computing atomic contact areas in biomolecules.
Mach, Paul; Koehl, Patrice
2013-01-15
We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
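The cap that a single neighboring sphere cuts on the surface of atom i has a closed-form area, which is the basic analytical ingredient behind such contact-area computations; the Laguerre Voronoi partition among multiple overlapping neighbors, as used in the paper, is not shown in this sketch.

```python
import math

def cap_area_on_sphere1(r1, r2, d):
    """Area of the spherical cap that sphere 2 cuts on the surface of sphere 1.

    Valid for intersecting spheres: |r1 - r2| < d < r1 + r2. The intersection
    circle lies in a plane at distance x = (d^2 + r1^2 - r2^2) / (2 d) from
    center 1, so the cap height is h = r1 - x and the cap area is 2 pi r1 h.
    """
    x = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)
    h = r1 - x
    return 2.0 * math.pi * r1 * h

# Two equal unit spheres with centers 1.0 apart: x = 0.5, h = 0.5,
# so the cap area is 2 * pi * 1 * 0.5 = pi.
print(round(cap_area_on_sphere1(1.0, 1.0, 1.0), 5))
```

Summing such caps over all dual-complex neighbors, and then partitioning overlaps on the sphere surface, yields per-neighbor contact areas in the spirit of the method described above.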
Li, Mingjie; Zhou, Ping; Zhao, Zhicheng; Zhang, Jinggang
2016-03-01
Recently, fractional order (FO) processes with dead-time have attracted more and more attention from researchers in the control field, but the FO-PID controller design techniques available for FO processes with dead-time lack direct systematic approaches. In this paper, a simple design and parameter tuning approach for a two-degree-of-freedom (2-DOF) FO-PID controller based on internal model control (IMC) is proposed for FO processes with dead-time. Conventional one-degree-of-freedom control exhibits the shortcoming of coupling robustness with dynamic response performance; 2-DOF control overcomes this weakness by decoupling robustness and dynamic performance from each other. The adjustable parameter η2 of the FO-PID controller is directly related to the robustness of the closed-loop system, and an analytical expression is given relating the maximum sensitivity specification Ms and the parameter η2. In addition, according to the dynamic performance requirement of the practical system, the parameter η1 can also be selected easily. By approximating the dead-time term of the process model with a first-order Padé or Taylor series, expressions for the 2-DOF FO-PID controller parameters are derived for three classes of FO processes with dead-time. Moreover, compared with other methods, the proposed method is simple and easy to implement. Finally, simulation results are given to illustrate the effectiveness of this method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Three-Component Soliton States in Spinor F =1 Bose-Einstein Condensates
NASA Astrophysics Data System (ADS)
Bersano, T. M.; Gokhroo, V.; Khamehchi, M. A.; D'Ambroise, J.; Frantzeskakis, D. J.; Engels, P.; Kevrekidis, P. G.
2018-02-01
Dilute-gas Bose-Einstein condensates are an exceptionally versatile test bed for the investigation of novel solitonic structures. While matter-wave solitons in one- and two-component systems have been the focus of intense research efforts, an extension to three components has never been attempted in experiments. Here, we experimentally demonstrate the existence of robust dark-bright-bright (DBB) and dark-dark-bright solitons in a multicomponent F =1 condensate. We observe lifetimes on the order of hundreds of milliseconds for these structures. Our theoretical analysis, based on a multiscale expansion method, shows that small-amplitude solitons of these types obey universal long-short wave resonant interaction models, namely, Yajima-Oikawa systems. Our experimental and analytical findings are corroborated by direct numerical simulations highlighting the persistence of, e.g., the DBB soliton states, as well as their robust oscillations in the trap.
NASA Astrophysics Data System (ADS)
Adam, A. M. A.; Bashier, E. B. M.; Hashim, M. H. A.; Patidar, K. C.
2017-07-01
In this work, we design and analyze a fitted numerical method to solve a reaction-diffusion model with time delay, namely, a delayed version of a population model which is an extension of the logistic growth (LG) equation for a food-limited population proposed by Smith [F.E. Smith, Population dynamics in Daphnia magna and a new model for population growth, Ecology 44 (1963) 651-663]. Since a closed-form analytical solution is hard to obtain, we seek a robust numerical method. The method consists of a Fourier-pseudospectral semi-discretization in space and a fitted operator implicit-explicit scheme in the temporal direction. The proposed method is analyzed for convergence and we found that it is unconditionally stable. Illustrative numerical results will be presented at the conference.
Model-based ultrasound temperature visualization during and following HIFU exposure.
Ye, Guoliang; Smith, Penny Probert; Noble, J Alison
2010-02-01
This paper describes the application of signal processing techniques to improve the robustness of ultrasound feedback for displaying changes in temperature distribution in treatment using high-intensity focused ultrasound (HIFU), especially at the low signal-to-noise ratios that might be expected in in vivo abdominal treatment. Temperature estimation is based on the local displacements in ultrasound images taken during HIFU treatment, and a method to improve robustness to outliers is introduced. The main contribution of the paper is in the application of a Kalman filter, a statistical signal processing technique, which uses a simple analytical temperature model of heat dispersion to improve the temperature estimation from the ultrasound measurements during and after HIFU exposure. To reduce the sensitivity of the method to previous assumptions on the material homogeneity and signal-to-noise ratio, an adaptive form is introduced. The method is illustrated using data from HIFU exposure of ex vivo bovine liver. A particular advantage of the stability it introduces is that the temperature can be visualized not only in the intervals between HIFU exposure but also, for some configurations, during the exposure itself. 2010 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
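The Kalman-filter step described above can be sketched in scalar form. The cooling-toward-ambient model below is a hypothetical stand-in for the paper's heat-dispersion model, and all parameter names are illustrative:

```python
def kalman_step(T_est, P, z, a, T_amb, Q, R):
    """One predict/update cycle of a scalar Kalman filter.
    Assumed model: T[k+1] = T_amb + a * (T[k] - T_amb) + process noise (var Q).
    Measurement z: noisy temperature from ultrasound displacement (var R)."""
    # predict: relax toward ambient, inflate uncertainty by process noise
    T_pred = T_amb + a * (T_est - T_amb)
    P_pred = a * a * P + Q
    # update: blend prediction and measurement by the Kalman gain
    K = P_pred / (P_pred + R)              # gain lies in (0, 1)
    T_new = T_pred + K * (z - T_pred)
    P_new = (1.0 - K) * P_pred
    return T_new, P_new
```

An adaptive variant, as used in the paper to reduce sensitivity to homogeneity and SNR assumptions, would additionally re-estimate Q and R online from the innovation sequence.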
Ihssane, B; Bouchafra, H; El Karbane, M; Azougagh, M; Saffaj, T
2016-05-01
We propose in this work an efficient way to evaluate the measurement uncertainty at the end of the development step of an analytical method, since this assessment provides an indication of the performance of the optimization process. The uncertainty is estimated through a robustness test applying a Plackett-Burman design, investigating six parameters influencing the simultaneous chromatographic assay of five water-soluble vitamins. The estimated effects of the variation of each parameter are translated into a standard uncertainty value at each concentration level. The relative uncertainty values obtained do not exceed the acceptance limit of 5%, showing that the procedure development was well done. In addition, a statistical comparison between the standard uncertainties after the development stage and those of the validation step indicates that the estimated uncertainties are equivalent. The results obtained clearly show the performance and capacity of the chromatographic method to simultaneously assay the five vitamins and its suitability for use in routine application. Copyright © 2015 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
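The effect-to-uncertainty translation rests on computing factor effects from an orthogonal two-level design. A minimal sketch using a Sylvester-type Hadamard matrix as the two-level design (the paper's actual Plackett-Burman layout and its uncertainty formula may differ):

```python
import numpy as np

def pb_design(n_runs=8):
    """Two-level orthogonal design: columns of a Sylvester Hadamard matrix
    (the all-ones first column dropped), giving n_runs-1 factor columns of +/-1."""
    H = np.array([[1.0]])
    while H.shape[0] < n_runs:
        H = np.block([[H, H], [H, -H]])
    return H[:, 1:]

def factor_effects(X, y):
    """Effect of each factor: mean response at +1 minus mean response at -1."""
    n = X.shape[0]
    return X.T @ y / (n / 2.0)
```

Squaring the effects of the deliberately varied parameters and combining them (one common convention is u² = Σ(effect/2)²) gives the robustness contribution to the standard uncertainty at each concentration level.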
Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko; Ozawa, Takeaki
2018-05-31
Body fluid (BF) identification is a critical part of a criminal investigation because of its ability to suggest how the crime was committed and to provide reliable origins of DNA. In contrast to current methods using serological and biochemical techniques, vibrational spectroscopic approaches provide alternative advantages for forensic BF identification, such as non-destructivity and versatility for various BF types and analytical interests. However, unexplored issues remain for its practical application to forensics; for example, a specific BF needs to be discriminated from all other suspicious materials as well as other BFs, and the method should be applicable even to aged BF samples. Herein, we describe an innovative modeling method for discriminating the ATR FT-IR spectra of various BFs, including peripheral blood, saliva, semen, urine and sweat, to meet the practical demands described above. Spectra from unexpected non-BF samples were efficiently excluded as outliers by adopting the Q-statistics technique. The robustness of the models against aged BFs was significantly improved by using the discrimination scheme of a dichotomous classification tree with hierarchical clustering. The present study advances the use of vibrational spectroscopy and a chemometric strategy for forensic BF identification.
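The Q-statistic used above to exclude unexpected non-BF samples is the squared residual of a spectrum off the principal-component subspace of the training set. A compact sketch (the threshold policy and component count here are illustrative, not the paper's):

```python
import numpy as np

def fit_pca(X, n_comp):
    """Mean and top principal directions of training spectra X (rows = samples)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_comp]

def q_statistic(x, mu, V):
    """Squared reconstruction residual of sample x off the PC subspace:
    large Q means the spectrum is not explained by the model -> outlier."""
    c = x - mu
    r = c - V.T @ (V @ c)
    return float(r @ r)
```

In practice a threshold (e.g., an upper percentile of the training-set Q values) separates in-model BF spectra from out-of-model materials.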
NASA Astrophysics Data System (ADS)
Smith, Katharine A.; Schlag, Zachary; North, Elizabeth W.
2018-07-01
Coupled three-dimensional circulation and biogeochemical models predict changes in water properties that can be used to define fish habitat, including physiologically important parameters such as temperature, salinity, and dissolved oxygen. However, methods for calculating the volume of habitat defined by the intersection of multiple water properties are not well established for coupled three-dimensional models. The objectives of this research were to examine multiple methods for calculating habitat volume from three-dimensional model predictions, select the most robust approach, and provide an example application of the technique. Three methods were assessed: the "Step," "Ruled Surface", and "Pentahedron" methods, the latter of which was developed as part of this research. Results indicate that the analytical Pentahedron method is exact, computationally efficient, and preserves continuity in water properties between adjacent grid cells. As an example application, the Pentahedron method was implemented within the Habitat Volume Model (HabVol) using output from a circulation model with an Arakawa C-grid and physiological tolerances of juvenile striped bass (Morone saxatilis). This application demonstrates that the analytical Pentahedron method can be successfully applied to calculate habitat volume using output from coupled three-dimensional circulation and biogeochemical models, and it indicates that the Pentahedron method has wide application to aquatic and marine systems for which these models exist and physiological tolerances of organisms are known.
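The core idea of computing habitat volume analytically from linearly varying model fields can be illustrated in one dimension: given a property that varies linearly over a vertical cell, the thickness of the sub-interval satisfying a tolerance window has a closed form. This is a 1-D simplification in the spirit of the paper's 3-D Pentahedron construction, with illustrative names:

```python
def habitat_thickness(v_top, v_bot, z_top, z_bot, v_min, v_max):
    """Thickness of the depth interval within [z_top, z_bot] where a
    linearly interpolated property stays inside [v_min, v_max]."""
    if v_top == v_bot:
        return (z_bot - z_top) if v_min <= v_top <= v_max else 0.0
    def z_at(v):  # invert the linear profile v(z)
        return z_top + (v - v_top) * (z_bot - z_top) / (v_bot - v_top)
    lo, hi = sorted((z_at(v_min), z_at(v_max)))
    lo, hi = max(lo, z_top), min(hi, z_bot)   # clip to the cell
    return max(0.0, hi - lo)
```

Multiplying such analytical thicknesses by cell areas and summing over the grid yields a habitat volume that varies continuously with the tolerance limits, avoiding the step artifacts of cell-counting approaches.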
Frank, Nancy; Bessaire, Thomas; Tarres, Adrienne; Goyon, Alexandre; Delatour, Thierry
2017-11-01
The increasing number of food frauds using exogenous nitrogen-rich adulterants to artificially raise the apparent protein content for economically motivated adulteration has demonstrated the need for a robust analytical methodology. This method should be applicable for quality control in operations covering a wide range of analyte concentrations, able to analyse the high levels usually found in adulteration as well as the low levels due to contamination. The paper describes a LC-MS/MS method covering 14 nitrogen-rich adulterants using a simple and fast sample preparation based on dilution and clean-up by dispersive SPE. Quantification is carried out by isotopic dilution, reaching LOQs of 0.05-0.20 mg/kg in a broad range of food matrices (infant formula, liquid milk, dairy ingredient, high protein meal, cereal, infant cereal, and meat/fish powder). Validation of seven commodity groups was performed according to SANCO 12571/2013, giving satisfactory results that demonstrate the method's fitness for purpose over the validated range at contamination levels. Method ruggedness was further assessed by transferring the developed method to another laboratory devoted to routine testing for quality control. Alongside the method description, emphasis is placed on challenges and problems that appeared during method development and validation; they are discussed in detail and solutions are provided.
Khan, Jenna; Lieberman, Joshua A; Lockwood, Christina M
2017-05-01
microRNAs (miRNAs) hold promise as biomarkers for a variety of disease processes and for determining cell differentiation. These short RNA species are robust, survive harsh treatment and storage conditions and may be extracted from blood and tissue. Pre-analytical variables are critical confounders in the analysis of miRNAs: we elucidate these and identify best practices for minimizing sample variation in blood and tissue specimens. Pre-analytical variables addressed include patient-intrinsic variation, time and temperature from sample collection to storage or processing, processing methods, contamination by cells and blood components, RNA extraction method, normalization, and storage time/conditions. For circulating miRNAs, hemolysis and blood cell contamination significantly affect profiles; samples should be processed within 2 h of collection; ethylene diamine tetraacetic acid (EDTA) is preferred while heparin should be avoided; samples should be "double spun" or filtered; room temperature or 4 °C storage for up to 24 h is preferred; miRNAs are stable for at least 1 year at -20 °C or -80 °C. For tissue-based analysis, warm ischemic time should be <1 h; cold ischemic time (4 °C) <24 h; common fixative used for all specimens; formalin fix up to 72 h prior to processing; enrich for cells of interest; validate candidate biomarkers with in situ visualization. Most importantly, all specimen types should have standard and common workflows with careful documentation of relevant pre-analytical variables.
The Water-Energy-Food Nexus: Advancing Innovative, Policy-Relevant Methods
NASA Astrophysics Data System (ADS)
Crootof, A.; Albrecht, T.; Scott, C. A.
2017-12-01
The water-energy-food (WEF) nexus is rapidly expanding in scholarly literature and policy settings as a novel way to address complex Anthropocene challenges. The nexus approach aims to identify tradeoffs and synergies of water, energy, and food systems, internalize social and environmental impacts, and guide development of cross-sectoral policies. However, a primary limitation of the nexus approach is the absence - or gaps and inconsistent use - of adequate methods to advance an innovative and policy-relevant nexus approach. This paper presents an analytical framework to identify robust nexus methods that align with nexus thinking and highlights innovative nexus methods at the frontier. The current state of nexus methods was assessed with a systematic review of 245 journal articles and book chapters. This review revealed (a) use of specific and reproducible methods for nexus assessment is uncommon - less than one-third of the reviewed studies present explicit methods; (b) nexus methods frequently fall short of capturing interactions among water, energy, and food - the very concept they purport to address; (c) assessments strongly favor quantitative approaches - 70% use primarily quantitative tools; (d) use of social science methods is limited (26%); and (e) many nexus methods are confined to disciplinary silos - only about one-quarter combine methods from diverse disciplines and less than one-fifth utilize both quantitative and qualitative approaches. Despite some pitfalls of current nexus methods, there are a host of studies that offer innovative approaches to help quantify nexus linkages and interactions among sectors, conceptualize dynamic feedbacks, and support mixed method approaches to better understand WEF systems. Applying our analytical framework to all 245 studies, we identify, and analyze herein, seventeen studies that implement innovative multi-method and cross-scalar tools to demonstrate promising advances toward improved nexus assessment. 
This paper finds that, to make the WEF nexus effective as a policy-relevant analytical tool, methods are needed that incorporate social and political dimensions of water, energy, and food; utilize multiple and interdisciplinary approaches; and engage stakeholders and policy-makers.
An efficient and robust method for predicting helicopter rotor high-speed impulsive noise
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1996-01-01
A new formulation for the Ffowcs Williams-Hawkings quadrupole source, which is valid for a far-field in-plane observer, is presented. The far-field approximation is new and unique in that no further approximation of the quadrupole source strength is made and integrands with r^-2 and r^-3 dependence are retained. This paper focuses on the development of a retarded-time formulation in which time derivatives are analytically taken inside the integrals to avoid unnecessary computational work when the observer moves with the rotor. The new quadrupole formulation is similar to Farassat's thickness and loading formulation 1A. Quadrupole noise prediction is carried out in two parts: a preprocessing stage in which the previously computed flow field is integrated in the direction normal to the rotor disk, and a noise computation stage in which quadrupole surface integrals are evaluated for a particular observer position. Preliminary predictions for hover and forward flight agree well with experimental data. The method is robust and requires computer resources comparable to thickness and loading noise prediction.
Plane Smoothers for Multiblock Grids: Computational Aspects
NASA Technical Reports Server (NTRS)
Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane
1999-01-01
Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches to yield robust methods is the combination of standard coarsening with alternating-direction plane relaxation in the three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.
NASA Astrophysics Data System (ADS)
Bai, Wen; Dai, Junwu; Zhou, Huimeng; Yang, Yongqiang; Ning, Xiaoqing
2017-10-01
Porcelain electrical equipment (PEE), such as current transformers, is critical to power supply systems, but its seismic performance during past earthquakes has not been satisfactory. This paper studies the seismic performance of two typical types of PEE and proposes a damping method for PEE based on multiple tuned mass dampers (MTMD). An MTMD damping device involving three mass units, named a triple tuned mass damper (TTMD), is designed and manufactured. Through shake table tests and finite element analysis, the dynamic characteristics of the PEE are studied and the effectiveness of the MTMD damping method is verified. The adverse influence of MTMD redundant mass to damping efficiency is studied and relevant equations are derived. MTMD robustness is verified through adjusting TTMD control frequencies. The damping effectiveness of TTMD, when the peak ground acceleration far exceeds the design value, is studied. Both shake table tests and finite element analysis indicate that MTMD is effective and robust in attenuating PEE seismic responses. TTMD remains effective when the PGA far exceeds the design value and when control deviations are considered.
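For a single tuned mass damper, Den Hartog's classical rules give the optimal frequency ratio and damping from the mass ratio μ; an MTMD such as the paper's TTMD spreads several units around this nominal tuning. A sketch of the classical single-TMD rules (standard textbook formulas, not the paper's TTMD-specific parameters):

```python
import math

def den_hartog_tuning(mu):
    """Classical TMD tuning for mass ratio mu = m_damper / m_structure:
    returns the optimal frequency ratio f_opt (damper freq / structure freq)
    and the optimal damping ratio zeta_opt."""
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu)))
    return f_opt, zeta_opt
```

Detuning the three units of a TTMD slightly above and below f_opt is one common MTMD strategy for robustness against errors in the identified equipment frequency, consistent with the robustness study described above.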
Abd El-Hay, Soad S; Hashem, Hisham; Gouda, Ayman A
2016-03-01
A novel, simple and robust high-performance liquid chromatography (HPLC) method was developed and validated for simultaneous determination of xipamide (XIP), triamterene (TRI) and hydrochlorothiazide (HCT) in their bulk powders and dosage forms. Chromatographic separation was carried out in less than two minutes. The separation was performed on a RP C-18 stationary phase with an isocratic elution system consisting of 0.03 mol L^-1 orthophosphoric acid (pH 2.3) and acetonitrile (ACN) as the mobile phase in the ratio of 50:50, at a 2.0 mL min^-1 flow rate at room temperature. Detection was performed at 220 nm. Validation was performed concerning system suitability, limits of detection and quantitation, accuracy, precision, linearity and robustness. Calibration curves were rectilinear over the range of 0.195-100 μg mL^-1 for all the drugs studied. Recovery values were 99.9, 99.6 and 99.0% for XIP, TRI and HCT, respectively. The method was applied to simultaneous determination of the studied analytes in their pharmaceutical dosage forms.
Evaluation of two methods to determine glyphosate and AMPA in soils of Argentina
NASA Astrophysics Data System (ADS)
De Geronimo, Eduardo; Lorenzon, Claudio; Iwasita, Barbara; Faggioli, Valeria; Aparicio, Virginia; Costa, Jose Luis
2017-04-01
Argentine agricultural production is fundamentally based on a technological package combining no-tillage and dependence on glyphosate applications to control weeds in transgenic crops (soybean, maize and cotton). Glyphosate is therefore the most widely used herbicide in the country, where 180 to 200 million liters are applied every year. Due to its widespread use, it is important to assess its impact on the environment, and reliable analytical methods are therefore mandatory. The glyphosate molecule exhibits unique physical and chemical characteristics that complicate its quantification, especially in soils with high organic matter content, such as the central eastern Argentine soils, where strong interferences are normally observed. The objective of this work was to compare two methods for extraction and quantification of glyphosate and AMPA in samples of 8 representative soils of Argentina. The first analytical method (method 1) was based on the use of a phosphate buffer as the extracting solution and dichloromethane to minimize matrix organic content. In the second method (method 2), potassium hydroxide was used to extract the analytes, followed by a clean-up step using solid phase extraction (SPE) to minimize strong interferences. Sensitivity, recoveries, matrix effects and robustness were evaluated. Both methodologies involved derivatization with 9-fluorenyl-methyl-chloroformate (FMOC) in borate buffer and detection based on ultra-high-pressure liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS). Recoveries obtained from soil samples spiked at 0.1 and 1 mg kg-1 were satisfactory in both methods (70% - 120%). However, there was a remarkable difference regarding the matrix effect: the SPE clean-up step (method 2) was insufficient to remove the interferences, whereas the dilution and the clean-up with dichloromethane (method 1) were more effective in minimizing the ionic suppression.
Moreover, method 1 had fewer steps in the sample-processing protocol than method 2, which can be highly valuable in routine lab work because it reduces potential errors such as loss of analyte or sample contamination. In addition, the substitution of SPE by another alternative brought a considerable reduction of analytical costs in method 1. We conclude that method 1 is simpler and cheaper than method 2, as well as reliable for quantifying glyphosate in Argentinean soils. We hope that this experience can be useful to simplify the protocols of glyphosate quantification and contribute to the understanding of the fate of this herbicide in the environment.
Topologically preserving straightening of spinal cord MRI.
De Leener, Benjamin; Mangeat, Gabriel; Dupont, Sara; Martin, Allan R; Callot, Virginie; Stikov, Nikola; Fehlings, Michael G; Cohen-Adad, Julien
2017-10-01
To propose a robust and accurate method for straightening magnetic resonance (MR) images of the spinal cord, based on spinal cord segmentation, that preserves spinal cord topology and that works for any MRI contrast, in a context of spinal cord template-based analysis. The spinal cord curvature was computed using an iterative Non-Uniform Rational B-Spline (NURBS) approximation. Forward and inverse deformation fields for straightening were computed by solving analytically the straightening equations for each image voxel. Computational speed-up was accomplished by solving all voxel equation systems as one single system. Straightening accuracy (mean and maximum distance from a straight line), computational time, and robustness to spinal cord length were evaluated using the proposed and the standard straightening method (label-based spline deformation) on 3T T2- and T1-weighted images from 57 healthy subjects and 33 patients with spinal cord compression due to degenerative cervical myelopathy (DCM). The proposed algorithm was more accurate, more robust, and faster than the standard method (mean distance = 0.80 vs. 0.83 mm, maximum distance = 1.49 vs. 1.78 mm, time = 71 vs. 174 sec for the healthy population and mean distance = 0.65 vs. 0.68 mm, maximum distance = 1.28 vs. 1.55 mm, time = 32 vs. 60 sec for the DCM population). A novel image straightening method that enables template-based analysis of quantitative spinal cord MRI data is introduced. This algorithm works for any MRI contrast and was validated on healthy and patient populations. The presented method is implemented in the Spinal Cord Toolbox, an open-source software package for processing spinal cord MRI data. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:1209-1219. © 2017 International Society for Magnetic Resonance in Medicine.
On adaptive robustness approach to Anti-Jam signal processing
NASA Astrophysics Data System (ADS)
Poberezhskiy, Y. S.; Poberezhskiy, G. Y.
An effective approach to exploiting statistical differences between desired and jamming signals named adaptive robustness is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combining strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.
Islas, Gabriela; Hernandez, Prisciliano
2017-01-01
To achieve analytical success, it is necessary to develop thorough clean-up procedures to extract analytes from the matrix. Dispersive solid phase extraction (DSPE) has been used as a pretreatment technique for the analysis of several compounds. This technique is based on the dispersion of a solid sorbent in liquid samples for the extraction, isolation and clean-up of different analytes from complex matrices. DSPE has found a wide range of applications in several fields, and it is considered to be a selective, robust, and versatile technique. The applications of dispersive techniques in the analysis of veterinary drugs in different matrices involve magnetic sorbents, molecularly imprinted polymers, carbon-based nanomaterials, and the Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method. Techniques based on DSPE permit minimization of additional steps such as precipitation, centrifugation, and filtration, which decreases the manipulation of the sample. In this review, we describe the main procedures used for synthesis, characterization, and application of this pretreatment technique and how it has been applied to food analysis.
Predictive functional control for active queue management in congested TCP/IP networks.
Bigdeli, N; Haeri, M
2009-01-01
Predictive functional control (PFC) as a new active queue management (AQM) method in dynamic TCP networks supporting explicit congestion notification (ECN) is proposed. The controller's ability to handle system delay, along with its simplicity and low computational load, makes PFC a well-suited AQM method for high-speed networks. Besides, considering the disturbance term (which represents model/process mismatches, external disturbances, and existing noise) in the control formulation adds some level of robustness to the PFC-AQM controller. This is an important and desired property in the control of dynamically varying computer networks. In this paper, the controller is designed based on a small-signal linearized fluid-flow model of the TCP/AQM networks. Then, a closed-loop transfer function representation of the system is derived to analyze robustness with respect to the network and controller parameters. The analytical as well as the packet-level ns-2 simulation results show that the developed controller outperforms other well-known AQM methods such as RED, PI, and REM, which are also simulated for comparison, in both queue regulation and resource utilization. Fast response, low queue fluctuations (and consequently low delay jitter), high link utilization, good disturbance rejection, scalability, and low packet marking probability are other features of the developed method.
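The one-step flavor of predictive functional control for a first-order model can be sketched compactly. The scalar model and parameters below are illustrative, not the paper's linearized TCP/AQM dynamics:

```python
def pfc_control(y, r, a, b, lam):
    """One-step PFC for the model y[k+1] = a*y[k] + b*u[k].
    The reference trajectory closes a fraction (1 - lam) of the gap to the
    setpoint r each step; u is chosen so the model prediction lands on it."""
    y_target = y + (1.0 - lam) * (r - y)
    return (y_target - a * y) / b

def simulate(y0, r, a, b, lam, steps):
    """Closed loop under the idealized assumption that the plant matches the model."""
    y = y0
    for _ in range(steps):
        u = pfc_control(y, r, a, b, lam)
        y = a * y + b * u
    return y
```

With a perfect model the tracking error contracts by the factor `lam` each step; in an AQM setting, y would be the queue length, r the target queue, and u the packet marking probability, with the disturbance term absorbing model mismatch.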
Overall uncertainty measurement for near infrared analysis of cryptotanshinone in tanshinone extract
NASA Astrophysics Data System (ADS)
Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang
2017-01-01
This study presents a new strategy for overall uncertainty measurement in near infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of the NIR analysis was fully investigated and discussed using validation data from precision, trueness and robustness studies. Quality by design (QbD) elements, such as risk assessment and design of experiment (DOE), were utilized to organize the validation data. An "I × J × K" (series I, number of repetitions J and level of concentrations K) full factorial design was used to calculate uncertainty from the precision and trueness data, and a 2^(7-4) Plackett-Burman matrix with four influence factors resulting from the failure mode and effect analysis (FMEA) was adopted for the robustness study. The overall uncertainty profile was introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that the proposed method is reasonable and valid. Moreover, the proposed method can help identify critical factors that influence the NIR prediction performance, which could be used for further optimization of the NIR analytical procedures in routine use.
Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin
2016-06-27
Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analyzing and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
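Refining a peak frequency from three DFT samples is typically done with a complex-bin interpolator such as Jacobsen's estimator; a sketch of that general idea (the paper's exact interpolation formula may differ):

```python
import numpy as np

def refine_frequency(X, k):
    """Refine the frequency of a spectral peak at bin k from the complex
    DFT samples X[k-1], X[k], X[k+1] (Jacobsen-style interpolation).
    Returns the estimated frequency in bins."""
    num = X[k - 1] - X[k + 1]
    den = 2.0 * X[k] - X[k - 1] - X[k + 1]
    delta = (num / den).real      # fractional-bin offset, roughly in (-0.5, 0.5)
    return k + delta
```

Because only three already-computed bins are used, the refinement adds almost no cost on top of the FFT, which matches the low-overhead high-resolution estimation described above.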
Li, Boyan; Ryan, Paul W; Shanahan, Michael; Leister, Kirk J; Ryder, Alan G
2011-11-01
The application of fluorescence excitation-emission matrix (EEM) spectroscopy to the quantitative analysis of complex, aqueous solutions of cell culture media components was investigated. These components, yeastolate, phytone, recombinant human insulin, eRDF basal medium, and four different chemically defined (CD) media, are used for the formulation of basal and feed media employed in the production of recombinant proteins using a Chinese Hamster Ovary (CHO) cell based process. The comprehensive analysis (either identification or quality assessment) of these materials using chromatographic methods is time consuming and expensive and is not suitable for high-throughput quality control. The use of EEM in conjunction with multiway chemometric methods provided a rapid, nondestructive analytical method suitable for the screening of large numbers of samples. Here we used multiway robust principal component analysis (MROBPCA) in conjunction with n-way partial least squares discriminant analysis (NPLS-DA) to develop a robust routine for both the identification and quality evaluation of these important cell culture materials. These methods are applicable to a wide range of complex mixtures because they do not rely on any predetermined compositional or property information, thus making them potentially very useful for sample handling, tracking, and quality assessment in biopharmaceutical industries.
A Robust Analysis Method for the δ13C Signal of Bulk Organic Matter in Speleothems
NASA Astrophysics Data System (ADS)
Bian, F.; Blyth, A. J.; Smith, C.; Baker, A.
2017-12-01
Speleothems preserve organic matter derived from both the surface soil and the cave environment. This organic matter can be used to understand paleoclimates and paleoenvironments. However, a stable and rapid micro-analysis method to measure the δ13C signal of speleothem organic matter separately from the total δ13C remains absent, and speleothem organic geochemistry is still relatively unexplored compared to inorganic geochemistry. In this research, bulk homogeneous powder samples for organic matter analysis were obtained from one large stalagmite. These were dissolved in phosphoric acid to produce an aqueous solution, which was then degassed in a rotational vacuum concentrator. A liquid chromatograph was coupled to an IRMS to control the oxidation and measurement of the analytes. This method is demonstrated to be robust for the analysis of speleothem organic δ13C under different preparation and instrumental settings, with a low standard deviation (0.2‰) and low sample consumption (<25 mg). Considering the complexity of cave environments, this method will be useful in further investigations of the δ13C of entrapped organic matter and its environmental controls in other climatic and ecological contexts, including determining whether vegetation or soil microbial activity is the dominant control on the δ13C of speleothem organic matter.
Khurana, Rajneet Kaur; Rao, Satish; Beg, Sarwar; Katare, O.P.; Singh, Bhupinder
2016-01-01
The present work aims at the systematic development of a simple, rapid and highly sensitive densitometry-based thin-layer chromatographic method for the quantification of mangiferin in bioanalytical samples. Initially, the quality target method profile was defined and critical analytical attributes (CAAs) earmarked, namely, retardation factor (Rf), peak height, capacity factor, theoretical plates and separation number. Face-centered cubic design was selected for optimization of volume loaded and plate dimensions as the critical method parameters selected from screening studies employing D-optimal and Plackett–Burman design studies, followed by evaluating their effect on the CAAs. The mobile phase containing a mixture of ethyl acetate : acetic acid : formic acid : water in a 7 : 1 : 1 : 1 (v/v/v/v) ratio was finally selected as the optimized solvent for apt chromatographic separation of mangiferin at 262 nm with Rf 0.68 ± 0.02 and all other parameters within the acceptance limits. Method validation studies revealed high linearity in the concentration range of 50–800 ng/band for mangiferin. The developed method showed high accuracy, precision, ruggedness, robustness, specificity, sensitivity, selectivity and recovery. In a nutshell, the bioanalytical method for analysis of mangiferin in plasma revealed the presence of well-resolved peaks and high recovery of mangiferin. PMID:26912808
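Assuming the "face-centered cubic design" above refers to a face-centred central composite design, its run layout (factorial, axial with α = 1, and center points) can be generated generically; a sketch in coded units, with run counts that are illustrative rather than the authors' actual design:

```python
from itertools import product

def face_centered_ccd(n_factors, n_center=3):
    """Face-centred central composite design in coded units (alpha = 1)."""
    factorial = [list(p) for p in product((-1, 1), repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for level in (-1, 1):
            pt = [0] * n_factors
            pt[i] = level  # axial point on the face of the cube
            axial.append(pt)
    center = [[0] * n_factors for _ in range(n_center)]
    return factorial + axial + center

# Two factors, e.g. volume loaded and plate dimension, in coded units
design = face_centered_ccd(2)
# 4 factorial + 4 axial + 3 center points = 11 runs, all within [-1, 1]
```

Keeping α = 1 means every run stays inside the original factor ranges, which is convenient when extreme settings are impractical.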
Esterhuizen-Londt, Maranda; Schwartz, Katrin; Balsano, Evelyn; Kühn, Sandra; Pflugmacher, Stephan
2016-06-01
Acetaminophen is a pharmaceutical frequently found in surface water as a contaminant. Bioremediation, in particular mycoremediation, of acetaminophen is a method to remove this compound from waters. Owing to the lack of a quantitative analytical method for acetaminophen in aquatic organisms, the present study aimed to develop an LC-MS/MS method for its determination in the aquatic fungus Mucor hiemalis. The method was then applied to evaluate the uptake of acetaminophen by M. hiemalis cultured in pellet morphology. The method was robust, sensitive and reproducible, with a lower limit of quantification of 5 pg acetaminophen on column. It was found that M. hiemalis internalizes the pharmaceutical and bioaccumulates it with time. Therefore, M. hiemalis was deemed a suitable candidate for further studies to elucidate its pharmaceutical tolerance and longevity in mycoremediation applications. Copyright © 2016 Elsevier Inc. All rights reserved.
Locatelli, Marcello; Cifelli, Roberta; Di Legge, Cristina; Barbacane, Renato Carmine; Costa, Nicola; Fresta, Massimo; Celia, Christian; Capolupo, Carlo; Di Marzio, Luisa
2015-04-03
This paper reports the validation of a quantitative high performance liquid chromatography-photodiode array (HPLC-PDA) method for the simultaneous analysis, in mouse plasma, of eperisone hydrochloride and paracetamol after protein precipitation with zinc sulphate-methanol-acetonitrile. The analytes were resolved on a Gemini C18 column (4.6 mm × 250 mm; 5 μm particle size) in gradient elution mode with a run time of 15 min, including re-equilibration, at 60°C (±1°C). The method was validated over the concentration range of 0.5-25 μg/mL for eperisone hydrochloride and paracetamol in mouse plasma, with ciprofloxacin as the internal standard. Results from the assay validation show that the method is selective, sensitive and robust. The limit of quantification was 0.5 μg/mL for both analytes, and matrix-matched standard curves showed good linearity up to 25 μg/mL, with correlation coefficients (r²) ≥ 0.9891. Over the entire analytical range, the intra- and inter-day precision (RSD%) values were ≤1.15% and ≤1.46% for eperisone hydrochloride, and ≤0.35% and ≤1.65% for paracetamol. For both analytes the intra- and inter-day trueness (bias%) values ranged, respectively, from -5.33% to 4.00% and from -11.4% to -4.00%. The method was successfully tested in pharmacokinetic studies after oral administration in mice. Furthermore, its application results in a significant reduction in animal numbers and dosage, and an improvement in the speed and rate of analysis and in the quality of pharmacokinetic parameters related to serial blood sampling. Copyright © 2015 Elsevier B.V. All rights reserved.
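Calibration linearity figures of this kind reduce to an ordinary least-squares fit and its coefficient of determination; a self-contained sketch with hypothetical concentration/response pairs (not the paper's data):

```python
def linear_fit(x, y):
    """Ordinary least-squares calibration line and coefficient of determination."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical matrix-matched calibration over 0.5-25 µg/mL
conc = [0.5, 1, 2.5, 5, 10, 25]          # µg/mL
area = [10.2, 20.5, 50.9, 101.8, 204.1, 509.5]  # detector response (arbitrary units)
slope, intercept, r2 = linear_fit(conc, area)
```

Back-calculating each standard through the fitted line then gives the per-level trueness (bias%) values reported in validations like this one.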
Piehl, Susanne; Heberer, Thomas; Balizs, Gabor; Scanlan, Thomas S; Köhrle, Josef
2008-10-01
Thyronines (THs) and thyronamines (TAMs) are two groups of endogenous iodine-containing signaling molecules whose representatives differ from each other only regarding the number and/or the position of the iodine atoms. Both groups of compounds are substrates of three deiodinase isozymes, which catalyze the sequential reductive removal of iodine from the respective precursor molecule. In this study, a novel analytical method applying liquid chromatography/tandem mass spectrometry (LC-MS/MS) was developed. This method permitted the unequivocal, simultaneous identification and quantification of all THs and TAMs in the same biological sample. Furthermore, a liquid-liquid extraction procedure permitting the concurrent isolation of all THs and TAMs from biological matrices, namely deiodinase (Dio) reaction mixtures, was established. Method validation experiments with extracted TH and TAM analytes demonstrated that the method was selective, devoid of matrix effects, sensitive, linear over a wide range of analyte concentrations and robust in terms of reproducible recoveries, process efficiencies as well as intra-assay and inter-assay stability parameters. The method was applied to study the deiodination reactions of iodinated THs catalyzed by the three deiodinase isozymes. With the HPLC protocol developed herein, sufficient chromatographic separation of all constitutional TH and TAM isomers was achieved. Accordingly, the position of each iodine atom removed from a TH substrate in a Dio-catalyzed reaction was backtracked unequivocally. While several established deiodination reactions were verified, two as yet unknown reactions, namely the phenolic ring deiodination of 3',5'-diiodothyronine (3',5'-T2) by Dio2 and the tyrosyl ring deiodination of 3-monoiodothyronine (3-T1) by Dio3, were newly identified.
Chen, Yi-Ting; Chen, Hsiao-Wei; Wu, Chun-Feng; Chu, Lichieh Julie; Chiang, Wei-Fang; Wu, Chih-Ching; Yu, Jau-Song; Tsai, Cheng-Han; Liang, Kung-Hao; Chang, Yu-Sun; Wu, Maureen; Ou Yang, Wei-Ting
2017-05-01
Multiple (selected) reaction monitoring (MRM/SRM) of peptides is a growing technology for target protein quantification because it is more robust, precise, accurate, high-throughput, and multiplex-capable than antibody-based techniques. The technique has been applied clinically to the large-scale quantification of multiple target proteins in different types of fluids. However, previous MRM-based studies have placed less focus on sample-preparation workflow and analytical performance in the precise quantification of proteins in saliva, a noninvasively sampled body fluid. In this study, we evaluated the analytical performance of a simple and robust multiple reaction monitoring (MRM)-based targeted proteomics approach incorporating liquid chromatography with mass spectrometry detection (LC-MRM/MS). This platform was used to quantitatively assess the biomarker potential of a group of 56 salivary proteins that have previously been associated with human cancers. To further enhance the development of this technology for assay of salivary samples, we optimized the workflow for salivary protein digestion and evaluated quantification performance, robustness and technical limitations in analyzing clinical samples. Using a clinically well-characterized cohort of two independent clinical sample sets (total n = 119), we quantitatively characterized these protein biomarker candidates in saliva specimens from controls and oral squamous cell carcinoma (OSCC) patients. The results clearly showed a significant elevation of most targeted proteins in saliva samples from OSCC patients compared with controls. Overall, this platform was capable of assaying the most highly multiplexed panel of salivary protein biomarkers, highlighting the clinical utility of MRM in oral cancer biomarker research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
NASA Astrophysics Data System (ADS)
Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi
2017-07-01
The current manufacturing environment has changed from the traditional single plant to multi-site supply chains where multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and of the proposed approach is discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
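The Pareto front mentioned above is the set of non-dominated plans, which a simple dominance filter can extract; an illustrative sketch (minimization convention, with maximized objectives negated), not the authors' actual solver:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical plans: (total cost, -quality, -demand satisfaction level)
plans = [(100, -0.90, -0.95), (120, -0.95, -0.90), (110, -0.85, -0.80)]
front = pareto_front(plans)  # the third plan is dominated by the first
```

A ranking step such as the analytic hierarchy process is then needed precisely because the filter leaves several incomparable trade-off solutions.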
Biosimilars: The US Regulatory Framework.
Christl, Leah A; Woodcock, Janet; Kozlowski, Steven
2017-01-14
With the passage of the Biologics Price Competition and Innovation Act of 2009, the US Food and Drug Administration established an abbreviated pathway for developing and licensing biosimilar and interchangeable biological products. The regulatory framework and the technical requirements of the US biosimilars program involve a stepwise approach that relies heavily on analytical methods to demonstrate through a "totality of the evidence" that a proposed product is biosimilar to its reference product. By integrating analytical, pharmacological, and clinical data, each of which has limitations, a high level of confidence can be reached regarding clinical performance. Although questions and concerns about the biosimilars pathway remain and may slow uptake, a robust scientific program has been put in place. With three biosimilars already licensed and numerous development programs under way, clinicians can expect to see many new biosimilars come onto the US market in the coming decade. [Note added in proof: Since the writing of this article, a fourth biosimilar has been approved.].
Exact PDF equations and closure approximations for advective-reactive transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venturi, D.; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.
2013-06-01
Mathematical models of advection–reaction phenomena rely on advective flow velocity and (bio)chemical reaction rates that are notoriously random. By using functional integral methods, we derive exact evolution equations for the probability density function (PDF) of the state variables of the advection–reaction system in the presence of random transport velocity and random reaction rates with rather arbitrary distributions. These PDF equations are solved analytically for transport with deterministic flow velocity and a linear reaction rate represented mathematically by a heterogeneous and strongly-correlated random field. Our analytical solution is then used to investigate the accuracy and robustness of the recently proposed large-eddy diffusivity (LED) closure approximation [1]. We find that the solution to the LED-based PDF equation, which is exact for uncorrelated reaction rates, is accurate even in the presence of strong correlations, and it provides an upper bound of predictive uncertainty.
Robust stability of fractional order polynomials with complicated uncertainty structure
Şenol, Bilal; Pekař, Libor
2017-01-01
The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173
Zeleny, Reinhard; Harbeck, Stefan; Schimmel, Heinz
2009-01-09
A liquid chromatography-electrospray ionisation tandem mass spectrometry method for the simultaneous detection and quantitation of 5-nitroimidazole veterinary drugs in lyophilised pork meat, the chosen format of a candidate certified reference material, has been developed and validated. Six analytes were included in the scope of validation: dimetridazole (DMZ), metronidazole (MNZ), ronidazole (RNZ), hydroxymetronidazole (MNZOH), hydroxyipronidazole (IPZOH), and 2-hydroxymethyl-1-methyl-5-nitroimidazole (HMMNI). The analytes were extracted from the sample with ethyl acetate, chromatographically separated on a C18 column, and finally identified and quantified by tandem mass spectrometry in multiple reaction monitoring (MRM) mode using matrix-matched calibration and ²H₃-labelled analogues of the analytes (except for MNZOH, for which [²H₃]MNZ was used). The method was validated in accordance with Commission Decision 2002/657/EC by determining selectivity, linearity, matrix effect, apparent recovery, repeatability and intermediate precision, decision limits and detection capabilities, robustness of the sample preparation method, and stability of extracts. Recovery at the 1 µg/kg level was close to 100% (estimates in the range of 101-107%) for all analytes; repeatabilities and intermediate precisions at this level were in the ranges of 4-12% and 2-9%, respectively. Linearity of the calibration curves in the working range 0.5-10 µg/kg was confirmed, with r values typically >0.99. Decision limits (CCα) and detection capabilities (CCβ) according to ISO 11843-2 (calibration curve approach) were 0.29-0.44 and 0.36-0.54 µg/kg, respectively.
The method reliably identifies and quantifies the selected nitroimidazoles in the reconstituted pork meat in the low and sub-microg/kg range and will be applied in an interlaboratory comparison for determining the mass fraction of the selected nitroimidazoles in the candidate reference material currently developed at IRMM.
A Requirements-Driven Optimization Method for Acoustic Treatment Design
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
2016-01-01
Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.
Zhang, Fen-Fen; Jiang, Meng-Hong; Sun, Lin-Lin; Zheng, Feng; Dong, Lei; Shah, Vishva; Shen, Wen-Bin; Ding, Ya
2015-01-07
To expand the application scope of nuclear magnetic resonance (NMR) technology in the quantitative analysis of pharmaceutical ingredients, ¹⁹F nuclear magnetic resonance (¹⁹F-NMR) spectroscopy has been employed as a simple, rapid, and reproducible approach for the detection of a fluorine-containing model drug, sitagliptin phosphate monohydrate (STG). Ciprofloxacin (Cipro) was used as the internal standard (IS). Influential factors impacting the accuracy and precision of the spectral data, including the relaxation delay time (d1) and pulse angle, were systematically optimized. Method validation was carried out in terms of precision and intermediate precision, linearity, limit of detection (LOD) and limit of quantification (LOQ), robustness, and stability. To validate the reliability and feasibility of ¹⁹F-NMR in the quantitative analysis of pharmaceutical analytes, the assay result was compared with that of ¹H-NMR. The statistical F-test and Student's t-test at the 95% confidence level indicate that there is no significant difference between the two methods. Owing to the advantages of ¹⁹F-NMR, such as higher resolution and suitability for biological samples, it can be used as a universal technology for the quantitative analysis of other fluorine-containing pharmaceuticals and analytes.
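Internal-standard qNMR assays of this kind rest on a simple ratio formula; a sketch of the standard relation, with all numeric inputs (integrals, proton/fluorine counts, molar masses, masses) chosen purely for illustration rather than taken from this study:

```python
def qnmr_content(area_ratio, n_is, n_a, mw_a, mw_is, mass_is, purity_is=1.0):
    """Analyte mass from the integral ratio against an internal standard:
    m_a = (I_a / I_IS) * (N_IS / N_a) * (M_a / M_IS) * m_IS * P_IS
    where I = integrated signal area, N = nuclei per molecule contributing
    to the signal, M = molar mass, m = weighed mass, P = purity."""
    return area_ratio * (n_is / n_a) * (mw_a / mw_is) * mass_is * purity_is

# Hypothetical numbers for illustration only
mass = qnmr_content(area_ratio=1.25, n_is=2, n_a=3, mw_a=523.3, mw_is=331.3,
                    mass_is=10.0, purity_is=0.998)
```

Because the response per nucleus is uniform in NMR, no analyte-specific calibration curve is needed once the IS purity is known, which is the practical appeal of the technique.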
Fu, Li; Merabia, Samy; Joly, Laurent
2018-04-19
Following our recent theoretical prediction of the giant thermo-osmotic response of the water-graphene interface, we explore the practical implementation of waste heat harvesting with carbon-based membranes, focusing on model membranes of carbon nanotubes (CNT). To that aim, we combine molecular dynamics simulations and an analytical model considering the details of hydrodynamics in the membrane and at the tube entrances. The analytical model and the simulation results match quantitatively, highlighting the need to take into account both thermodynamics and hydrodynamics to predict thermo-osmotic flows through membranes. We show that, despite viscous entrance effects and a thermal short-circuit mechanism, CNT membranes can generate very fast thermo-osmotic flows, which can overcome the osmotic pressure of seawater. We then show that in small tubes confinement has a complex effect on the flow and can even reverse the flow direction. Beyond CNT membranes, our analytical model can guide the search for other membranes to generate fast and robust thermo-osmotic flows.
Barth, Aline Bergesch; de Oliveira, Gabriela Bolfe; Malesuik, Marcelo Donadel; Paim, Clésio Soldatelli; Volpato, Nadia Maria
2011-08-01
A stability-indicating liquid chromatography method for the determination of the antifungal agent butenafine hydrochloride (BTF) in a cream was developed and validated, using the Plackett-Burman experimental design for robustness evaluation. The drug photodegradation kinetics was also determined. The analytical column was operated with acetonitrile, methanol and a solution of 0.3% triethylamine adjusted to pH 4.0 (6:3:1) at a flow rate of 1 mL/min, with detection at 283 nm. BTF extraction from the cream was done with n-butyl alcohol and methanol in an ultrasonic bath. The degradation conditions were: acidic and basic media (1 M HCl and 1 M NaOH, respectively), oxidation with 10% H₂O₂, and exposure to UV-C light. No interference in the BTF elution was verified. Linearity was assessed (r² = 0.9999) and ANOVA showed no significant deviation from linearity (p > 0.05). Adequate results were obtained for repeatability, intra-day precision, and accuracy. Critical factors were selected to examine the method robustness with the two-level Plackett-Burman experimental design, and no significant factors were detected (p > 0.05). The BTF photodegradation kinetics was determined for the standard and for the cream, both in methanolic solution, under UV light at 254 nm. In both cases the degradation process can be described by first-order kinetics.
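Two-level screening matrices such as the Plackett-Burman design used here are, for run counts that are powers of two, columns of a Hadamard matrix; a sketch constructing an 8-run layout for up to 7 factors by Sylvester doubling (illustrative, not the authors' actual matrix):

```python
def sylvester_hadamard(n):
    """Hadamard matrix of order n (n a power of two) by Sylvester doubling."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

H8 = sylvester_hadamard(8)
# Drop the all-ones column: 7 mutually orthogonal two-level factor columns
# in 8 runs; unused columns serve as dummy factors for error estimation.
design = [row[1:] for row in H8]
```

Column orthogonality is what lets each factor's robustness effect be estimated independently from so few runs.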
Pacheco, Bruno D; Valério, Jaqueline; Angnes, Lúcio; Pedrotti, Jairo J
2011-06-24
A fast and robust analytical method for the amperometric determination of hydrogen peroxide (H₂O₂) based on batch injection analysis (BIA) on an array of gold microelectrodes modified with platinum is proposed. The gold microelectrode array (n=14) was obtained from electronic chips developed for surface mounted device (SMD) technology, whose size makes them easy to adapt to batch cells. The effects of the dispensing rate, volume injected, distance between the platinum microelectrodes and the pipette tip, and volume of solution in the cell on the analytical response were evaluated. The method allows amperometric H₂O₂ determination in the concentration range from 0.8 μmol L⁻¹ to 100 μmol L⁻¹. The analytical frequency can reach 300 determinations per hour, and the detection limit was estimated at 0.34 μmol L⁻¹ (3σ). The anodic current peaks obtained after a series of 23 successive injections of 50 μL of 25 μmol L⁻¹ H₂O₂ showed an RSD < 0.9%. To ensure good selectivity for H₂O₂, its determination was performed in a differential mode, with selective destruction of H₂O₂ by catalase in 10 mmol L⁻¹ phosphate buffer solution. A practical application of the analytical procedure involved H₂O₂ determination in rainwater of São Paulo City. A comparison of the results obtained by the proposed amperometric method with those of a method combining flow injection analysis (FIA) with spectrophotometric detection showed good agreement. Copyright © 2011 Elsevier B.V. All rights reserved.
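The 3σ detection limit quoted above follows the common convention LOD = 3·(standard deviation of the blank)/sensitivity; a sketch with hypothetical blank currents and slope (the numbers are illustrative, not taken from the paper):

```python
import statistics

def detection_limit(blank_signals, slope, k=3):
    """LOD as k * (sample standard deviation of blanks) / calibration slope.
    k = 3 gives the conventional ~3-sigma detection limit."""
    return k * statistics.stdev(blank_signals) / slope

# Hypothetical blank currents (nA) and sensitivity (nA per µmol/L)
blanks = [0.52, 0.49, 0.55, 0.50, 0.53, 0.48, 0.51]
lod = detection_limit(blanks, slope=0.21)  # in µmol/L
```

The same blank statistics also underpin the RSD figure for repeated injections, so both metrics come almost for free from one series of blank and standard runs.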
The evolution of analytical chemistry methods in foodomics.
Gallo, Monica; Ferranti, Pasquale
2016-01-08
The methodologies of food analysis have greatly evolved over the past 100 years, from basic assays based on solution chemistry to those relying on modern instrumental platforms. Today, the development and optimization of integrated analytical approaches based on different techniques to study the chemical composition of a food at the molecular level may allow the definition of a 'food fingerprint', valuable for assessing the nutritional value, safety, quality, authenticity and security of foods. This comprehensive strategy, termed foodomics, includes emerging work areas such as food chemistry, phytochemistry, advanced analytical techniques, biosensors and bioinformatics. Integrated approaches can help to elucidate some critical issues in food analysis, but also to face the new challenges of a globalized world: security, sustainability and food production in response to environmental world-wide changes. They include the development of powerful analytical methods to ensure the origin and quality of food, as well as the discovery of biomarkers to identify potential food safety problems. In the area of nutrition, the future challenge is to identify, through specific biomarkers, individual peculiarities that allow early diagnosis and then a personalized prognosis and diet for patients with food-related disorders. Far from aiming at an exhaustive review of the abundant literature dedicated to the applications of omic sciences in food analysis, we explore how classical approaches, such as those used in chemistry and biochemistry, have evolved to intersect with the new omics technologies to produce progress in our understanding of the complexity of foods. Perhaps most importantly, a key objective of the review is to explore the development of simple and robust methods for a fully applied use of omics data in food science. Copyright © 2015 Elsevier B.V. All rights reserved.
A fully Sinc-Galerkin method for Euler-Bernoulli beam models
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.; Lund, J.
1990-01-01
A fully Sinc-Galerkin method in both space and time is presented for fourth-order time-dependent partial differential equations with fixed and cantilever boundary conditions. The Sinc discretizations for the second-order temporal problem and the fourth-order spatial problems are presented. Alternate formulations for variable-parameter fourth-order problems are given which prove to be especially useful when applying the forward techniques to parameter recovery problems. The discrete system corresponding to the time-dependent partial differential equations of interest is then formulated. Computational issues are discussed, and a robust and efficient algorithm for solving the resulting matrix system is outlined. Numerical results which highlight the method are given for problems with both analytic and singular solutions as well as fixed and cantilever boundary conditions.
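The trial space in a Sinc-Galerkin method is spanned by translated sinc cardinal functions; a minimal sketch of their defining interpolation property, ignoring the conformal maps used in practice to handle boundary conditions:

```python
import math

def sinc_basis(k, h):
    """Sinc cardinal function S(k, h)(x) = sinc((x - k*h) / h): the k-th
    basis function on a uniform grid with step h."""
    def S(x):
        t = math.pi * (x - k * h) / h
        return 1.0 if abs(t) < 1e-12 else math.sin(t) / t
    return S

h = 0.5
S0 = sinc_basis(0, h)
# Cardinality: the basis function is 1 at its own node and 0 (to rounding)
# at every other sinc node, which makes Galerkin coefficients nodal values.
values = [S0(j * h) for j in range(-2, 3)]
```

This cardinality is what gives Sinc methods their characteristic exponential convergence for analytic solutions, and their robustness near the singular solutions mentioned in the abstract.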
Da Silva, Laeticia; Collino, Sebastiano; Cominetti, Ornella; Martin, Francois-Pierre; Montoliu, Ivan; Moreno, Sergio Oller; Corthesy, John; Kaput, Jim; Kussmann, Martin; Monteiro, Jacqueline Pontes; Guiraud, Seu Ping
2016-09-01
There is increasing interest in the profiling and quantitation of methionine pathway metabolites for health management research. Currently, several analytical approaches are required to cover the metabolites and co-factors. We report the development and validation of a method for the simultaneous detection and quantitation of 13 metabolites in red blood cells. The method, validated in a cohort of healthy human volunteers, shows a high level of accuracy and reproducibility. This protocol provides robust coverage of the central metabolites and co-factors in a single, high-throughput analysis. In large-scale clinical settings, the use of such an approach will significantly advance the field of nutritional research in health and disease.
Analytical challenges for conducting rapid metabolism characterization for QIVIVE.
Tolonen, Ari; Pelkonen, Olavi
2015-06-05
For quantitative in vitro-in vivo extrapolation (QIVIVE) of metabolism for the purposes of toxicokinetics prediction, a precise and robust analytical technique for identifying and measuring a chemical and its metabolites is an absolute prerequisite. Currently, high-resolution mass spectrometry (HR-MS) is the tool of choice for the majority of relatively lipophilic organic molecules, linked with an LC separation tool and simultaneous UV detection. However, additional techniques, such as gas chromatography, radiometric measurements and NMR, are required to cover the whole spectrum of chemical structures. To accumulate enough reliable and robust data for the validation of QIVIVE, there are some partially opposing needs: detailed delineation of the in vitro test system to produce a reliable toxicokinetic measure for a studied chemical, and the highest possible throughput of the in vitro set-up and the analytical tool. We discuss current analytical challenges for the identification and quantification of chemicals and their metabolites, both stable and reactive, focusing especially on LC-MS techniques, while attempting to pinpoint factors associated with sample preparation and testing conditions, and the strengths and weaknesses of the techniques available for a particular task. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Improving Causal Inferences in Meta-analyses of Longitudinal Studies: Spanking as an Illustration.
Larzelere, Robert E; Gunnoe, Marjorie Lindner; Ferguson, Christopher J
2018-05-24
To evaluate and improve the validity of causal inferences from meta-analyses of longitudinal studies, two adjustments for Time-1 outcome scores and a temporally backwards test are demonstrated. Causal inferences would be supported by robust results across both adjustment methods, distinct from results run backwards. A systematic strategy for evaluating potential confounds is also introduced. The methods are illustrated by assessing the impact of spanking on subsequent externalizing problems (child age: 18 months to 11 years). Significant results indicated a small risk or a small benefit of spanking, depending on the adjustment method. These meta-analytic methods are applicable for research on alternatives to spanking and other developmental science topics. The underlying principles can also improve causal inferences in individual studies. © 2018 Society for Research in Child Development.
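The contrast between the two Time-1 adjustment methods can be sketched with a small simulation; the data, effect sizes, and variable names below are invented for illustration and are not drawn from the spanking meta-analysis itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical longitudinal data: predictor measured at Time 1,
# outcome measured at Times 1 and 2, with a small true effect (0.2).
baseline = rng.normal(size=n)                  # Time-1 outcome score
predictor = 0.4 * baseline + rng.normal(size=n)
outcome_t2 = 0.5 * baseline + 0.2 * predictor + rng.normal(size=n)

def ols_slope(y, covariates):
    # Ordinary least squares with an intercept; returns all coefficients.
    X = np.column_stack([np.ones(len(y))] + covariates)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Adjustment 1: ANCOVA-style, Time-1 score entered as a covariate.
b_ancova = ols_slope(outcome_t2, [predictor, baseline])[1]

# Adjustment 2: difference-score (change from Time 1) as the outcome.
b_change = ols_slope(outcome_t2 - baseline, [predictor])[1]

print(round(b_ancova, 2), round(b_change, 2))
```

Causal inferences are better supported when both adjustments agree; here they deliberately diverge because the predictor is correlated with the baseline score.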
Recent activities within the Aeroservoelasticity Branch at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Noll, Thomas E.; Perry, Boyd, III; Gilbert, Michael G.
1989-01-01
The objective of research in aeroservoelasticity at the NASA Langley Research Center is to enhance the modeling, analysis, and multidisciplinary design methodologies for obtaining multifunction digital control systems for application to flexible flight vehicles. Recent accomplishments are discussed, and a status report on current activities within the Aeroservoelasticity Branch is presented. In the area of modeling, improvements to the Minimum-State Method of approximating unsteady aerodynamics are shown to provide precise, low-order aeroservoelastic models for design and simulation activities. Analytical methods based on Matched Filter Theory and Random Process Theory that provide efficient and direct predictions of the critical gust profile and the time-correlated gust loads for linear structural design considerations are also discussed. Two research projects leading towards improved design methodology are summarized. The first program is developing an integrated structure/control design capability based on hierarchical problem decomposition, multilevel optimization and analytical sensitivities. The second program provides procedures for obtaining low-order, robust digital control laws for aeroelastic applications. In terms of methodology validation and application, the current activities associated with the Active Flexible Wing project are reviewed.
Holland, Tanja; Blessing, Daniel; Hellwig, Stephan; Sack, Markus
2013-10-01
Radio frequency impedance spectroscopy (RFIS) is a robust method for the determination of cell biomass during fermentation. RFIS allows non-invasive in-line monitoring of the passive electrical properties of cells in suspension and can distinguish between living and dead cells based on their distinct behavior in an applied radio frequency field. We used continuous in situ RFIS to monitor batch-cultivated plant suspension cell cultures in stirred-tank bioreactors and compared the in-line data to conventional off-line measurements. RFIS-based analysis was more rapid and more accurate than conventional biomass determination, and was sensitive to changes in cell viability. The higher resolution of the in-line measurement revealed subtle changes in cell growth which were not accessible using conventional methods. Thus, RFIS is well suited for correlating such changes with intracellular states and product accumulation, providing unique opportunities for employing systems biotechnology and process analytical technology approaches to increase product yield and quality. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DNA barcode-based delineation of putative species: efficient start for taxonomic workflows
Kekkonen, Mari; Hebert, Paul D N
2014-01-01
The analysis of DNA barcode sequences with varying techniques for cluster recognition provides an efficient approach for recognizing putative species (operational taxonomic units, OTUs). This approach accelerates and improves taxonomic workflows by exposing cryptic species and decreasing the risk of synonymy. This study tested the congruence of OTUs resulting from the application of three analytical methods (ABGD, BIN, GMYC) to sequence data for Australian hypertrophine moths. OTUs supported by all three approaches were viewed as robust, but 20% of the OTUs were only recognized by one or two of the methods. These OTUs were examined for three criteria to clarify their status. Monophyly and diagnostic nucleotides were both uninformative, but information on ranges was useful as sympatric sister OTUs were viewed as distinct, while allopatric OTUs were merged. This approach revealed 124 OTUs of Hypertrophinae, a more than twofold increase from the currently recognized 51 species. Because this analytical protocol is both fast and repeatable, it provides a valuable tool for establishing a basic understanding of species boundaries that can be validated with subsequent studies. PMID:24479435
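The general idea of distance-threshold OTU recognition can be sketched as follows; this toy single-linkage clustering is only in the spirit of distance-based methods such as ABGD (it is not that algorithm), and the 2% threshold and sequences are invented:

```python
# Toy sketch: group barcode sequences into putative OTUs whenever their
# pairwise p-distance falls below a threshold (single-linkage style).
def p_distance(a, b):
    # Proportion of differing sites between two aligned sequences.
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cluster_otus(seqs, threshold=0.02):
    otus = []
    for s in seqs:
        for otu in otus:
            if any(p_distance(s, member) <= threshold for member in otu):
                otu.append(s)
                break
        else:
            otus.append([s])   # no close match: start a new OTU
    return otus

seqs = ["ACGTACGTAC" * 5,
        "ACGTACGTAC" * 5,
        ("ACGTACGTAC" * 5)[:-1] + "G",   # one substitution: same OTU
        "TGCATGCATG" * 5]                 # divergent: separate OTU
print(len(cluster_otus(seqs)))
```

Running several such cluster-recognition techniques and keeping only concordant OTUs, as the study does, guards against the sensitivity of any single threshold choice.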
Data informatics for the Detection, Characterization, and Attribution of Climate Extremes
NASA Astrophysics Data System (ADS)
Collins, W.; Wehner, M. F.; O'Brien, T. A.; Paciorek, C. J.; Krishnan, H.; Johnson, J. N.; Prabhat, M.
2015-12-01
The potential for increasing frequency and intensity of extreme phenomena including downpours, heat waves, and tropical cyclones constitutes one of the primary risks of climate change for society and the environment. The challenge of characterizing these risks is that extremes represent the "tails" of distributions of atmospheric phenomena and are, by definition, highly localized and typically relatively transient. Therefore very large volumes of observational data and projections of future climate are required to quantify their properties in a robust manner. Massive data analytics are required in order to detect individual extremes, accumulate statistics on their properties, quantify how these statistics are changing with time, and attribute the effects of anthropogenic global warming on these statistics. We describe examples of the suite of techniques the climate community is developing to address these analytical challenges. The techniques include massively parallel methods for detecting and tracking atmospheric rivers and cyclones; data-intensive extensions to generalized extreme value theory to summarize the properties of extremes; and multi-model ensembles of hindcasts to quantify the attributable risk of anthropogenic influence on individual extremes. We conclude by highlighting examples of these methods developed by our CASCADE (Calibrated and Systematic Characterization, Attribution, and Detection of Extremes) project.
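The extreme value component mentioned above can be illustrated with a minimal block-maxima analysis, assuming SciPy's `genextreme` distribution is available; the synthetic "precipitation" data are invented and do not represent any CASCADE dataset:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)

# Hypothetical block-maxima analysis: annual maxima of daily precipitation
# from 60 years of synthetic data, fitted with a GEV distribution.
daily = rng.gamma(shape=2.0, scale=5.0, size=(60, 365))
annual_max = daily.max(axis=1)

# Fit the three GEV parameters and compute a 20-year return level,
# i.e. the level exceeded with probability 1/20 in any given year.
c, loc, scale = genextreme.fit(annual_max)
ret_20yr = genextreme.ppf(1 - 1 / 20, c, loc=loc, scale=scale)
print(round(ret_20yr, 1))
```

The data-intensive extensions the abstract refers to generalize this single-site fit to covariate-dependent parameters and spatial pooling across many grid cells.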
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
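A minimal numerical sketch of the projected inverse-sensitivity idea, assuming a linear toy system; the matrix sizes, noise level, and one-shot solve are invented simplifications of the paper's iterative model-updating procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system: 20 outputs depend linearly on 3 parameters
# via a sensitivity matrix S (first-order Taylor approximation).
S = rng.normal(size=(20, 3))
true_dtheta = np.array([0.05, -0.02, 0.01])         # unknown perturbation
dy = S @ true_dtheta + 1e-4 * rng.normal(size=20)   # measured minus analytical output

# Principal components of an ensemble of analytical output data.
Y = S @ rng.normal(size=(3, 50))                    # simulated system outputs
U, s, _ = np.linalg.svd(Y, full_matrices=False)
P = U[:, :3]                                        # retain 3 principal components

# Solve the projected identification equation P^T dy = (P^T S) dtheta.
dtheta_hat, *_ = np.linalg.lstsq(P.T @ S, P.T @ dy, rcond=None)
print(dtheta_hat)   # approximately recovers true_dtheta
```

In the paper, the system is nonlinear, so this projected solve is wrapped in a model-updating loop that recomputes S at each iterate.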
Validation of a common data model for active safety surveillance research
Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E
2011-01-01
Objective: Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example: To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion: There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
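The translation step can be sketched in miniature; the field names, source formats, and concept IDs below are invented for illustration and do not reproduce the actual OMOP CDM schema or vocabulary:

```python
# Hypothetical sketch of mapping two idiosyncratic source schemas into one
# common data model (CDM) table via a standardized terminology dictionary.
standard_terms = {"NDC:0002-1234": 1125315,   # source code -> standard concept id
                  "READ:a1b2": 1125315}       # (both invented)

source_a = [{"patient": 1, "ndc": "0002-1234", "start": "2010-01-05"}]
source_b = [{"pid": 7, "read_code": "a1b2", "date": "2011-03-02"}]

def from_source_a(rows):
    return [{"person_id": r["patient"],
             "drug_concept_id": standard_terms["NDC:" + r["ndc"]],
             "start_date": r["start"]} for r in rows]

def from_source_b(rows):
    return [{"person_id": r["pid"],
             "drug_concept_id": standard_terms["READ:" + r["read_code"]],
             "start_date": r["date"]} for r in rows]

# After translation, analytic code runs unchanged against either source.
cdm_drug_exposure = from_source_a(source_a) + from_source_b(source_b)
ids = {row["drug_concept_id"] for row in cdm_drug_exposure}
print(ids)
```

Both source records resolve to the same standard concept, which is what makes systematic analysis across many databases tractable.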
NASA Astrophysics Data System (ADS)
Urban, Matthias; Möller, Robert; Fritzsche, Wolfgang
2003-02-01
DNA analytics is a growing field, based on increasing knowledge of the genome, with special implications for understanding the molecular bases of disease. Driven by the need for cost-effective and high-throughput methods for molecular detection, DNA chips are an interesting alternative to more traditional analytical methods in this field. The standard readout principle for DNA chips is fluorescence based. Fluorescence is highly sensitive and broadly established, but shows limitations regarding quantification (due to signal and/or dye instability) and requires sophisticated, and therefore high-cost, equipment. This article introduces a readout system for an alternative detection scheme based on electrical detection of nanoparticle-labeled DNA. If labeled DNA is present in the analyte solution, it will bind to complementary capture DNA immobilized in a microelectrode gap. A subsequent metal enhancement step leads to a deposition of conductive material on the nanoparticles, and finally an electrical contact between the electrodes. This detection scheme offers the potential for a simple (low-cost as well as robust) and highly miniaturizable method, which could be well suited for point-of-care applications in the context of lab-on-a-chip technologies. The demonstrated apparatus allows parallel readout of an entire array of microstructured measurement sites. The readout is combined with data processing by an embedded personal computer, resulting in an autonomous instrument that measures and presents the results. The design and realization of such a system is described, and first measurements are presented.
Rodriguez, Estrella Sanz; Poynter, Sam; Curran, Mark; Haddad, Paul R; Shellie, Robert A; Nesterenko, Pavel N; Paull, Brett
2015-08-28
Preservation of ionic species within Antarctic ice yields a unique proxy record of the Earth's climate history. Until now, studies have focused on two proxies: the ionic components of sea salt aerosol and methanesulfonic acid. Measurement of all of the major ionic species in ice core samples is typically carried out by ion chromatography. Former methods, whilst providing suitable detection limits, have been based upon off-column preconcentration techniques, requiring larger sample volumes, with potential for sample contamination and/or carryover. Here, a new capillary ion chromatography based analytical method has been developed for quantitative analysis of limited-volume Antarctic ice core samples. The developed analytical protocol applies capillary ion chromatography (with suppressed conductivity detection) and direct on-column sample injection and focusing, thus eliminating the requirement for off-column sample preconcentration. This limits the total sample volume needed to 300 μL per analysis, allowing for triplicate sample analysis with <1 mL of sample. This new approach provides a reliable and robust analytical method for the simultaneous determination of organic and inorganic anions, including fluoride, methanesulfonate, chloride, sulfate and nitrate. Application to composite ice-core samples is demonstrated, with coupling of the capillary ion chromatograph to high-resolution mass spectrometry used to confirm the presence and purity of the observed methanesulfonate peak. Copyright © 2015 Elsevier B.V. All rights reserved.
Biswas, A K; Tandon, S; Beura, C K
2016-06-01
The aim of this study was to develop a simple, specific and rapid analytical method for accurate identification of calpain and calpastatin from chicken blood and muscle samples. The method is based on a liquid-liquid extraction technique followed by casein zymography detection. The target compounds were extracted from blood and meat samples with tris buffer, then purified and separated by anion exchange chromatography. Buffer (pH 6.7) containing 50 mM tris-base proved to be an excellent extractant, as analyte activity was maximal for all samples. The concentrations of μ-calpain, m-calpain and calpastatin detected in the extracts of blood, breast and thigh samples were 0.28-0.55, 1.91-2.05 and 1.38-1.52 units/g, respectively. For robustness, the analytical method was applied to determine the activity of calpains (μ and m) in eighty postmortem muscle samples. μ-Calpain activity in breast and thigh muscles declined very rapidly at 48 h and 24 h, respectively, while the activity of m-calpain remained stable. Shear force values also declined with increasing post-mortem aging, indicating progressive tenderization of breast and thigh muscles. It is concluded that the standardized method for the detection of calpain and calpastatin has the potential to be applied to identify post-mortem aging of chicken meat samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dissolution of aerosol particles collected from nuclear facility plutonium production process
Xu, Ning; Martinez, Alexander; Schappert, Michael Francis; ...
2015-08-14
Here, a simple, robust analytical chemistry method has been developed to dissolve plutonium-containing particles in a complex matrix. The aerosol particles collected on Marple cascade impactor substrates were shown to be dissolved completely with an acid mixture of 12 M HNO3 and 0.1 M HF. A pressurized closed-vessel acid digestion technique was utilized to heat the samples at 130 °C for 16 h to facilitate the digestion. The dissolution efficiency for plutonium particles was 99%. The resulting particle digestate solution was suitable for trace elemental analysis and isotope composition determination, as well as radiochemistry measurements.
NASA Technical Reports Server (NTRS)
Arevalo, Ricardo, Jr.; Coyle, Barry; Paulios, Demetrios; Stysley, Paul; Feng, Steve; Getty, Stephanie; Binkerhoff, William
2015-01-01
Compared to wet chemistry and pyrolysis techniques, in situ laser-based methods of chemical analysis provide an ideal way to characterize precious planetary materials without requiring extensive sample processing. In particular, laser desorption and ablation techniques allow for rapid, reproducible and robust data acquisition over a wide mass range, plus: quantitative, spatially resolved measurements of elemental and molecular (organic and inorganic) abundances; low analytical blanks and limits of detection (ng g-1); and the destruction of minimal quantities of sample (μg) compared to traditional solution and/or pyrolysis analyses (mg).
On a computational model of building thermal dynamic response
NASA Astrophysics Data System (ADS)
Jarošová, Petra; Vala, Jiří
2016-07-01
Development and exploitation of advanced materials, structures and technologies in civil engineering, both for buildings with carefully controlled interior temperature and for common residential houses, together with new European and national directives and technical standards, stimulate the development of computational tools that are complex and robust, yet sufficiently simple and inexpensive, to support building design and the optimization of energy consumption. This paper demonstrates how such seemingly contradictory requirements can be reconciled, using a simplified non-stationary thermal model of a building motivated by the analogy with the analysis of electric circuits; certain semi-analytical forms of solution derive from the method of lines.
Liquid chromatographic determination of sennosides in Cassia angustifolia leaves.
Srivastava, Alpuna; Pandey, Richa; Verma, Ram K; Gupta, Madan M
2006-01-01
A simple liquid chromatographic method was developed for the determination of sennosides B and A in leaves of Cassia angustifolia. These compounds were extracted from leaves with a mixture of methanol-water (70 + 30, v/v) after defatting with hexane. Analyte separation and quantitation were achieved by gradient reversed-phase liquid chromatography and UV absorbance at 270 nm using a photodiode array detector. The method involves the use of an RP-18 Lichrocart reversed-phase column (5 microm, 125 x 4.0 mm id) and a binary gradient mobile-phase profile. The various other aspects of analysis, namely, peak purity, similarity, recovery, repeatability, and robustness, were validated. Average recoveries of 98.5 and 98.6%, with a coefficient of variation of 0.8 and 0.3%, were obtained by spiking sample solution with 3 different concentration solutions of standards (60, 100, and 200 microg/mL). Detection limits were 10 microg/mL for sennoside B and 35 microg/mL for sennoside A, present in the sample solution. The quantitation limits were 28 and 100 microg/mL. The analytical method was applied to a large number of senna leaf samples. The new method provides a reliable tool for rapid screening of C. angustifolia samples in large numbers, which is needed in breeding/genetic engineering and genetic mapping experiments.
Fernandes, Ana Josane Dantas; Ferreira, Magda Rhayanny Assunção; Randau, Karina Perrelli; de Souza, Tatiane Pereira; Soares, Luiz Alberto Lira
2012-01-01
The aim of this work was to evaluate a spectrophotometric methodology for determining the total flavonoid content (TFC) in the herbal drug and derived products from Bauhinia monandra Kurz. Several analytical parameters of this method, based on the complex formed between flavonoids and AlCl3, were evaluated, such as herbal amount (0.25 to 1.25 g), solvent composition (ethanol 40 to 80%, v/v), reaction time, and AlCl3 concentration (2 to 9%, w/v). The method was adjusted to aqueous extractives and its performance studied through precision, linearity and preliminary robustness. The results showed an important dependence of the method response on reaction time, AlCl3 concentration, sample amount, and solvent mixture. After choosing the optimized condition, the method was applied to the matrixes (herbal material and extractives), showing precision lower than 5% (for both repeatability and intermediate precision), a coefficient of determination higher than 0.99, and no important influence of slight variations in wavelength or AlCl3 concentration. Thus, it could be concluded that the evaluated analytical procedure is suitable to quantify the total flavonoid content in raw material and aqueous extractives from leaves of B. monandra. PMID:22701375
Talele, G. S.; Porwal, P. K.
2015-01-01
A simple, economical and robust analytical high-performance liquid chromatography-ultraviolet method was developed and validated for simultaneous chromatographic elution of two cardiovascular drugs, amlodipine and atorvastatin, in biological fluid for the first time. Only two liquid chromatography-mass spectrometry/mass spectrometry methods are available in the literature for quantitation of the selected pair of analytes. The bioanalytical method was developed in rat plasma using a Thermo beta-basic C18 column (100×4.6 mm, 5 μm) and a mobile phase composed of dibasic phosphate buffer (pH 3.0):acetonitrile in the ratio of 55:45 at a flow rate of 1 ml/min, with ultraviolet detection monitored at 240 nm. The selected chromatographic conditions were found to effectively separate amlodipine (5.1 min) and atorvastatin (12.1 min). A correlation coefficient of 0.999 was obtained for both drugs, with linearity over the tested concentration range (0.05 to 10.0 μg/ml) in rat plasma using an unweighted calibration curve. The mean recovery was more than 92.8% for both drugs using a protein precipitation method. The accuracy of samples for six replicate measurements at the lower limit of quantitation level was within limits. The method was validated and was successfully applied to the nonclinical pharmacokinetic study of combination tablets containing amlodipine and atorvastatin in six Sprague Dawley rats. PMID:26997703
Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R
2015-08-28
Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT platforms presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with a Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p < 0.01). Intra-method and overall agreement differences were statistically significant for both the X- and Z-axes (all p < 0.01).
Using pre-specified criteria, based on intra-method agreement, CBCT was deemed preferable for X-axis positional verification, with KVX preferred for superoinferior alignment. The COM3PARE methodology was validated as feasible and useful in this pilot head and neck cancer positional verification dataset. COM3PARE represents a flexible and robust standardized analytic methodology for IGRT comparison. The implemented SAS script is included to encourage other groups to implement COM3PARE in other anatomic sites or IGRT platforms.
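A simplified, moment-based sketch of comparing two devices with replicated measurements; this is not the Kronecker-structured LME that COM3PARE fits, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_reps = 100, 3

# Hypothetical replicated measurements from two devices on the same subjects.
truth = rng.normal(10.0, 2.0, size=(n_subjects, 1))
method_a = truth + rng.normal(0.0, 0.3, size=(n_subjects, n_reps))
method_b = truth + 0.5 + rng.normal(0.0, 0.6, size=(n_subjects, n_reps))  # biased, noisier

# Moment-based agreement summaries:
bias = (method_b - method_a).mean()              # inter-method bias
within_a = method_a.var(axis=1, ddof=1).mean()   # intra-method variance, device A
within_b = method_b.var(axis=1, ddof=1).mean()   # intra-method variance, device B

print(round(bias, 2), within_a < within_b)       # device A repeats more tightly
```

The LME formulation generalizes these summaries by modeling between-subject and within-subject covariance jointly across axes, which is what permits formal significance tests like those reported above.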
Punctuated evolution and robustness in morphogenesis
Grigoriev, D.; Reinitz, J.; Vakulenko, S.; Weber, A.
2014-01-01
This paper presents an analytic approach to the pattern stability and evolution problem in morphogenesis. The approach used here is based on ideas from gene and neural network theory. We assume that gene networks contain a number of small groups of genes (called hubs) controlling the morphogenesis process. Hub genes represent an important element of gene network architecture and their existence is empirically confirmed. We show that hubs can stabilize the morphogenetic pattern and accelerate morphogenesis. The hub activity exhibits an abrupt change depending on the mutation frequency. When the mutation frequency is small, these hubs suppress all mutations and gene product concentrations do not change; thus, the pattern is stable. When the environmental pressure increases and the population needs new genotypes, genetic drift and other effects increase the mutation frequency. For frequencies larger than a critical amount the hubs turn off, and as a result many mutations can affect the phenotype. This effect can serve as an engine for evolution. We show that this engine is very effective: the evolutionary acceleration is an exponential function of gene redundancy. Finally, we show that the Eldredge-Gould concept of punctuated evolution results from the network architecture, which provides fast evolution, control of evolvability, and pattern robustness. To describe the effect of exponential acceleration analytically, we use mathematical methods developed recently for hard combinatorial problems, in particular for the so-called k-SAT problem, together with numerical simulations. PMID:24996115
Costa, Fabiane Pinho; Caldas, Sergiane Souza; Primel, Ednei Gilberto
2014-12-15
The original, citrate and acetate QuEChERS methods were studied in order to evaluate the extraction efficiency and the matrix effect in the extraction of pesticides from canned peach samples. Determinations were performed by gas chromatography coupled to mass spectrometry (GC-MS). The proposed method, with extraction using the original QuEChERS method and determination by GC-MS, was validated. LOQs ranged between 1 and 10 μg kg⁻¹ and all analytical curves showed r values higher than 0.99. Recovery values varied from 69% to 125% with RSDs less than 20%. The matrix effect was evaluated and most compounds showed signal enrichment. Robustness was demonstrated using fresh peaches, which provided recovery values within acceptable limits. The applicability of the method was verified and residues of tebuconazole and dimethoate were found in the samples. Copyright © 2014 Elsevier Ltd. All rights reserved.
Prasad, Thatipamula R; Joseph, Siji; Kole, Prashant; Kumar, Anoop; Subramanian, Murali; Rajagopalan, Sudha; Kr, Prabhakar
2017-11-01
The objective of the current work was to develop a 'green chemistry'-compliant, selective and sensitive supercritical fluid chromatography-tandem mass spectrometry method for simultaneous estimation of risperidone (RIS) and its chiral metabolites in rat plasma. Methodology & results: An Agilent 1260 Infinity analytical supercritical fluid chromatography system resolved RIS and its chiral metabolites within a runtime of 6 min using a gradient chromatography method. A simple protein precipitation sample preparation followed by mass spectrometric detection achieved a sensitivity of 0.92 nM (lower limit of quantification). With linearity over four log units (0.91-7500 nM), the method was found to be selective, accurate, precise and robust. The method was validated and was successfully applied for simultaneous estimation of RIS and its 9-hydroxyrisperidone metabolites (R and S individually) after intravenous and peroral administration to rats.
MacDonald, G; Mackenzie, J A; Nolan, M; Insall, R H
2016-03-15
In this paper, we devise a moving mesh finite element method for the approximate solution of coupled bulk-surface reaction-diffusion equations on an evolving two-dimensional domain. Fundamental to the success of the method is the robust generation of bulk and surface meshes. For this purpose, we use a novel moving mesh partial differential equation (MMPDE) approach. The developed method is applied to model problems with known analytical solutions; these experiments indicate second-order spatial and temporal accuracy. Coupled bulk-surface problems occur frequently in many areas, in particular in the modelling of eukaryotic cell migration and chemotaxis. We apply the method to a model of the two-way interaction of a migrating cell in a chemotactic field, where the bulk region corresponds to the extracellular region and the surface to the cell membrane.
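The mesh-generation ingredient can be illustrated in one dimension with a classical equidistribution step, the idea underlying MMPDE methods; the monitor function and node counts are invented, and this is far simpler than the paper's coupled bulk-surface scheme:

```python
import numpy as np

# Toy 1D equidistribution: place mesh nodes so that a monitor function M(x)
# (large where the solution varies fast) has equal integral over each cell.
def equidistribute(monitor, a=0.0, b=1.0, n_nodes=21, n_fine=2001):
    x = np.linspace(a, b, n_fine)
    M = monitor(x)
    # Cumulative integral of the monitor (trapezoidal rule).
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cum[-1], n_nodes)
    return np.interp(targets, cum, x)   # invert the cumulative monitor

# Monitor concentrating nodes near a sharp layer at x = 0.5.
mesh = equidistribute(lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2))
print(np.diff(mesh).min(), np.diff(mesh).max())  # smallest cells near the layer
```

An MMPDE replaces this static inversion with a differential equation that moves the nodes smoothly in time as the solution, and here also the domain, evolves.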
Recent Advances in the Measurement of Arsenic, Cadmium, and Mercury in Rice and Other Foods
Punshon, Tracy
2015-01-01
Trace element analysis of foods is of increasing importance because of raised consumer awareness and the need to evaluate and establish regulatory guidelines for toxic trace metals and metalloids. This paper reviews recent advances in the analysis of trace elements in food, including challenges, state-of-the-art methods, and the use of spatially resolved techniques for localizing the distribution of As and Hg within rice grains. Total elemental analysis of foods is relatively well established, but the push for ever lower detection limits requires that methods be robust against potential matrix interferences, which can be particularly severe for food. Inductively coupled plasma mass spectrometry (ICP-MS) is the method of choice, allowing for multi-element and highly sensitive analyses. For arsenic, speciation analysis is necessary because the inorganic forms are more likely to be subject to regulatory limits. Chromatographic techniques coupled to ICP-MS are most often used for arsenic speciation, and a range of methods now exists for a variety of different arsenic species in different food matrices. Speciation and spatial analysis of foods, especially rice, can also be achieved with synchrotron techniques. Sensitive analytical techniques and methodological advances provide robust methods for the assessment of several metals in animal- and plant-based foods, in particular for arsenic, cadmium and mercury in rice and arsenic speciation in foodstuffs. PMID:25938012
Patel, Bhumit A; Pinto, Nuno D S; Gospodarek, Adrian; Kilgore, Bruce; Goswami, Kudrat; Napoli, William N; Desai, Jayesh; Heo, Jun H; Panzera, Dominick; Pollard, David; Richardson, Daisy; Brower, Mark; Richardson, Douglas D
2017-11-07
Combining process analytical technology (PAT) with continuous production provides a powerful tool to observe and control monoclonal antibody (mAb) fermentation and purification processes. This work demonstrates on-line liquid chromatography (on-line LC) as a PAT tool for monitoring a continuous biologics process and forced degradation studies. Specifically, this work focused on ion exchange chromatography (IEX), which is a critical separation technique to detect charge variants. Product-related impurities, including charge variants, that impact function are classified as critical quality attributes (CQAs). First, we confirmed no significant differences were observed in the charge heterogeneity profile of a mAb through both at-line and on-line sampling and that the on-line method has the ability to rapidly detect changes in protein quality over time. The robustness and versatility of the PAT methods were tested by sampling from two purification locations in a continuous mAb process. The PAT IEX methods used with on-line LC were a weak cation exchange (WCX) separation and a newly developed shorter strong cation exchange (SCX) assay. Both methods provided similar results with the distribution of percent acidic, main, and basic species remaining unchanged over a 2 week period. Second, a forced degradation study showed an increase in acidic species and a decrease in basic species when sampled on-line over 7 days. These applications further strengthen the use of on-line LC to monitor CQAs of a mAb continuously with various PAT IEX analytical methods. Implementation of on-line IEX will enable faster decision making during process development and could potentially be applied to control in biomanufacturing.
Robust optimization of front members in a full frontal car impact
NASA Astrophysics Data System (ADS)
Aspenberg (né Lönn), David; Jergeus, Johan; Nilsson, Larsgunnar
2013-03-01
In the search for lightweight automobile designs, it is necessary to assure that robust crashworthiness performance is achieved. Structures that are optimized to handle a finite number of load cases may perform poorly when subjected to various dispersions. Thus, uncertainties must be accounted for in the optimization process. This article presents an approach to optimization in which every design evaluation includes an evaluation of robustness. Metamodel approximations are applied both to the design space and to the robustness evaluations, using artificial neural networks and polynomials, respectively. The features of the robust optimization approach are displayed in an analytical example and further demonstrated in a large-scale design example of the front side members of a car. Different optimization formulations are applied, and it is shown that the proposed approach works well. It is also concluded that robust optimization puts higher demands on finite element model performance than usual.
How Robust is Your System Resilience?
NASA Astrophysics Data System (ADS)
Homayounfar, M.; Muneepeerakul, R.
2017-12-01
Robustness and resilience are concepts in systems thinking that have grown in importance and popularity. For many complex social-ecological systems, however, robustness and resilience are difficult to quantify, and the connections and trade-offs between them are difficult to study. Most studies have either focused on qualitative approaches to discuss their connections or considered only one of them under particular classes of disturbances. In this study, we present an analytical framework to address the linkage between robustness and resilience more systematically. Our analysis is based on a stylized dynamical model that operationalizes a widely used conceptual framework for social-ecological systems. The model enables us to rigorously define robustness and resilience and consequently investigate their connections. The results reveal the trade-offs among performance, robustness, and resilience. They also show how the nature of such trade-offs varies with the choices of certain policies (e.g., taxation and investment in public infrastructure), internal stresses, and external disturbances.
Analyzing capture zone distributions (CZD) in growth: Theory and applications
NASA Astrophysics Data System (ADS)
Einstein, Theodore L.; Pimpinelli, Alberto; Luis González, Diego
2014-09-01
We have argued that the capture-zone distribution (CZD) in submonolayer growth can be well described by the generalized Wigner distribution (GWD) P(s) = a s^β exp(-b s²), where s is the CZ area divided by its average value. This approach offers arguably the most robust (least sensitive to mass transport) method to find the critical nucleus size i, since β ≈ i + 2. Various analytical and numerical investigations, which we discuss, show that although the simple GWD expression is inadequate in the tails of the distribution, it does account well for the central regime 0.5 < s < 2, where the data are sufficiently abundant to be reliably accessible experimentally. We summarize and catalog the many experiments in which this method has been applied.
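The two constants in the GWD are fixed by requiring that P(s) integrate to 1 and have unit mean, leaving β as the only free parameter. A minimal sketch (not the authors' code) that builds the distribution from β and numerically checks both constraints:

```python
import math

def gwd_params(beta):
    """Fix a and b so that P integrates to 1 and the mean of s is 1."""
    b = (math.gamma((beta + 2) / 2) / math.gamma((beta + 1) / 2)) ** 2
    a = 2 * b ** ((beta + 1) / 2) / math.gamma((beta + 1) / 2)
    return a, b

def gwd_curve(beta, s_max=8.0, ds=1e-4):
    """Tabulate P(s) = a * s**beta * exp(-b * s**2) on a fine grid."""
    a, b = gwd_params(beta)
    ss = [k * ds for k in range(1, int(s_max / ds))]
    ps = [a * s ** beta * math.exp(-b * s * s) for s in ss]
    return ss, ps

# beta = 4 would correspond to a critical nucleus size i = beta - 2 = 2
ss, ps = gwd_curve(4.0)
ds = ss[1] - ss[0]
norm = sum(ps) * ds                              # should be close to 1
mean = sum(s * p for s, p in zip(ss, ps)) * ds   # should be close to 1
```

The closed forms for a and b in terms of gamma functions follow directly from the two moment constraints; in practice one would fit β to a measured CZD and read off i ≈ β - 2.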
Nonlinear dynamics of mini-satellite respinup by weak internal controllable torques
NASA Astrophysics Data System (ADS)
Somov, Yevgeny
2014-12-01
Contemporary space engineering has posed a new problem for theoretical mechanics and motion control theory: directed spacecraft respinup by weak, restricted internal control forces. The paper presents some results on this problem, which is highly relevant to the energy supply of information mini-satellites (for communication, geodesy, and radio- and opto-electronic observation of the Earth, among others) with electro-reaction plasma thrusters and a gyro moment cluster based on reaction wheels or control moment gyros. The solution achieved is based on methods for the synthesis of nonlinear robust control and on a rigorous analytical proof of the required spacecraft rotation stability by the Lyapunov function method. These results were verified by computer simulation of strongly nonlinear oscillatory processes during respinup of a flexible spacecraft.
Zastepa, Arthur; Pick, Frances R; Blais, Jules M; Saleem, Ammar
2015-05-04
The fate and persistence of microcystin cyanotoxins in aquatic ecosystems remain poorly understood, in part due to the lack of analytical methods for microcystins in sediments. Existing methods have been limited to the extraction of a few extracellular microcystins of similar chemistry. We developed a single analytical method, consisting of accelerated solvent extraction, hydrophilic-lipophilic balance solid phase extraction, and reversed-phase high performance liquid chromatography-tandem mass spectrometry, suitable for the extraction and quantitation of both intracellular and extracellular cyanotoxins in sediments as well as pore waters. Recoveries of nine microcystins, representing the chemical diversity of microcystins, and nodularin (a marine analogue) ranged between 75 and 98%, with one, microcystin-RR (MC-RR), at 50%. Chromatographic separation of these analytes was achieved within 7.5 min, and the method detection limits were between 1.1 and 2.5 ng g(-1) dry weight (dw). The robustness of the method was demonstrated on sediment cores collected from seven Canadian lakes of diverse geography and trophic states. Individual microcystin variants reached a maximum concentration of 829 ng g(-1) dw on sediment particles and 132 ng mL(-1) in pore waters and could be detected in sediments as deep as 41 cm (>100 years in age). MC-LR, -RR, and -LA were more often detected, while MC-YR, -LY, -LF, and -LW were less common. The analytical method enabled us to estimate sediment-pore water distribution coefficients (K(d)): MC-RR had the highest affinity for sediment particles (log K(d)=1.3), while MC-LA had the lowest affinity (log K(d)=-0.4), partitioning mainly into pore waters. Our findings confirm that sediments serve as a reservoir for microcystins but suggest that some variants may diffuse into overlying water, thereby constituting a new route of exposure following the dissipation of toxic blooms.
The method is well suited to determine the fate and persistence of different microcystins in aquatic systems. Copyright © 2015 Elsevier B.V. All rights reserved.
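The reported K(d) values follow directly from paired sediment and pore-water concentrations. A small illustration of the computation, where (ng/g)/(ng/mL) gives K(d) in mL/g; the concentration pairs below are hypothetical, chosen only to reproduce the reported extremes:

```python
import math

def log_kd(c_sediment_ng_per_g, c_porewater_ng_per_ml):
    # K_d = C_sediment / C_porewater; units (ng/g) / (ng/mL) = mL/g
    return math.log10(c_sediment_ng_per_g / c_porewater_ng_per_ml)

# hypothetical concentration pairs consistent with the reported extremes
mc_rr = log_kd(20.0, 1.0)   # ~1.3: strong affinity for sediment particles
mc_la = log_kd(0.4, 1.0)    # ~-0.4: partitions mainly into pore water
```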
Surface enhanced Raman spectroscopy based nanoparticle assays for rapid, point-of-care diagnostics
NASA Astrophysics Data System (ADS)
Driscoll, Ashley J.
Nucleotide assays and immunoassays are important tools for disease diagnostics. Many current laboratory-based analytical diagnostic techniques require multiple assay steps and long incubation times before results are acquired. In the development of bioassays designed for detecting the emergence and spread of diseases in point-of-care (POC) and remote settings, more rapid and portable analytical methods are necessary. Nanoparticles provide simple and reproducible synthetic methods for the preparation of substrates that can be applied in colloidal assays, providing gains in kinetics due to miniaturization, as well as plasmonic substrates for surface-enhanced spectroscopies. Specifically, surface-enhanced Raman spectroscopy (SERS) is finding broad application as a signal transduction method in immunological and nucleotide assays due to the production of narrow spectral peaks from the scattering molecules and the potential for simultaneous multiple-analyte detection. The application of SERS to a no-wash, magnetic capture assay for the detection of West Nile Virus Envelope and Rift Valley Fever Virus N antigens is described. The platform utilizes colloid-based capture of the target antigen in solution, magnetic collection of the immunocomplexes, and acquisition of SERS spectra by a handheld Raman spectrometer. The reagents for a core-shell nanoparticle, SERS-based assay designed for the capture of target microRNA implicated in acute myocardial infarction are also characterized. Several new, small-molecule Raman scatterers are introduced and used to analyze the enhancing properties of the synthesized gold-coated magnetic nanoparticles. Nucleotide and immunoassay platforms have shown improvements in speed and analyte capture through the miniaturization of the capture surface, and particle-based capture systems can provide a route to further surface miniaturization.
A reaction-diffusion model of the colloidal assay platform is presented to understand the interplay of system parameters such as particle diameter, initial analyte concentration and dissociation constants. The projected sensitivities over a broad range of assay conditions are examined and the governing regime of particle systems reported. The results provide metrics in the design of more robust analytics that are of particular interest for POC diagnostics.
Patel, Prinesh N; Karakam, Vijaya Saradhi; Samanthula, Gananadhamu; Ragampeta, Srinivas
2015-10-01
Quality-by-design-based methods hold a greater level of confidence against variations and greater success in method transfer. A quality-by-design-based ultra high performance liquid chromatography method was developed for the simultaneous assay of sumatriptan and naproxen along with their related substances. The first screening was performed by a fractional factorial design comprising 44 experiments covering reversed-phase stationary phases, pH, and organic modifiers. The results of the screening design experiments suggested that a phenyl hexyl column and acetonitrile were the best combination. The method was further optimized for flow rate, temperature, and gradient time by an experimental design of 20 experiments, and the knowledge space was generated for the effect of each variable on the response (number of peaks with resolution ≥ 1.50). A proficient design space was generated from the knowledge space by applying Monte Carlo simulation, successfully integrating quantitative robustness metrics at the optimization stage itself. The final method provided robust performance, which was verified and validated. Final conditions comprised a Waters® Acquity phenyl hexyl column with gradient elution using ammonium acetate (pH 4.12, 0.02 M) buffer and acetonitrile at a 0.355 mL/min flow rate and 30°C. The developed method separates all 13 analytes within a 15 min run time with fewer experiments than the traditional quality-by-testing approach. ©2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
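The Monte Carlo step can be sketched as sampling the method factors around their set points and estimating the probability that the resolution criterion is met. The response model and the variability assumed for flow rate and temperature below are hypothetical placeholders, not the published model:

```python
import random

def probability_of_success(n_trials=20000, seed=1):
    """Monte Carlo estimate of P(critical-pair resolution >= 1.5)."""
    random.seed(seed)
    ok = 0
    for _ in range(n_trials):
        flow = random.gauss(0.355, 0.01)   # mL/min, assumed variability
        temp = random.gauss(30.0, 0.5)     # deg C, assumed variability
        # hypothetical linear response model for the critical-pair resolution
        rs = 2.0 - 4.0 * (flow - 0.355) - 0.05 * (temp - 30.0)
        if rs >= 1.5:
            ok += 1
    return ok / n_trials

p = probability_of_success()
```

A set point is kept inside the design space only if this probability stays above an acceptance level (often something like 0.95); here the toy operating point sits well inside it.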
Robustness of meta-analyses in finding gene × environment interactions
Shi, Gang; Nehorai, Arye
2017-01-01
Meta-analyses that synthesize statistical evidence across studies have become important analytical tools for genetic studies. Inspired by the success of genome-wide association studies of genetic main effects, researchers are searching for gene × environment interactions. Confounders are routinely included in genome-wide gene × environment interaction analyses as covariates; however, this does not control for any confounding effects on the results if covariate × environment interactions are present. We carried out simulation studies to evaluate the robustness to the covariate × environment confounder of meta-regression and joint meta-analysis, two commonly used meta-analysis methods for testing the gene × environment interaction or the genetic main effect and interaction jointly. Here we show that meta-regression is robust to the covariate × environment confounder, while joint meta-analysis is subject to the confounding effect, with inflated type I error rates. Given the vast sample sizes employed in genome-wide gene × environment interaction studies, non-significant covariate × environment interactions at the study level could substantially elevate the type I error rate at the consortium level. When covariate × environment confounders are present, type I errors can be controlled in joint meta-analysis by including the covariate × environment terms in the analysis at the study level. Alternatively, meta-regression can be applied, which is robust to potential covariate × environment confounders. PMID:28362796
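Both meta-analytic strategies discussed here build on inverse-variance pooling of study-level estimates. A minimal fixed-effect sketch, illustrative only and not the meta-regression or joint meta-analysis implementations evaluated in the paper:

```python
def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance fixed-effect pooling: w_k = 1 / se_k**2."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# two hypothetical study-level interaction estimates with equal precision
est, se = fixed_effect_meta([0.10, 0.30], [0.1, 0.1])
```

With equal standard errors the pooled estimate is just the mean of the study estimates, and the pooled standard error shrinks with the total information.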
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty in which hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity-norm approach. The suite of tools developed enables us to determine whether the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this is accomplished without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
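For constraints that are linear (or monotone) in each uncertain parameter, hard-constraint feasibility over a hyper-rectangle can be certified by checking its vertices, which echoes the sampling-free spirit of the paper without reproducing its formulations. A toy sketch with a hypothetical constraint g(p) <= 0:

```python
from itertools import product

def hard_constraint_feasible(g, bounds):
    # For g linear/monotone in each parameter, the worst case over a
    # hyper-rectangle is attained at one of its vertices.
    return all(g(v) <= 0 for v in product(*bounds))

# hypothetical linear constraint: g(p) = p1 + 2*p2 - 5 <= 0
g = lambda p: p[0] + 2 * p[1] - 5
feasible_small = hard_constraint_feasible(g, [(0, 1), (0, 1)])  # worst vertex (1,1): g = -2
feasible_large = hard_constraint_feasible(g, [(0, 4), (0, 1)])  # worst vertex (4,1): g = 1
```

For general nonlinear constraints the worst case need not lie at a vertex, which is why the paper's optimization-based search for critical parameter combinations is needed.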
Chen, Jie; Tabatabaei, Ali; Zook, Doug; Wang, Yan; Danks, Anne; Stauber, Kathe
2017-11-30
A robust high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS) assay was developed and qualified for the measurement of cyclic nucleotides (cNTs) in rat brain tissue. Stable isotopically labeled 3',5'-cyclic adenosine-¹³C₅ monophosphate (¹³C₅-cAMP) and 3',5'-cyclic guanosine-¹³C,¹⁵N₂ monophosphate (¹³C,¹⁵N₂-cGMP) were used as surrogate analytes to measure endogenous 3',5'-cyclic adenosine monophosphate (cAMP) and 3',5'-cyclic guanosine monophosphate (cGMP). Pre-weighed frozen rat brain samples were rapidly homogenized in 0.4 M perchloric acid at a ratio of 1:4 (w/v). Following internal standard addition and dilution, the resulting extracts were analyzed using negative ion mode electrospray ionization LC-MS/MS. The calibration curves for both analytes ranged from 5 to 2000 ng/g and showed excellent linearity (r² > 0.996). Relative surrogate analyte-to-analyte LC-MS/MS responses were determined to correct concentrations derived from the surrogate curves. The intra-run precision (CV%) for ¹³C₅-cAMP and ¹³C,¹⁵N₂-cGMP was below 6.6% and 7.4%, respectively, while the inter-run precision (CV%) was 8.5% and 5.8%, respectively. The intra-run accuracy (Dev%) for ¹³C₅-cAMP and ¹³C,¹⁵N₂-cGMP was <11.9% and 10.3%, respectively, and the inter-run Dev% was <6.8% and 5.5%, respectively. Qualification experiments demonstrated high analyte recoveries, minimal matrix effects and low autosampler carryover. Acceptable frozen storage, freeze/thaw, benchtop, processed sample and autosampler stability were shown in brain sample homogenates as well as post-processed samples. The method was found to be suitable for the analysis of rat brain tissue cAMP and cGMP levels in preclinical biomarker development studies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
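The surrogate-analyte correction described above amounts to scaling concentrations read off the surrogate calibration curve by the measured surrogate-to-analyte response ratio. A schematic sketch with hypothetical numbers; the sign convention of the ratio may differ from the published method:

```python
def corrected_concentration(surrogate_curve_conc, relative_response):
    # relative_response = (surrogate response) / (endogenous analyte response)
    # at equal concentration; dividing removes the isotope-label response bias
    return surrogate_curve_conc / relative_response

# hypothetical: surrogate curve reads 100 ng/g, surrogate responds 5% higher
corrected = corrected_concentration(100.0, 1.05)
```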
Muñoz-Guerra, J A; Prado, P; García-Tenorio, S Vargas
2011-10-14
Due to its impact in the media and the requirements of sensitivity and robustness, the detection of the misuse of forbidden substances in sports is a truly challenging area for analytical chemistry, where any study focused on enhancing the performance of the analytical methods is of great interest. The aim of the present study was to evaluate the usefulness of using hydrogen instead of helium as a carrier gas for the analysis of anabolic steroids by gas chromatography-mass spectrometry with electron ionization. There are several drawbacks to the use of helium as a carrier gas: it is expensive, is a non-renewable resource, and has limited availability in many parts of the world. In contrast, hydrogen is readily available from a hydrogen generator or high-pressure bottled gas and allows faster analysis without loss of efficiency; nevertheless, it should not be forgotten that, owing to its explosiveness, hydrogen must be handled with caution. Throughout the study, the impact of the change of carrier gas was evaluated in terms of the performance of the chromatographic system, savings of time and money, the impact on the high vacuum in the analyzer, changes in the fragmentation behaviour of the analytes, and, finally, the consequences for the limits of detection achieved with the method. Copyright © 2011 Elsevier B.V. All rights reserved.
Mata-Granados, J M; Quesada Gómez, J M; Luque de Castro, M D
2009-05-01
Fat-soluble vitamins and vitamin D metabolites are key compounds in bone metabolism. Unfortunately, variability among 25(OH)D assays limits clinicians' ability to monitor vitamin D status, supplementation, and toxicity. Serum (0.5 ml) was mixed with 0.5 ml of 60% acetonitrile containing 150 mM sodium dodecyl sulfate, vortexed for 30 s, and injected into an automatic solid-phase extraction (SPE) system for cleanup-preconcentration, then transferred on-line to a reversed-phase analytical column with a 15% methanol-acetonitrile mobile phase at 1.0 ml/min for individual separation of the target analytes. Ultraviolet detection was performed at 265 nm, 325 nm, and 292 nm for vitamin D metabolites, vitamin A, and alpha- and delta-tocopherols, respectively. Detection limits were between 0.0015 and 0.26 microg/ml for the target compounds, and the precision (expressed as relative standard deviation) was between 0.83 and 3.6% for repeatability and between 1.8 and 4.62% for within-laboratory reproducibility. Recoveries between 97-100.2% and 95-99% were obtained for low and high concentrations of the target analytes in serum, respectively. The total analysis time was 20 min. The on-line coupling of SPE-HPLC endows the proposed method with reliability, robustness, and unattended operation, making it a useful tool for high-throughput analysis in clinical and research laboratories.
Van Dam, Debby; Vermeiren, Yannick; Aerts, Tony; De Deyn, Peter Paul
2014-08-01
A fast and simple RP-HPLC method with electrochemical detection (ECD) and ion-pair chromatography was developed, optimized, and validated in order to simultaneously determine eight different biogenic amines and metabolites in post-mortem human brain tissue in a single-run analytical approach. The compounds of interest are the indolamine serotonin (5-hydroxytryptamine, 5-HT), the catecholamines dopamine (DA) and (nor)epinephrine ((N)E), as well as their respective metabolites, i.e. 3,4-dihydroxyphenylacetic acid (DOPAC) and homovanillic acid (HVA), 5-hydroxy-3-indoleacetic acid (5-HIAA), and 3-methoxy-4-hydroxyphenylglycol (MHPG). A two-level fractional factorial experimental design was applied to study the effect of five experimental factors (i.e. the ion-pair counter-ion concentration, the level of organic modifier, the pH of the mobile phase, the temperature of the column, and the voltage setting of the detector) on the chromatographic behaviour. The cross effects between the five quantitative factors and the capacity and separation factors of the analytes were then analysed using a Standard Least Squares model. The optimized method was fully validated according to the requirements of the SFSTP (Société Française des Sciences et Techniques Pharmaceutiques). Our human brain tissue sample preparation procedure is straightforward and relatively short, which allows samples to be loaded onto the HPLC system within approximately 4 h. Additionally, a high sample throughput was achieved after optimization due to a total runtime of at most 40 min per sample. The conditions and settings of the HPLC system were found to be accurate, with high intra- and inter-assay repeatability, recovery, and accuracy rates. The robust analytical method results in very low detection limits and good separation for all eight biogenic amines and metabolites in this complex mixture of biological analytes. Copyright © 2014 Elsevier B.V. All rights reserved.
A Model of Risk Analysis in Analytical Methodology for Biopharmaceutical Quality Control.
Andrade, Cleyton Lage; Herrera, Miguel Angel De La O; Lemes, Elezer Monte Blanco
2018-01-01
One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA. To determine small amounts of DNA (around 100 pg) that may be present in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable, and accurate. In principle, three techniques have the ability to measure residual cellular DNA: radioactive dot-blot, a type of hybridization; threshold analysis; and quantitative polymerase chain reaction. Quality risk management is a systematic process for evaluating, controlling, and reporting risks that may affect method capabilities, and it supports a scientific and practical approach to decision making. This paper evaluates, by quality risk management, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the tool hazard analysis and critical control points. This tool provides the possibility of finding the steps in an analytical procedure with the highest impact on method performance. By applying these principles to DNA analysis methods, we conclude that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. From the analysis of hazards (i.e., points of method failure) and the associated critical control points of the method procedure, we conclude that the analytical methodology with the lowest risk of performance failure for residual cellular DNA testing is quantitative polymerase chain reaction. LAY ABSTRACT: In order to mitigate the risk of adverse events caused by residual cellular DNA that is not completely cleared from downstream production processes, regulatory agencies have required the industry to guarantee a very low level of DNA in biologically derived pharmaceutical products. The technique historically used was radioactive blot hybridization.
However, the technique is a challenging method to implement in a quality control laboratory: it is laborious, time consuming, semi-quantitative, and requires a radioisotope. Along with dot-blot hybridization, two alternative techniques were evaluated: threshold analysis and quantitative polymerase chain reaction. Quality risk management tools were applied to compare the techniques, taking into account the uncertainties, the possibility of circumstances or future events, and their effects upon method performance. By illustrating the application of these tools with DNA methods, we provide an example of how they can be used to support a scientific and practical approach to decision making and to assess and manage method performance risk. This paper discusses, considering the principles of quality risk management, an additional approach to the development and selection of analytical quality control methods using the risk analysis tool hazard analysis and critical control points. This tool makes it possible to find the method procedural steps with the highest impact on method reliability (called critical control points). Our model concluded that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. Quantitative polymerase chain reaction is shown to be the better alternative analytical methodology for residual cellular DNA analysis. © PDA, Inc. 2018.
NASA Astrophysics Data System (ADS)
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2014-12-01
We developed a simple, scaling-based empirical model for spatio-temporally robust prediction of the diurnal cycles of wetland net ecosystem exchange (NEE) using an extended stochastic harmonic algorithm (ESHA). A reference-time observation from each diurnal cycle was utilized as the scaling parameter to normalize and collapse the hourly observed NEE of different days into a single, dimensionless diurnal curve. The modeling concept was tested by parameterizing the unique diurnal curve and predicting hourly NEE from May to October (summer growing and fall seasons) between 2002 and 2012 for the diverse wetland ecosystems available in the U.S. AmeriFLUX network. As an example, the Taylor Slough short-hydroperiod marsh site in the Florida Everglades had data for four consecutive growing seasons from 2009 to 2012; results showed impressive modeling efficiency (coefficient of determination, R² = 0.66) and accuracy (ratio of root-mean-square error to the standard deviation of observations, RSR = 0.58). Model validation was performed with an independent year of NEE data, indicating equally impressive performance (R² = 0.68, RSR = 0.57). The model included a parsimonious set of estimated parameters, which exhibited spatio-temporal robustness by collapsing onto narrow ranges. Model robustness was further investigated by analytically deriving and quantifying parameter sensitivity coefficients and a first-order uncertainty measure. The relatively robust, empirical NEE model can be applied to simulate continuous (e.g., hourly) NEE time-series from a single reference observation (or a set of limited observations) at different wetland sites of comparable hydro-climatology, biogeochemistry, and ecology. The method can also be used for robust gap-filling of missing data in observed time-series of periodic ecohydrological variables for wetland or other ecosystems.
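The collapse-and-rescale idea can be sketched as follows; the toy diurnal series are hypothetical stand-ins for hourly NEE observations, and the real model parameterizes the dimensionless curve harmonically rather than averaging it:

```python
def collapse_and_predict(days, ref_hour, new_day_ref):
    """Normalize each diurnal cycle by its reference-hour value, average the
    resulting dimensionless curve, then rescale it for a new day."""
    curves = [[v / day[ref_hour] for v in day] for day in days]
    n_hours = len(days[0])
    mean_curve = [sum(c[h] for c in curves) / len(curves) for h in range(n_hours)]
    # one reference observation of the new day rescales the whole cycle
    return [new_day_ref * m for m in mean_curve]

days = [[2.0, 4.0, 6.0, 4.0], [1.0, 2.0, 3.0, 2.0]]  # toy "hourly" NEE
pred = collapse_and_predict(days, ref_hour=1, new_day_ref=10.0)
```

Both toy days collapse onto the same dimensionless shape, so a single reference observation fixes the entire predicted cycle.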
Measuring coral reef decline through meta-analyses
Côté, I.M; Gill, J.A; Gardner, T.A; Watkinson, A.R
2005-01-01
Coral reef ecosystems are in decline worldwide, owing to a variety of anthropogenic and natural causes. One of the most obvious signals of reef degradation is a reduction in live coral cover. Past and current rates of loss of coral are known for many individual reefs; however, until recently, no large-scale estimate was available. In this paper, we show how meta-analysis can be used to integrate existing small-scale estimates of change in coral and macroalgal cover, derived from in situ surveys of reefs, to generate a robust assessment of long-term patterns of large-scale ecological change. Using a large dataset from Caribbean reefs, we examine the possible biases inherent in meta-analytical studies and the sensitivity of the method to patchiness in data availability. Despite the fact that our meta-analysis included studies that used a variety of sampling methods, the regional estimate of change in coral cover we obtained is similar to that generated by a standardized survey programme that was implemented in 1991 in the Caribbean. We argue that for habitat types that are regularly and reasonably well surveyed in the course of ecological or conservation research, meta-analysis offers a cost-effective and rapid method for generating robust estimates of past and current states. PMID:15814352
Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.
Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen
2017-09-04
The heavy-tailed distributions of corrupted outliers and of the singular values of all channels in low-level vision have proven to be effective priors for many applications, such as background modeling, photometric stereo, and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to Lp-norm minimization for two specific values of p, namely p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., moving object detection, image alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.
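The Schatten-p quasi-norm that the penalties reduce to is simply (Σ_i σ_i^p)^(1/p) over the singular values σ_i. A quick numerical sketch, illustrative only and not the authors' factorization algorithms:

```python
import numpy as np

def schatten_quasi_norm(X, p):
    """(sum of singular values**p)**(1/p); a norm for p >= 1, quasi-norm for p < 1."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

X = np.diag([4.0, 1.0, 0.25])          # singular values 4, 1, 0.25
nuc = schatten_quasi_norm(X, 1.0)      # nuclear norm: 4 + 1 + 0.25 = 5.25
s_half = schatten_quasi_norm(X, 0.5)   # Schatten-1/2: (2 + 1 + 0.5)**2 = 12.25
```

Relative to the nuclear norm (p = 1), taking p < 1 penalizes small singular values more aggressively, which is why Schatten quasi-norms push solutions harder toward low rank.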
Evaluating the compatibility of multi-functional and intensive urban land uses
NASA Astrophysics Data System (ADS)
Taleai, M.; Sharifi, A.; Sliuzas, R.; Mesgari, M.
2007-12-01
This research is aimed at developing a model for assessing land use compatibility in densely built-up urban areas. A new model was developed by combining a suite of existing methods and tools: geographical information systems, Delphi methods, and spatial decision support tools, namely multi-criteria evaluation analysis, the analytical hierarchy process, and the ordered weighted average method. The developed model can calculate land use compatibility in both horizontal and vertical directions. Furthermore, the compatibility between the use of each floor in a building and its neighboring land uses can be evaluated. The method was tested in a built-up urban area located in Tehran, the capital city of Iran. The results show that the model is robust in clarifying different levels of physical compatibility between neighboring land uses. This paper describes the various steps and processes of developing the proposed land use compatibility evaluation model (CEM).
Confidence limits for data mining models of options prices
NASA Astrophysics Data System (ADS)
Healy, J. V.; Dixon, M.; Read, B. J.; Cai, F. F.
2004-12-01
Non-parametric methods such as artificial neural nets can successfully model the prices of financial options, outperforming the Black-Scholes analytic model (Eur. Phys. J. B 27 (2002) 219). However, the accuracy of such approaches is usually expressed only by a global fitting/error measure. This paper describes a robust method for determining prediction intervals for models derived by non-linear regression. We have demonstrated it by application to a standard synthetic example (29th Annual Conference of the IEEE Industrial Electronics Society, Special Session on Intelligent Systems, pp. 1926-1931). The method is used here to obtain prediction intervals for option prices using market data for LIFFE “ESX” FTSE 100 index options ( http://www.liffe.com/liffedata/contracts/month_onmonth.xls). We avoid special neural net architectures and use standard regression procedures to determine local error bars. The method is appropriate for target data with non-constant variance (or volatility).
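The idea of local error bars from standard regression can be sketched by fitting one model to the mean and a second to the absolute residuals, then forming approximate 95% intervals whose width tracks the local noise level. This is a simplified stand-in for the paper's procedure, using synthetic heteroscedastic data rather than option prices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(0.0, 0.1 + 0.4 * x)   # noise grows with x

coef = np.polyfit(x, y, 1)                      # model of the mean
resid = y - np.polyval(coef, x)
# local noise model: for Gaussian noise, E|resid| = sigma * sqrt(2/pi)
abs_coef = np.polyfit(x, np.abs(resid), 1)
sigma = np.maximum(np.polyval(abs_coef, x), 1e-6) * np.sqrt(np.pi / 2)

lower = np.polyval(coef, x) - 1.96 * sigma      # approximate 95% interval
upper = np.polyval(coef, x) + 1.96 * sigma
coverage = float(np.mean((y >= lower) & (y <= upper)))
```

The resulting intervals widen where the data are noisier, and empirical coverage stays near the nominal 95% despite the non-constant variance.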
An immersed boundary method for modeling a dirty geometry data
NASA Astrophysics Data System (ADS)
Onishi, Keiji; Tsubokura, Makoto
2017-11-01
We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum via an axial linear projection and by an approximate-domain assumption that satisfies mass conservation in the cells containing the wall. This methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate the problem of flow around a rotating object and demonstrate the applicability of this methodology to moving-geometry problems. This methodology shows promise as a method for obtaining quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.
Inorganic chemical analysis of environmental materials—A lecture series
Crock, J.G.; Lamothe, P.J.
2011-01-01
At the request of the faculty of the Colorado School of Mines, Golden, Colorado, the authors prepared and presented a lecture series to the students of a graduate level advanced instrumental analysis class. The slides and text presented in this report are a compilation and condensation of this series of lectures. The purpose of this report is to present the slides and notes and to emphasize the thought processes that should be used by a scientist submitting samples for analyses in order to procure analytical data to answer a research question. First and foremost, the analytical data generated can be no better than the samples submitted. The questions to be answered must first be well defined and the appropriate samples collected from the population that will answer the question. The proper methods of analysis, including proper sample preparation and digestion techniques, must then be applied. Care must be taken to achieve the required limits of detection of the critical analytes to yield detectable analyte concentrations (above "action" levels) for the majority of the study's samples, and to establish whether total or partial concentrations of those analytes answer the research question. To guarantee a robust analytical result that answers the research question(s), a well-defined quality assurance and quality control (QA/QC) plan must be employed. This QA/QC plan must include the collection and analysis of field and laboratory blanks, sample duplicates, and matrix-matched standard reference materials (SRMs). The proper SRMs may include in-house materials and/or a selection of widely available commercial materials. A discussion of the preparation and applicability of in-house reference materials is also presented. Only when all these analytical issues are sufficiently addressed can the research questions be answered with known certainty.
Experimental design and statistical methods for improved hit detection in high-throughput screening.
Malo, Nathalie; Hanley, James A; Carlile, Graeme; Liu, Jing; Pelletier, Jerry; Thomas, David; Nadon, Robert
2010-09-01
Identification of active compounds in high-throughput screening (HTS) contexts can be substantially improved by applying classical experimental design and statistical inference principles to all phases of HTS studies. The authors present both experimental and simulated data to illustrate how true-positive rates can be maximized without increasing false-positive rates by the following analytical process. First, the use of robust data preprocessing methods reduces unwanted variation by removing row, column, and plate biases. Second, replicate measurements allow estimation of the magnitude of the remaining random error and the use of formal statistical models to benchmark putative hits relative to what is expected by chance. Receiver Operating Characteristic (ROC) analyses revealed superior power for data preprocessed by a trimmed-mean polish method combined with the RVM t-test, particularly for small- to moderate-sized biological hits.
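As a rough illustration of the row/column bias removal step, here is a trimmed-mean polish on a synthetic plate of readings; the plate dimensions, trim fraction, and function names are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def trim_mean(a, frac, axis):
    """Mean after discarding the lowest and highest `frac` of values."""
    a = np.sort(a, axis=axis)
    n = a.shape[axis]
    k = int(n * frac)
    index = [slice(None)] * a.ndim
    index[axis] = slice(k, n - k)
    return a[tuple(index)].mean(axis=axis)

def trimmed_mean_polish(plate, frac=0.2, n_iter=10):
    """Iteratively subtract row and column trimmed means to remove
    positional (row/column) biases from a plate of readings."""
    resid = plate.astype(float).copy()
    for _ in range(n_iter):
        resid -= trim_mean(resid, frac, axis=1)[:, None]   # row effects
        resid -= trim_mean(resid, frac, axis=0)[None, :]   # column effects
    return resid

rng = np.random.default_rng(1)
row_bias = rng.normal(0.0, 1.0, (8, 1))
col_bias = rng.normal(0.0, 1.0, (1, 12))
plate = 100.0 + row_bias + col_bias + rng.normal(0.0, 0.1, (8, 12))
cleaned = trimmed_mean_polish(plate)
```

After the polish, the residual plate is dominated by the small random error, which is what a formal statistical test would then benchmark putative hits against.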
Disentangling Random Motion and Flow in a Complex Medium
Koslover, Elena F.; Chan, Caleb K.; Theriot, Julie A.
2016-01-01
We describe a technique for deconvolving the stochastic motion of particles from large-scale fluid flow in a dynamic environment such as that found in living cells. The method leverages the separation of timescales to subtract out the persistent component of motion from single-particle trajectories. The mean-squared displacement of the resulting trajectories is rescaled so as to enable robust extraction of the diffusion coefficient and subdiffusive scaling exponent of the stochastic motion. We demonstrate the applicability of the method for characterizing both diffusive and fractional Brownian motion overlaid by flow and analytically calculate the accuracy of the method in different parameter regimes. This technique is employed to analyze the motion of lysosomes in motile neutrophil-like cells, showing that the cytoplasm of these cells behaves as a viscous fluid at the timescales examined. PMID:26840734
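The drift-subtraction-plus-MSD-fitting idea can be sketched for the simplest case of Brownian motion overlaid by uniform flow; the parameter values and the `msd` helper are illustrative choices, and the paper's rescaling procedure is more general than this log-log fit.

```python
import numpy as np

def msd(traj, max_lag):
    """Time-averaged mean-squared displacement for lags 1..max_lag."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(2)
n, dt, D, v = 20000, 0.01, 0.5, 3.0
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), (n, 2))      # diffusive steps
steps[:, 0] += v * dt                                       # uniform flow in x
traj = np.cumsum(steps, axis=0)

# Subtract the persistent component (estimated mean step), then fit
# log MSD = log(4 D) + alpha * log(t) on the detrended trajectory.
drift = traj[-1] / n
detrended = traj - drift * np.arange(1, n + 1)[:, None]
lags = np.arange(1, 101)
alpha, log4D = np.polyfit(np.log(lags * dt), np.log(msd(detrended, 100)), 1)
D_est = np.exp(log4D) / 4.0
```

Without the drift subtraction the flow contribution grows as t² and swamps the diffusive signal at long lags; with it, the fit recovers alpha ≈ 1 and the input diffusion coefficient.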
A robust and efficient stepwise regression method for building sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader
2017-03-01
Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable, as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.
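A toy version of forward stepwise selection for a sparse polynomial surrogate: it uses a plain monomial rather than an orthonormal PC basis and a simple correlation score, both deliberate simplifications of the paper's selection criterion.

```python
import numpy as np
from itertools import product

def monomial_basis(X, degree):
    """All tensor-product monomials of total degree <= `degree`."""
    terms = [m for m in product(range(degree + 1), repeat=X.shape[1])
             if sum(m) <= degree]
    Phi = np.column_stack([np.prod(X ** np.array(m), axis=1) for m in terms])
    return Phi, terms

def stepwise_sparse_fit(X, y, degree=4, tol=1e-8, max_terms=10):
    """Greedy forward selection: repeatedly add the basis term most
    correlated with the current residual, refitting by least squares."""
    Phi, terms = monomial_basis(X, degree)
    active, resid, coef = [], y.astype(float), np.array([])
    while len(active) < max_terms:
        cand = [j for j in range(len(terms)) if j not in active]
        scores = [abs(Phi[:, j] @ resid) / np.linalg.norm(Phi[:, j]) for j in cand]
        active.append(cand[int(np.argmax(scores))])
        coef, *_ = np.linalg.lstsq(Phi[:, active], y, rcond=None)
        resid = y - Phi[:, active] @ coef
        if np.linalg.norm(resid) <= tol * np.linalg.norm(y):
            break
    return [terms[j] for j in active], coef, resid

rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = 1.0 + 2.0 * X[:, 0] ** 2 + 3.0 * X[:, 0] * X[:, 1]      # sparse target
picked, coef, resid = stepwise_sparse_fit(X, y)
```

On this noiseless sparse target, the greedy search stops after a handful of terms, and the true contributions x₀² and x₀x₁ appear in the selected set.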
Myocardial strains from 3D displacement encoded magnetic resonance imaging
2012-01-01
Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessing the full strain tensor and resolving transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
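The strain-from-polynomial-fit step can be sketched in 2D with a first-order displacement model; the synthetic field and tolerances below are illustrative, and neither DENSE data nor the full 3D tensor is reproduced here.

```python
import numpy as np

# Synthetic 2D displacement field u = (a*x, b*y), whose exact small-strain
# tensor is diag(a, b); sampled at scattered points with measurement noise.
a, b = 0.02, -0.01
rng = np.random.default_rng(3)
x, y = rng.uniform(-1.0, 1.0, (2, 400))
ux = a * x + rng.normal(0.0, 1e-4, 400)
uy = b * y + rng.normal(0.0, 1e-4, 400)

# Least-squares fit of a first-order polynomial model u_i = c0 + c1*x + c2*y.
A = np.column_stack([np.ones_like(x), x, y])
cx, *_ = np.linalg.lstsq(A, ux, rcond=None)
cy, *_ = np.linalg.lstsq(A, uy, rcond=None)
grad = np.array([[cx[1], cx[2]], [cy[1], cy[2]]])   # displacement gradient du_i/dx_j
E = 0.5 * (grad + grad.T)                           # infinitesimal strain tensor
```

Because the strain follows from derivatives of the fitted polynomial rather than from finite differences of noisy samples, the estimate is robust to measurement noise, which is the property the phantom evaluation demonstrates.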
Percolation of localized attack on complex networks
NASA Astrophysics Data System (ADS)
Shao, Shuai; Huang, Xuqing; Stanley, H. Eugene; Havlin, Shlomo
2015-02-01
The robustness of complex networks against node failure and malicious attack has been of interest for decades, while most of the research has focused on random attack or hub-targeted attack. In many real-world scenarios, however, attacks are neither random nor hub-targeted, but localized, where a group of neighboring nodes in a network are attacked and fail. In this paper we develop a percolation framework to analytically and numerically study the robustness of complex networks against such localized attack. In particular, we investigate this robustness in Erdős-Rényi networks, random-regular networks, and scale-free networks. Our results provide insight into how to better protect networks, enhance cybersecurity, and facilitate the design of more robust infrastructures.
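A small numerical experiment in the spirit of this framework: grow a connected "ball" of neighboring nodes by breadth-first search, remove it, and compare the giant component before and after. Graph size, mean degree, and attack fraction are arbitrary choices for the sketch.

```python
import random
from collections import deque

def er_graph(n, avg_deg, seed=0):
    """Erdos-Renyi-style random graph as a list of adjacency sets."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    edges_left = int(n * avg_deg / 2)
    while edges_left > 0:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and v not in adj[u]:
            adj[u].add(v)
            adj[v].add(u)
            edges_left -= 1
    return adj

def giant_fraction(adj, removed):
    """Largest connected component size as a fraction of all nodes."""
    seen, best = set(removed), 0
    for start in range(len(adj)):
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best / len(adj)

def localized_attack(adj, frac):
    """Remove a connected ball of nodes grown by BFS from a high-degree seed."""
    target = int(len(adj) * frac)
    seed_node = max(range(len(adj)), key=lambda u: len(adj[u]))
    removed, queue = {seed_node}, deque([seed_node])
    while queue and len(removed) < target:
        u = queue.popleft()
        for v in adj[u]:
            if v not in removed and len(removed) < target:
                removed.add(v)
                queue.append(v)
    return removed

adj = er_graph(2000, 4.0)
g_intact = giant_fraction(adj, set())
g_attacked = giant_fraction(adj, localized_attack(adj, 0.3))
```

Simulations like this complement the paper's analytical percolation treatment: the giant component shrinks under the localized attack but, for a supercritical Erdős-Rényi network, does not collapse at this attack fraction.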
Khurana, Rajneet Kaur; Rao, Satish; Beg, Sarwar; Katare, O P; Singh, Bhupinder
2016-01-01
The present work aims at the systematic development of a simple, rapid and highly sensitive densitometry-based thin-layer chromatographic method for the quantification of mangiferin in bioanalytical samples. Initially, the quality target method profile was defined and critical analytical attributes (CAAs) earmarked, namely, retardation factor (Rf), peak height, capacity factor, theoretical plates and separation number. Face-centered cubic design was selected for optimization of volume loaded and plate dimensions as the critical method parameters selected from screening studies employing D-optimal and Plackett-Burman design studies, followed by evaluating their effect on the CAAs. The mobile phase containing a mixture of ethyl acetate : acetic acid : formic acid : water in a 7 : 1 : 1 : 1 (v/v/v/v) ratio was finally selected as the optimized solvent for apt chromatographic separation of mangiferin at 262 nm with Rf 0.68 ± 0.02 and all other parameters within the acceptance limits. Method validation studies revealed high linearity in the concentration range of 50-800 ng/band for mangiferin. The developed method showed high accuracy, precision, ruggedness, robustness, specificity, sensitivity, selectivity and recovery. In a nutshell, the bioanalytical method for analysis of mangiferin in plasma revealed the presence of well-resolved peaks and high recovery of mangiferin. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Teachable, high-content analytics for live-cell, phase contrast movies.
Alworth, Samuel V; Watanabe, Hirotada; Lee, James S J
2010-09-01
CL-Quant is a new solution platform for broad, high-content, live-cell image analysis. Powered by novel machine learning technologies and teach-by-example interfaces, CL-Quant provides a platform for the rapid development and application of scalable, high-performance, and fully automated analytics for a broad range of live-cell microscopy imaging applications, including label-free phase contrast imaging. The authors used CL-Quant to teach off-the-shelf universal analytics, called standard recipes, for cell proliferation, wound healing, cell counting, and cell motility assays using phase contrast movies collected on the BioStation CT and BioStation IM platforms. Similar to application modules, standard recipes are intended to work robustly across a wide range of imaging conditions without requiring customization by the end user. The authors validated the performance of the standard recipes by comparing their performance with truth created manually, or by custom analytics optimized for each individual movie (and therefore yielding the best possible result for the image), and validated by independent review. The validation data show that the standard recipes' performance is comparable with the validated truth with low variation. The data validate that the CL-Quant standard recipes can provide robust results without customization for live-cell assays in broad cell types and laboratory settings.
Analytical Parameters of an Amperometric Glucose Biosensor for Fast Analysis in Food Samples.
Artigues, Margalida; Abellà, Jordi; Colominas, Sergi
2017-11-14
Amperometric biosensors based on the use of glucose oxidase (GOx) are able to combine the robustness of electrochemical techniques with the specificity of biological recognition processes. However, very little information can be found in literature about the fundamental analytical parameters of these sensors. In this work, the analytical behavior of an amperometric biosensor based on the immobilization of GOx using a hydrogel (Chitosan) onto highly ordered titanium dioxide nanotube arrays (TiO₂NTAs) has been evaluated. The GOx-Chitosan/TiO₂NTAs biosensor showed a sensitivity of 5.46 μA·mM⁻¹ with a linear range from 0.3 to 1.5 mM; its fundamental analytical parameters were studied using a commercial soft drink. The obtained results proved sufficient repeatability (RSD = 1.9%), reproducibility (RSD = 2.5%), accuracy (95-105% recovery), and robustness (RSD = 3.3%). Furthermore, no significant interferences from fructose, ascorbic acid and citric acid were obtained. In addition, the storage stability was further examined: after 30 days, the GOx-Chitosan/TiO₂NTAs biosensor retained 85% of its initial current response. Finally, the glucose content of different food samples was measured using the biosensor and compared with the respective HPLC value. In the worst scenario, a deviation smaller than 10% was obtained among the 20 samples evaluated.
Vilmin, Franck; Dussap, Claude; Coste, Nathalie
2006-06-01
In the tire industry, synthetic styrene-butadiene rubber (SBR), butadiene rubber (BR), and isoprene rubber (IR) elastomers are essential for giving the product its properties of grip and rolling resistance. Their physical properties depend on their chemical composition, i.e., their microstructure and styrene content, which must be accurately controlled. This paper describes a fast, robust, and highly reproducible near-infrared analytical method for the quantitative determination of the microstructure and styrene content. The quantitative models are calculated with the help of pure spectral profiles estimated from a partial least squares (PLS) regression, using 13C nuclear magnetic resonance (NMR) as the reference method. This versatile approach allows the models to be applied over a large range of compositions, from a single BR to an SBR-IR blend. The resulting quantitative predictions are independent of the sample path length. As a consequence, the sample preparation is solvent free and simplified with a very fast (five-minute) hot-filming step of a bulk polymer piece. No precise thickness control is required. Thus, the operator effect becomes negligible and the method is easily transferable. The root mean square error of prediction, depending on the rubber composition, is between 0.7% and 1.3%. The reproducibility standard error is less than 0.2% in every case.
NASA Technical Reports Server (NTRS)
Wieseman, Carol D.; Christhilf, David; Perry, Boyd, III
2012-01-01
An important objective of the Semi-Span Super-Sonic Transport (S4T) wind tunnel model program was the demonstration of Flutter Suppression (FS), Gust Load Alleviation (GLA), and Ride Quality Enhancement (RQE). It was critical to evaluate the stability and robustness of these control laws analytically before testing them and experimentally while testing them to ensure safety of the model and the wind tunnel. MATLAB based software was applied to evaluate the performance of closed-loop systems in terms of stability and robustness. Existing software tools were extended to use analytical representations of the S4T and the control laws to analyze and evaluate the control laws prior to testing. Lessons were learned about the complex wind-tunnel model and experimental testing. The open-loop flutter boundary was determined from the closed-loop systems. A MATLAB/Simulink Simulation developed under the program is available for future work to improve the CPE process. This paper is one of a series that comprises a special session summarizing the S4T wind-tunnel program.
A Robust, Enzyme-Free Glucose Sensor Based on Lysine-Assisted CuO Nanostructures.
Baloach, Qurrat-Ul-Ain; Tahira, Aneela; Mallah, Arfana Begum; Abro, Muhammad Ishaq; Uddin, Siraj; Willander, Magnus; Ibupoto, Zafar Hussain
2016-11-14
The production of a nanomaterial with enhanced and desirable electrocatalytic properties is of prime importance, and the commercialization of devices containing these materials is a challenging task. In this study, unique cupric oxide (CuO) nanostructures were synthesized using lysine as a soft template for the evolution of morphology via a rapid and boiled hydrothermal method. The morphology and structure of the synthesized CuO nanomaterial were characterized using scanning electron microscopy (SEM) and X-ray diffraction (XRD), respectively. The prepared CuO nanostructures showed high potential for use in the electrocatalytic oxidation of glucose in an alkaline medium. The proposed enzyme-free glucose sensor demonstrated a robust response to glucose with a wide linear range and high sensitivity, selectivity, stability, and reproducibility. To explore its practical feasibility, the glucose content of serum samples was successfully determined using the enzyme-free sensor. An analytical recovery method was used to measure the actual glucose from the serum samples, and the results were satisfactory. Moreover, the presented glucose sensor has high chemical stability and can be reused for repetitive measurements. This study introduces an enzyme-free glucose sensor as an alternative tool for clinical glucose quantification.
Mellert, Hestia S.; Alexander, Kristin E.; Jackson, Leisa P.; Pestano, Gary A.
2018-01-01
We have developed novel methods for the isolation and characterization of tumor-derived circulating ribonucleic acid (cRNA) for blood-based liquid biopsy. Robust detection of cRNA recovered from blood represents a solution to a critical unmet need in clinical diagnostics. The test begins with the collection of whole blood into blood collection tubes containing preservatives that stabilize cRNA. Cell-free, exosomal, and platelet-associated RNA is isolated from plasma in this test system. The cRNA is reverse transcribed to complementary DNA (cDNA) and amplified using digital polymerase chain reaction (dPCR). Samples are evaluated for both the target biomarker as well as a control gene. Test validation included limit of detection, accuracy, and robustness studies with analytic samples. The method developed as a result of these studies reproducibly detects multiple fusion variants for ROS1 (C-Ros proto-oncogene 1; 8 variants) and RET (rearranged during transfection proto-oncogene; 8 variants). The sample processing workflow has been optimized so that test results can consistently be generated within 72 hours of sample receipt. PMID:29683453
Liu, Tao; Gao, Furong
2011-04-01
In view of the deficiencies in existing internal model control (IMC)-based methods for load disturbance rejection for integrating and unstable processes with slow dynamics, a modified IMC-based controller design is proposed to deal with step- or ramp-type load disturbance that is often encountered in engineering practice. By classifying the ways through which such load disturbance enters the process, analytical controller formulae are correspondingly developed, based on a two-degree-of-freedom (2DOF) control structure that allows for separate optimization of load disturbance rejection from setpoint tracking. An obvious merit is that there is only a single adjustable parameter in the proposed controller, which in essence corresponds to the time constant of the closed-loop transfer function for load disturbance rejection, and can be monotonically tuned to meet a good trade-off between disturbance rejection performance and closed-loop robust stability. At the same time, robust tuning constraints are given to accommodate process uncertainties in practice. Illustrative examples from the recent literature are used to show effectiveness and merits of the proposed method for different cases of load disturbance. Copyright © 2010. Published by Elsevier Ltd.
Karasakal, A; Ulu, S T
2014-05-01
A novel, sensitive and selective spectrofluorimetric method was developed for the determination of tamsulosin in spiked human urine and pharmaceutical preparations. The proposed method is based on the reaction of tamsulosin with 1-dimethylaminonaphthalene-5-sulfonyl chloride in carbonate buffer pH 10.5 to yield a highly fluorescent derivative. The described method was validated and the analytical parameters of linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precision, recovery and robustness were evaluated. The proposed method showed a linear dependence of the fluorescence intensity on drug concentration over the range 1.22 × 10⁻⁷ to 7.35 × 10⁻⁶ M. LOD and LOQ were calculated as 1.07 × 10⁻⁷ and 3.23 × 10⁻⁷ M, respectively. The proposed method was successfully applied for the determination of tamsulosin in pharmaceutical preparations and the obtained results were in good agreement with those obtained using the reference method. Copyright © 2013 John Wiley & Sons, Ltd.
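Signal-based LOD/LOQ estimates of the kind reported in validations like this one typically follow the ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the calibration fit and S its slope. A sketch with invented calibration numbers (not the paper's data):

```python
import numpy as np

# Hypothetical calibration data: fluorescence intensity vs. concentration (M).
conc = np.array([1e-7, 5e-7, 1e-6, 3e-6, 5e-6, 7e-6])
resp = np.array([12.1, 58.9, 121.5, 362.0, 601.8, 842.3])  # arbitrary units

slope, intercept = np.polyfit(conc, resp, 1)
resid = resp - (slope * conc + intercept)
sigma = resid.std(ddof=2)      # residual standard deviation of the linear fit
lod = 3.3 * sigma / slope      # limit of detection
loq = 10.0 * sigma / slope     # limit of quantification
```

By construction LOQ is 10/3.3 ≈ 3 times LOD, and both fall well below the low end of a usable linear range when the calibration is tight.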
New exact solutions for a discrete electrical lattice using the analytical methods
NASA Astrophysics Data System (ADS)
Manafian, Jalil; Lakestani, Mehrdad
2018-03-01
This paper retrieves soliton solutions to an equation in nonlinear electrical transmission lines using the semi-inverse variational principle method (SIVPM), the exp(−Ω(ξ))-expansion method (EEM) and the improved tan(φ/2)-expansion method (ITEM), with the aid of the symbolic computation package Maple. As a result, the SIVPM, EEM and ITEM methods are successfully employed and some new exact solitary wave solutions are acquired in terms of kink-singular soliton solution, hyperbolic solution, trigonometric solution, dark and bright soliton solutions. All solutions have been verified back into their corresponding equations with the aid of the Maple package program. We depicted the physical explanation of the extracted solutions with the choice of different parameters by plotting some 2D and 3D illustrations. Finally, we show that the used methods are robust and more efficient than other methods. More importantly, the solutions found in this work can have significant applications in telecommunication systems where solitons are used to codify data.
Hicks, Michael B; Regalado, Erik L; Tan, Feng; Gong, Xiaoyi; Welch, Christopher J
2016-01-05
Supercritical fluid chromatography (SFC) has long been a preferred method for enantiopurity analysis in support of pharmaceutical discovery and development, but implementation of the technique in regulated GMP laboratories has been somewhat slow, owing to limitations in instrument sensitivity, reproducibility, accuracy and robustness. In recent years, commercialization of next generation analytical SFC instrumentation has addressed previous shortcomings, making the technique better suited for GMP analysis. In this study we investigate the use of modern SFC for enantiopurity analysis of several pharmaceutical intermediates and compare the results with the conventional HPLC approaches historically used for analysis in a GMP setting. The findings clearly illustrate that modern SFC now exhibits improved precision, reproducibility, accuracy and robustness; also providing superior resolution and peak capacity compared to HPLC. Based on these findings, the use of modern chiral SFC is recommended for GMP studies of stereochemistry in pharmaceutical development and manufacturing. Copyright © 2015 Elsevier B.V. All rights reserved.
Elsayed, Mustafa M A; Vierl, Ulrich; Cevc, Gregor
2009-06-01
To date, potentiometric lipid membrane-water partition coefficient studies have neglected electrostatic interactions; this leads to incorrect results. We herein show how to account properly for such interactions in potentiometric data analysis. We conducted potentiometric titration experiments to determine lipid membrane-water partition coefficients of four illustrative drugs, bupivacaine, diclofenac, ketoprofen and terbinafine. We then analyzed the results conventionally and with an improved analytical approach that considers Coulombic electrostatic interactions. The new analytical approach delivers robust partition coefficient values. In contrast, the conventional data analysis yields apparent partition coefficients of the ionized drug forms that depend on experimental conditions (mainly the lipid-drug ratio and the bulk ionic strength). This is due to changing electrostatic effects originating from bound drug and/or lipid charges. A membrane comprising 10 mol-% mono-charged molecules in a 150 mM (monovalent) electrolyte solution yields results that differ by a factor of 4 from those for uncharged membranes. Allowance for the Coulombic electrostatic interactions is a prerequisite for accurate and reliable determination of lipid membrane-water partition coefficients of ionizable drugs from potentiometric titration data. The same conclusion applies to all analytical methods involving drug binding to a surface.
Dinov, Ivo D
2016-01-01
Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.
Tchamna, Rodrigue; Lee, Moonyong
2018-01-01
This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Dzakpasu, Susie; Powell-Jackson, Timothy; Campbell, Oona M R
2014-03-01
To assess the evidence of the impact of user fees on maternal health service utilization and related health outcomes in low- and middle-income countries, as well as their impact on inequalities in these outcomes. Studies were identified by modifying a search strategy from a related systematic review. Primary studies of any design were included if they reported the effect of fee changes on maternal health service utilization, related health outcomes and inequalities in these outcomes. For each study, data were systematically extracted and a quality assessment conducted. Due to the heterogeneity of study methods, results were examined narratively. Twenty studies were included. Designs and analytic approaches comprised: two interrupted time series, eight repeated cross-sectional, nine before-and-after without comparison groups and one before-and-after in three groups. Overall, the quality of studies was poor. Few studies addressed potential sources of bias, such as secular trends over time, and even basic tests of statistical significance were often not reported. Consistency in the direction of effects provided some evidence of an increase in facility delivery in particular after fees were removed, as well as possible increases in the number of managed delivery complications. There was little evidence of the effect on health outcomes or inequality in accessing care and, where available, the direction of effect varied. Despite the global momentum to abolish user fees for maternal and child health services, robust evidence quantifying impact remains scant. Improved methods for evaluating and reporting on these interventions are recommended, including better descriptions of the interventions and context, looking at a range of outcome measures, and adopting robust analytical methods that allow for adjustment of underlying and seasonal trends, reporting immediate as well as longer-term (e.g. at 6 months and 1 year) effects and using comparison groups where possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowson, Scott T.; Bruce, Joseph R.; Best, Daniel M.
2009-04-14
This paper presents key components of the Law Enforcement Information Framework (LEIF) that provides communications, situational awareness, and visual analytics tools in a service-oriented architecture supporting web-based desktop and handheld device users. LEIF simplifies interfaces and visualizations of well-established visual analytical techniques to improve usability. Advanced analytics capability is maintained by enhancing the underlying processing to support the new interface. LEIF development is driven by real-world user feedback gathered through deployments at three operational law enforcement organizations in the US. LEIF incorporates a robust information ingest pipeline supporting a wide variety of information formats. LEIF also insulates interface and analytical components from information sources, making it easier to adapt the framework for many different data repositories.
Székely, György; Henriques, Bruno; Gil, Marco; Alvarez, Carlos
2014-09-01
This paper discusses a design of experiments (DoE) assisted optimization and robustness testing of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method development for the trace analysis of the potentially genotoxic 1,3-diisopropylurea (IPU) impurity in mometasone furoate glucocorticosteroid. Compared to conventional trial-and-error method development, DoE is a cost-effective and systematic approach to system optimization by which the effects of multiple parameters and parameter interactions on a given response are considered. The LC and MS factors were studied simultaneously: flow (F), gradient (G), injection volume (Vinj), cone voltage (E(con)), and collision energy (E(col)). The optimization was carried out with respect to four responses: separation of peaks (Sep), peak area (A(p)), length of the analysis (T), and the signal-to-noise ratio (S/N). An optimization central composite face (CCF) DoE was conducted, leading to the early discovery of a carry-over effect which was further investigated in order to establish the maximum injectable sample load. A second DoE was conducted in order to obtain the optimal LC-MS/MS method. As part of the validation of the obtained method, its robustness was determined by conducting a fractional factorial DoE of resolution III, wherein column temperature and quadrupole resolution were considered as additional factors. The method utilizes a common Phenomenex Gemini NX C-18 HPLC analytical column with electrospray ionization and a triple quadrupole mass detector in multiple reaction monitoring (MRM) mode, resulting in short analyses with a 10-min runtime. The high sensitivity and low limit of quantification (LOQ) were achieved by (1) MRM mode (instead of single ion monitoring) and (2) avoiding the drawbacks of derivatization (incomplete reaction and time-consuming sample preparation).
Quantitatively, the DoE method development strategy resulted in the robust trace analysis of IPU at 1.25 ng/mL absolute concentration, corresponding to 0.25 ppm LOQ in 5 g/L mometasone furoate glucocorticosteroid. Validation was carried out over a linear range of 0.25-10 ppm and presented a relative standard deviation (RSD) of 1.08% for system precision. Regarding IPU recovery in mometasone furoate, spiked samples produced recoveries between 96 and 109% in the range of 0.25 to 2 ppm. Copyright © 2013 John Wiley & Sons, Ltd.
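A face-centred central composite (CCF) design of the kind described above can be generated directly in coded units; a sketch with the five factor names taken from the abstract, while the full (rather than fractional) factorial core and the three centre replicates are assumptions for illustration:

```python
import itertools
import numpy as np

factors = ["F", "G", "Vinj", "Econ", "Ecol"]  # coded units, -1..+1
k = len(factors)

# Factorial core: all 2^k corner points of the factor cube.
corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))

# Face-centred axial points: alpha = 1, two per factor.
axial = np.zeros((2 * k, k))
for i in range(k):
    axial[2 * i, i] = -1.0
    axial[2 * i + 1, i] = 1.0

center = np.zeros((3, k))  # replicated centre points for pure error
design = np.vstack([corners, axial, center])  # 45 runs
```

Each row is one chromatographic run; fitting a quadratic response-surface model to the four responses then yields the optimum and the carry-over diagnostics described in the abstract.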
Donato, J L; Koizumi, F; Pereira, A S; Mendes, G D; De Nucci, G
2012-06-15
In the present study, a fast, sensitive and robust method to quantify dextromethorphan, dextrorphan and doxylamine in human plasma using deuterated internal standards (IS) is described. The analytes and the IS were extracted from plasma by liquid-liquid extraction (LLE) using diethyl-ether/hexane (80/20, v/v). Extracted samples were analyzed by high performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). Chromatographic separation was performed by pumping the mobile phase (acetonitrile/water/formic acid, 90/9/1, v/v/v) for 4.0 min at a flow rate of 1.5 mL min⁻¹ through a Phenomenex Gemini® C18, 5 μm analytical column (150 × 4.6 mm i.d.). The calibration curve was linear over the range from 0.2 to 200 ng mL⁻¹ for dextromethorphan and doxylamine and 0.05 to 10 ng mL⁻¹ for dextrorphan. The intra-batch precision (%CV) and accuracy of the method ranged from 2.5 to 9.5% and 88.9 to 105.1%, respectively. Inter-batch precision (%CV) and accuracy ranged from 6.7 to 10.3% and 92.2 to 107.1%, respectively. The run time was 4 min. The analytical procedure described herein was used to assess the pharmacokinetics of dextromethorphan, dextrorphan and doxylamine in healthy volunteers after a single oral dose of a formulation containing 30 mg of dextromethorphan hydrobromide and 12.5 mg of doxylamine succinate. The method has the high sensitivity and specificity, and allows the high-throughput analysis, required for a pharmacokinetic study. Copyright © 2012 Elsevier B.V. All rights reserved.
Reschiglian, P; Roda, B; Zattoni, A; Tanase, M; Marassi, V; Serani, S
2014-02-01
The rapid development of protein-based pharmaceuticals highlights the need for robust analytical methods to ensure their quality and stability. Among proteins used in pharmaceutical applications, an important and ever-increasing role is played by monoclonal antibodies and large proteins, which are often modified to enhance their activity or stability when used as drugs. The bioactivity and stability of these proteins are closely related to the maintenance of their complex structure, which, however, is influenced by many external factors that can cause degradation and/or aggregation. The presence of aggregates in these drugs can reduce their bioactivity and bioavailability, and induce immunogenicity. The choice of the proper analytical method for the analysis of aggregates is fundamental to understanding their size range, their amount, and whether they are present in the sample as the product of aggregation or as an artifact of the method itself. Size exclusion chromatography is one of the most important techniques for the quality control of pharmaceutical proteins; however, its application is limited to relatively low molar mass aggregates. Among the techniques for the size characterization of proteins, field-flow fractionation (FFF) represents a competitive choice because of its soft mechanism, due to the absence of a stationary phase, and its applicability over a broader size range, from nanometer- to micrometer-sized analytes. In this paper, the microcolumn variant of FFF, hollow-fiber flow FFF, was coupled online with multi-angle light scattering, and a method for the characterization of aggregates with high reproducibility and low limit of detection was demonstrated using an avidin derivative as a model sample.
de Paula, Joelma Abadia Marciano; Brito, Lucas Ferreira; Caetano, Karen Lorena Ferreira Neves; de Morais Rodrigues, Mariana Cristina; Borges, Leonardo Luiz; da Conceição, Edemilson Cardoso
2016-01-01
Azadirachta indica A. Juss., also known as neem, is a Meliaceae family tree from India. It is globally known for the insecticidal properties of its limonoid tetranortriterpenoid derivatives, such as azadirachtin. This work aimed to optimize the azadirachtin ultrasound-assisted extraction (UAE) and validate the HPLC-PDA analytical method for the measurement of this marker in neem dried fruit extracts. Box-Behnken design and response surface methodology (RSM) were used to investigate the effect of process variables on the UAE. Three independent variables, including ethanol concentration (%, w/w), temperature (°C), and material-to-solvent ratio (g mL(-1)), were studied. The azadirachtin content (µg mL(-1)), i.e., the dependent variable, was quantified by the HPLC-PDA analytical method. Isocratic reversed-phase chromatography was performed using acetonitrile/water (40:60), a flow of 1.0 mL min(-1), detection at 214 nm, and a C18 column (250 × 4.6 mm, 5 µm). The primary validation parameters were determined according to ICH guidelines and Brazilian legislation. The results demonstrated that the optimal UAE condition was obtained with an ethanol concentration range of 75-80% (w/w), a temperature of 30 °C, and a material-to-solvent ratio of 0.55 g mL(-1). The HPLC-PDA analytical method proved to be simple, selective, linear, precise, accurate and robust. The experimental values of azadirachtin content under optimal UAE conditions were in good agreement with the RSM-predicted values and were superior to the azadirachtin content of the percolated extract. Such findings suggest that UAE is a more efficient extractive process in addition to being simple, fast, and inexpensive. Copyright © 2015 Elsevier B.V. All rights reserved.
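A three-factor Box-Behnken design like the one used above consists of the twelve edge midpoints of the factor cube plus centre replicates; a sketch in coded units (the count of three centre points is an assumption, since the abstract does not give it):

```python
import itertools
import numpy as np

# Coded levels (-1, 0, +1) for ethanol %, temperature, material:solvent ratio.
k = 3
runs = []
# Edge midpoints: every pair of factors at +/-1, the remaining factor at 0.
for i, j in itertools.combinations(range(k), 2):
    for a, b in itertools.product([-1.0, 1.0], repeat=2):
        point = [0.0] * k
        point[i], point[j] = a, b
        runs.append(point)
runs.extend([[0.0] * k] * 3)  # centre replicates
design = np.array(runs)       # 15 runs in total
```

A quadratic response-surface model fitted to the azadirachtin content over these 15 runs is what yields the reported optimum (75-80% ethanol, 30 °C, 0.55 g mL(-1)).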
Groves, Ethan; Palenik, Skip; Palenik, Christopher S
2018-04-18
While color is arguably the most important optical property of evidential fibers, the actual dyestuffs responsible for its expression in them are, in forensic trace evidence examinations, rarely analyzed and still less often identified. This is due, primarily, to the exceedingly small quantities of dye present in a single fiber as well as to the fact that dye identification is a challenging analytical problem, even when large quantities are available for analysis. Among the practical reasons for this are the wide range of dyestuffs available (and the even larger number of trade names), the low total concentration of dyes in the finished product, the limited amount of sample typically available for analysis in forensic cases, and the complexity of the dye mixtures that may exist within a single fiber. Literature on the topic of dye analysis is often limited to a specific method, subset of dyestuffs, or an approach that is not applicable given the constraints of a forensic analysis. Here, we present a generalized approach to dye identification that (1) combines several robust analytical methods, (2) is broadly applicable to a wide range of dye chemistries, application classes, and fiber types, and (3) can be scaled down to forensic casework-sized samples. The approach is based on the development of a reference collection of 300 commercially relevant textile dyes that have been characterized by a variety of microanalytical methods (HPTLC, Raman microspectroscopy, infrared microspectroscopy, UV-Vis spectroscopy, and visible microspectrophotometry). Although there is no single approach that is applicable to all dyes on every type of fiber, a combination of these analytical methods has been applied using a reproducible approach that permits the use of reference libraries to constrain the identity of and, in many cases, identify the dye (or dyes) present in a textile fiber sample.
Borowczyk, Kamila; Wyszczelska-Rokiel, Monika; Kubalczyk, Paweł; Głowacki, Rafał
2015-02-15
In this paper, we describe a simple and robust HPLC based method for determination of total low- and high-molecular-mass thiols, protein S-linked thiols and reduced albumin in plasma. The method is based on derivatization of analytes with 2-chloro-1-methylquinolinium tetrafluoroborate, separation and quantification by reversed-phase liquid chromatography followed by UV detection. Disulfides were converted to their thiol counterparts by reductive cleavage with tris(2-carboxyethyl)phosphine. Linearity in detector response for total thiols was observed over the range of 1-40 μmol L(-1) for Hcy and glutathione (GSH), 5-100 μmol L(-1) for Cys-Gly, 20-300 μmol L(-1) for Cys and 3.1-37.5 μmol L(-1) (0.2-2.4 g L(-1)) for human serum albumin (HSA). For the protein S-bound forms these values were as follows: 0.5-30 μmol L(-1) for Hcy and GSH, 2.5-60 μmol L(-1) for Cys-Gly and 5-200 μmol L(-1) for Cys. The LOQs for total HSA, Cys, Hcy, Cys-Gly and GSH were 0.5, 0.2, 0.4, 0.3 and 0.4 μmol L(-1), respectively. The estimated validation parameters for all analytes are more than sufficient to allow the analytical method to be used for monitoring of the total and protein bound thiols as well as redox status of HSA in plasma. Copyright © 2015 Elsevier B.V. All rights reserved.
Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Dalton, Angela C.; Dale, Crystal
2014-06-01
Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.
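The coupled-domain idea can be illustrated with a toy Monte Carlo model in which a shared maturity parameter feeds the performance model, which in turn feeds profitability; every distribution, coefficient and dollar figure below is invented for illustration and is not from the paper's model set:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # Monte Carlo draws

# Domain 1: technological maturity, as an illustrative 0..1 score.
maturity = rng.beta(8, 2, n)

# Domain 2: technical performance, degraded at low maturity
# (maturity is the shared, inter-model parameter).
capture_eff = 0.90 * maturity + rng.normal(0, 0.02, n)

# Domain 3: profitability, with revenue scaling on captured CO2.
revenue = 50.0 * capture_eff            # $/unit, illustrative
cost = rng.normal(40.0, 3.0, n)
profit = revenue - cost

# Risk summaries: probability of loss and 5th-percentile profit.
p_loss = float(np.mean(profit < 0))
var5 = float(np.percentile(profit, 5))
```

Because all three domains are sampled jointly, uncertainty in maturity propagates through to the profit distribution instead of being assessed in isolation, which is the point of the coupled analytical capability.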
The use of analytical sedimentation velocity to extract thermodynamic linkage.
Cole, James L; Correia, John J; Stafford, Walter F
2011-11-01
For 25 years, the Gibbs Conference on Biothermodynamics has focused on the use of thermodynamics to extract information about the mechanism and regulation of biological processes. This includes the determination of equilibrium constants for macromolecular interactions by high precision physical measurements. These approaches further reveal thermodynamic linkages to ligand binding events. Analytical ultracentrifugation has been a fundamental technique in the determination of macromolecular reaction stoichiometry and energetics for 85 years. This approach is highly amenable to the extraction of thermodynamic couplings to small molecule binding in the overall reaction pathway. In the 1980s this approach was extended to the use of sedimentation velocity techniques, primarily by the analysis of tubulin-drug interactions by Na and Timasheff. This transport method necessarily incorporates the complexity of both hydrodynamic and thermodynamic nonideality. The advent of modern computational methods in the last 20 years has subsequently made the analysis of sedimentation velocity data for interacting systems more robust and rigorous. Here we review three examples where sedimentation velocity has been useful at extracting thermodynamic information about reaction stoichiometry and energetics. Approaches to extract linkage to small molecule binding and the influence of hydrodynamic nonideality are emphasized. These methods are shown to also apply to the collection of fluorescence data with the new Aviv FDS. Copyright © 2011 Elsevier B.V. All rights reserved.
Watson, Douglas S; Kerchner, Kristi R; Gant, Sean S; Pedersen, Joseph W; Hamburger, James B; Ortigosa, Allison D; Potgieter, Thomas I
2016-01-01
Tangential flow microfiltration (MF) is a cost-effective and robust bioprocess separation technique, but successful full scale implementation is hindered by the empirical, trial-and-error nature of scale-up. We present an integrated approach leveraging at-line process analytical technology (PAT) and mass balance based modeling to de-risk MF scale-up. Chromatography-based PAT was employed to improve the consistency of an MF step that had been a bottleneck in the process used to manufacture a therapeutic protein. A 10-min reverse phase ultra high performance liquid chromatography (RP-UPLC) assay was developed to provide at-line monitoring of protein concentration. The method was successfully validated and method performance was comparable to previously validated methods. The PAT tool revealed areas of divergence from a mass balance-based model, highlighting specific opportunities for process improvement. Adjustment of appropriate process controls led to improved operability and significantly increased yield, providing a successful example of PAT deployment in the downstream purification of a therapeutic protein. The general approach presented here should be broadly applicable to reduce risk during scale-up of filtration processes and should be suitable for feed-forward and feed-back process control. © 2015 American Institute of Chemical Engineers.
Bonfilio, Rudy; Tarley, César Ricardo Teixeira; Pereira, Gislaine Ribeiro; Salgado, Hérida Regina Nunes; de Araújo, Magali Benjamim
2009-11-15
This paper describes the optimization and validation of an analytical methodology for the determination of losartan potassium in capsules by HPLC using 2^(5-1) fractional factorial and Doehlert designs. This multivariate approach allows a considerable improvement in chromatographic performance using fewer experiments, without additional cost for columns or other equipment. The HPLC method utilized potassium phosphate buffer (pH 6.2; 58 mmol L(-1))-acetonitrile (65:35, v/v) as the mobile phase, pumped at a flow rate of 1.0 mL min(-1). An octylsilane column (100 mm × 4.6 mm i.d., 5 µm) maintained at 35 °C was used as the stationary phase. UV detection was performed at 254 nm. The method was validated according to the ICH guidelines, showing accuracy, precision (intra-day relative standard deviation (R.S.D.) and inter-day R.S.D. values <2.0%), selectivity, robustness and linearity (r=0.9998) over a concentration range from 30 to 70 mg L(-1) of losartan potassium. The limits of detection and quantification were 0.114 and 0.420 mg L(-1), respectively. The validated method may be used to quantify losartan potassium in capsules and to determine the stability of this drug.
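A 2^(5-1) half-fraction like the one used here can be built from a four-factor full factorial plus one generated column. The generator E = ABCD (defining relation I = ABCDE, resolution V) is a common choice but is an assumption, since the abstract does not state which generator was used:

```python
import itertools
import numpy as np

# Base design: full 2^4 factorial in coded units for factors A..D.
base = np.array(list(itertools.product([-1, 1], repeat=4)))

# Fifth factor generated as E = ABCD, giving the half-fraction.
e = base.prod(axis=1, keepdims=True)
design = np.hstack([base, e])  # 16 runs, 5 factors
```

Every run satisfies the defining relation (the product of all five columns is +1), and each column is balanced, which is what makes the 16-run screen estimable.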
Guale, Fessessework; Shahreza, Shahriar; Walterscheid, Jeffrey P.; Chen, Hsin-Hung; Arndt, Crystal; Kelly, Anna T.; Mozayani, Ashraf
2013-01-01
Liquid chromatography time-of-flight mass spectrometry (LC–TOF-MS) analysis provides an expansive technique for identifying many known and unknown analytes. This study developed a screening method that utilizes automated solid-phase extraction to purify a wide array of analytes involving stimulants, benzodiazepines, opiates, muscle relaxants, hypnotics, antihistamines, antidepressants and newer synthetic “Spice/K2” cannabinoids and cathinone “bath salt” designer drugs. The extract was applied to LC–TOF-MS analysis, implementing a 13 min chromatography gradient with mobile phases of ammonium formate and methanol using positive mode electrospray. Several common drugs and metabolites can share the same mass and chemical formula among unrelated compounds, but they are structurally different. In this method, the LC–TOF-MS was able to resolve many isobaric compounds by accurate mass correlation within 15 ppm mass units and a narrow retention time interval of less than 10 s of separation. Drug recovery yields varied among spiked compounds, but resulted in overall robust area counts to deliver an average match score of 86 when compared to the retention time and mass of authentic standards. In summary, this method represents a rapid, enhanced screen for blood and urine specimens in postmortem, driving under the influence, and drug facilitated sexual assault forensic toxicology casework. PMID:23118149
Nonlinear estimation for arrays of chemical sensors
NASA Astrophysics Data System (ADS)
Yosinski, Jason; Paffenroth, Randy
2010-04-01
Reliable detection of hazardous materials is a fundamental requirement of any national security program. Such materials can take a wide range of forms including metals, radioisotopes, volatile organic compounds, and biological contaminants. In particular, detection of hazardous materials in highly challenging conditions - such as in cluttered ambient environments, where complex collections of analytes are present, and with sensors lacking specificity for the analytes of interest - is an important part of a robust security infrastructure. Sophisticated single sensor systems provide good specificity for a limited set of analytes but often have cumbersome hardware and environmental requirements. On the other hand, simple, broadly responsive sensors are easily fabricated and efficiently deployed, but such sensors individually have neither the specificity nor the selectivity to address analyte differentiation in challenging environments. However, arrays of broadly responsive sensors can provide much of the sensitivity and selectivity of sophisticated sensors but without the substantial hardware overhead. Unfortunately, arrays of simple sensors are not without their challenges - the selectivity of such arrays can only be realized if the data is first distilled using highly advanced signal processing algorithms. In this paper we will demonstrate how the use of powerful estimation algorithms, based on those commonly used within the target tracking community, can be extended to the chemical detection arena. Herein our focus is on algorithms that not only provide accurate estimates of the mixture of analytes in a sample, but also provide robust measures of ambiguity, such as covariances.
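The estimation step can be sketched as a linear inverse problem: with a sensitivity matrix for an array of broadly responsive sensors, a least-squares estimate recovers the analyte mixture, and its covariance supplies the "robust measure of ambiguity" the paper calls for. The matrix, concentrations and noise level below are made up for illustration:

```python
import numpy as np

# Illustrative response matrix: 6 broadly responsive sensors x 3 analytes;
# entry (i, j) is the sensitivity of sensor i to analyte j.
A = np.array([
    [1.0, 0.4, 0.2],
    [0.3, 1.1, 0.5],
    [0.2, 0.3, 0.9],
    [0.8, 0.8, 0.1],
    [0.1, 0.6, 0.7],
    [0.5, 0.2, 1.0],
])
x_true = np.array([0.5, 1.5, 1.0])      # analyte concentrations
sigma = 0.05                            # sensor noise std. dev.
rng = np.random.default_rng(2)
y = A @ x_true + rng.normal(0, sigma, A.shape[0])

# Least-squares mixture estimate and its covariance; the covariance is
# the ambiguity measure an operator (or tracker) can act on.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
cov = sigma**2 * np.linalg.inv(A.T @ A)
```

No single sensor here is specific to any analyte, yet the array jointly identifies the mixture; a recursive (Kalman-style) version of the same update is what connects this to the target-tracking algorithms the paper invokes.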
da Rosa, Hemerson S; Koetz, Mariana; Santos, Marí Castro; Jandrey, Elisa Helena Farias; Folmer, Vanderlei; Henriques, Amélia Teresinha; Mendez, Andreas Sebastian Loureiro
2018-04-01
Sida tuberculata (ST) is a Malvaceae species widely distributed in Southern Brazil. In traditional medicine, ST has been employed as hypoglycemic, hypocholesterolemic, anti-inflammatory and antimicrobial. Additionally, this species is chemically characterized by flavonoids, alkaloids and phytoecdysteroids mainly. The present work aimed to optimize the extractive technique and to validate an UHPLC method for the determination of 20-hydroxyecdsone (20HE) in the ST leaves. Box-Behnken Design (BBD) was used in method optimization. The extractive methods tested were: static and dynamic maceration, ultrasound, ultra-turrax and reflux. In the Box-Behnken three parameters were evaluated in three levels (-1, 0, +1), particle size, time and plant:solvent ratio. In validation method, the parameters of selectivity, specificity, linearity, limits of detection and quantification (LOD, LOQ), precision, accuracy and robustness were evaluated. The results indicate static maceration as better technique to obtain 20HE peak area in ST extract. The optimal extraction from surface response methodology was achieved with the parameters granulometry of 710 nm, 9 days of maceration and plant:solvent ratio 1:54 (w/v). The UHPLC-PDA analytical developed method showed full viability of performance, proving to be selective, linear, precise, accurate and robust for 20HE detection in ST leaves. The average content of 20HE was 0.56% per dry extract. Thus, the optimization of extractive method in ST leaves increased the concentration of 20HE in crude extract, and a reliable method was successfully developed according to validation requirements and in agreement with current legislation. Copyright © 2018 Elsevier Inc. All rights reserved.
Rackiewicz, Michal; Große-Hovest, Ludger; Alpert, Andrew J; Zarei, Mostafa; Dengjel, Jörn
2017-06-02
Hydrophobic interaction chromatography (HIC) is a robust standard analytical method to purify proteins while preserving their biological activity. It is widely used to study post-translational modifications of proteins and drug-protein interactions. In the current manuscript we employed HIC to separate proteins, followed by bottom-up LC-MS/MS experiments. We used this approach to fractionate antibody species followed by comprehensive peptide mapping as well as to study protein complexes in human cells. HIC-reversed-phase chromatography (RPC)-mass spectrometry (MS) is a powerful alternative to fractionate proteins for bottom-up proteomics experiments making use of their distinct hydrophobic properties.
An exact solution for ideal dam-break floods on steep slopes
Ancey, C.; Iverson, R.M.; Rentschler, M.; Denlinger, R.P.
2008-01-01
The shallow-water equations are used to model the flow resulting from the sudden release of a finite volume of frictionless, incompressible fluid down a uniform slope of arbitrary inclination. The hodograph transformation and Riemann's method make it possible to transform the governing equations into a linear system and then deduce an exact analytical solution expressed in terms of readily evaluated integrals. Although the solution treats an idealized case never strictly realized in nature, it is uniquely well-suited for testing the robustness and accuracy of numerical models used to model shallow-water flows on steep slopes. Copyright 2008 by the American Geophysical Union.
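For the special case of a horizontal, frictionless bed, the exact dam-break solution reduces to the classical Ritter profile, itself a standard benchmark for shallow-water codes; a sketch of that simpler case (the sloped-bed solution derived in the paper is more involved):

```python
import numpy as np

def ritter_depth(x, t, h0=1.0, g=9.81):
    """Classical Ritter dam-break depth on a horizontal, frictionless bed.

    Dam at x = 0 at t = 0, still water of depth h0 upstream, dry bed
    downstream. The rarefaction tail sits at x = -c0*t and the wave
    front at x = 2*c0*t, with c0 = sqrt(g*h0).
    """
    c0 = np.sqrt(g * h0)
    h = np.where(x <= -c0 * t, h0, (2.0 * c0 - x / t) ** 2 / (9.0 * g))
    return np.where(x >= 2.0 * c0 * t, 0.0, h)

x = np.linspace(-5.0, 10.0, 7)
h = ritter_depth(x, t=1.0)
```

At the dam site (x = 0) the depth is the constant 4*h0/9 for all t > 0, a convenient spot check when validating a numerical solver against the exact profile.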
Supercritical fluid chromatography: a promising alternative to current bioanalytical techniques.
Dispas, Amandine; Jambo, Hugues; André, Sébastien; Tyteca, Eva; Hubert, Philippe
2018-01-01
In recent years, chemistry has joined the worldwide effort to address environmental problems, leading to the birth of green chemistry. In this context, green analytical tools such as modern supercritical fluid chromatography have been developed in the field of separation techniques. This chromatographic technique has undergone a resurgence thanks to the high efficiency, speed and robustness of new-generation equipment. These advantages, together with its easy hyphenation to MS, fulfill the requirements of bioanalysis regarding separation capacity and high throughput. In the present paper, the technical aspects relevant to bioanalysis are detailed, followed by a critical review of the bioanalytical supercritical fluid chromatography methods published in the literature.
Identification of novel peptides for horse meat speciation in highly processed foodstuffs.
Claydon, Amy J; Grundy, Helen H; Charlton, Adrian J; Romero, M Rosario
2015-01-01
There is a need for robust analytical methods to support enforcement of food labelling legislation. Proteomics is emerging as a complementary methodology to existing tools such as DNA and antibody-based techniques. Here we describe the development of a proteomics strategy for the determination of meat species in highly processed foods. A database of specific peptides for nine relevant animal species was used to enable semi-targeted species determination. This principle was tested for horse meat speciation, and a range of horse-specific peptides were identified as heat stable marker peptides for the detection of low levels of horse meat in mixtures with other species.
Quantile regression in the presence of monotone missingness with sensitivity analysis
Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.
2016-01-01
In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis which is an essential component in inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008
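The quantile regression parameters discussed above are defined as minimisers of the check (pinball) loss; a toy numpy sketch, on simulated data, verifying that this loss is minimised at the sample quantile:

```python
import numpy as np

def pinball_loss(u, tau):
    """Check (pinball) loss; its expected value is minimised at the tau-quantile."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(3)
y = rng.normal(10.0, 2.0, 5000)
tau = 0.9

# Grid search for the value q minimising the mean pinball loss.
grid = np.linspace(y.min(), y.max(), 2001)
losses = [pinball_loss(y - q, tau).mean() for q in grid]
q_hat = grid[int(np.argmin(losses))]
```

In the longitudinal setting of the paper the same loss is embedded in a pattern mixture likelihood over the missingness patterns, but the estimand for each tau is this quantile.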
Sotelo, Julio; Urbina, Jesús; Valverde, Israel; Mura, Joaquín; Tejos, Cristián; Irarrazaval, Pablo; Andia, Marcelo E; Hurtado, Daniel E; Uribe, Sergio
2018-01-01
We propose a 3D finite-element method for the quantification of vorticity and helicity density from 3D cine phase-contrast (PC) MRI. By using a 3D finite-element method, we seamlessly estimate velocity gradients in 3D. The robustness and convergence were analyzed using a combined Poiseuille and Lamb-Oseen equation. A computational fluid dynamics simulation was used to compare our method with others available in the literature. Additionally, we computed 3D maps for different 3D cine PC-MRI data sets: phantoms without and with coarctation, 18 healthy volunteers and 3 patients. We found good agreement between our method and the analytical solution of the combined Poiseuille and Lamb-Oseen equation. The computational fluid dynamics results showed that our method outperforms current approaches for estimating vorticity and helicity values. In the in silico model, we observed that for a tetrahedral element of 2 mm characteristic length, we underestimated the vorticity by less than 5% with respect to the analytical solution. In patients, we found higher values of helicity density in comparison to healthy volunteers, associated with vortices in the lumen of the vessels. We propose a novel method that provides entire 3D vorticity and helicity density maps, avoiding the use of reformatted 2D planes from 3D cine PC-MRI. Magn Reson Med 79:541-553, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
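The Lamb-Oseen benchmark used for the convergence study can be reproduced with a simple finite-difference check; a 2D sketch with illustrative circulation and core radius, using second-order central differences in place of the paper's finite-element gradient estimator:

```python
import numpy as np

# Lamb-Oseen vortex: analytic velocity and vorticity fields.
gamma, rc = 1.0, 1.0                       # circulation, core radius (illustrative)
x = np.linspace(-3.0, 3.0, 120)            # even point count avoids r = 0
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

# v_theta / r, smooth everywhere off the axis; velocities u = -y*(v_theta/r), v = x*(v_theta/r).
v_theta_over_r = gamma / (2.0 * np.pi * r2) * (1.0 - np.exp(-r2 / rc**2))
u = -v_theta_over_r * Y
v = v_theta_over_r * X

# Numerical vorticity omega = dv/dx - du/dy vs the analytic value.
dx = x[1] - x[0]
omega_num = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)
omega_exact = gamma / (np.pi * rc**2) * np.exp(-r2 / rc**2)
err = np.abs(omega_num - omega_exact)[1:-1, 1:-1].max()
```

The interior error shrinks with the grid spacing squared, which is the kind of convergence behaviour the paper quantifies for its tetrahedral elements.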
Assessment of active methods for removal of LEO debris
NASA Astrophysics Data System (ADS)
Hakima, Houman; Emami, M. Reza
2018-03-01
This paper investigates the applicability of five active methods for removal of large low Earth orbit debris. The removal methods, namely net, laser, electrodynamic tether, ion beam shepherd, and robotic arm, are selected based on a set of high-level space mission constraints. Mission-level criteria are then utilized to assess the performance of each redirection method in light of the results obtained from a Monte Carlo simulation. The simulation provides an insight into the removal time, performance robustness, and propellant mass criteria for the targeted debris range. The remaining attributes are quantified based on the models provided in the literature, which take into account several important parameters pertaining to each removal method. The means of assigning attributes to each assessment criterion is discussed in detail. A systematic comparison is performed using two different assessment schemes: the Analytical Hierarchy Process and a utility-based approach. A third assessment technique, namely potential-loss analysis, is utilized to highlight the effect of risks in each removal method.
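The Analytical Hierarchy Process step reduces to a principal-eigenvector computation on a pairwise comparison matrix. The matrix entries below are illustrative only, not the paper's assessments of the five removal methods.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three candidate methods.
A = np.array([
    [1.0,     3.0, 0.5 ],
    [1.0/3.0, 1.0, 0.25],
    [2.0,     4.0, 1.0 ],
])

# AHP priority weights: principal right eigenvector, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Saaty consistency ratio (random index RI = 0.58 for a 3x3 matrix);
# judgments are conventionally acceptable when CR < 0.1.
lam = vals.real[k]
cr = ((lam - 3.0) / (3.0 - 1.0)) / 0.58
```

Here the third method receives the largest weight, and the consistency ratio confirms the illustrative judgments are close enough to transitive to be usable.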
Sert, Şenol
2013-07-01
A comparison of methods for the determination (without sample pre-concentration) of uranium in ore by inductively coupled plasma optical emission spectrometry (ICP-OES) has been performed. The experiments were conducted using three procedures: matrix matching, plasma optimization, and internal standardization for three emission lines of uranium. Three wavelengths of Sm were tested as internal standard for the internal standardization method. The robust conditions were evaluated using applied radiofrequency power, nebulizer argon gas flow rate, and sample uptake flow rate by considering the intensity ratio of the Mg(II) 280.270 nm and Mg(I) 285.213 nm lines. Analytical characterization of the method was assessed by limit of detection and relative standard deviation values. The certified reference soil sample IAEA S-8 was analyzed, and the uranium determination at 367.007 nm with internal standardization using Sm at 359.260 nm has been shown to improve accuracy compared with the other methods. The developed method was used for real uranium ore sample analysis.
Xia, Yidong; Luo, Hong; Frisbey, Megan; ...
2014-07-01
A set of implicit methods is proposed for a third-order hierarchical WENO reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids. An attractive feature of these methods is the application of the Jacobian matrix based on the P1 element approximation, resulting in a substantial reduction of memory requirements compared with DG(P2). Also, three approaches -- analytical derivation, divided differencing, and automatic differentiation (AD) -- are presented to construct the Jacobian matrix, of which the AD approach shows the best robustness. A variety of compressible flow problems are computed to demonstrate the fast convergence property of the implemented flow solver. Furthermore, an SPMD (single program, multiple data) programming paradigm based on MPI is proposed to achieve parallelism. The numerical results on complex geometries indicate that this low-storage implicit method can provide a viable and attractive DG solution for complicated flows of practical importance.
A simple method for the enrichment of bisphenols using boron nitride.
Fischnaller, Martin; Bakry, Rania; Bonn, Günther K
2016-03-01
A simple solid-phase extraction method for the enrichment of 5 bisphenol derivatives using hexagonal boron nitride (BN) was developed. BN was applied to concentrate bisphenol derivatives in spiked water samples and the compounds were analyzed using HPLC coupled to fluorescence detection. The effect of pH and organic solvents on the extraction efficiency was investigated. An enrichment factor of up to 100 was achieved without evaporation and reconstitution. The developed method was applied for the determination of bisphenol A migrated from some polycarbonate plastic products. Furthermore, bisphenol derivatives were analyzed in spiked and non-spiked canned food and beverages. None of the analyzed samples exceeded the migration limit set by the European Union of 0.6 mg/kg food. The method showed good recovery rates ranging from 80% to 110%. Validation of the method was performed in terms of accuracy and precision. The applied method is robust, fast, efficient and easily adaptable to different analytical problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nonlinear dynamics of mini-satellite respinup by weak internal controllable torques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somov, Yevgeny, E-mail: e-somov@mail.ru
Contemporary space engineering has posed a new problem for theoretical mechanics and motion control theory: the directed respinup of a spacecraft by weak, restricted internal control forces. The paper presents some results on this problem, which is highly relevant to the energy supply of information mini-satellites (for communication, geodesy, and radio- and opto-electronic observation of the Earth, among others) with electro-reaction plasma thrusters and a gyro moment cluster based on reaction wheels or control moment gyros. The solution achieved is based on methods for the synthesis of nonlinear robust control and on a rigorous analytical proof of the required spacecraft rotation stability by the Lyapunov function method. These results were verified by a computer simulation of strongly nonlinear oscillatory processes during respinup of a flexible spacecraft.
Chen, Feng; Hu, Zhe-Yi; Laizure, S Casey; Hudson, Joanna Q
2017-03-01
Optimal dosing of antibiotics in critically ill patients is complicated by the development of resistant organisms requiring treatment with multiple antibiotics and alterations in systemic exposure due to diseases and extracorporeal drug removal. Developing guidelines for optimal antibiotic dosing is an important therapeutic goal requiring robust analytical methods to simultaneously measure multiple antibiotics. An LC-MS/MS assay using protein precipitation for cleanup followed by a 6-min gradient separation was developed to simultaneously determine five antibiotics in human plasma. The precision and accuracy were within the 15% acceptance range. The formic acid concentration was an important determinant of signal intensity, peak shape and matrix effects. The method was designed to be simple and successfully applied to a clinical pharmacokinetic study.
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
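The idea of exact gradients through a matrix exponential can be illustrated with SciPy's Fréchet derivative of `expm`. This is a generic sketch on a random matrix; the paper's Lie-algebra derivation for articulated-figure dynamics is not reproduced.

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(4)
A = rng.normal(size=(4, 4))   # state matrix (illustrative)
E = rng.normal(size=(4, 4))   # perturbation direction (illustrative)

# Exact directional derivative of expm at A in direction E.
F = expm_frechet(A, E, compute_expm=False)

# First-order finite-difference check of the same derivative.
h = 1e-6
F_fd = (expm(A + h * E) - expm(A)) / h
```

The analytical Fréchet derivative agrees with the finite-difference estimate to within the expected O(h) truncation error, which is the kind of exact gradient information that keeps a quasi-Newton optimizer well conditioned.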
Application of the boundary element method to the micromechanical analysis of composite materials
NASA Technical Reports Server (NTRS)
Goldberg, R. K.; Hopkins, D. A.
1995-01-01
A new boundary element formulation for the micromechanical analysis of composite materials is presented in this study. A unique feature of the formulation is the use of circular shape functions to convert the two-dimensional integrations of the composite fibers to one-dimensional integrations. To demonstrate the applicability of the formulations, several example problems including elastic and thermal analyses of laminated composites and elastic analyses of woven composites are presented, and the boundary element results are compared to experimental observations and/or results obtained through alternate analytical procedures. While several issues remain to be addressed in order to make the methodology more robust, the formulations presented here show potential for providing an alternative to traditional finite element methods, particularly for complex composite architectures.
Method and system to perform energy-extraction based active noise control
NASA Technical Reports Server (NTRS)
Kelkar, Atul (Inventor); Joshi, Suresh M. (Inventor)
2009-01-01
A method to provide active noise control to reduce noise and vibration in reverberant acoustic enclosures such as aircraft, vehicles, appliances, instruments, industrial equipment and the like is presented. A continuous-time multi-input multi-output (MIMO) state space mathematical model of the plant is obtained via analytical modeling and system identification. Compensation is designed to render the mathematical model passive in the sense of mathematical system theory. The compensated system is checked to ensure robustness of the passive property of the plant. The check ensures that the passivity is preserved if the mathematical model parameters are perturbed from nominal values. A passivity-based controller is designed and verified using numerical simulations and then tested. The controller is designed so that the resulting closed-loop response shows the desired noise reduction.
Optical asymmetric cryptography based on amplitude reconstruction of elliptically polarized light
NASA Astrophysics Data System (ADS)
Cai, Jianjun; Shen, Xueju; Lei, Ming
2017-11-01
We propose a novel optical asymmetric image encryption method based on amplitude reconstruction of elliptically polarized light, which is free from the silhouette problem. The original image is first analytically separated into two phase-only masks, and then the two masks are encoded into the amplitudes of the orthogonal polarization components of an elliptically polarized light beam. Finally, the elliptically polarized light propagates through a linear polarizer, and the output intensity distribution is recorded by a CCD camera to obtain the ciphertext. The whole encryption procedure can be implemented using commonly available optical elements, and it combines a diffusion process and a confusion process. As a result, the proposed method achieves high robustness against iterative-algorithm-based attacks. Simulation results are presented to prove the validity of the proposed cryptography.
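The separation of an amplitude image into two phase-only masks can be sketched with the textbook decomposition A = |exp(+iθ) + exp(-iθ)|/2 with θ = arccos(A); this is a standard identity, not necessarily the exact analytical separation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.uniform(0.0, 1.0, (64, 64))   # normalized amplitude image

# Split the amplitude into two phase-only masks.
theta = np.arccos(img)
m1 = np.exp(1j * theta)
m2 = np.exp(-1j * theta)

# Coherent superposition of the two masks recovers the amplitude:
# |m1 + m2| / 2 = |2 cos(theta)| / 2 = img  (theta in [0, pi/2]).
recon = np.abs(m1 + m2) / 2.0
```

Each mask alone is a unit-modulus (phase-only) field carrying no amplitude information, which is what makes this style of splitting resistant to silhouette-type attacks on a single mask.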
NASA Astrophysics Data System (ADS)
Setty, Srinivas J.; Cefola, Paul J.; Montenbruck, Oliver; Fiedler, Hauke
2016-05-01
Catalog maintenance for Space Situational Awareness (SSA) demands an accurate and computationally lean orbit propagation and orbit determination technique to cope with the ever increasing number of observed space objects. As an alternative to established numerical and analytical methods, we investigate the accuracy and computational load of the Draper Semi-analytical Satellite Theory (DSST). The standalone version of the DSST was enhanced with additional perturbation models to improve its recovery of short periodic motion. The accuracy of DSST is, for the first time, compared to a numerical propagator with fidelity force models for a comprehensive grid of low, medium, and high altitude orbits with varying eccentricity and different inclinations. Furthermore, the run-time of both propagators is compared as a function of propagation arc, output step size and gravity field order to assess its performance for a full range of relevant use cases. For use in orbit determination, a robust performance of DSST is demonstrated even in the case of sparse observations, which is most sensitive to mismodeled short periodic perturbations. Overall, DSST is shown to exhibit adequate accuracy at favorable computational speed for the full set of orbits that need to be considered in space surveillance. Along with the inherent benefits of a semi-analytical orbit representation, DSST provides an attractive alternative to the more common numerical orbit propagation techniques.
NASA Astrophysics Data System (ADS)
Guerrini, Luca; Rodriguez-Loureiro, Ignacio; Correa-Duarte, Miguel A.; Lee, Yih Hong; Ling, Xing Yi; García de Abajo, F. Javier; Alvarez-Puebla, Ramon A.
2014-06-01
Chemical speciation of heavy metals has become extremely important in environmental and analytical research because of the strong dependence that toxicity, environmental mobility, persistence and bioavailability of these pollutants have on their specific chemical forms. Novel nano-optical-based detection strategies, capable of overcoming the intrinsic limitations of well-established analytic methods for the quantification of total metal ion content, have been reported, but the speciation of different chemical forms has not yet been achieved. Here, we report the first example of a SERS-based sensor for chemical speciation of toxic metal ions in water at trace levels. Specifically, the inorganic Hg2+ and the more toxicologically relevant methylmercury (CH3Hg+) are selected as analytical targets. The sensing platform consists of a self-assembled monolayer of 4-mercaptopyridine (MPY) on highly SERS-active and robust hybrid plasmonic materials formed by a dense layer of interacting gold nanoparticles anchored onto polystyrene microbeads. The co-ordination of Hg2+ and CH3Hg+ to the nitrogen atom of the MPY ring yields characteristic changes in the vibrational SERS spectra of the organic chemoreceptor that can be qualitatively and quantitatively correlated to the presence of the two different mercury forms. Electronic supplementary information (ESI) available: Representative TEM and ESEM images of AuNPs and PS@Au particles. Optical extinction spectra of AuNPs and PS@Au suspensions. SERS spectra of unmodified PS@Au suspension before and after the addition of CH3Hg+. SERS spectra of PS@Au-MPY upon addition of several metal solutions. Detailed SERS study of the MPY response to high concentration of CH3Hg+. See DOI: 10.1039/c4nr01464b
Automated determination of arterial input function for DCE-MRI of the prostate
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep
2011-03-01
Prostate cancer is one of the commonest cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in the literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we analytically compute bounds on GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time domain information, and eliminate the pixels with falsely estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, according to spatial information such as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and solve it using a message passing algorithm to further rule out the weak pixels and optimize the detected AIF. Our method is fully automated without training or a priori setting of parameters. Experimental results on clinical data have shown that our method obtained promising detection accuracy (all detected pixels inside major arteries), and a very good match with expert-traced manual AIFs.
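A gamma variate fit of a single pixel uptake curve might look as follows. The data are synthetic and the parameter bounds generic; the paper's analytically derived bounds and the message-passing selection step are not shown.

```python
import numpy as np
from scipy.optimize import curve_fit

def gvf(t, A, t0, alpha, beta):
    """Gamma variate function, zero before bolus arrival time t0."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

# Synthetic pixel uptake curve (parameter values are illustrative).
t = np.linspace(0.0, 60.0, 120)
rng = np.random.default_rng(0)
signal = gvf(t, 5.0, 8.0, 2.0, 4.0) + rng.normal(0.0, 0.05, t.size)

# Bounded least-squares fit; the box bounds play the role of the paper's
# analytically deduced parameter limits (values here are generic).
popt, _ = curve_fit(gvf, t, signal,
                    p0=[1.0, 5.0, 1.5, 3.0],
                    bounds=([0.0, 0.0, 0.1, 0.1],
                            [50.0, 20.0, 10.0, 20.0]))
```

Pixels whose fitted parameters fall outside physiologically plausible bounds would then be discarded before the global AIF selection.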
Bohm, Detlef A; Stachel, Carolin S; Gowik, Petra
2012-07-01
The presented multi-method was developed for the confirmation of 37 antibiotic substances from the six antibiotic groups: macrolides, lincosamides, quinolones, tetracyclines, pleuromutilines and diamino-pyrimidine derivatives. All substances were analysed simultaneously in a single analytical run with the same procedure, including an extraction with buffer, a clean-up by solid-phase extraction, and the measurement by liquid chromatography tandem mass spectrometry in ESI+ mode. The method was validated on the basis of an in-house validation concept with factorial design by combination of seven factors to check the robustness in a concentration range of 5-50 μg kg(-1). The honeys used were of different types with regard to colour and origin. The values calculated for the validation parameters, namely the decision limit CCα (range 7.5-12.9 μg kg(-1)), the detection capability CCβ (range 9.4-19.9 μg kg(-1)), the within-laboratory reproducibility RSD(wR) (<20%, except for tulathromycin at 23.5% and tylvalosin at 21.4%), the repeatability RSD(r) (<20%, except for tylvalosin at 21.1%), and the recovery (range 92-106%), were acceptable and in agreement with the criteria of Commission Decision 2002/657/EC. The validation results showed that the method was applicable for the residue analysis of antibiotics in honey to substances with and without recommended concentrations, although some changes had been tested during validation to determine the robustness of the method.
Seismic instantaneous frequency extraction based on the SST-MAW
NASA Astrophysics Data System (ADS)
Liu, Naihao; Gao, Jinghuai; Jiang, Xiudi; Zhang, Zhuosheng; Wang, Ping
2018-06-01
The instantaneous frequency (IF) extraction of seismic data has been widely applied to seismic exploration for decades, such as detecting seismic absorption and characterizing depositional thicknesses. Based on complex-trace analysis, the Hilbert transform (HT) can extract the IF directly, a traditional method that is susceptible to noise. In this paper, a robust approach based on the synchrosqueezing transform (SST) is proposed to extract the IF from seismic data. In this process, a novel analytical wavelet is developed and chosen as the basic wavelet, which is called the modified analytical wavelet (MAW) and comes from the three-parameter wavelet. After transforming the seismic signal into a sparse time-frequency domain via the SST taking the MAW (SST-MAW), an adaptive threshold is introduced to improve the noise immunity and accuracy of the IF extraction in a noisy environment. Note that the SST-MAW reconstructs a complex trace to extract the seismic IF. To demonstrate the effectiveness of the proposed method, we apply the SST-MAW to synthetic data and field seismic data. Numerical experiments suggest that the proposed procedure yields higher resolution and better anti-noise performance compared to conventional IF extraction methods based on the HT and the continuous wavelet transform. Moreover, geological features (such as the channels) are well characterized, which is insightful for further oil/gas reservoir identification.
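The baseline HT approach mentioned above can be sketched in a few lines: the unwrapped phase of the analytic signal is differentiated to give the IF. This demonstrates only the traditional method on a synthetic chirp; the SST-MAW itself is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic chirp whose instantaneous frequency sweeps 20 -> 40 Hz.
fs = 500.0
t = np.arange(0.0, 2.0, 1.0 / fs)
f_inst_true = 20.0 + 10.0 * t
phase = 2.0 * np.pi * np.cumsum(f_inst_true) / fs
x = np.cos(phase)

# Analytic signal via the Hilbert transform, then IF from its phase.
z = hilbert(x)
inst_phase = np.unwrap(np.angle(z))
f_inst = np.diff(inst_phase) * fs / (2.0 * np.pi)
```

Away from the record edges the estimate tracks the true sweep closely; adding noise to `x` quickly degrades it, which is the weakness the SST-based method targets.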
Stanley, Shawn M R; Foo, Hsiao Ching
2006-05-19
A rapid, selective and robust direct-injection LC/hybrid tandem MS method has been developed for simultaneous screening of more than 250 basic drugs in the supernatant of enzyme-hydrolysed equine urine. Analytes, trapped using a short HLB extraction column, are refocused and separated on a Sunfire C(18) analytical column using a controlled differential gradient generated by proportional dilution of the first column's eluent with water. Independent data acquisition (IDA) was configured to trigger a sensitive enhanced product ion (EPI) scan when a multiple reaction monitoring (MRM) survey scan signal exceeded the defined criteria. The decision on whether or not to report a sample as a positive result was based upon both the presence of an MRM response within the correct retention time range and a qualitative match between the EPI spectrum obtained and the corresponding reference standard. Ninety-seven percent of the drugs targeted by this method met our detection criteria when spiked into urine at 100 ng/ml; 199 were found at 10 ng/ml, 83 at 1 ng/ml and 4 at 0.1 ng/ml.
A novel control algorithm for interaction between surface waves and a permeable floating structure
NASA Astrophysics Data System (ADS)
Tsai, Pei-Wei; Alsaedi, A.; Hayat, T.; Chen, Cheng-Wu
2016-04-01
An analytical solution is undertaken to describe the wave-induced flow field and the surge motion of a permeable platform structure with fuzzy controllers in an oceanic environment. In the design procedure of the controller, a parallel distributed compensation (PDC) scheme is utilized to construct a global fuzzy logic controller by blending all local state feedback controllers. A stability analysis is carried out for a real structure system by using the Lyapunov method. The corresponding boundary value problems are then incorporated into scattering and radiation problems. They are analytically solved, based on separation of variables, to obtain series solutions in terms of the harmonic incident wave motion and surge motion. The dependence of the wave-induced flow field and its resonant frequency on wave characteristics and structure properties, including platform width, thickness and mass, has thus been drawn with a parametric approach, from which mathematical models are applied for the wave-induced displacement of the surge motion. A nonlinearly inverted pendulum system is employed to demonstrate that the controller tuned by the swarm intelligence method can not only stabilize the nonlinear system but is also robust against external disturbance.
NASA Astrophysics Data System (ADS)
Wang, Xun; Ghidaoui, Mohamed S.
2018-07-01
This paper considers the problem of identifying multiple leaks in a water-filled pipeline based on inverse transient wave theory. The analytical solution to this problem involves nonlinear interaction terms between the various leaks. This paper shows analytically and numerically that these nonlinear terms are of the order of the leak sizes to the power of two and are thus negligible. As a result of this simplification, a maximum likelihood (ML) scheme that identifies leak locations and leak sizes separately is formulated and tested. It is found that the ML estimation scheme is highly efficient and robust with respect to noise. In addition, the ML method is a super-resolution leak localization scheme because its resolvable leak distance (approximately 0.15λmin, where λmin is the minimum wavelength) is below the Nyquist-Shannon sampling theorem limit (0.5λmin). Moreover, the Cramér-Rao lower bound (CRLB) is derived and used to show the efficiency of the ML scheme estimates. The variance of the ML estimator approximates the CRLB, proving that the ML scheme belongs to the class of best unbiased estimators among leak localization methods.
Drewes, J E; Anderson, P; Denslow, N; Olivieri, A; Schlenk, D; Snyder, S A; Maruya, K A
2013-01-01
This study discussed a proposed process to prioritize chemicals for reclaimed water monitoring programs, the selection of analytical methods required for their quantification, the toxicological relevance of chemicals of emerging concern regarding human health, and related issues. Given that thousands of chemicals are potentially present in reclaimed water and that information about those chemicals is rapidly evolving, a transparent, science-based framework was developed to guide the prioritization of which compounds of emerging concern (CECs) should be included in reclaimed water monitoring programs. The recommended framework includes four steps: (1) compile environmental concentrations (e.g., the measured environmental concentration or MEC) of CECs in the source water for reuse projects; (2) develop a monitoring trigger level (MTL) for each of these compounds (or groups thereof) based on toxicological relevance; (3) compare the environmental concentration (e.g., MEC) to the MTL; CECs with a MEC/MTL ratio greater than 1 should be prioritized for monitoring, and compounds with a ratio less than 1 should only be considered if they represent viable treatment process performance indicators; and (4) screen the priority list to ensure that a commercially available robust analytical method exists for that compound.
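Step 3 of the framework reduces to a simple ratio screen. The compound names and values below are hypothetical, purely to illustrate the MEC/MTL comparison:

```python
# Hypothetical MEC and MTL values (same units, e.g. ng/L); illustrative only.
cecs = {
    "compound_a": {"mec": 120.0, "mtl": 50.0},
    "compound_b": {"mec": 10.0,  "mtl": 200.0},
    "compound_c": {"mec": 75.0,  "mtl": 60.0},
}

# Prioritize CECs whose MEC/MTL ratio exceeds 1, highest ratio first.
priority = sorted(
    (name for name, v in cecs.items() if v["mec"] / v["mtl"] > 1.0),
    key=lambda n: cecs[n]["mec"] / cecs[n]["mtl"],
    reverse=True,
)
print(priority)  # ['compound_a', 'compound_c']
```

Compounds falling below the threshold (here, compound_b at a ratio of 0.05) would only re-enter consideration as treatment performance indicators, per the framework.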
Ullrich, Sebastian; Neef, Sylvia K; Schmarr, Hans-Georg
2018-02-01
Low-molecular-weight volatile sulfur compounds such as thiols, sulfides, disulfides as well as thioacetates cause a sulfidic off-flavor in wines even at low concentration levels. The proposed analytical method for quantification of these compounds in wine is based on headspace solid-phase microextraction, followed by gas chromatographic analysis with sulfur-specific detection using a pulsed flame photometric detector. Robust quantification was achieved via a stable isotope dilution assay using commercial and synthesized deuterated isotopic standards. The necessary chromatographic separation of analytes and isotopic standards benefits from the inverse isotope effect realized on an apolar polydimethylsiloxane stationary phase of increased film thickness. Interferences with sulfur-specific detection in wine caused by sulfur dioxide were minimized by addition of propanal. The method provides adequate validation data, with good repeatability and limits of detection and quantification. It suits the requirements of wine quality management, allowing the control of oenological treatments to counteract the eventual formation of excessively high concentrations of such malodorous compounds. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Huang, Qiong; Bu, Tong; Zhang, Wentao; Yan, Lingzhi; Zhang, Mengyue; Yang, Qingfeng; Huang, Lunjie; Yang, Baowei; Hu, Na; Suo, Yourui; Wang, Jianlong; Zhang, Daohong
2018-10-01
Immunochromatographic assays (ICAs) are most frequently used for on-site rapid screening of clenbuterol. To improve sensitivity, a novel probe with bacteria as signal carriers was developed. Bacteria can load a great deal of gold nanoparticles (AuNPs) on their surface, meaning much fewer antibodies are needed to produce clearly visible results, although low concentrations of antibody could also trigger fierce competition between free analyte and the immobilized antigen. Thus, a limited number of antibodies was key to significantly improved sensitivity. Analytical conditions, including bacterial species, coupling method, and concentration, were optimized. The visual detection limit (VDL) for clenbuterol was 0.1 ng/mL, a 20-fold improvement in sensitivity compared with traditional strips. This work has opened up a new route for signal amplification and improved performance of ICAs. Furthermore, inactivated bacteria could also be environment-friendly and robust signal carriers for other biosensors. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kaus, B.; Popov, A.
2015-12-01
The analytical expression for the Jacobian is a key component in achieving fast and robust convergence of the nonlinear Newton-Raphson iterative solver. Accomplishing this task in practice often requires a significant algebraic effort. Therefore it is quite common to use a cheap alternative instead, for example by approximating the Jacobian with a finite difference estimation. Despite its simplicity, this is a relatively fragile and unreliable technique that is sensitive to the scaling of the residual and unknowns, as well as to the perturbation parameter selection. Unfortunately, no universal rule can be applied to provide both a robust scaling and a perturbation. The approach we use here is to derive the analytical Jacobian for the coupled set of momentum, mass, and energy conservation equations together with the elasto-visco-plastic rheology and a marker-in-cell/staggered finite difference method. The software project LaMEM (Lithosphere and Mantle Evolution Model) is primarily developed for the thermo-mechanically coupled modeling of 3D lithospheric deformation. The code is based on a staggered grid finite difference discretization in space, and uses customized scalable solvers from the PETSc library to run efficiently on massively parallel machines (such as IBM Blue Gene/Q). Currently LaMEM relies on the Jacobian-Free Newton-Krylov (JFNK) nonlinear solver, which approximates the Jacobian-vector product using a simple finite difference formula. This approach never requires an assembled Jacobian matrix and uses only the residual computation routine. We use an approximate Jacobian (Picard) matrix to precondition the Krylov solver with Galerkin geometric multigrid. Because of the inherent problems of finite difference Jacobian estimation, this approach doesn't always result in stable convergence.
In this work we present and discuss a matrix-free technique in which the Jacobian-vector product is replaced by analytically-derived expressions and compare results with those obtained with a finite difference approximation of the Jacobian. This project is funded by ERC Starting Grant 258830 and computer facilities were provided by Jülich supercomputer center (Germany).
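The fragility of finite difference Jacobians under poor scaling can be demonstrated on a toy residual, unrelated to LaMEM's equations, with values chosen to exaggerate the scaling problem:

```python
import numpy as np

# Toy residual with badly scaled unknowns (y ~ 1e-8 multiplied by 1e8).
def residual(u):
    x, y = u
    return np.array([x**2 + 1e8 * y - 2.0,
                     x + 1e8 * y**2 - 1.0])

def jac_analytic(u):
    x, y = u
    return np.array([[2.0 * x, 1e8],
                     [1.0,     2e8 * y]])

def jac_fd(u, h):
    """One-sided finite difference Jacobian with perturbation h."""
    J = np.zeros((u.size, u.size))
    r0 = residual(u)
    for j in range(u.size):
        up = u.copy()
        up[j] += h
        J[:, j] = (residual(up) - r0) / h
    return J

u = np.array([1.0, 1e-8])
err = lambda h: np.max(np.abs(jac_fd(u, h) - jac_analytic(u)))
```

A perturbation that is reasonable for the O(1) unknown (h = 1e-4) produces an O(1e4) error in the derivative with respect to the tiny unknown, while h = 1e-10 recovers the analytical Jacobian closely: there is no single h that is safe without per-unknown scaling, which is the motivation for the analytically derived Jacobian-vector product.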
Adjustment of Pesticide Concentrations for Temporal Changes in Analytical Recovery, 1992-2006
Martin, Jeffrey D.; Stone, Wesley W.; Wydoski, Duane S.; Sandstrom, Mark W.
2009-01-01
Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ('spiked' QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as a percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report examines temporal changes in the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as 'pesticides') that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 to 2006 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Temporal changes in pesticide recovery were investigated by calculating robust, locally weighted scatterplot smooths (lowess smooths) for the time series of pesticide recoveries in 5,132 laboratory reagent spikes; 1,234 stream-water matrix spikes; and 863 groundwater matrix spikes. A 10-percent smoothing window was selected to show broad, 6- to 12-month time-scale changes in recovery for most of the 52 pesticides. Temporal patterns in recovery were similar (in phase) for laboratory reagent spikes and for matrix spikes for most pesticides. In-phase temporal changes among spike types support the hypothesis that temporal change in method performance is the primary cause of temporal change in recovery.
Although temporal patterns of recovery were in phase for most pesticides, recovery in matrix spikes was greater than recovery in reagent spikes for nearly every pesticide. Models of recovery based on matrix spikes are deemed more appropriate for adjusting concentrations of pesticides measured in groundwater and stream-water samples than models based on laboratory reagent spikes because (1) matrix spikes are expected to more closely match the matrix of environmental water samples than are reagent spikes and (2) method performance is often matrix dependent, as was shown by higher recovery in matrix spikes for most of the pesticides. Models of recovery, based on lowess smooths of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
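The recovery-adjustment step described above can be sketched in simplified form. This is not the USGS implementation: the smoother below is a tricube-weighted local mean rather than a full robust lowess (no local regression or robustness passes), and the function names are illustrative; the 10-percent smoothing window is passed as `frac`.

```python
import numpy as np

def lowess_smooth(t, y, frac=0.10):
    """Simplified locally weighted smooth (tricube-weighted local mean;
    a full lowess would fit local regressions with robustness passes)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t)
    span = max(int(np.ceil(frac * n)), 2)     # 10-percent smoothing window
    out = np.empty(n)
    for i in range(n):
        d = np.abs(t - t[i])
        idx = np.argsort(d)[:span]            # nearest samples in time
        h = d[idx].max() or 1.0               # local bandwidth
        w = (1.0 - (d[idx] / h) ** 3) ** 3    # tricube weights
        out[i] = np.sum(w * y[idx]) / np.sum(w)
    return out

def adjust_to_full_recovery(measured, recovery_pct):
    """Scale a measured concentration to 100 percent recovery."""
    return np.asarray(measured, float) * 100.0 / np.asarray(recovery_pct, float)
```

For example, a concentration measured while the modeled matrix-spike recovery is 80 percent would be adjusted upward by a factor of 1.25.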
Naveen, P.; Lingaraju, H. B.; Prasad, K. Shyam
2017-01-01
Mangiferin, a polyphenolic xanthone glycoside from Mangifera indica, is used as a traditional medicine for the treatment of numerous diseases. The present study aimed to develop and validate a reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of mangiferin from the bark extract of M. indica. RP-HPLC analysis was performed by isocratic elution with a low-pressure gradient using 0.1% formic acid:acetonitrile (87:13) as the mobile phase at a flow rate of 1.5 ml/min. The separation was carried out at 26°C on a Kinetex XB-C18 column as the stationary phase, with detection at 256 nm. The proposed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness according to the International Conference on Harmonisation guidelines. A correlation coefficient greater than 0.999 indicated good curve fitting and excellent linearity. Intra- and inter-day precision showed < 1% relative standard deviation of peak area, indicating high reliability and reproducibility of the method. Recovery values at three spiked levels (50%, 100%, and 150%) were 100.47%, 100.89%, and 100.99%, respectively, with standard deviations < 1%, demonstrating high accuracy. The results remained unaffected by small variations in the analytical parameters, demonstrating the robustness of the method. Liquid chromatography–mass spectrometry analysis confirmed the presence of mangiferin with an m/z value of 421. The developed HPLC assay is simple, rapid, and reliable for the determination of mangiferin from M. indica. SUMMARY The present study was intended to develop and validate an RP-HPLC method for the quantification of mangiferin from the bark extract of M. indica. 
The developed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness according to the International Conference on Harmonisation guidelines. This study demonstrated that the developed HPLC assay is simple, rapid, and reliable for the quantification of mangiferin from M. indica. Abbreviations Used: M. indica: Mangifera indica, RP-HPLC: Reversed-phase high-performance liquid chromatography, m/z: Mass-to-charge ratio, ICH: International Conference on Harmonisation, % RSD: Percentage relative standard deviation, ppm: Parts per million, LOD: Limit of detection, LOQ: Limit of quantification. PMID:28539748
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies entropy-calculation-based filtering. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
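The DC-DM primitive that this method builds on can be sketched as follows. This is a generic Chen-Wornell-style quantization scheme, not the authors' implementation; `delta`, `alpha`, and the key-derived dither are illustrative assumptions.

```python
import numpy as np

def dcdm_embed(x, bits, delta=8.0, alpha=0.7, seed=1):
    """Distortion-compensated dither modulation: each host sample x[i]
    carries one bit via a dithered quantizer; alpha in (0, 1] trades
    robustness against embedding distortion."""
    rng = np.random.default_rng(seed)
    d = rng.uniform(-delta / 2, delta / 2, size=len(x))  # key-derived dither
    dm = d + np.asarray(bits) * delta / 2                # shift lattice per bit
    q = delta * np.round((x - dm) / delta) + dm          # dithered quantization
    return x + alpha * (q - x), d                        # distortion compensation

def dcdm_decode(y, d, delta=8.0):
    """Minimum-distance decoding against the two dithered lattices."""
    out = []
    for yi, di in zip(np.asarray(y), np.asarray(d)):
        dists = []
        for b in (0, 1):
            dm = di + b * delta / 2
            q = delta * np.round((yi - dm) / delta) + dm
            dists.append(abs(yi - q))
        out.append(int(np.argmin(dists)))
    return np.array(out)
```

With `alpha = 0.7` the residual after compensation stays well inside the decision region, so in the noise-free case the decoder recovers the bits exactly.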
Geochemical Constraints for Mercury's PCA-Derived Geochemical Terranes
NASA Astrophysics Data System (ADS)
Stockstill-Cahill, K. R.; Peplowski, P. N.
2018-05-01
PCA-derived geochemical terranes provide a robust, analytical means of defining these terranes using strictly geochemical inputs. Using the end members derived in this way, we are able to assess the geochemical implications for Mercury.
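A minimal sketch of the PCA step (SVD-based, assuming a matrix whose rows are measurement footprints and whose columns are geochemical inputs; the element ratios named in the comment are illustrative, not the study's exact inputs):

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD: rows are footprints/pixels, columns are geochemical
    inputs (e.g. Mg/Si, Al/Si, S/Si abundance ratios)."""
    Xc = X - X.mean(axis=0)                   # center each channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T         # projection of each footprint
    var_ratio = s ** 2 / np.sum(s ** 2)       # explained-variance fractions
    return scores, Vt[:n_components], var_ratio[:n_components]
```

Terranes could then be delineated by grouping footprints in score space; the end members correspond to extremes along the leading components.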
Ultrasound viscoelasticity assessment using an adaptive torsional shear wave propagation method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouared, Abderrahmane; Kazemirad, Siavash; Montagnon, Emmanuel
2016-04-15
Purpose: Different approaches have been used in dynamic elastography to assess mechanical properties of biological tissues. Most techniques are based on a simple inversion based on the measurement of the shear wave speed to assess elasticity, whereas some recent strategies use more elaborated analytical or finite element method (FEM) models. In this study, a new method is proposed for the quantification of both shear storage and loss moduli of confined lesions, in the context of breast imaging, using adaptive torsional shear waves (ATSWs) generated remotely with radiation pressure. Methods: A FEM model was developed to solve the inverse wave propagation problem and obtain viscoelastic properties of interrogated media. The inverse problem was formulated and solved in the frequency domain and its robustness to noise and geometric constraints was evaluated. The proposed model was validated in vitro with two independent rheology methods on several homogeneous and heterogeneous breast tissue-mimicking phantoms over a broad range of frequencies (up to 400 Hz). Results: Viscoelastic properties matched benchmark rheology methods with discrepancies of 8%–38% for the shear modulus G′ and 9%–67% for the loss modulus G″. The robustness study indicated good estimations of storage and loss moduli (maximum mean errors of 19% on G′ and 32% on G″) for signal-to-noise ratios between 19.5 and 8.5 dB. Larger errors were noticed in the case of biases in lesion dimension and position. Conclusions: The ATSW method revealed that it is possible to estimate the viscoelasticity of biological tissues with torsional shear waves when small biases in lesion geometry exist.
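For contrast with the FEM inversion, the simple speed-based inversion mentioned above reduces to G = ρc². The sketch below also includes a common complex-modulus extension from phase speed and attenuation; this is a textbook relation, not the ATSW model, and the parameter values used in testing are illustrative.

```python
import numpy as np

def shear_modulus(rho, c):
    """Simple elastic inversion from shear wave speed: G = rho * c**2 (Pa)."""
    return rho * c ** 2

def complex_shear_modulus(rho, c, alpha, freq):
    """Viscoelastic estimate from phase speed c (m/s) and attenuation
    alpha (Np/m) at frequency freq (Hz), via the complex wavenumber
    k = omega/c - i*alpha and G* = rho * omega**2 / k**2."""
    omega = 2.0 * np.pi * freq
    k = omega / c - 1j * alpha
    G = rho * omega ** 2 / k ** 2
    return G.real, G.imag   # storage modulus G', loss modulus G''
```

With zero attenuation the complex estimate collapses to the elastic one (G′ = ρc², G″ = 0); nonzero attenuation yields a positive loss modulus.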
Robust Feedback Control of Flow Induced Structural Radiation of Sound
NASA Technical Reports Server (NTRS)
Heatwole, Craig M.; Bernhard, Robert J.; Franchek, Matthew A.
1997-01-01
A significant component of the interior noise of aircraft and automobiles is a result of turbulent boundary layer excitation of the vehicular structure. In this work, active robust feedback control of the noise due to this non-predictable excitation is investigated. Both an analytical model and experimental investigations are used to determine the characteristics of the flow-induced structural sound radiation problem. The problem is shown to be broadband in nature, with large system uncertainties associated with the various operating conditions. Furthermore, the delay associated with sound propagation is shown to restrict the use of microphone feedback. The state-of-the-art control methodologies, μ-synthesis and adaptive feedback control, are evaluated and shown to have limited success for solving this problem. A robust frequency-domain controller design methodology is developed for the problem of sound radiated from turbulent-flow-driven plates. The control design methodology uses frequency-domain sequential loop shaping techniques. System uncertainty, sound pressure level reduction performance, and actuator constraints are included in the design process. Using this design method, phase lag was added using non-minimum-phase zeros such that the beneficial plant dynamics could be used. This general control approach has application to lightly damped vibration and sound radiation problems where there are high-bandwidth control objectives requiring a low controller DC gain and low controller order.
An adaptive robust controller for time delay maglev transportation systems
NASA Astrophysics Data System (ADS)
Milani, Reza Hamidi; Zarabadipour, Hassan; Shahnazi, Reza
2012-12-01
For engineering systems, uncertainties and time delays are two important issues that must be considered in control design. Uncertainties are often encountered in various dynamical systems due to modeling errors, measurement noise, linearization, and approximations. Time delays have always been among the most difficult problems encountered in process control. In practical applications of feedback control, time delay arises frequently and can severely degrade closed-loop system performance, in some cases driving the system to instability. Therefore, stability analysis and controller synthesis for uncertain nonlinear time-delay systems are important both in theory and in practice, and many analytical techniques have been developed using delay-dependent Lyapunov functions. In the past decade, the magnetic levitation (maglev) transportation system, a new system with high functionality, has been the focus of numerous studies. However, maglev transportation systems are highly nonlinear, and thus designing controllers for them is challenging. The main topic of this paper is to design an adaptive robust controller for maglev transportation systems with time delay, parametric uncertainties, and external disturbances. In this paper, an adaptive robust control (ARC) scheme is designed for this purpose. It should be noted that the adaptive gain is derived from the Lyapunov-Krasovskii synthesis method; therefore, asymptotic stability is guaranteed.
NASA Astrophysics Data System (ADS)
Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.
2015-10-01
We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
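The thresholding idea can be sketched as follows. Note that the paper derives the score distribution analytically; this illustration instead estimates the threshold by Monte Carlo under the same additive-noise assumption, and all names and parameter values are illustrative.

```python
import numpy as np

def cosine_similarity(ref, obs):
    """Cosine similarity score between a reference and an observed shape."""
    ref, obs = np.asarray(ref, float), np.asarray(obs, float)
    return ref @ obs / (np.linalg.norm(ref) * np.linalg.norm(obs))

def adaptive_threshold(ref, noise_sigma, n_mc=2000, q=0.01, seed=0):
    """Monte-Carlo stand-in for the analytical score distribution:
    threshold = q-quantile of the similarity between the reference
    and noisy copies of itself at the current noise level."""
    rng = np.random.default_rng(seed)
    sims = [cosine_similarity(ref, ref + rng.normal(0.0, noise_sigma, ref.shape))
            for _ in range(n_mc)]
    return np.quantile(sims, q)

def is_anomalous(ref, obs, thresh):
    """Flag a shape whose similarity falls below the adaptive threshold."""
    return cosine_similarity(ref, obs) < thresh
```

Because the threshold is recomputed from the noise level, the same detector stays calibrated as measurement noise grows, which is the point of the adaptive scheme.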
Kokaly, R.F.; King, T.V.V.; Hoefen, T.M.
2011-01-01
Identifying materials by measuring and analyzing their reflectance spectra has been an important method in analytical chemistry for decades. Airborne and space-based imaging spectrometers allow scientists to detect materials and map their distributions across the landscape. With new satellite-borne hyperspectral sensors planned for the future, for example, HYSPIRI (HYPerspectral InfraRed Imager), robust methods are needed to fully exploit the information content of hyperspectral remote sensing data. A method of identifying and mapping materials using spectral-feature based analysis of reflectance data in an expert-system framework called MICA (Material Identification and Characterization Algorithm) is described in this paper. The core concepts and calculations of MICA are presented. A MICA command file has been developed and applied to map minerals in the full-country coverage of the 2007 Afghanistan HyMap hyperspectral data. © 2011 IEEE.
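A minimal sketch in the spirit of spectral-feature based analysis (continuum removal plus a least-squares fit of an observed absorption feature to a reference); this is a simplified illustration, not the MICA algorithm, and the wavelengths in the test are invented.

```python
import numpy as np

def continuum_removed(wl, refl, left, right):
    """Divide out a straight-line continuum anchored at the feature
    shoulders (wavelengths `left`, `right`)."""
    i0, i1 = np.searchsorted(wl, [left, right])
    seg_wl, seg_r = wl[i0:i1 + 1], refl[i0:i1 + 1]
    cont = np.interp(seg_wl, [seg_wl[0], seg_wl[-1]], [seg_r[0], seg_r[-1]])
    return seg_wl, seg_r / cont

def feature_fit(obs_cr, ref_cr):
    """Least-squares fit of an observed continuum-removed feature to a
    reference; returns (depth scale a, goodness-of-fit R^2)."""
    x, y = 1.0 - np.asarray(ref_cr), 1.0 - np.asarray(obs_cr)
    a = (x @ y) / (x @ x)
    resid = y - a * x
    r2 = 1.0 - resid @ resid / (y @ y) if y @ y > 0 else 0.0
    return a, r2
```

A high R² against one reference and low R² against others is the kind of evidence an expert-system framework can weigh when assigning a material identification.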
Dubascoux, Stephane; Nicolas, Marine; Rime, Celine Fragniere; Payot, Janique Richoz; Poitevin, Eric
2015-01-01
A single-laboratory validation (SLV) is presented for the simultaneous determination of 10 ultratrace elements (UTEs) including aluminum (Al), arsenic (As), cadmium (Cd), cobalt (Co), chromium (Cr), mercury (Hg), molybdenum (Mo), lead (Pb), selenium (Se), and tin (Sn) in infant formulas, adult nutritionals, and milk based products by inductively coupled plasma (ICP)/MS after acidic pressure digestion. This robust and routine multielemental method is based on several official methods with modifications of sample preparation using either microwave digestion or high pressure ashing and of analytical conditions using ICP/MS with collision cell technology. This SLV fulfills AOAC method performance criteria in terms of linearity, specificity, sensitivity, precision, and accuracy and fully answers most international regulation limits for trace contaminants and/or recommended nutrient levels established for 10 UTEs in targeted matrixes.
Wang, Peng; Zheng, Yefeng; John, Matthias; Comaniciu, Dorin
2012-01-01
Dynamic overlay of 3D models onto 2D X-ray images has important applications in image-guided interventions. In this paper, we present a novel catheter tracking method for motion compensation in Transcatheter Aortic Valve Implantation (TAVI). To address challenges such as catheter shape and appearance changes, occlusions, and distractions from cluttered backgrounds, we present an adaptive linear discriminant learning method that builds a measurement model online to distinguish catheters from background. An analytic solution is developed to effectively and efficiently update the discriminant model and to minimize the classification error between the tracked object and the background. The online-learned discriminant model is further combined with an offline-learned detector and robust template matching in a Bayesian tracking framework. Quantitative evaluations demonstrate the advantages of this method over current state-of-the-art tracking methods in tracking catheters for clinical applications.
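The online discriminant idea can be sketched as a running two-class LDA with incrementally updated means and a pooled scatter matrix. This is illustrative only: the paper's measurement model, features, and closed-form update are more elaborate, and the class labels here (0 = background, 1 = catheter) are an assumed convention.

```python
import numpy as np

class OnlineLDA:
    """Running two-class linear discriminant (object vs. background);
    means and a shared scatter are updated per sample so the
    discriminant can track appearance changes."""
    def __init__(self, dim, reg=1e-3):
        self.n = np.zeros(2)
        self.mu = np.zeros((2, dim))
        self.S = np.eye(dim) * reg                     # regularized scatter
    def update(self, x, label):
        x = np.asarray(x, float)
        self.n[label] += 1
        delta = x - self.mu[label]
        self.mu[label] += delta / self.n[label]        # running class mean
        self.S += np.outer(delta, x - self.mu[label])  # Welford-style scatter
    def score(self, x):
        """> 0 suggests 'object', < 0 suggests 'background'."""
        cov = self.S / max(self.n.sum() - 2, 1)        # pooled covariance
        w = np.linalg.solve(cov, self.mu[1] - self.mu[0])
        b = -0.5 * w @ (self.mu[1] + self.mu[0])
        return np.asarray(x, float) @ w + b
```

In a tracking loop, candidate image patches would be scored each frame and the model updated with the winning patch and nearby background samples.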
NASA Astrophysics Data System (ADS)
Rossetti, Cecilia; Świtnicka-Plak, Magdalena A.; Grønhaug Halvorsen, Trine; Cormack, Peter A. G.; Sellergren, Börje; Reubsaet, Léon
2017-03-01
Robust biomarker quantification is essential for the accurate diagnosis of diseases and is of great value in cancer management. In this paper, an innovative diagnostic platform is presented which provides automated molecularly imprinted solid-phase extraction (MISPE) followed by liquid chromatography-mass spectrometry (LC-MS) for biomarker determination using ProGastrin Releasing Peptide (ProGRP), a highly sensitive biomarker for Small Cell Lung Cancer, as a model. Molecularly imprinted polymer microspheres were synthesized by precipitation polymerization and analytical optimization of the most promising material led to the development of an automated quantification method for ProGRP. The method enabled analysis of patient serum samples with elevated ProGRP levels. Particularly low sample volumes were permitted using the automated extraction within a method which was time-efficient, thereby demonstrating the potential of such a strategy in a clinical setting.
NASA Astrophysics Data System (ADS)
Scovazzi, Guglielmo; Wheeler, Mary F.; Mikelić, Andro; Lee, Sanghyun
2017-04-01
The miscible displacement of one fluid by another in a porous medium has received considerable attention in subsurface, environmental and petroleum engineering applications. When a fluid of higher mobility displaces another of lower mobility, unstable patterns - referred to as viscous fingering - may arise. Their physical and mathematical study has been the object of numerous investigations over the past century. The objective of this paper is to present a review of these contributions with particular emphasis on variational methods. These algorithms are tailored to real field applications thanks to their advanced features: handling of general complex geometries, robustness in the presence of rough tensor coefficients, low sensitivity to mesh orientation in advection dominated scenarios, and provable convergence with fully unstructured grids. This paper is dedicated to the memory of Dr. Jim Douglas Jr., for his seminal contributions to miscible displacement and variational numerical methods.
Geyer, Pierre M; Hulme, Matthew C; Irving, Joseph P B; Thompson, Paul D; Ashton, Ryan N; Lee, Robert J; Johnson, Lucy; Marron, Jack; Banks, Craig E; Sutcliffe, Oliver B
2016-11-01
The prevalence of new psychoactive substances (NPSs) in forensic casework has increased prominently in recent years. This has given rise to significant legal and analytical challenges in the identification of these substances. The requirement for validated, robust and rapid testing methodologies for these compounds is obvious. This study details the analysis of 13 synthesised diphenidine derivatives encountered in casework using presumptive testing, thin layer chromatography and gas chromatography-mass spectrometry (GC-MS). Specifically, the validated GC-MS method provides, for the first time, both a general screening method and quantification of the active components for seized solid samples, both in their pure form and in the presence of common adulterants. Graphical Abstract Chemical synthesis and forensic analysis of 13 diphenidine-derived new psychoactive substance(s).
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
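The basis-truncation idea can be illustrated in 1-D with a fixed cosine eigenbasis of the Neumann Laplacian. The actual method adapts its eigenbasis during the optimization; this sketch only shows how truncating to a few smooth eigenfunctions regularizes a noisy parameter estimate without a Tikhonov term.

```python
import numpy as np

def cosine_basis(n, k):
    """First k cosine eigenfunctions of the 1-D Neumann Laplacian on an
    n-point grid; a fixed stand-in for the adapted eigenbasis."""
    x = (np.arange(n) + 0.5) / n
    return np.stack([np.cos(j * np.pi * x) for j in range(k)], axis=1)

def project(profile, k):
    """Least-squares projection onto the truncated basis; the
    truncation itself acts as the regularization."""
    B = cosine_basis(len(profile), k)
    coeffs, *_ = np.linalg.lstsq(B, profile, rcond=None)
    return B @ coeffs
```

Most of the noise lies outside the low-dimensional subspace, so the projection of a noisy profile is far closer to the smooth truth than the raw data.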
Granzotto, Clara; Sutherland, Ken
2017-03-07
This paper reports an improved method for the identification of Acacia gum in cultural heritage samples using matrix assisted laser desorption ionization-time-of-flight mass spectrometry (MALDI-TOF MS) after enzymatic digestion of the polysaccharide component. The analytical strategy was optimized using a reference Acacia gum (gum arabic, sp. A. senegal) and provided an unambiguous MS profile of the gum, characterized by specific and recognized oligosaccharides, from as little as 0.1 μg of material. The enhanced experimental approach with reduced detection limit was successfully applied to the analysis of naturally aged (∼80 year) gum arabic samples, pure and mixed with lead white pigment, and allowed the detection of gum arabic in samples from a late painting (1949/1954) by Georges Braque in the collection of the Art Institute of Chicago. This first application of the technique to characterize microsamples from a painting, in conjunction with analyses by gas chromatography/mass spectrometry (GC/MS), provided important insights into Braque's unusual mixed paint media that are also helpful to inform appropriate conservation treatments for his works. The robustness of the analytical strategy due to the reproducibility of the gum MS profile, even in the presence of other organic and inorganic components, together with the minimal sample size required, demonstrate the value of this new MALDI-TOF MS method as an analytical tool for the identification of gum arabic in microsamples from museum artifacts.
Enhancement of integrated photonic biosensing by magnetic controlled nano-particles
NASA Astrophysics Data System (ADS)
Peserico, N.; Sharma, P. Pratim; Belloni, A.; Damin, F.; Chiari, M.; Bertacco, R.; Melloni, A.
2018-02-01
Integrated Mach-Zehnder interferometers, ring resonators, Bragg reflectors, and simple waveguides are commonly used as photonic biosensing elements. They can be used for label-free detection, relating real-time changes in the optical signal (optical power or spectral response) to the presence, and even the quantity, of a target analyte on the surface of the photonic waveguide. The label-free method has advantages in terms of sample preparation, but it is more sensitive to spurious effects such as temperature and sample refractive index variation, biological noise, etc. Labeled methods can be more robust, more sensitive, and able to manipulate the biological targets. In this work, we present an innovative labeled biosensing technique that exploits magnetic nano-beads to enhance the sensitivity of integrated optical microrings. A sandwich binding brings the magnetic labels close to the surface of the optical waveguide, where they interact with the optical evanescent field. The proximity and quantity of the magnetic nano-beads are observed as a shift in the resonance of the microring. Detection of antibodies reaches a high level of sensitivity, down to 8 pM, with a high confidence level. The nano-beads range in size from 50 to 250 nm. Furthermore, time-varying magnetic fields permit manipulation of the beads and can even induce specific signals on the detected light to ease processing and provide reliable identification of the presence of the desired analyte. Detection of multiple analytes is also possible.
Thomas, Jason M; Chakraborty, Banani; Sen, Dipankar; Yu, Hua-Zhong
2012-08-22
A general approach is described for the de novo design and construction of aptamer-based electrochemical biosensors, for potentially any analyte of interest (ranging from small ligands to biological macromolecules). As a demonstration of the approach, we report the rapid development of a made-to-order electronic sensor for a newly reported early biomarker for lung cancer (CTAP III/NAP2). The steps include the in vitro selection and characterization of DNA aptamer sequences, design and biochemical testing of wholly DNA sensor constructs, and translation to a functional electrode-bound sensor format. The working principle of this distinct class of electronic biosensors is the enhancement of DNA-mediated charge transport in response to analyte binding. We first verify such analyte-responsive charge transport switching in solution, using biochemical methods; successful sensor variants were then immobilized on gold electrodes. We show that using these sensor-modified electrodes, CTAP III/NAP2 can be detected with both high specificity and sensitivity (Kd ≈ 1 nM) through a direct electrochemical reading. To investigate the underlying basis of analyte binding-induced conductivity switching, we carried out Förster Resonance Energy Transfer (FRET) experiments. The FRET data establish that analyte binding-induced conductivity switching in these sensors results from very subtle structural/conformational changes, rather than large scale, global folding events. The implications of this finding are discussed with respect to possible charge transport switching mechanisms in electrode-bound sensors. Overall, the approach we describe here represents a unique design principle for aptamer-based electrochemical sensors; its application should enable rapid, on-demand access to a class of portable biosensors that offer robust, inexpensive, and operationally simplified alternatives to conventional antibody-based immunoassays.
Kling, Maximilian; Seyring, Nicole; Tzanova, Polia
2016-09-01
Economic instruments provide significant potential for countries with low municipal waste management performance in decreasing landfill rates and increasing recycling rates for municipal waste. In this research, strengths and weaknesses of landfill tax, pay-as-you-throw charging systems, deposit-refund systems and extended producer responsibility schemes are compared, focusing on conditions in countries with low waste management performance. In order to prioritise instruments for implementation in these countries, the analytic hierarchy process is applied using results of a literature review as input for the comparison. The assessment reveals that pay-as-you-throw is the most preferable instrument when utility-related criteria are regarded (wb = 0.35; analytic hierarchy process distributive mode; absolute comparison) mainly owing to its waste prevention effect, closely followed by landfill tax (wb = 0.32). Deposit-refund systems (wb = 0.17) and extended producer responsibility (wb = 0.16) rank third and fourth, with marginal differences owing to their similar nature. When cost-related criteria are additionally included in the comparison, landfill tax seems to provide the highest utility-cost ratio. Data from literature concerning cost (contrary to utility-related criteria) is currently not sufficiently available for a robust ranking according to the utility-cost ratio. In general, the analytic hierarchy process is seen as a suitable method for assessing economic instruments in waste management. Independent from the chosen analytic hierarchy process mode, results provide valuable indications for policy-makers on the application of economic instruments, as well as on their specific strengths and weaknesses. Nevertheless, the instruments need to be put in the country-specific context along with the results of this analytic hierarchy process application before practical decisions are made. © The Author(s) 2016.
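The analytic hierarchy process computation behind weights like the reported wb values can be sketched with the standard Saaty eigenvector method. The random-index table is the textbook one; the pairwise comparison matrix in the test is constructed for illustration, not taken from the study's data.

```python
import numpy as np

def ahp_priorities(A):
    """Priority vector = principal right eigenvector of the pairwise
    comparison matrix A, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    return w / w.sum(), vals[i].real

def consistency_ratio(A, lam):
    """Saaty consistency ratio CR = CI / RI, with the usual random
    indices for n = 3..6; CR < 0.10 is conventionally acceptable."""
    n = A.shape[0]
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    ci = (lam - n) / (n - 1)
    return ci / ri
```

For a perfectly consistent matrix (A[i][j] = w[i]/w[j]), the principal eigenvalue equals n and the consistency ratio is zero.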
Pacheco-Fernández, Idaira; Najafi, Ali; Pino, Verónica; Anderson, Jared L; Ayala, Juan H; Afonso, Ana M
2016-09-01
Several crosslinked polymeric ionic liquid (PIL)-based sorbent coatings of different nature were prepared by UV polymerization onto nitinol wires. They were evaluated in a direct-immersion solid-phase microextraction (DI-SPME) method in combination with high-performance liquid chromatography (HPLC) and diode array detection (DAD). The studied PIL coatings contained either vinyl alkyl or vinylbenzyl imidazolium-based (ViCnIm- or ViBCnIm-) IL monomers with different anions, as well as different dicationic IL crosslinkers. The analytical performance of these PIL-based SPME coatings was first evaluated for the extraction of a group of 10 model analytes, including hydrocarbons and phenols, with exhaustive comparison against commercial SPME fibers such as polydimethylsiloxane (PDMS), polyacrylate (PA), and polydimethylsiloxane/divinylbenzene (PDMS/DVB), using all fibers under optimized conditions. The fibers exhibiting high selectivity for polar compounds were selected for an analytical method targeting a group of 5 alkylphenols, including bisphenol-A (BPA) and nonylphenol (n-NP). Under optimum conditions, average relative recoveries of 108% and inter-day precision values (3 non-consecutive days) lower than 19% were obtained for a spiked level of 10 µg L(-1). Correlation coefficients for the overall method ranged between 0.990 and 0.999, and limits of detection were down to 1 µg L(-1). Tap water, river water, and bottled water were analyzed to evaluate matrix effects. Comparison with the PA fiber was also performed in terms of analytical performance. Partition coefficients (log Kfs) of the alkylphenols to the SPME coating varied from 1.69 to 2.45 for the most efficient PIL-based fiber, and from 1.58 to 2.30 for the PA fiber. These results agree with those obtained from the normalized calibration slopes, confirming the affinity of these PIL-based coatings. Copyright © 2016 Elsevier B.V. All rights reserved.
Ślączka-Wilk, Magdalena M; Włodarczyk, Elżbieta; Kaleniecka, Aleksandra; Zarzycki, Paweł K
2017-07-01
There is increasing interest in the development of simple analytical systems enabling the fast screening of target components in complex samples. A number of newly invented protocols are based on quasi-separation techniques involving microfluidic paper-based analytical devices and/or micro total analysis systems. Under such conditions, the quantification of target components can be performed mainly due to selective detection. The main goal of this paper is to demonstrate that miniaturized planar chromatography can serve as an efficient separation and quantification tool for the analysis of multiple targets within complex environmental samples isolated and concentrated using an optimized SPE method. In particular, we analyzed various samples collected from surface water ecosystems (lakes, rivers, and the Baltic Sea in the Middle Pomerania region of northern Poland) in different seasons, as well as samples collected during key wastewater technological processes (originating from the "Jamno" wastewater treatment plant in Koszalin, Poland). We documented that the multiple detection of chromatographic spots on RP-18W microplates (under visible light, fluorescence, and fluorescence-quenching conditions, and using the visualization reagent phosphomolybdic acid) enables fast and robust sample classification. The presented data reveal that the proposed micro-TLC system is useful, inexpensive, and can be considered a complementary method for the fast control of treated sewage water discharged by a municipal wastewater treatment plant, particularly for the detection of low-molecular-mass micropollutants with polarity ranging from estetrol to progesterone, as well as chlorophyll-related dyes. Owing to the low consumption of mobile phases composed of water-alcohol binary mixtures (less than 1 mL/run for the simultaneous separation of up to nine samples), this method can be considered an environmentally friendly, green-chemistry analytical tool. The described analytical protocol can be complementary to those involving classical column chromatography (HPLC) or various planar microfluidic devices.
Rezende, Patrícia Sueli; Carmo, Geraldo Paulo do; Esteves, Eduardo Gonçalves
2015-06-01
We report the use of a method to determine the refractive index of copper(II) serum (RICS) in milk as a tool to detect the fraudulent addition of water. This practice is highly profitable, unlawful, and difficult to deter. The method was optimized and validated; it is simple, fast, and robust. The optimized method yielded statistically equivalent results compared to the reference method, with an accuracy of 0.4% and quadruple the analytical throughput. Trueness, precision (repeatability and intermediate precision), and ruggedness were determined to be satisfactory at the 95.45% confidence level. The expanded uncertainty of the measurement was ±0.38°Zeiss at the 95.45% confidence level (k=3.30), corresponding to 1.03% of the minimum measurement expected in adequate samples (>37.00°Zeiss). Copyright © 2015 Elsevier B.V. All rights reserved.
2014-09-30
resulted in the identification of metabolite patterns indicative of flight line exposure when compared to non-flight line control subjects ... virtually non-invasive sample collection, minimal sample processing, robust and stable analytical platform, with excellent analytical and biological ... Regardless of fuel (JP-4 or
Optomechanical frequency combs
NASA Astrophysics Data System (ADS)
Miri, Mohammad-Ali; D’Aguanno, Giuseppe; Alù, Andrea
2018-04-01
We study the formation of frequency combs in a single-mode optomechanical cavity. The comb is composed of equidistant spectral lines centered at the pump laser frequency and located at different harmonics of the mechanical resonator. We investigate the classical nonlinear dynamics of such a system and find analytically the onset of parametric instability resulting in the breakdown of a stationary continuous-wave intracavity field into a periodic train of pulses, which in the Fourier domain gives rise to a broadband frequency comb. Different dynamical regimes, including a stationary state, frequency comb generation, and chaos, and their dependence on the system parameters, are studied both analytically and numerically. Interestingly, the comb generation is found to be more robust in the poor-cavity limit, where optical loss is equal to or larger than the mechanical resonance frequency. Our results show that optomechanical resonators open exciting opportunities for microwave photonics as compact and robust sources of frequency combs with megahertz line spacing.
Analytical Parameters of an Amperometric Glucose Biosensor for Fast Analysis in Food Samples
2017-01-01
Amperometric biosensors based on the use of glucose oxidase (GOx) are able to combine the robustness of electrochemical techniques with the specificity of biological recognition processes. However, very little information can be found in the literature about the fundamental analytical parameters of these sensors. In this work, the analytical behavior of an amperometric biosensor based on the immobilization of GOx using a hydrogel (chitosan) onto highly ordered titanium dioxide nanotube arrays (TiO2NTAs) has been evaluated. The GOx–Chitosan/TiO2NTAs biosensor showed a sensitivity of 5.46 μA·mM−1 with a linear range from 0.3 to 1.5 mM; its fundamental analytical parameters were studied using a commercial soft drink. The results demonstrated satisfactory repeatability (RSD = 1.9%), reproducibility (RSD = 2.5%), accuracy (95–105% recovery), and robustness (RSD = 3.3%). Furthermore, no significant interferences from fructose, ascorbic acid, or citric acid were observed. Storage stability was also examined: after 30 days, the GOx–Chitosan/TiO2NTAs biosensor retained 85% of its initial current response. Finally, the glucose content of different food samples was measured using the biosensor and compared with the respective HPLC values. In the worst case, a deviation smaller than 10% was obtained among the 20 samples evaluated. PMID:29135931
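As a sketch of how the reported calibration figures are applied in practice, the quoted slope and linear range can be used to convert a measured current into a glucose concentration and to compute spike recovery. The function names and example values below are illustrative assumptions, not from the paper:

```python
# Sketch: converting biosensor current to glucose concentration using the
# reported sensitivity (5.46 uA/mM) and checking recovery against a known
# spike. Function names and example values are illustrative.

SENSITIVITY_UA_PER_MM = 5.46   # reported slope of the calibration line
LINEAR_RANGE_MM = (0.3, 1.5)   # reported linear range

def glucose_mM(current_uA, blank_uA=0.0):
    """Convert a background-corrected current to glucose concentration."""
    c = (current_uA - blank_uA) / SENSITIVITY_UA_PER_MM
    lo, hi = LINEAR_RANGE_MM
    if not (lo <= c <= hi):
        raise ValueError(f"{c:.2f} mM outside linear range {LINEAR_RANGE_MM}")
    return c

def recovery_percent(measured_mM, spiked_mM):
    """Recovery = measured / expected * 100; 95-105% was deemed acceptable."""
    return 100.0 * measured_mM / spiked_mM

# A 1.0 mM standard should give ~5.46 uA and ~100% recovery.
c = glucose_mM(5.46)
r = recovery_percent(c, 1.0)
```

Samples above the linear range would in practice be diluted before measurement rather than extrapolated.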
A fluorescence anisotropy method for measuring protein concentration in complex cell culture media.
Groza, Radu Constantin; Calvet, Amandine; Ryder, Alan G
2014-04-22
The rapid, quantitative analysis of the complex cell culture media used in biopharmaceutical manufacturing is of critical importance. Requirements for cell culture media composition profiling, or changes in specific analyte concentrations (e.g. amino acids in the media or product protein in the bioprocess broth), often necessitate the use of complicated analytical methods and extensive sample handling. Rapid spectroscopic methods like multi-dimensional fluorescence (MDF) spectroscopy have been successfully applied for the routine determination of compositional changes in cell culture media and bioprocess broths. Quantifying macromolecules in cell culture media is a specific challenge as there is a need to implement measurements rapidly on the prepared media. However, the use of standard fluorescence spectroscopy is complicated by the emission overlap from many media components. Here, we demonstrate how combining anisotropy measurements with standard total synchronous fluorescence spectroscopy (TSFS) provides a rapid, accurate quantitation method for cell culture media. Anisotropy provides emission resolution between large and small fluorophores while TSFS provides a robust measurement space. A model cell culture medium was prepared using yeastolate (2.5 mg mL(-1)) spiked with bovine serum albumin (0 to 5 mg mL(-1)). Using this method, protein emission is clearly discriminated from background yeastolate emission, allowing for accurate bovine serum albumin (BSA) quantification over a 0.1 to 4.0 mg mL(-1) range with a limit of detection (LOD) of 13.8 μg mL(-1). Copyright © 2014. Published by Elsevier B.V.
Luo, Yu-Syuan; Furuya, Shinji; Chiu, Weihsueh; Rusyn, Ivan
2018-01-01
Trichloroethylene (TCE) is a ubiquitous environmental toxicant and a liver and kidney carcinogen. Conjugation of TCE with glutathione (GSH) leads to formation of nephrotoxic and mutagenic metabolites postulated to be critical for kidney cancer development; however, relatively little is known regarding their tissue levels, as previous analytical methods for their detection lacked sensitivity. Here, an LC-MS/MS-based method for simultaneous detection of S-(1,2-dichlorovinyl)-glutathione (DCVG), S-(1,2-dichlorovinyl)-L-cysteine (DCVC), and N-acetyl-S-(1,2-dichlorovinyl)-L-cysteine (NAcDCVC) in multiple mouse tissues was developed. This analytical method is rapid, sensitive (limits of detection (LOD) 3-30 fmol across metabolites and tissues), and robust enough to quantify all three metabolites in liver, kidneys, and serum. The method was used to characterize inter-tissue and inter-strain variability in formation of conjugative metabolites of TCE. A single oral dose of TCE (24, 240, or 800 mg/kg) was administered to male mice from 20 inbred strains of the Collaborative Cross. Inter-strain variability in the levels of DCVG, DCVC, and NAcDCVC (GSD = 1.6-2.9) was observed. Whereas NAcDCVC was distributed equally among the analyzed tissues, the highest levels of DCVG were detected in liver and of DCVC in kidneys. Evidence indicated that inter-strain variability in conjugative metabolite formation of TCE might affect susceptibility to adverse health effects and that this method might aid in filling data gaps in the human health assessment of TCE.
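The inter-strain spread above is summarized as a geometric standard deviation (GSD). A minimal sketch of that statistic, with a helper of our own and made-up example values:

```python
import math

def geometric_stats(values):
    """Geometric mean and geometric SD (GSD) of positive measurements.

    GSD = exp(SD of log-values). A GSD of 1.6-2.9, as reported for the
    TCE metabolites, means strains typically differ by roughly 1.6- to
    2.9-fold from the geometric mean.
    """
    logs = [math.log(v) for v in values]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / (n - 1)  # sample variance
    return math.exp(mean), math.exp(math.sqrt(var))

# Illustrative metabolite levels across hypothetical strains:
gm, gsd = geometric_stats([10.0, 20.0, 40.0])
```

For the example values, the geometric mean is 20 and each value differs from it by exactly a factor of 2, so the GSD comes out as 2.0.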
Enhancing robustness and immunization in geographical networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang Liang; Department of Physics, Lanzhou University, Lanzhou 730000; Yang Kongqing
2007-03-15
We find that different geographical structures of networks lead to varied percolation thresholds, although these networks may have similar abstract topological structures. Thus, strategies for enhancing robustness and immunization of a geographical network are proposed. Using the generating function formalism, we obtain an explicit form of the percolation threshold q_c for networks containing arbitrary-order cycles. For three-cycles, the dependence of q_c on the clustering coefficients is ascertained. The analysis substantiates the validity of the strategies with analytical evidence.
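For a tree-like (cycle-free) random network, the generating-function formalism reduces to the familiar Molloy-Reed criterion. A minimal sketch of that zero-cycle baseline follows; the paper's result adds corrections for cycles, which are not reproduced here:

```python
def percolation_threshold(degree_counts):
    """Node-removal threshold q_c for a tree-like random network via the
    Molloy-Reed criterion: q_c = 1 - 1/(<k^2>/<k> - 1).

    degree_counts maps degree k -> number of nodes with that degree.
    The paper's generating-function result adds corrections for
    networks containing cycles; this is the zero-cycle baseline.
    """
    n = sum(degree_counts.values())
    k1 = sum(k * c for k, c in degree_counts.items()) / n      # <k>
    k2 = sum(k * k * c for k, c in degree_counts.items()) / n  # <k^2>
    return 1.0 - 1.0 / (k2 / k1 - 1.0)

# A 3-regular network: <k> = 3, <k^2> = 9, so q_c = 1 - 1/2 = 0.5,
# i.e. half the nodes must be removed to destroy the giant component.
qc = percolation_threshold({3: 1000})
```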
Jovanović, Marko; Rakić, Tijana; Tumpa, Anja; Jančić Stojanović, Biljana
2015-06-10
This study presents the development of a hydrophilic interaction liquid chromatography method for the analysis of iohexol, its endo-isomer, and three impurities following the Quality by Design (QbD) approach. The main objective of the method was to identify conditions where adequate separation quality in minimal analysis time could be achieved within a robust region that guarantees the stability of method performance. The relationship between critical process parameters (acetonitrile content in the mobile phase, pH of the water phase, and ammonium acetate concentration in the water phase) and critical quality attributes is established by applying design of experiments methodology. The defined mathematical models and Monte Carlo simulation are used to evaluate the risk arising from uncertainty in model predictions and imprecision in setting the process parameters, and to identify the design space. The borders of the design space are experimentally verified, confirming that the quality of the method is preserved in this region. Moreover, Plackett-Burman design is applied for experimental robustness testing, and the method is fully validated to verify the adequacy of the selected optimal conditions: analytical column ZIC HILIC (100 mm × 4.6 mm, 5 μm particle size); mobile phase consisting of acetonitrile and water phase (72 mM ammonium acetate, pH adjusted to 6.5 with glacial acetic acid) (86.7:13.3, v/v); column temperature 25 °C; mobile phase flow rate 1 mL min(-1); detection wavelength 254 nm. Copyright © 2015 Elsevier B.V. All rights reserved.
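The Monte Carlo step described above can be sketched as follows: sample the process parameters around their set-points and estimate the probability that a critical quality attribute stays within specification. The response-surface coefficients and error magnitudes below are hypothetical placeholders, not the paper's fitted model:

```python
import random

random.seed(1)

def resolution_model(acn, pH):
    """Hypothetical fitted response surface for a critical quality
    attribute (resolution) -- NOT the paper's actual model."""
    return 2.0 - 300.0 * (acn - 0.867) ** 2 - 0.8 * (pH - 6.5) ** 2

def prob_success(acn_set, pH_set, n=5000, acn_sd=0.002, pH_sd=0.05,
                 Rs_min=1.5):
    """Probability that resolution >= Rs_min when the set-points are
    perturbed by random adjustment error, as in a Monte Carlo
    design-space evaluation."""
    ok = 0
    for _ in range(n):
        acn = random.gauss(acn_set, acn_sd)
        pH = random.gauss(pH_set, pH_sd)
        if resolution_model(acn, pH) >= Rs_min:
            ok += 1
    return ok / n

p_centre = prob_success(0.867, 6.5)  # at the optimum: high assurance
p_edge = prob_success(0.93, 6.5)     # far from the optimum: low assurance
```

Points where the success probability exceeds a chosen assurance level (e.g. 95%) would belong to the design space.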
Raizman, Joshua E; Taylor, Katherine; Parshuram, Christopher; Colantonio, David A
2017-05-01
Milrinone is a potent selective phosphodiesterase type III inhibitor that stimulates myocardial function and improves myocardial relaxation. Although therapeutic monitoring is crucial to maintaining therapeutic outcome, few data are available. A proof-of-principle study has been initiated in our institution to evaluate the clinical impact of optimizing milrinone dosing through therapeutic drug monitoring (TDM) in children following cardiac surgery. We developed a robust LC-MS/MS method to quantify milrinone in serum from pediatric patients in real time. A liquid-liquid extraction procedure was used to prepare samples prior to measurement by LC-MS/MS. Performance characteristics, such as linearity, limit of quantitation (LOQ), and precision, were assessed. Patient samples were acquired post-surgery and analyzed to determine the concentration-time profile of the drug as well as to track turnaround times. Within-day precision was <8.3% across 3 levels of QC. Between-day precision was <12%. The method was linear from 50 to 800 μg/L; the lower limit of quantification was 22 μg/L. Comparison with another LC-MS/MS method showed good agreement. Using this simplified method, turnaround times within 3-6 h were achievable, and patient drug profiles demonstrated that some milrinone levels were either sub-therapeutic or in the toxic range, highlighting the importance of milrinone TDM. This simplified and quick method proved to be analytically robust and able to provide therapeutic monitoring of milrinone in real time in patients post-cardiac surgery. Copyright © 2017. Published by Elsevier B.V.
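The quoted precision figures are relative standard deviations over QC replicates. A minimal sketch of that computation, with illustrative replicate values rather than the study's data:

```python
import math

def cv_percent(values):
    """Coefficient of variation (relative SD, %): the precision metric
    quoted for the milrinone assay (<8.3% within-day across QC levels)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean

# Illustrative within-day QC replicates (ug/L) at one level:
qc = [198.0, 202.0, 205.0, 195.0, 200.0]
cv = cv_percent(qc)
```

In a real validation this would be computed per QC level, and separately for within-day and between-day runs.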
RP-HPLC×HILIC chromatography for quantifying ertapenem sodium with a look at green chemistry.
Pedroso, Tahisa M; Medeiros, Ana C D; Salgado, Herida R N
2016-11-01
Ertapenem sodium is a polar and ionizable compound; therefore, it has little retention on traditional C18 columns in reversed-phase high-performance liquid chromatography, even with a highly aqueous mobile phase, which can cause dewetting of the stationary phase. Thus, the most coherent approach for ertapenem (ERTM) is to develop a hydrophilic interaction chromatography (HILIC) method. However, traditional HILIC methods require a highly organic mobile phase, usually exceeding 80% acetonitrile. On the other hand, RP-HPLC remains the mode most often used for quantification of substances, and new columns are regularly introduced to analyze different groups of compounds. Two new analytical methods have been developed for routine analysis. The proposed chromatographic method proved adequate and advantageous, offering simplicity, linearity, precision, accuracy, robustness, and satisfactory detection and quantification limits. Analytical methods are constantly undergoing changes and improvements, and researchers worldwide are rapidly adopting Green Chemistry. The development of new pharmaceutical methods based on Green Chemistry has been encouraged by universities and the pharmaceutical industry, and the topic has been featured in high-impact international journals. The methods described here have economic advantages and an eco-friendly focus, which is discussed in this work. The work was developed with an environmental conscience, always seeking to minimize the organic waste generated; discussion of this aspect is therefore included. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Juesas, P.; Ramasso, E.
2016-12-01
Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is correctly estimating the parameters of those methods from time-series data. This paper suggests the use of Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on latent variables in a sound manner. The latent variables are exploited to build a degradation model of a dynamical system represented as a sequence of discrete states. Examples on Gaussian Mixture Models and Hidden Markov Models (HMM) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model is highly robust in the presence of noisy and uncertain priors.
Gasche, Loïc; Mahévas, Stéphanie; Marchal, Paul
2013-01-01
Ecosystems are usually complex, nonlinear and strongly influenced by poorly known environmental variables. Among these systems, marine ecosystems have high uncertainties: marine populations in general are known to exhibit large levels of natural variability and the intensity of fishing efforts can change rapidly. These uncertainties are a source of risks that threaten the sustainability of both fish populations and fishing fleets targeting them. Appropriate management measures have to be found in order to reduce these risks and decrease sensitivity to uncertainties. Methods have been developed within decision theory that aim at allowing decision making under severe uncertainty. One of these methods is the information-gap decision theory. The info-gap method has started to permeate ecological modelling, with recent applications to conservation. However, these practical applications have so far been restricted to simple models with analytical solutions. Here we implement a deterministic approach based on decision theory in a complex model of the Eastern English Channel. Using the ISIS-Fish modelling platform, we model populations of sole and plaice in this area. We test a wide range of values for ecosystem, fleet and management parameters. From these simulations, we identify management rules controlling fish harvesting that allow reaching management goals recommended by ICES (International Council for the Exploration of the Sea) working groups while providing the highest robustness to uncertainties on ecosystem parameters. PMID:24204873
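The info-gap robustness function mentioned above can be sketched as follows: for each candidate management rule, find the largest uncertainty horizon within which worst-case performance still meets the requirement. The toy yield models below are hypothetical stand-ins, not the ISIS-Fish model:

```python
def robustness(rule_yield, alpha_max=1.0, steps=200, yield_min=0.4):
    """Info-gap robustness: the largest uncertainty horizon alpha for
    which the worst-case performance still meets the requirement.
    rule_yield(eps) gives the yield when the uncertain ecosystem
    parameter deviates by eps from its nominal value."""
    best = 0.0
    for i in range(1, steps + 1):
        alpha = alpha_max * i / steps
        # worst case over the horizon [-alpha, +alpha]: for these
        # monotone toy models it occurs at the edges of the interval
        worst = min(rule_yield(-alpha), rule_yield(alpha))
        if worst >= yield_min:
            best = alpha
        else:
            break
    return best

# Two hypothetical harvest-control rules: the aggressive one yields more
# nominally but degrades faster as the uncertain parameter deviates.
aggressive = lambda eps: 0.8 - 1.5 * abs(eps)
cautious = lambda eps: 0.6 - 0.4 * abs(eps)

h_aggr = robustness(aggressive)
h_caut = robustness(cautious)
```

The cautious rule sacrifices nominal yield but tolerates a larger uncertainty horizon, which is the trade-off info-gap analysis makes explicit.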
Heterodimer Autorepression Loop: A Robust and Flexible Pulse-Generating Genetic Module
NASA Astrophysics Data System (ADS)
Lannoo, B.; Carlon, E.; Lefranc, M.
2016-07-01
We investigate the dynamics of the heterodimer autorepression loop (HAL), a small genetic module in which a protein A acts as an autorepressor and binds to a second protein B to form an AB dimer. For suitable values of the rate constants, the HAL produces pulses of A alternating with pulses of B. By means of analytical and numerical calculations, we show that the duration of the A pulses is extremely robust against variation of the rate constants, while the duration of the B pulses can be flexibly adjusted. The HAL is thus a minimal genetic module generating robust pulses with tunable duration, an interesting property for cellular signaling.
Causon, Tim J; Hann, Stephan
2016-09-28
Fermentation and cell culture biotechnology in the form of so-called "cell factories" now play an increasingly significant role in production of both large (e.g. proteins, biopharmaceuticals) and small organic molecules for a wide variety of applications. However, associated metabolic engineering optimisation processes relying on genetic modification of organisms used in cell factories, or alteration of production conditions remain a challenging undertaking for improving the final yield and quality of cell factory products. In addition to genomic, transcriptomic and proteomic workflows, analytical metabolomics continues to play a critical role in studying detailed aspects of critical pathways (e.g. via targeted quantification of metabolites), identification of biosynthetic intermediates, and also for phenotype differentiation and the elucidation of previously unknown pathways (e.g. via non-targeted strategies). However, the diversity of primary and secondary metabolites and the broad concentration ranges encompassed during typical biotechnological processes means that simultaneous extraction and robust analytical determination of all parts of interest of the metabolome is effectively impossible. As the integration of metabolome data with transcriptome and proteome data is an essential goal of both targeted and non-targeted methods addressing production optimisation goals, additional sample preparation steps beyond necessary sampling, quenching and extraction protocols including clean-up, analyte enrichment, and derivatisation are important considerations for some classes of metabolites, especially those present in low concentrations or exhibiting poor stability. This contribution critically assesses the potential of current sample preparation strategies applied in metabolomic studies of industrially-relevant cell factory organisms using mass spectrometry-based platforms primarily coupled to liquid-phase sample introduction (i.e. flow injection, liquid chromatography, or capillary electrophoresis). Particular focus is placed on the selectivity and degree of enrichment attainable, as well as demands of speed, absolute quantification, robustness and, ultimately, consideration of fully-integrated bioanalytical solutions to optimise sample handling and throughput. Copyright © 2016 Elsevier B.V. All rights reserved.
Direct-Potential-Fit (DPF) Analysis for the A 3Π1-X 1Σ+ System of I(35/37)Cl.
NASA Astrophysics Data System (ADS)
Kobayashi, Shinji; Nishimiya, Nobuo; Yukiya, Tokio; Suzuki, Masao; Le Roy, Robert J.
2016-06-01
The goal of this research is to obtain an optimal, portable, global description of, and summary of the dynamical properties of, the A 3Π1 and X 1Σ+ states of I35/37Cl, by using `direct potential fits' (DPFs) to all of the available spectroscopic data for this system to determine optimal analytic potential energy functions for these two states that represent all of those data (on average), within the experimental uncertainties. The DPF method compares observed spectroscopic data with synthetic data generated by solving the radial Schrödinger equation for the upper and lower level of every observed transition for some parameterized analytic potential function(s), and using least-squares fits to the data to optimize those parameters. The present work uses the Morse/Long-Range (MLR) potential function form because it is very flexible, can incorporate the correct theoretically known inverse-power-sum long-range behaviour, is everywhere continuous and differentiable to all orders, and has robust extrapolation properties at both large and small distances. The DPF approach also tends to require fewer fitting parameters than do traditional Dunham analyses, as well as having much more robust extrapolation properties in both the v and J domains. The present work combines the data for the A 3Π1 and X 1Σ+ states obtained in 1980 by Coxon et al. using UV and near-infrared grating spectrometers, with our measurements in the 0.7-0.8μm region, obtained using a CW Ti:Sapphire Ring Laser. The results of this study and our new fully analytic potential energy functions for the A 3Π1 and X 1Σ+ states of ICl will be presented. J.A. Coxon, R.M. Gordon and M.A. Wickramaaratchi, J. Mol. Spectrosc. 79 (1980) 363 and 380. T.Yukiya, N. Nishimiya and M. Suzuki, J. Mol. Spectrosc. 269 (2011) 193.
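The MLR form generalizes the classic Morse potential by letting the exponent coefficient vary with r and by building in the theoretical long-range tail. A minimal sketch of the underlying Morse form, with illustrative constants rather than the fitted ICl parameters:

```python
import math

def morse(r, De=17300.0, beta=2.0, re=2.32):
    """Morse potential V(r) = De * (1 - exp(-beta*(r - re)))**2.

    The MLR form used in DPF analyses generalizes this by making beta
    r-dependent and fixing the long-range behaviour to the correct
    inverse-power sum; the constants here (De in cm^-1, re in Angstrom)
    are illustrative placeholders, not fitted ICl values.
    """
    return De * (1.0 - math.exp(-beta * (r - re))) ** 2

# Key properties of the functional form:
v_min = morse(2.32)   # zero at the equilibrium distance re
v_far = morse(50.0)   # approaches De (dissociation) at large r
```

A DPF then solves the radial Schrödinger equation on such a potential, compares the resulting level energies with the observed transitions, and least-squares adjusts the potential parameters.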
Designing Phononic Crystals with Wide and Robust Band Gaps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Zian; Chen, Yanyu; Yang, Haoxiang
Phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneously wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter being dominant), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps is then discussed from two aspects: robustness to geometric randomness (manufacturing defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each design parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.
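For the Bragg-scattering part of the mechanism, a 1-D two-layer sketch already shows how impedance mismatch opens band gaps: a frequency lies in a gap when the transfer-matrix dispersion relation forces the Bloch wavenumber to be complex. The layer properties below are illustrative, and the local-resonance gaps that dominate in this design would need a resonator model on top of this:

```python
import math

def in_band_gap(f, layers=((1.0, 0.01, 1500.0), (8.0, 0.01, 5000.0))):
    """1-D two-layer (Bragg) phononic crystal: frequency f (Hz) is in a
    band gap when |cos(k1 d1) cos(k2 d2)
      - ((Z1^2 + Z2^2)/(2 Z1 Z2)) sin(k1 d1) sin(k2 d2)| > 1,
    i.e. cos(q a) has no real solution and the Bloch wave is evanescent.
    Each layer is (density, thickness in m, sound speed in m/s); the
    values are illustrative, not the paper's design.
    """
    (rho1, d1, c1), (rho2, d2, c2) = layers
    k1, k2 = 2 * math.pi * f / c1, 2 * math.pi * f / c2
    Z1, Z2 = rho1 * c1, rho2 * c2          # acoustic impedances
    rhs = (math.cos(k1 * d1) * math.cos(k2 * d2)
           - (Z1 ** 2 + Z2 ** 2) / (2 * Z1 * Z2)
           * math.sin(k1 * d1) * math.sin(k2 * d2))
    return abs(rhs) > 1.0
```

The larger the impedance mismatch Z2/Z1, the larger the sin-sin prefactor and hence the wider the gaps; scanning f over a range maps out the full band structure.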
Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine
2011-03-01
International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
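The nonparametric method recommended for n ≥ 40 amounts to taking the central 95% of the sorted reference values. A minimal sketch (without the confidence intervals on each limit that Reference Value Advisor also reports):

```python
def reference_interval(values):
    """Nonparametric central 95% reference interval (2.5th and 97.5th
    percentiles), the approach recommended when n >= 40. Uses simple
    linear interpolation between order statistics; Reference Value
    Advisor additionally reports 90% CIs on each limit."""
    if len(values) < 40:
        raise ValueError("nonparametric method recommended only for n >= 40")
    xs = sorted(values)

    def pct(p):
        idx = p / 100.0 * (len(xs) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (idx - lo) * (xs[hi] - xs[lo])

    return pct(2.5), pct(97.5)

# Illustrative reference sample of 40 results:
low, high = reference_interval([float(v) for v in range(1, 41)])
```

Outlier screening (Tukey or Dixon-Reed) and a normality check would precede this step in the full workflow the macro implements.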
Torres, Daiane Placido; Martins-Teixeira, Maristela Braga; Cadore, Solange; Queiroz, Helena Müller
2015-01-01
A method for the determination of total mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS) has been validated following international foodstuff protocols in order to fulfill the Brazilian National Residue Control Plan. The experimental parameters were studied and optimized according to specific legislation on validation and inorganic contaminants in foodstuff. Linearity, sensitivity, specificity, detection and quantification limits, precision (repeatability and within-laboratory reproducibility), robustness, and accuracy of the method were evaluated. Linearity of response was satisfactory for the two concentration ranges available on the TDA AAS equipment, between approximately 25.0 and 200.0 μg kg(-1) (quadratic regression) and 250.0 and 2000.0 μg kg(-1) (linear regression) of mercury. The residuals for both ranges were homoscedastic and independent, with normal distribution. Correlation coefficients obtained for these ranges were higher than 0.995. The limit of quantification (LOQ) and the limit of detection of the method (LDM), based on the signal standard deviation (SD) for a low-mercury sample, were 3.0 and 1.0 μg kg(-1), respectively. Repeatability of the method was better than 4%. Within-laboratory reproducibility achieved a relative SD better than 6%. Robustness of the method was evaluated and identified sample mass as a significant factor. Accuracy (assessed as analyte recovery) was calculated on the basis of the repeatability and ranged from 89% to 99%. The results showed the suitability of the present method for direct mercury measurement in fresh fish and shrimp samples and the importance of monitoring the analysis conditions for food control purposes. Additionally, the competence of this method was recognized by accreditation under the standard ISO/IEC 17025.
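Signal-SD-based limits of the kind described are conventionally computed as multiples of the SD divided by the calibration slope. A sketch under the common 3x/10x convention; the exact factors used in the study are our assumption:

```python
def detection_limits(blank_sd, slope=1.0, k_lod=3.0, k_loq=10.0):
    """LDM and LOQ from the SD of replicate signals of a low-analyte
    sample: LOD = k_lod*SD/slope, LOQ = k_loq*SD/slope. The abstract's
    1.0 and 3.0 ug/kg limits are consistent with an SD-multiple
    criterion of this kind; the exact factors are our assumption."""
    return k_lod * blank_sd / slope, k_loq * blank_sd / slope

# Illustrative: SD of 0.3 ug/kg on a signal already expressed in
# concentration units (slope = 1).
lod, loq = detection_limits(blank_sd=0.3)
```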
NASA Astrophysics Data System (ADS)
Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.
2005-02-01
A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrarily parametrized analytical functional forms, moving interpolation, or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases: the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels, and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method offers a substantial computational speed and accuracy advantage over existing methods, is robust, and is relatively easy to implement.
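The early-stopping NN fit described above can be illustrated on a toy one-dimensional potential. The sketch below trains a one-hidden-layer tanh network by full-batch gradient descent on a Morse-like curve and halts when the validation error stops improving; the potential, network size, and hyperparameters are illustrative assumptions, not the vinyl bromide surface or the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ab initio" database: a Morse-like 1D potential (illustrative).
r = np.linspace(0.5, 4.0, 400)
E = (1.0 - np.exp(-1.5 * (r - 1.2))) ** 2
x = (r - r.mean()) / r.std()          # standardized input

# Train/validation split for early stopping.
idx = rng.permutation(r.size)
tr, va = idx[:300], idx[300:]

# One hidden layer with tanh activation.
H = 20
W1 = rng.normal(0, 0.5, H); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H); b2 = 0.0

def forward(xi):
    h = np.tanh(np.outer(xi, W1) + b1)      # (n, H) hidden activations
    return h @ W2 + b2, h

lr, best, patience, bad = 0.05, np.inf, 50, 0
for epoch in range(5000):
    yhat, h = forward(x[tr])
    err = yhat - E[tr]
    # Backpropagation for the single hidden layer.
    gW2 = h.T @ err / tr.size
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = dh.T @ x[tr] / tr.size
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    # Early stopping: track the best validation loss.
    vloss = np.mean((forward(x[va])[0] - E[va]) ** 2)
    if vloss < best - 1e-9:
        best, bad = vloss, 0
    else:
        bad += 1
        if bad > patience:
            break
```

The held-out loss plays the role of the convergence test mentioned in the abstract: training stops once further epochs no longer improve interpolation on points outside the training set.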
NASA Astrophysics Data System (ADS)
Braga, Jez Willian Batista; Trevizan, Lilian Cristina; Nunes, Lidiane Cristina; Rufini, Iolanda Aparecida; Santos, Dário, Jr.; Krug, Francisco José
2010-01-01
The application of laser-induced breakdown spectrometry (LIBS) to the direct analysis of plant materials is a great challenge that still requires effort for its development and validation. To this end, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to methods based on wet acid digestion for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared to univariate regression on line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest gave the best conditions for the analysis. In this particular application, the models showed similar performance, but PLSR seemed to be more robust owing to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts dealing with sample presentation and the fitness of standards for LIBS analysis must be made in order to fulfill the boundary conditions for matrix-independent development and validation.
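The univariate-versus-PLSR comparison can be sketched on synthetic spectra. Below, an analyte emission line is overlapped by an interferent line, so a single-channel calibration is biased while a two-component PLS1 model (NIPALS algorithm) separates the two contributions; the spectra, line positions, and concentrations are illustrative assumptions, not LIBS data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": analyte line at channel 20 overlapped by an
# interferent line at channel 22 (Gaussian profiles, illustrative only).
ch = np.arange(50)
line = lambda c: np.exp(-0.5 * ((ch - c) / 2.0) ** 2)
c_analyte = rng.uniform(0, 1, 40)
c_interf = rng.uniform(0, 1, 40)
X = np.outer(c_analyte, line(20)) + np.outer(c_interf, line(22))
y = c_analyte

def pls1(X, y, ncomp):
    """PLS1 via NIPALS; returns a prediction function."""
    xm, ym = X.mean(0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)
        t = Xk @ w
        p = Xk.T @ t / (t @ t)
        q = yk @ t / (t @ t)
        Xk = Xk - np.outer(t, p)       # deflate X
        yk = yk - q * t                # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # regression vector
    return lambda Xn: (Xn - xm) @ B + ym

# Univariate calibration: straight line on the analyte peak channel only.
a, b = np.polyfit(X[:, 20], y, 1)
rmse_uni = np.sqrt(np.mean((a * X[:, 20] + b - y) ** 2))
rmse_pls = np.sqrt(np.mean((pls1(X, y, 2)(X) - y) ** 2))
```

Because the interferent contributes to the analyte's peak channel, the univariate fit carries a systematic error that the multivariate model removes.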
Wu, Chunsheng; Liu, Gaohuan; Huang, Chong; Liu, Qingsheng; Guan, Xudong
2018-04-25
The Yellow River Delta (YRD), located in the Yellow River estuary, is characterized by rich ecological system types and provides habitats or migration stations for wild birds, all of which makes the delta an ecological barrier or ecotone for inland areas. Nevertheless, the abundant natural resources of the YRD have brought huge challenges to the area: frequent human activities and natural disasters have seriously damaged the ecological systems, and certain ecological functions have been threatened. Therefore, it is necessary to determine the status of the ecological environment with scientific methods, which can provide scientifically robust data for managers or stakeholders to adopt timely ecological protection measures. The aim of this study was to obtain the spatial distribution of the ecological vulnerability (EV) in the YRD based on 21 indicators selected from groundwater status, soil condition, land use, landform, vegetation cover, meteorological conditions, ocean influence, and social economy. The fuzzy analytic hierarchy process (FAHP) method was used to obtain the weights of the selected indicators, and a fuzzy logic model was constructed to obtain the result. The result showed that the spatial distribution of the EV grades was regular: the fuzzy membership of EV decreased gradually from the coastline to the inland area, especially around the river crossing, which had the lowest EV. Along the coastline, the dikes had an obvious protective effect on the inner area, while the EV was higher where no dikes were built. The result also showed that soil condition and groundwater status were highly related to the EV spatially, with correlation coefficients of −0.55 and −0.74, respectively, and that human activities had exerted considerable pressure on the ecological environment.
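The indicator-weighting step of the analytic hierarchy process can be sketched with the classic crisp-AHP eigenvector method; the paper's FAHP variant additionally works with fuzzy membership functions. The 3×3 pairwise comparison matrix below is hypothetical, not the paper's 21-indicator matrix:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three
# illustrative indicators, e.g. soil condition, land use, vegetation cover.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Weights = normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CR = CI / RI, with RI = 0.58 for n = 3;
# CR below 0.1 indicates acceptably consistent judgments.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58
```

In the full study the same procedure would be repeated over the hierarchy of all 21 indicators, and the resulting weights feed the fuzzy logic aggregation.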
Determination of cocaine and metabolites in hair by column-switching LC-MS-MS analysis.
Alves, Marcela Nogueira Rabelo; Zanchetti, Gabriele; Piccinotti, Alberto; Tameni, Silvia; De Martinis, Bruno Spinosa; Polettini, Aldo
2013-07-01
A method for the rapid, selective, and robust determination of cocaine (CO) and metabolites in 5-mg hair samples was developed and fully validated using a column-switching liquid chromatography-tandem mass spectrometry (LC-MS-MS) system. Hair samples were decontaminated, segmented, incubated overnight in diluted HCl, and centrifuged, and the diluted (1:10 with distilled water) extracts were analyzed in positive ionization mode, monitoring two reactions per analyte. Quantifier transitions were m/z 304.2→182.2 for CO, m/z 290.1→168.1 for benzoylecgonine (BE), and m/z 318.2→196.2 for cocaethylene (CE). The lower limit of quantification (LLOQ) was set at 0.05 ng/mg for CO and CE and 0.012 ng/mg for BE. Imprecision and inaccuracy at the LLOQ were lower than 20% for all analytes. Linearity ranged between 0.05 and 50.0 ng/mg for CO and CE and between 0.012 and 12.50 ng/mg for BE. Selectivity, matrix effect, process efficiency, recovery, carryover, cross-talk, and autosampler stability were also evaluated during validation. Eighteen real hair samples and five samples from a commercial proficiency testing program were comparatively examined with the proposed multidimensional chromatography-tandem mass spectrometry procedure and our reference gas chromatography-mass spectrometry (GC-MS) method. Compared with the reference GC-MS method, the column-switching technique and the high sensitivity of the tandem mass spectrometry detection system made it possible to reduce the sample amount significantly (×10) with increased sensitivity (×2) and sample throughput (×4), to simplify sample preparation, and to prevent interfering compounds and ions from impairing the ionization and detection of the analytes and from degrading the performance of the ion source.
Zubair, Abdulrazaq; Pappoe, Michael; James, Lesley A; Hawboldt, Kelly
2015-12-18
This paper presents an important new approach to improving the timeliness of total petroleum hydrocarbon (TPH) analysis in soil by gas chromatography with flame ionization detection (GC-FID) using the CCME Canada-Wide Standard reference method. The Canada-Wide Standard (CWS) method is used for the analysis of petroleum hydrocarbon compounds across Canada. However, inter-laboratory application of this method for the analysis of TPH in soil has often shown considerable variability in the results. This could be due, in part, to different gas chromatography (GC) conditions, to other steps involved in the method, and to soil properties. In addition, there are differences in the interpretation of GC results, which affects the determination of the effectiveness of remediation at hydrocarbon-contaminated sites. In this work, a multivariate experimental design approach was used to develop and validate the analytical method for a faster quantitative analysis of TPH in (contaminated) soil. A fractional factorial design (fFD) was used to screen six factors and identify those with the greatest impact on the analysis: injection volume (μL), injection temperature (°C), oven program (°C/min), detector temperature (°C), carrier gas flow rate (mL/min), and solvent ratio (v/v hexane/dichloromethane). The most important factors (carrier gas flow rate and oven program) were then optimized using a central composite response surface design. Robustness testing and validation of the model compared favourably with the experimental results, with a percentage difference of 2.78% for the analysis time. This work successfully reduced the method's standard analytical time from 20 to 8 min with all the carbon fractions eluting. The method was successfully applied to fast TPH analysis of Bunker C oil contaminated soil.
A reduced analytical time would offer many benefits, including improved laboratory reporting times and overall improved clean-up efficiency. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
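The six-factor screening step can be sketched as a 2^(6-2) fractional factorial design; the generators E = ABC and F = BCD are illustrative assumptions, since the abstract does not state which fraction was used:

```python
import itertools
import numpy as np

# Full 2^4 factorial in the four base factors A..D (coded -1/+1 levels).
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T

# Assumed generators for a 16-run, resolution IV 2^(6-2) fraction.
E = A * B * C
F = B * C * D
design = np.column_stack([A, B, C, D, E, F])

# Mapping of coded columns to the six GC factors screened in the paper.
factors = ["injection volume", "injection temperature", "oven program",
           "detector temperature", "carrier gas flow rate", "solvent ratio"]
```

Sixteen runs instead of the 64 of a full factorial suffice to estimate all six main effects, because the design columns are mutually orthogonal.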
Uludağ, Yildiz; Piletsky, Sergey A; Turner, Anthony P F; Cooper, Matthew A
2007-11-01
Biomimetic recognition elements employed for the detection of analytes are commonly based on proteinaceous affibodies, immunoglobulins, single-chain and single-domain antibody fragments, or aptamers. The alternative supra-molecular approach using a molecularly imprinted polymer now has proven utility in numerous applications ranging from liquid chromatography to bioassays. Despite inherent advantages compared with biochemical/biological recognition (robustness, storage endurance, and lower costs), there are few contributions that describe quantitative analytical applications of molecularly imprinted polymers for relevant small-molecular-mass compounds in real-world samples. There is, however, significant literature describing the use of low-power, portable piezoelectric transducers to detect analytes in environmental monitoring and other application areas. Here we review the combination of molecularly imprinted polymers as recognition elements with piezoelectric biosensors for the quantitative detection of small molecules. Analytes are classified by type and sample matrix presentation, and various synthetic fabrication strategies for molecularly imprinted polymers are also reviewed.
Ahamad, Javed; Amin, Saima; Mir, Showkat R
2015-08-01
Gymnemic acid and charantin are well-established antidiabetic phytosterols found in Gymnema sylvestre and Momordica charantia, respectively. The fact that these plants are often used together in antidiabetic poly-herbal formulations prompted us to develop an HPTLC densitometric method for the simultaneous quantification of their bioactive compounds. Indirect estimation of gymnemic acid as gymnemagenin and of charantin as β-sitosterol after hydrolysis is proposed. Aluminum-backed silica gel 60 F254 plates (20 × 10 cm) were used as the stationary phase and toluene-ethyl acetate-methanol-formic acid (60:20:15:5, v/v) as the mobile phase. The developed chromatogram was scanned at 550 nm after derivatization with modified vanillin-sulfuric acid reagent. Regression analysis of the calibration data showed an excellent linear relationship between peak area and analyte concentration. Linearity was found in the ranges of 500-2,500 and 100-500 ng/band for gymnemagenin and β-sitosterol, respectively. The suitability of the developed HPTLC method for the simultaneous estimation of the analytes was established by validating it as per the ICH guidelines. The limits of detection and quantification were found to be ≈60 and ≈190 ng/band for gymnemagenin, and ≈30 and ≈90 ng/band for β-sitosterol, respectively. The method was found to be linear (r(2) = 0.9987 and 0.9943), precise (relative standard deviation <1.5 and <2% for intra- and interday precision), and accurate (mean recovery between 98.43-101.44 and 98.68-100.20%) for gymnemagenin and β-sitosterol, respectively. The proposed method was also found to be specific and robust for the quantification of both analytes and was successfully applied to herbal drugs and an in-house herbal formulation without any interference. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A two-dimensional composite grid numerical model based on the reduced system for oceanography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Y.F.; Browning, G.L.; Chesshire, G.
The proper mathematical limit of a hyperbolic system with multiple time scales, the reduced system, is a system that contains no high-frequency motions and is well posed if suitable boundary conditions are chosen for the initial-boundary value problem. The composite grid method, a robust and efficient grid-generation technique that smoothly and accurately treats general irregular boundaries, is used to approximate the two-dimensional version of the reduced system for oceanography on irregular ocean basins. A change-of-variable technique that substantially increases the accuracy of the model and a method for efficiently solving the elliptic equation for the geopotential are discussed. Numerical results are presented for circular and kidney-shaped basins by using a set of analytic solutions constructed in this paper.
Update on hepatitis B and C virus diagnosis
Villar, Livia Melo; Cruz, Helena Medina; Barbosa, Jakeline Ribeiro; Bezerra, Cristianne Sousa; Portilho, Moyra Machado; Scalioni, Letícia de Paula
2015-01-01
Hepatitis B and C viruses (HBV and HCV) are responsible for most chronic liver disease worldwide and are transmitted by the parenteral route and by sexual and vertical transmission. One important measure to reduce the burden of these infections is the diagnosis of acute and chronic cases of HBV and HCV. In order to provide an effective diagnosis and to monitor antiviral treatment, it is important to choose sensitive, rapid, inexpensive, and robust analytical methods. Primary diagnosis of HBV and HCV infection is made using serological tests for detecting antigens and antibodies against these viruses. To confirm the primary diagnosis, to quantify viral load, and to determine genotypes and resistance mutants for antiviral treatment, qualitative and quantitative molecular tests are used. In this manuscript, we review the current serological and molecular methods for the diagnosis of hepatitis B and C. PMID:26568915
Second International Conference on Accelerating Biopharmaceutical Development
2009-01-01
The Second International Conference on Accelerating Biopharmaceutical Development was held in Coronado, California. The meeting was organized by the Society for Biological Engineering (SBE) and the American Institute of Chemical Engineers (AIChE); SBE is a technological community of the AIChE. Bob Adamson (Wyeth) and Chuck Goochee (Centocor) were co-chairs of the event, which had the theme “Delivering cost-effective, robust processes and methods quickly and efficiently.” The first day focused on emerging disruptive technologies and cutting-edge analytical techniques. Day two featured presentations on accelerated cell culture process development, critical quality attributes, specifications and comparability, and high throughput protein formulation development. The final day was dedicated to discussion of technology options and new analysis methods provided by emerging disruptive technologies; functional interaction, integration and synergy in platform development; and rapid and economic purification process development. PMID:20065637
Field Performance of ISFET based Deep Ocean pH Sensors
NASA Astrophysics Data System (ADS)
Branham, C. W.; Murphy, D. J.
2017-12-01
Historically, ocean pH time series data were acquired from infrequent shipboard grab samples and measured using labor-intensive spectrophotometric methods. With the introduction of robust and stable ISFET pH sensors for ocean applications, however, a paradigm shift has occurred in the methods used to acquire long-term pH time series data. Sea-Bird Scientific played a critical role in the adoption of this new technology by commercializing the SeaFET pH sensor and the float pH sensor developed by the MBARI chemical sensor group. Sea-Bird Scientific continues to advance this technology through a concerted effort to improve pH sensor accuracy and reliability by characterizing sensor performance in the laboratory and in the field. This presentation will focus on calibration of the ISFET pH sensor, evaluate its analytical performance, and validate that performance using recent field data.
Understanding the Potential of WO₃ Based Sensors for Breath Analysis.
Staerz, Anna; Weimar, Udo; Barsan, Nicolae
2016-10-29
Tungsten trioxide is the second most commonly used semiconducting metal oxide in gas sensors. Semiconducting metal oxide (SMOX)-based sensors are small, robust, inexpensive and sensitive, making them highly attractive for handheld portable medical diagnostic detectors. WO₃ is reported to show high sensor responses to several biomarkers found in breath, e.g., acetone, ammonia, carbon monoxide, hydrogen sulfide, toluene, and nitric oxide. Modern material science allows WO₃ samples to be tailored to address certain sensing needs. Utilizing recent advances in breath sampling it will be possible in the future to test WO₃-based sensors in application conditions and to compare the sensing results to those obtained using more expensive analytical methods.
Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter
2018-03-20
Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, near-infrared spectroscopy offers the following advantages: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, owing to its applicability in routine use and the quality of its results. With suitable reference analytics, various parameters of interest can be modeled, for example moisture content and particle size. Parameters such as the correlation coefficient and the root mean square errors of prediction, calibration, and validation have been used to evaluate the applicability and robustness of the analytical methods developed. This study investigates a Naproxen Sodium granulation process using near-infrared spectroscopy and the development of methods for water content and particle size. For the water content method, a maximum water content of about 21% in the granulation process must be considered, confirmed by loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the formation of hydrated states of Naproxen Sodium at water contents up to about 21%, and the large proportion, about 87%, of Naproxen Sodium in the formulation. These influences were considered in combination when developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The root mean square error was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after several stages of optimization using multiplicative scatter correction and the first derivative.
Using laser diffraction, the granules were analyzed for particle size, yielding cumulative sieve fractions of >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. The different stages of optimization resulted in a root mean square error of 2.54% for the calibration data set and 3.53% for the validation set, using the Kubelka-Munk conversion and the first derivative, for the near-infrared spectroscopy method for particle size >63 μm. For the near-infrared spectroscopy method for particle size >100 μm, the root mean square error was 3.47% for the calibration data set and 4.51% for the validation set, using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process. Copyright © 2018 Elsevier B.V. All rights reserved.
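The root mean square errors quoted above for calibration, prediction, and validation all share one formula; a minimal sketch with hypothetical reference and predicted values:

```python
import numpy as np

def rmse(reference, predicted):
    """Root mean square error between reference values (e.g. loss on
    drying, sieve analysis) and NIR-predicted values, as used for the
    RMSE of calibration, prediction, and validation sets."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((reference - predicted) ** 2)))

# Hypothetical water contents (%) from loss on drying vs. NIR prediction:
error = rmse([18.2, 19.5, 20.8], [18.0, 19.7, 20.9])
```

Reporting the statistic separately for the calibration and validation sets, as the abstract does, is what reveals overfitting: a validation RMSE far above the calibration RMSE would indicate a non-robust model.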
Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Optimal control modification has been developed to improve the robustness of model-reference adaptive control. For systems with linear matched uncertainty, the optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions for the phase and time-delay margins of the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.
NASA Astrophysics Data System (ADS)
Molina-Perez, Edmundo
It is widely recognized that international environmental technological change is key to reducing the rapidly rising greenhouse gas emissions of emerging nations. In 2010, the United Nations Framework Convention on Climate Change (UNFCCC) Conference of the Parties (COP) agreed to the creation of the Green Climate Fund (GCF). This new multilateral organization has been created with the collective contributions of COP members and has been tasked with directing over USD 100 billion per year towards investments that can enhance the development and diffusion of clean energy technologies in both advanced and emerging nations (Helm and Pichler, 2015). The landmark agreement arrived at during COP 21 has reaffirmed the key role that the GCF plays in enabling climate mitigation, as it is now necessary to align large-scale climate financing efforts with the long-term goals agreed at Paris 2015. This study argues that because of the incomplete understanding of the mechanics of international technological change, the multiplicity of policy options, and ultimately the presence of deep uncertainty in climate and technological change, climate financing institutions such as the GCF require new analytical methods for designing long-term robust investment plans. Motivated by these challenges, this dissertation shows that the application of new analytical methods, such as Robust Decision Making (RDM) and Exploratory Modeling (Lempert, Popper and Bankes, 2003), to the study of international technological change and climate policy provides useful insights that can be used for designing a robust architecture of international technological cooperation for climate change mitigation. For this study I developed an exploratory dynamic integrated assessment model (EDIAM), which is used as the scenario generator in a large computational experiment. The scope of the experimental design considers an ample set of climate and technological scenarios.
These scenarios combine five sources of uncertainty: climate change, elasticity of substitution between renewable and fossil energy and three different sources of technological uncertainty (i.e. R&D returns, innovation propensity and technological transferability). The performance of eight different GCF and non-GCF based policy regimes is evaluated in light of various end-of-century climate policy targets. Then I combine traditional scenario discovery data mining methods (Bryant and Lempert, 2010) with high dimensional stacking methods (Suzuki, Stem and Manzocchi, 2015; Taylor et al., 2006; LeBlanc, Ward and Wittels, 1990) to quantitatively characterize the conditions under which it is possible to stabilize greenhouse gas emissions and keep temperature rise below 2°C before the end of the century. Finally, I describe a method by which it is possible to combine the results of scenario discovery with high-dimensional stacking to construct a dynamic architecture of low cost technological cooperation. This dynamic architecture consists of adaptive pathways (Kwakkel, Haasnoot and Walker, 2014; Haasnoot et al., 2013) which begin with carbon taxation across both regions as a critical near term action. Then in subsequent phases different forms of cooperation are triggered depending on the unfolding climate and technological conditions. I show that there is no single policy regime that dominates over the entire uncertainty space. Instead I find that it is possible to combine these different architectures into a dynamic framework for technological cooperation across regions that can be adapted to unfolding climate and technological conditions which can lead to a greater rate of success and to lower costs in meeting the end-of-century climate change objectives agreed at the 2015 Paris Conference of the Parties. Keywords: international technological change, emerging nations, climate change, technological uncertainties, Green Climate Fund.
Robust Vision-Based Pose Estimation Algorithm for a UAV with a Known Gravity Vector
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-06-01
Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness from the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem has an analytical solution only if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In this case a solution can be found if the direction of the gravity vector in the camera coordinate system is known. A number of algorithms for external orientation estimation from 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation, which is subject to large errors for complex reference point configurations. This paper is focused on the development of a new computationally efficient and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.
Capillarity Guided Patterning of Microliquids.
Kang, Myeongwoo; Park, Woohyun; Na, Sangcheol; Paik, Sang-Min; Lee, Hyunjae; Park, Jae Woo; Kim, Ho-Young; Jeon, Noo Li
2015-06-01
Soft lithography and other techniques have been developed to investigate biological and chemical phenomena as alternatives to photolithography-based patterning methods, which have compatibility problems. Here, a simple approach for nonlithographic patterning of liquids and gels inside microchannels is described. Using a design that incorporates strategically placed microstructures inside the channel, microliquids or gels can be spontaneously trapped and patterned when the channel is drained. The ability to form microscale patterns inside microfluidic channels using a simple fluid drain motion offers many advantages. The method is analyzed geometrically on the basis of hydrodynamics and verified with simulation and experiments. Various materials (i.e., water, hydrogels, and other liquids) are successfully patterned with complex shapes that are isolated from each other. Multiple cell types are patterned within the gels. Capillarity guided patterning (CGP) is fast, simple, and robust, and it is not limited by pattern shape, size, cell type, or material. In a simple three-step process, a 3D cancer model that mimics cell-cell and cell-extracellular matrix interactions is engineered. The simplicity and robustness of the CGP will be attractive for developing novel in vitro models of organ-on-a-chip systems and other biological experimental platforms amenable to long-term observation of dynamic events using advanced imaging and analytical techniques. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Striking against bioterrorism with advanced proteomics and reference methods.
Armengaud, Jean
2017-01-01
The intentional use by terrorists of biological toxins as weapons has been of great concern for many years. Among the numerous toxins produced by plants, animals, algae, fungi, and bacteria, ricin is one of the most scrutinized by the media because it has already been used in biocrimes and acts of bioterrorism. Improving the analytical toolbox of national authorities so that these potential bioweapons can be monitored all at once is of the utmost interest. MS/MS allows their absolute quantitation and exhibits advantageous sensitivity, discriminative power, multiplexing possibilities, and speed. In this issue of Proteomics, Gilquin et al. (Proteomics 2017, 17, 1600357) present a robust multiplex assay to quantify a set of eight toxins in the presence of a complex food matrix. This MS/MS reference method is based on scheduled SRM and high-quality standards consisting of isotopically labeled versions of the toxins. The results demonstrate robust reliability, based on rather loose scheduling of the SRM transitions, and good sensitivity for the eight toxins, lower than their oral median lethal doses. In the face of an increased threat from terrorism, reference assays based on advanced proteomics and high-quality companion toxin standards provide reliable and firm answers. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
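The adaptive-design loop can be illustrated with a minimal one-dimensional sketch. This is not the authors' TEAD implementation: the hybrid score below combines distance-to-nearest-sample (exploration) with a local slope-change term standing in for the Taylor-expansion exploitation term, and the toy function `f` is a hypothetical stand-in for an expensive groundwater model run.

```python
import bisect
import math

def f(x):
    """Toy stand-in for an expensive model run (e.g., a groundwater simulation)."""
    return x * math.sin(3.0 * x)

def interp(xs, ys, x):
    """Piecewise-linear surrogate through the sampled points."""
    i = bisect.bisect_left(xs, x)
    if i <= 0:
        return ys[0]
    if i >= len(xs):
        return ys[-1]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def hybrid_score(xs, ys, c):
    """Simplified TEAD-like score: distance to the nearest sample (exploration)
    weighted by the local slope change, a crude Taylor-remainder proxy."""
    d = min(abs(c - x) for x in xs)
    if d == 0.0:
        return 0.0
    s = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    i = min(max(bisect.bisect_left(xs, c) - 1, 0), len(s) - 1)
    jumps = [abs(s[i] - s[i - 1])] if i > 0 else []
    if i < len(s) - 1:
        jumps.append(abs(s[i + 1] - s[i]))
    return d * (1.0 + (max(jumps) if jumps else 0.0))

def adaptive_design(n_init=4, n_add=12, lo=0.0, hi=3.0):
    """Start from a coarse uniform design, then greedily add the candidate
    point with the highest hybrid score."""
    xs = [lo + (hi - lo) * k / (n_init - 1) for k in range(n_init)]
    ys = [f(x) for x in xs]
    cands = [lo + (hi - lo) * k / 200.0 for k in range(201)]
    for _ in range(n_add):
        best = max(cands, key=lambda c: hybrid_score(xs, ys, c))
        j = bisect.bisect_left(xs, best)
        xs.insert(j, best)
        ys.insert(j, f(best))
    return xs, ys
```

The real TEAD score additionally uses gradient information from the Taylor expansion and a principled stopping criterion; this sketch only conveys the explore/exploit structure of the sample search.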
Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results
NASA Technical Reports Server (NTRS)
Wells, D. N.; Allen, P. A.
2012-01-01
An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions, were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.
Kovács, Béla; Kántor, Lajos Kristóf; Croitoru, Mircea Dumitru; Kelemen, Éva Katalin; Obreja, Mona; Nagy, Előd Ernő; Székely-Szentmiklósi, Blanka; Gyéresi, Árpád
2018-06-01
A reverse-phase HPLC (RP-HPLC) method was developed for strontium ranelate using a full factorial, screening experimental design. The analytical procedure was validated according to international guidelines for linearity, selectivity, sensitivity, accuracy and precision. A separate experimental design was used to demonstrate the robustness of the method. Strontium ranelate eluted at 4.4 minutes and, at the detection wavelength of 321 nm, showed no interference from the excipients used in the formulation. The method is linear in the range of 20-320 μg mL⁻¹ (R² = 0.99998). Recovery, tested in the range of 40-120 μg mL⁻¹, was found to be 96.1-102.1 %. Intra-day and intermediate precision RSDs were 1.0-1.4% and 1.2-1.4%, respectively. The limit of detection and limit of quantitation were 0.06 and 0.20 μg mL⁻¹, respectively. The proposed technique is fast, cost-effective, reliable and reproducible, and is recommended for the routine analysis of strontium ranelate.
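The validation figures quoted here follow the ICH Q2(R1) conventions, which express the detection and quantitation limits in terms of the calibration slope S and the residual standard deviation σ of the regression: LOD = 3.3σ/S and LOQ = 10σ/S. A minimal sketch with illustrative numbers (not the paper's data):

```python
def linfit(xs, ys):
    """Ordinary least squares: slope, intercept, residual SD, and R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    ss_res = sum(r * r for r in resid)
    ss_tot = sum((y - my) ** 2 for y in ys)
    sigma = (ss_res / (n - 2)) ** 0.5   # residual standard deviation
    r2 = 1.0 - ss_res / ss_tot          # linearity check
    return slope, intercept, sigma, r2

# Illustrative calibration data: concentration (ug/mL) vs. detector response
conc = [20.0, 40.0, 80.0, 160.0, 320.0]
resp = [10.1, 20.3, 39.8, 80.4, 160.2]

slope, intercept, sigma, r2 = linfit(conc, resp)
lod = 3.3 * sigma / slope    # ICH Q2(R1) detection limit
loq = 10.0 * sigma / slope   # ICH Q2(R1) quantitation limit
```

The same regression also yields the R² used to report linearity; accuracy (recovery) and precision (RSD) are assessed on separate spiked replicates.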
Piñeiro, Zulema; Cantos-Villar, Emma; Palma, Miguel; Puertas, Belen
2011-11-09
A validated HPLC method with fluorescence detection for the simultaneous quantification of hydroxytyrosol and tyrosol in red wines is described. Detection conditions for both compounds were optimized (excitation at 279 and 278 nm and emission at 631 and 598 nm for hydroxytyrosol and tyrosol, respectively). The validation of the analytical method was based on selectivity, linearity, robustness, detection and quantification limits, repeatability, and recovery. The detection and quantification limits in red wines were set at 0.023 and 0.076 mg L⁻¹ for hydroxytyrosol and at 0.007 and 0.024 mg L⁻¹ for tyrosol determination, respectively. Precision values, both within-day and between-day (n = 5), remained below 3% for both compounds. In addition, a fractional factorial experimental design was developed to analyze the influence of six different conditions on analysis. The final optimized HPLC-fluorescence method allowed the analysis of 30 nonpretreated Spanish red wines to evaluate their hydroxytyrosol and tyrosol contents.
Artificial neural network methods in quantum mechanics
NASA Astrophysics Data System (ADS)
Lagaris, I. E.; Likas, A.; Fotiadis, D. I.
1997-08-01
In a previous article we have shown how one can employ Artificial Neural Networks (ANNs) in order to solve non-homogeneous ordinary and partial differential equations. In the present work we consider the solution of eigenvalue problems for differential and integrodifferential operators, using ANNs. We start by considering the Schrödinger equation for the Morse potential that has an analytically known solution, to test the accuracy of the method. We then proceed with the Schrödinger and the Dirac equations for a muonic atom, as well as with a nonlocal Schrödinger integrodifferential equation that models the n + α system in the framework of the resonating group method. In two dimensions we consider the well-studied Henon-Heiles Hamiltonian and in three dimensions the model problem of three coupled anharmonic oscillators. The method in all of the treated cases proved to be highly accurate, robust and efficient. Hence it is a promising tool for tackling problems of higher complexity and dimensionality.
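For context, the "analytically known solution" for the Morse potential referred to here is the standard textbook spectrum (stated for completeness; the notation below is mine, not the article's):

```latex
V(r) = D_e\left(1 - e^{-a(r - r_e)}\right)^2, \qquad
E_n = \hbar\omega_0\left(n + \tfrac{1}{2}\right)
      - \frac{\left[\hbar\omega_0\left(n + \tfrac{1}{2}\right)\right]^2}{4D_e},
\qquad \omega_0 = a\sqrt{\frac{2D_e}{m}},
```

with n = 0, 1, 2, ... for the bound states below dissociation. Comparing a trial solution's eigenvalues against this closed form is what allows the ANN method's accuracy to be quantified before moving to problems without analytic solutions.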
Research on Robot Pose Control Technology Based on Kinematics Analysis Model
NASA Astrophysics Data System (ADS)
Liu, Dalong; Xu, Lijuan
2018-01-01
In order to improve the attitude stability of robots, this paper proposes an attitude control method based on a kinematic analysis model, addressing posture transformation during walking and grasping as well as the kinematic motion planning problem. In a Cartesian-space analytical model, a three-axis accelerometer, a magnetometer, and a three-axis gyroscope are combined for attitude measurement; the gyroscope data are processed with a Kalman filter, and the quaternion (four-element) method is used to compute the robot's attitude angles. Stability-related inertia parameters are obtained from the centroids of the robot's moving parts, and a random-sampling RRT motion planning method is used to drive the space robot accurately to any commanded position, ensuring that the end effector follows a prescribed trajectory under attitude control. Positioning-accuracy experiments were carried out using the MT-R robot as the test platform. The simulation results show that the proposed method offers better robustness and higher positioning accuracy, improving the reliability and safety of robot operation.
NASA Astrophysics Data System (ADS)
Dodd, Michael; Ferrante, Antonino
2017-11-01
Our objective is to perform DNS of finite-size droplets that are evaporating in isotropic turbulence. This requires fully resolving the process of momentum, heat, and mass transfer between the droplets and surrounding gas. We developed a combined volume-of-fluid (VOF) method and low-Mach-number approach to simulate this flow. The two main novelties of the method are: (i) the VOF algorithm captures the motion of the liquid-gas interface in the presence of mass transfer due to evaporation and condensation without requiring a projection step for the liquid velocity, and (ii) the low-Mach-number approach allows for local volume changes caused by phase change while the total volume of the liquid-gas system is constant. The method is verified against an analytical solution for a Stefan flow problem, and the D² law is verified for a single droplet in quiescent gas. We also demonstrate the scheme's robustness when performing DNS of an evaporating droplet in forced isotropic turbulence.
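The D² law used for verification states that a quiescent evaporating droplet's squared diameter decreases linearly in time, D²(t) = D₀² − Kt, where K is the evaporation-rate constant. A minimal numerical check (with illustrative values of D₀ and K, not those of the paper) integrates the equivalent ODE dD/dt = −K/(2D) and compares against the closed form:

```python
def evolve_diameter(d0, K, dt, steps):
    """Forward-Euler integration of dD/dt = -K/(2D), the ODE form of the D^2 law."""
    d, out = d0, [d0]
    for _ in range(steps):
        d -= dt * K / (2.0 * d)
        out.append(d)
    return out

# Illustrative values: a 1 mm droplet and evaporation constant K = 1e-7 m^2/s
d0, K, dt, steps = 1.0e-3, 1.0e-7, 1.0e-4, 1000
ds = evolve_diameter(d0, K, dt, steps)

# Analytic D^2 law: D^2(t) = D0^2 - K*t
d_exact = (d0 * d0 - K * dt * steps) ** 0.5
```

In the paper's verification the constant K is not prescribed but emerges from the resolved heat and mass transfer; the point of the check is that the simulated D²(t) falls on a straight line.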
Quantitative analysis of boeravinones in the roots of Boerhaavia diffusa by UPLC/PDA.
Bairwa, Khemraj; Srivastava, Amit; Jachak, Sanjay Madhukar
2014-01-01
Boerhaavia diffusa is a perennial herb belonging to Nyctaginaceae. Various classes of chemical constituents such as phenolics (boeravinones), terpenoids and organic acids have been reported in B. diffusa roots. As boeravinones have been proposed as putative active constituents for the anti-cancer, spasmolytic and anti-inflammatory activities exhibited by B. diffusa extracts, it is worthwhile developing and validating an ultra-performance liquid chromatography (UPLC) method for analysis of boeravinones in B. diffusa roots. To develop and validate a simple, accurate, robust and rapid UPLC analytical method for quality control of B. diffusa roots. Samples for analysis were prepared by refluxing powdered root material with methanol for 2 h. The extracts were concentrated, dried and stored at -20°C until their use. A UPLC with photodiode array (PDA) method was developed and validated for the quantification of boeravinones in the roots of B. diffusa. The separation of boeravinones was achieved using a BEH Shield C18 column (2.1 × 100 mm, 1.7 µm) with gradient elution of methanol and water (0.1% acetic acid), at a flow rate of 0.4 mL/min, and detection was carried out at λmax 273 nm. The UPLC method developed showed good linearity (r² ≥ 0.9999), accuracy and precision. The UPLC method developed provided a selective, sensitive and rapid analytical method for the quantification of boeravinones in B. diffusa roots. All the validation parameters were found to be within the permissible limits as per International Conference on Harmonisation guidelines. Copyright © 2014 John Wiley & Sons, Ltd.
Emergence of robustness in networks of networks
NASA Astrophysics Data System (ADS)
Roth, Kevin; Morone, Flaviano; Min, Byungjoon; Makse, Hernán A.
2017-06-01
A model of interdependent networks of networks (NONs) was introduced recently [Proc. Natl. Acad. Sci. (USA) 114, 3849 (2017), 10.1073/pnas.1620808114] in the context of brain activation to identify the neural collective influencers in the brain NON. Here we investigate the emergence of robustness in such a model, and we develop an approach to derive an exact expression for the random percolation transition in Erdős-Rényi NONs of this kind. Analytical calculations are in agreement with numerical simulations, and highlight the robustness of the NON against random node failures, which thus presents a new robust universality class of NONs. The key aspect of this robust NON model is that a node can be activated even if it does not belong to the giant mutually connected component, thus allowing the NON to be built from below the percolation threshold, which is not possible in previous models of interdependent networks. Interestingly, the phase diagram of the model unveils particular patterns of interconnectivity for which the NON is most vulnerable, thereby marking the boundary above which the robustness of the system improves with increasing dependency connections.
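The robustness-to-random-failure question can be illustrated on a single Erdős-Rényi network with ordinary site percolation. This is a deliberately simplified sketch: it measures the largest connected component among surviving nodes, and does not reproduce the paper's mutually-connected-component calculation for interdependent NONs.

```python
import random

def er_graph(n, p, rng):
    """Adjacency lists for an Erdos-Renyi G(n, p) random graph."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def giant_fraction(adj, alive):
    """Largest connected component among surviving nodes, as a fraction of n."""
    n = len(adj)
    seen = [False] * n
    best = 0
    for s in range(n):
        if seen[s] or not alive[s]:
            continue
        stack, size = [s], 0
        seen[s] = True
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if alive[v] and not seen[v]:
                    seen[v] = True
                    stack.append(v)
        best = max(best, size)
    return best / n

rng = random.Random(1)
n = 2000
adj = er_graph(n, 4.0 / n, rng)   # mean degree c = 4, well above threshold
for q in (0.0, 0.5, 0.9):         # fraction of randomly failed nodes
    alive = [rng.random() >= q for _ in range(n)]
    print(q, round(giant_fraction(adj, alive), 3))
```

For mean degree c = 4, the intact giant component covers nearly the whole graph; removing 90% of nodes pushes the survivors below the percolation threshold (effective mean degree 0.4 < 1), and the giant component collapses.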
Pharmaceutical and analytical evaluation of triphalaguggulkalpa tablets
Savarikar, Shreeram S.; Barbhind, Maneesha M.; Halde, Umakant K.; Kulkarni, Alpana P.
2011-01-01
Aim of the Study: Development of standardized, synergistic, safe and effective traditional herbal formulations with robust scientific evidence can offer faster and more economical alternatives for the treatment of disease. The main objective was to develop a method of preparation of guggulkalpa tablets so that the tablets meet the criteria of efficacy, stability, and safety. Materials and Methods: Triphalaguggulkalpa tablet, described in sharangdharsanhita and containing guggul and triphala powder, was used as a model drug. Preliminary experiments on marketed triphalaguggulkalpa tablets exhibited delayed in vitro disintegration that indicated probable delayed in vivo disintegration. The study involved preparation of triphalaguggulkalpa tablets by Ayurvedic text methods and by wet granulation, dry granulation, and direct compression method. The tablets were evaluated for loss on drying, volatile oil content, % solubility, and steroidal content. The tablets were evaluated for performance tests like weight variation, disintegration, and hardness. Results: It was observed that triphalaguggulkalpa tablets, prepared by direct compression method, complied with the hardness and disintegration tests, whereas tablets prepared by Ayurvedic text methods failed. Conclusion: Direct compression is the best method of preparing triphalaguggulkalpa tablets. PMID:21731383
An Unstructured Finite Volume Approach for Structural Dynamics in Response to Fluid Motions.
Xia, Guohua; Lin, Ching-Long
2008-04-01
A new cell-vertex unstructured finite volume method is assessed for simulations of structural dynamics in response to fluid motions. A robust implicit dual-time stepping method is employed to obtain time-accurate solutions. The resulting system of algebraic equations is matrix-free and allows solid elements to include structure thickness, inertia, and structural stresses for accurate predictions of structural responses and stress distributions. The method is coupled with a fluid dynamics solver for fluid-structure interaction, providing a viable alternative to the finite element method for structural dynamics calculations. A mesh sensitivity test indicates that the finite volume method is at least of second-order accuracy. The method is validated by the problem of vortex-induced vibration of an elastic plate with different initial conditions and material properties. The results are in good agreement with existing numerical data and analytical solutions. The method is then applied to simulate a channel flow with an elastic wall. The effects of wall inertia and structural stresses on the fluid flow are investigated.
Portable Enzyme-Paper Biosensors Based on Redox-Active CeO2 Nanoparticles.
Karimi, A; Othman, A; Andreescu, S
2016-01-01
Portable, nanoparticle (NP)-enhanced enzyme sensors have emerged as powerful devices for qualitative and quantitative analysis of a variety of analytes for biomedicine, environmental applications, and pharmaceutical fields. This chapter describes a method for the fabrication of a portable, paper-based, inexpensive, robust enzyme biosensor for the detection of substrates of oxidase enzymes. The method utilizes redox-active NPs of cerium oxide (CeO2) as a sensing platform which produces color in response to H2O2 generated by the action of oxidase enzymes on their corresponding substrates. This avoids the use of peroxidases, which are routinely used in conjunction with glucose oxidase. The CeO2 particles serve dual roles, as high-surface-area supports to anchor high loadings of the enzyme as well as a color-generation reagent, and the particles can be recycled multiple times for reuse of the biosensor. These sensors are small, light, disposable, and inexpensive, and they can be mass produced by standard, low-cost printing methods. All reagents needed for the analysis are embedded within the paper matrix, and the sensors can be stored over extended periods of time without performance loss. This novel sensor is a general platform for the in-field detection of analytes that are substrates for oxidase enzymes in clinical, food, and environmental samples. © 2016 Elsevier Inc. All rights reserved.
Lima, Marcelo B; Barreto, Inakã S; Andrade, Stéfani Iury E; Almeida, Luciano F; Araújo, Mário C U
2012-10-15
In this study, a micro-flow-batch analyzer (μFBA) with solenoid micro-pumps for the photometric determination of iodate in table salt is described. The method is based on the reaction of iodate with iodide to form molecular iodine, followed by the reaction with N,N-diethyl-p-phenylenediamine (DPD). The analytical signal was measured at 520 nm using a green LED integrated into the μFBA built in urethane-acrylate resin. The analytical curve for iodate was linear in the range of 0.01-10.0 mg L⁻¹ with a correlation coefficient of 0.997. The limit of detection and relative standard deviation were estimated at 0.004 mg L⁻¹ and <1.5% (n=3), respectively. The accuracy was assessed through recovery tests (97.6-103.5%) and independent analysis by a conventional titrimetric method. Comparing this technique with the conventional method, no statistically significant differences were observed when applying the paired t-test at a 95% confidence level. The proposed microsystem using solenoid micro-pumps presented satisfactory robustness and a high sampling rate (170 h⁻¹), with low reagent consumption and a low cost to build the device. The proposed microsystem is a new alternative for automatic determination of iodate in table salt, comparing satisfactorily with recently reported flow systems. Copyright © 2012 Elsevier B.V. All rights reserved.
Robust control algorithms for Mars aerobraking
NASA Technical Reports Server (NTRS)
Shipley, Buford W., Jr.; Ward, Donald T.
1992-01-01
Four atmospheric guidance concepts have been adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. The first two offer improvements to the Analytic Predictor Corrector (APC) to increase its robustness to density variations. The second two are variations of a new Liapunov tracking exit phase algorithm, developed to guide the vehicle along a reference trajectory. These four new controllers are tested using a six-degree-of-freedom computer simulation to evaluate their robustness. MARSGRAM is used to develop realistic atmospheres for the study. When square wave density pulses perturb the atmosphere, all four controllers are successful. The algorithms are tested against atmospheres where the inbound and outbound density functions are different. Square wave density pulses are again used, but only for the outbound leg of the trajectory. Additionally, sine waves are used to perturb the density function. The new algorithms are found to be more robust than any previously tested, and a Liapunov controller is selected as the most robust of the control algorithms examined.
Ferreira, Vicente; Herrero, Paula; Zapata, Julián; Escudero, Ana
2015-08-14
SPME is extremely sensitive to experimental parameters affecting liquid-gas and gas-solid distribution coefficients. Our aims were to measure the weights of these factors and to design a multivariate strategy, based on the addition of a pool of internal standards, to minimize matrix effects. Synthetic but real-like wines containing selected analytes and variable amounts of ethanol, non-volatile constituents and major volatile compounds were prepared following a factorial design. The ANOVA study revealed that matrix effects remain important even with strong matrix dilution, that they are additive with non-significant interaction effects, and that the presence of major volatile constituents is the dominant factor. A single internal standard provided a robust calibration for 15 out of 47 analytes. Then, two different multivariate calibration strategies based on Partial Least Squares Regression were run in order to build calibration functions based on 13 different internal standards able to cope with matrix effects. The first one is based on the calculation of Multivariate Internal Standards (MIS), linear combinations of the normalized signals of the 13 internal standards, which provide the expected area of a given unit of analyte present in each sample. The second strategy is a direct calibration relating concentration to the 13 relative areas measured in each sample for each analyte. Overall, 47 different compounds can be reliably quantified in a single fully automated method with overall uncertainties better than 15%. Copyright © 2015 Elsevier B.V. All rights reserved.
SAS molecular tests Escherichia coli O157 detection kit. Performance tested method 031203.
Bapanpally, Chandra; Montier, Laura; Khan, Shah; Kasra, Akif; Brunelle, Sharon L
2014-01-01
The SAS Molecular tests Escherichia coli O157 Detection method, a loop-mediated isothermal amplification method, performed as well as or better than the U.S. Department of Agriculture, Food Safety Inspection Service Microbiology Laboratory Guidebook and the U.S. Food and Drug Administration Bacteriological Analytical Manual reference methods for ground beef, beef trim, bagged mixed lettuce, and fresh spinach. Ground beef (30% fat, 25 g test portion) was validated for a 7-8 h enrichment, leafy greens were validated in a 6-7 h enrichment, and ground beef (30% fat, 375 g composite test portion) and beef trim (375 g composite test portion) were validated in a 16-20 h enrichment. The method performance for meat and leafy green matrixes was also shown to be acceptable under conditions of co-enrichment with Salmonella. Thus, after a short co-enrichment step, ground beef, beef trim, lettuce, and spinach can be tested for both Salmonella and E. coli O157. The SAS Molecular tests Salmonella Detection Kit was validated using the same test portions as the SAS Molecular tests E. coli O157 Detection Kit, and those results are presented in a separate report. Inclusivity and exclusivity testing revealed no false negatives and no false positives among the 50 E. coli O157 strains, including H7 and non-motile strains, and 30 non-E. coli O157 strains examined. Finally, the method was shown to be robust when DNA extract hold time and DNA volume were varied. The method comparison and robustness data suggest a full 7 h enrichment time should be used for 25 g ground beef test portions.
Design Considerations for Human Rating of Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Parkinson, Douglas
2010-01-01
I. Human-rating is specific to each engine: a. Context of program/project must be understood. b. Engine cannot be discussed independently from vehicle and mission. II. Utilize a logical combination of design, manufacturing, and test approaches. a. Design: 1) It is crucial to know the potential ways a system can fail, and how a failure can propagate; 2) Fault avoidance, fault tolerance, DFMR, and caution and warning all have roles to play. b. Manufacturing and Assembly: 1) As-built vs. as-designed; 2) Review procedures for assembly and maintenance periodically; and 3) Keep personnel trained and certified. c. There is no substitute for test: 1) Analytical tools are constantly advancing, but still need test data for anchoring assumptions; 2) Demonstrate robustness and explore sensitivities; 3) Ideally, flight will be encompassed by ground test experience. III. Consistency and repeatability are key in production: a. Maintain robust processes and procedures for inspection and quality control based upon development and qualification experience; b. Establish methods to "spot check" quality and consistency in parts: 1) Dedicated ground test engines; 2) Random components pulled from the line/lot to go through "enhanced" testing.
Proposed biomimetic molecular sensor array for astrobiology applications
NASA Astrophysics Data System (ADS)
Cullen, D. C.; Grant, W. D.; Piletsky, S.; Sims, M. R.
2001-08-01
A key objective of future astrobiology lander missions, e.g. to Mars and Europa, is the detection of biomarkers - molecules whose presence indicates the existence of either current or extinct life. To address limitations of current analytical methods for biomarker detection, we describe the methodology of a new project for demonstration of a robust molecular-recognition sensor array for astrobiology biomarkers. The sensor array will be realised by assembling components that have been demonstrated individually in previous or current research projects. The major components are (1) robust artificial molecular receptors comprised of molecularly imprinted polymer (MIP) recognition systems and (2) a sensor array comprised of both optical and electrochemical sensor elements. These components will be integrated together using ink-jet printing technology coupled with in situ photo-polymerisation of MIPs. For demonstration, four model biomarkers are chosen as targets and represent various classes of potential biomarkers. Objectives of the proposed work include (1) demonstration of practical proof-of-concept, (2) identification of areas for further development, and (3) provision of performance and design data for follow-up projects leading to astrobiology missions.
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
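The bootstrap idea mentioned above is compact enough to sketch directly: resample the data with replacement many times, recompute the statistic on each resample, and read a percentile confidence interval off the resampling distribution. A minimal sketch (the sample values are illustrative, not from the article):

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a scalar statistic."""
    rng = random.Random(seed)
    n = len(data)
    # Resample with replacement and collect the statistic's resampling distribution
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [4.1, 5.0, 5.2, 4.8, 6.1, 5.5, 4.9, 5.3, 5.7, 4.6]
lo, hi = bootstrap_ci(sample)
```

The same machinery applies to summary statistics of replicated point patterns: only `stat` changes. A randomization test differs only in resampling without replacement under the null hypothesis.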
Cao, Jianmin; Sun, Na; Yu, Weisong; Pang, Xueli; Lin, Yingnan; Kong, Fanyu; Qiu, Jun
2016-12-01
A sensitive and robust multiresidue method for the simultaneous analysis of 114 pesticides in tobacco was developed based on solid-phase extraction coupled with gas chromatography and tandem mass spectrometry. In this strategy, tobacco samples were extracted with acetonitrile and cleaned up with a multilayer solid-phase extraction cartridge Cleanert TPT using acetonitrile/toluene (3:1) as the elution solvent. Two internal standards of different polarity were used to meet simultaneous pesticides quantification demands in the tobacco matrix. Satisfactory linearity in the range of 10-500 ng/mL was obtained for all 114 pesticides with linear regression coefficients higher than 0.994. The limit of detection and limit of quantification values were 0.02-5.27 and 0.06-17.6 ng/g, respectively. For most of the pesticides, acceptable recoveries in the range of 70-120% and repeatabilities (relative standard deviation) of <11% were achieved at spiking levels of 20, 100, and 400 ng/g. Compared with the reported multiresidue analytical method, the proposed method provided a cleaner test solution with smaller amounts of pigments, fatty acids as well as other undesirable interferences. The development and validation of the high sensitivity, high selectivity, easy automation, and high-throughput analytical method meant that it could be successfully used for the determination of pesticides in tobacco samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
New method for analyzing dark matter direct detection data
NASA Astrophysics Data System (ADS)
Davis, Jonathan H.; Enßlin, Torsten; Bœhm, Céline
2014-02-01
The experimental situation of dark matter direct detection has reached an exciting crossroads, with potential hints of a discovery of dark matter (DM) from the CDMS, CoGeNT, CRESST-II and DAMA experiments in tension with null results from xenon-based experiments such as XENON100 and LUX. Given the present controversial experimental status, it is important that the analytical method used to search for DM in direct detection experiments is both robust and flexible enough to deal with data for which the distinction between signal and background points is difficult, and hence where the choice between setting a limit or defining a discovery region is debatable. In this article we propose a novel (Bayesian) analytical method, which can be applied to all direct detection experiments and which extracts the maximum amount of information from the data. We apply our method to the XENON100 experiment data as a worked example, and show that firstly our exclusion limit at 90% confidence is in agreement with their own for the 225 live days data, but is several times stronger for the 100 live days data. Secondly we find that, due to the two points at low values of S1 and S2 in the 225 days data set, our analysis points to either weak consistency with low-mass dark matter or the possible presence of an unknown background. Given the null result from LUX, the latter scenario seems the more plausible.
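The flavor of Bayesian limit-setting can be conveyed with the simplest counting-experiment version: a Poisson likelihood in the number of observed events with a known expected background, and a flat prior on the signal. This is a cartoon of the approach and ignores the detector-response modelling (the S1/S2 observables) that the actual XENON100 analysis performs.

```python
import math

def poisson_upper_limit(n_obs, background, cl=0.90, s_max=50.0, steps=100000):
    """Bayesian upper limit on a signal rate s with a flat prior:
    posterior(s) is proportional to (s + b)^n * exp(-(s + b));
    integrate the posterior up to the requested credibility level."""
    ds = s_max / steps

    def post(s):
        lam = s + background
        return lam ** n_obs * math.exp(-lam)

    total = sum(post(k * ds) for k in range(steps))
    acc = 0.0
    for k in range(steps):
        acc += post(k * ds)
        if acc >= cl * total:
            return k * ds
    return s_max

limit_90 = poisson_upper_limit(0, 0.0)  # zero events, zero expected background
```

With zero observed events and zero background, the 90% limit reduces to the textbook value -ln(0.1), about 2.30 expected signal events; observing more events weakens the limit.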
Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.
Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D
2013-01-01
We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144³ points and three different configurations on grids of 4096³ points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.
Fouad, Marwa A; Tolba, Enas H; El-Shal, Manal A; El Kerdawy, Ahmed M
2018-05-11
The continual emergence of new β-lactam antibiotics creates a need for suitable analytical methods that accelerate and facilitate their analysis. A face-centered central composite experimental design was adopted, using different levels of phosphate buffer pH and acetonitrile percentage at zero time and after 15 min in a gradient program, to obtain the optimum chromatographic conditions for the elution of 31 β-lactam antibiotics. Retention factors were used as the target property to build two QSRR models utilizing the conventional forward selection and the advanced nature-inspired firefly algorithm for descriptor selection, coupled with multiple linear regression. The obtained models showed high performance in both internal and external validation, indicating their robustness and predictive ability. The Williams-Hotelling test and Student's t-test showed no statistically significant difference between the models' results. Y-randomization validation showed that the obtained models arise from significant correlation between the selected molecular descriptors and the analytes' chromatographic retention. These results indicate that the generated FS-MLR and FFA-MLR models show comparable quality at both the training and validation levels. They also gave comparable information about the molecular features that influence the retention behavior of β-lactams under the current chromatographic conditions. We conclude that, in some cases, a simple conventional feature-selection algorithm can generate robust and predictive models comparable to those generated using advanced ones. Copyright © 2018 Elsevier B.V. All rights reserved.
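Forward selection with multiple linear regression, the "conventional" descriptor-selection route compared above, can be sketched compactly. The descriptor values and response below are synthetic, chosen so the response depends only on descriptors 0 and 2; this is not the paper's data or its full workflow (no cross-validation, Y-randomization, or significance testing).

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def mlr_r2(X_cols, y):
    """Fit y = b0 + sum(bi * xi) by least squares (normal equations); return R^2."""
    n = len(y)
    cols = [[1.0] * n] + X_cols           # prepend intercept column
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    beta = solve(A, b)
    yhat = [sum(beta[i] * cols[i][t] for i in range(k)) for t in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def forward_select(desc, y, max_terms=2):
    """Greedy forward selection: at each step add the descriptor whose
    inclusion yields the highest R^2."""
    chosen, remaining = [], list(range(len(desc)))
    while remaining and len(chosen) < max_terms:
        best_j = max(remaining,
                     key=lambda j: mlr_r2([desc[i] for i in chosen + [j]], y))
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen

desc = [
    [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],   # descriptor 0 (informative)
    [3.0, 1.0, 4.0, 1.0, 5.0, 9.0],   # descriptor 1 (irrelevant)
    [0.0, 1.0, 0.0, 1.0, 0.0, 1.0],   # descriptor 2 (informative)
]
y = [3.0, 10.0, 7.0, 14.0, 11.0, 18.0]  # = 1 + 2*d0 + 5*d2
chosen = forward_select(desc, y, max_terms=2)
```

A firefly algorithm would instead search descriptor subsets stochastically; the greedy loop above is the baseline it is compared against.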
Stefanuto, Pierre-Hugues; Perrault, Katelynn A; Stadler, Sonja; Pesesse, Romain; LeBlanc, Helene N; Forbes, Shari L; Focant, Jean-François
2015-06-01
In forensic thanato-chemistry, the understanding of the process of soft tissue decomposition is still limited. A better understanding of the decomposition process and the characterization of the associated volatile organic compounds (VOC) can help to improve the training of victim recovery (VR) canines, which are used to search for trapped victims in natural disasters or to locate corpses during criminal investigations. The complexity of matrices and the dynamic nature of this process require the use of comprehensive analytical methods for investigation. Moreover, the variability of the environment and between individuals creates additional difficulties in terms of normalization. The resolution of the complex mixture of VOCs emitted by a decaying corpse can be improved using comprehensive two-dimensional gas chromatography (GC × GC), compared to classical single-dimensional gas chromatography (1DGC). This study combines the analytical advantages of GC × GC coupled to time-of-flight mass spectrometry (TOFMS) with the data handling robustness of supervised multivariate statistics to investigate the VOC profile of human remains during early stages of decomposition. Various supervised multivariate approaches are compared to interpret the large data set. Moreover, early decomposition stages of pig carcasses (typically used as human surrogates in field studies) are also monitored to obtain a direct comparison of the two VOC profiles and estimate the robustness of this human decomposition analog model. In this research, we demonstrate that pig and human decomposition processes can be described by the same trends for the major compounds produced during the early stages of soft tissue decomposition.
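As a simplified stand-in for the supervised multivariate models compared in the study, a nearest-centroid classifier on autoscaled peak areas illustrates the basic idea of assigning a VOC profile to an origin class (the peak-area table below is hypothetical):

```python
import numpy as np

def nearest_centroid_fit(X, labels):
    """Class centroids of autoscaled VOC peak areas (columns = compounds)."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd
    classes = sorted(set(labels))
    centroids = {c: Z[np.array(labels) == c].mean(axis=0) for c in classes}
    return mu, sd, centroids

def nearest_centroid_predict(x, mu, sd, centroids):
    """Assign a new profile to the class with the closest centroid."""
    z = (x - mu) / sd
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))

# Hypothetical peak-area table: rows = samples, columns = 3 marker VOCs
X = np.array([[5.0, 1.0, 0.2], [5.2, 1.1, 0.3],   # 'human' profiles
              [1.0, 4.8, 0.9], [1.2, 5.1, 1.0]])  # 'pig' profiles
labels = ['human', 'human', 'pig', 'pig']
mu, sd, cents = nearest_centroid_fit(X, labels)
print(nearest_centroid_predict(np.array([5.1, 1.0, 0.25]), mu, sd, cents))
```

The supervised models in the paper (applied to far larger GC × GC-TOFMS peak tables) follow the same fit-on-labeled-profiles, predict-on-new-profiles pattern.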
Connecting Core Percolation and Controllability of Complex Networks
Jia, Tao; Pósfai, Márton
2014-01-01
Core percolation is a fundamental structural transition in complex networks related to a wide range of important problems. Recent advances have provided us with an analytical framework for core percolation in uncorrelated random networks with arbitrary degree distributions. Here we apply these tools to the analysis of network controllability. We confirm analytically that the emergence of the bifurcation in control coincides with the formation of the core, and that the structure of the core determines the control mode of the network. We also derive an analytical expression for controllability robustness by extending the core percolation deduction. These findings help us better understand the interesting interplay between the structural and dynamical properties of complex networks. PMID:24946797
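The core referred to above is, structurally, what survives greedy leaf removal: repeatedly deleting a degree-1 node together with its neighbor. A minimal sketch on an adjacency-set representation (this is only the structural definition, not the paper's analytical framework):

```python
from collections import deque

def core_glr(adj):
    """Greedy leaf removal: repeatedly delete a degree-1 node together with
    its unique neighbor; surviving nodes of degree >= 2 form the core."""
    adj = {u: set(vs) for u, vs in adj.items()}
    queue = deque(u for u, vs in adj.items() if len(vs) == 1)
    removed = set()
    while queue:
        u = queue.popleft()
        if u in removed or len(adj[u]) != 1:
            continue                    # already gone, or no longer a leaf
        v = next(iter(adj[u]))          # the leaf's unique neighbor
        for node in (u, v):             # remove leaf and neighbor together
            for w in adj[node]:
                adj[w].discard(node)
                if len(adj[w]) == 1 and w not in removed:
                    queue.append(w)     # w just became a leaf
            adj[node] = set()
            removed.add(node)
    return {u for u in adj if u not in removed and len(adj[u]) >= 2}

print(core_glr({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))  # 4-cycle: core is all 4 nodes
print(core_glr({0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}))        # star: empty core
```

Tree-like networks thus have an empty core, consistent with the regime before the control bifurcation discussed in the abstract.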
NASA Astrophysics Data System (ADS)
Cabello, Violeta
2017-04-01
This communication will present the advancement of an innovative analytical framework for the analysis of the Water-Energy-Food-Climate Nexus termed Quantitative Story Telling (QST). The methodology is currently under development within the H2020 project MAGIC - Moving Towards Adaptive Governance in Complexity: Informing Nexus Security (www.magic-nexus.eu). The key innovation of QST is that it bridges qualitative and quantitative analytical tools into an iterative research process in which each step is built and validated in interaction with stakeholders. The qualitative analysis focusses on the identification of the narratives behind the development of relevant WEFC-Nexus policies and innovations. The quantitative engine is the Multi-Scale Analysis of Societal and Ecosystem Metabolism (MuSIASEM), a resource accounting toolkit capable of integrating multiple analytical dimensions at different scales through relational analysis. Although QST may be labelled a story-driven rather than a data-driven approach, I will argue that improving models per se may not lead to an improved understanding of WEF-Nexus problems unless we are capable of generating more robust narratives to frame them. The communication will cover an introduction to the MAGIC project, the basic concepts of QST and a case study focussed on agricultural production in a semi-arid region in Southern Spain. Data requirements for this case study and the limitations to find, access or estimate them will be presented alongside a reflection on the relation between analytical scales and data availability.
Explicit robust schemes for implementation of general principal value-based constitutive models
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.
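The symbolic derive-then-generate-code workflow described above can be sketched with SymPy standing in for the paper's special-purpose functions and FORTRAN output; the two-coefficient polynomial potential below is a hypothetical stand-in for the general nonseparable form:

```python
import sympy as sp

l1, l2, l3 = sp.symbols('lambda1 lambda2 lambda3', positive=True)
c10, c01 = sp.symbols('c10 c01')

# Hypothetical nonseparable polynomial strain-energy potential in the
# principal stretches (stand-in for the general form treated in the paper)
W = c10 * (l1**2 + l2**2 + l3**2 - 3) + c01 * (l1*l2 + l2*l3 + l3*l1 - 3)

# Stress components derived symbolically: P_i = dW/dlambda_i,
# e.g. P_1 = 2*c10*lambda1 + c01*(lambda2 + lambda3)
P = [sp.diff(W, li) for li in (l1, l2, l3)]

# Automatic code-generation step (analogue of the paper's FORTRAN output)
P_func = sp.lambdify((l1, l2, l3, c10, c01), P, 'math')
print(P_func(1.0, 1.0, 1.0, 1.0, 1.0))  # stresses at unit stretches, c10 = c01 = 1
```

The same pattern extends to the tangent stiffness by a second symbolic differentiation before code generation.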
NASA Astrophysics Data System (ADS)
Bukhari, Hassan J.
2017-12-01
In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented, using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, i.e., minimally sensitive to perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to deviate. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on the parameters to reduce sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and one non-linear, and compared with a prior method based on multiple Monte Carlo simulation runs; the comparison shows that the approach presented in this paper yields better performance.
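The Tikhonov-style third method can be illustrated by solving a damped least-squares problem through the standard augmented-system trick (the ill-conditioned toy system is illustrative, not from the paper):

```python
import numpy as np

def tikhonov_lstsq(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha * ||x||^2 via the augmented system,
    damping the solution's sensitivity to perturbations in A and b."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Ill-conditioned toy system: nearly parallel columns
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x_plain = np.linalg.solve(A, b)            # exact but fragile solution [1, 1]
x_reg = tikhonov_lstsq(A, b, alpha=1e-3)   # slightly biased, far less sensitive
print(x_plain, x_reg)
```

The regularized solution trades a small residual increase for a much smaller amplification of any perturbation in A or b.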
Addressing Climate Change in Long-Term Water Planning Using Robust Decisionmaking
NASA Astrophysics Data System (ADS)
Groves, D. G.; Lempert, R.
2008-12-01
Addressing climate change in long-term natural resource planning is difficult because future management conditions are deeply uncertain and the range of possible adaptation options is so extensive. These conditions pose challenges to standard optimization decision-support techniques. This talk will describe a methodology called Robust Decisionmaking (RDM) that can complement more traditional analytic approaches by utilizing screening-level water management models to evaluate large numbers of strategies against a wide range of plausible future scenarios. The presentation will describe a recent application of the methodology to evaluate climate adaptation strategies for the Inland Empire Utilities Agency in Southern California. This project found that RDM can provide a useful way to address climate change uncertainty and identify robust adaptation strategies.
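The screening step at the heart of RDM can be sketched as a minimax-regret comparison of strategies across scenarios; the strategies, scenarios, and costs below are entirely hypothetical:

```python
# Evaluate every strategy against every scenario and keep the one with the
# smallest worst-case regret (cost above the best achievable in that scenario).
costs = {  # strategy -> cost under each scenario (wet, moderate, dry)
    'status quo':      [100, 140, 220],
    'efficiency only': [110, 130, 180],
    'diversified mix': [120, 125, 150],
}
scenarios = range(3)
best_per_scenario = [min(c[s] for c in costs.values()) for s in scenarios]
regret = {name: max(c[s] - best_per_scenario[s] for s in scenarios)
          for name, c in costs.items()}
robust_choice = min(regret, key=regret.get)
print(robust_choice)  # -> 'diversified mix'
```

Note how the robust choice is optimal in no single scenario yet has the smallest worst-case regret, which is exactly why optimization against one "best-guess" future can mislead.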
Cristale, Joyce; Lacorte, Silvia
2013-08-30
This study presents a multiresidue method for simultaneous extraction, clean-up and analysis of priority and emerging flame retardants in sediment, sewage sludge and dust. Studied compounds included eight polybrominated diphenyl ether congeners, nine new brominated flame retardants and ten organophosphorus flame retardants. The analytical method was based on ultrasound-assisted extraction with ethyl acetate/cyclohexane (5:2, v/v), clean-up with Florisil cartridges and analysis by gas chromatography coupled to tandem mass spectrometry (GC-EI-MS/MS). The method development and validation protocol included spiked samples, certified reference material (for dust), and participation in an interlaboratory calibration. The method proved to be efficient and robust for the extraction and determination of the three families of flame retardants in the studied solid matrices. The method was applied to river sediment, sewage sludge and dust samples, and allowed detection of 24 of the 27 studied flame retardants. Organophosphate esters, BDE-209 and decabromodiphenyl ethane were the most ubiquitous contaminants detected. Copyright © 2013 Elsevier B.V. All rights reserved.
2013-01-01
Background Artemisinin-based fixed dose combination (FDC) products are recommended by the World Health Organization (WHO) as a first-line treatment. However, the current artemisinin FDC products, such as β-artemether and lumefantrine, are inherently unstable and require controlled distribution and storage conditions, which are not always available in resource-limited settings. Moreover, quality control is hampered by a lack of suitable analytical methods. Thus, there is a need for a rapid and simple, but stability-indicating, method for the simultaneous assay of β-artemether and lumefantrine FDC products. Methods Three reversed-phase fused-core HPLC columns (Halo RP-Amide, Halo C18 and Halo Phenyl-hexyl), all thermostated at 30°C, were evaluated. β-artemether and lumefantrine (unstressed and stressed), and reference-related impurities were injected and chromatographic parameters were assessed. Optimal chromatographic parameters were obtained using the Halo RP-Amide column and an isocratic mobile phase composed of acetonitrile and 1 mM phosphate buffer pH 3.0 (52:48, V/V) at a flow rate of 1.0 ml/min with a 3 μl injection volume. Quantification was performed at 210 nm for β-artemether and 335 nm for lumefantrine. In-silico toxicological evaluation of the related impurities was performed using Derek Nexus v2.0®. Results Both β-artemether and lumefantrine were separated from each other as well as from the specified and unspecified related impurities, including degradants. A complete chromatographic run took only four minutes. Evaluation of the method, including a Plackett-Burman robustness verification within analytical QbD principles, and real-life samples showed that the method is suitable for quantitative assay purposes of both active pharmaceutical ingredients, with a mean recovery (± relative standard deviation, RSD) of 99.7% (± 0.7%) for β-artemether and 99.7% (± 0.6%) for lumefantrine.
All identified β-artemether-related impurities were predicted in Derek Nexus v2.0® to have toxicity risks similar to β-artemether active pharmaceutical ingredient (API) itself. Conclusions A rapid, robust, precise and accurate stability-indicating, quantitative fused-core isocratic HPLC method was developed for simultaneous assay of β-artemether and lumefantrine. This method can be applied in the routine regulatory quality control of FDC products. The in-silico toxicological investigation using Derek Nexus® indicated that the overall toxicity risk for β-artemether-related impurities is comparable to that of β-artemether API. PMID:23631682
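The Plackett-Burman robustness verification mentioned in the Results can be illustrated by constructing the classical 12-run two-level design, whose balanced, mutually orthogonal columns allow the main effects of up to 11 method factors (pH, flow rate, temperature, etc.) to be screened in just 12 runs:

```python
import numpy as np

def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors, built
    by cyclically shifting the standard generator row and appending a row
    of low levels."""
    gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows)

D = plackett_burman_12()
# Each column is balanced (six +1, six -1) and every pair of columns is
# orthogonal, so D.T @ D = 12 * I and main effects are estimated independently.
print(D.shape)
```

Each row of D specifies one chromatographic run; the effect of factor j on recovery is then estimated as the difference between the mean responses at its +1 and −1 levels.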