A new method for constructing analytic elements for groundwater flow.
NASA Astrophysics Data System (ADS)
Strack, O. D.
2007-12-01
The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain, and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41(2), 1005, 2003, by O.D.L. Strack). The majority of these analytic elements consist of functions that exhibit jumps along lines or curves. Such linear analytic elements have also been developed for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons. First, the elementary solutions must exist, and, second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented generalizes this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach is applied, along with numerical examples, to the modified Helmholtz equation and to the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
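A minimal sketch of the abstract's starting point: harmonic functions with desired properties "exist in abundance" because the real part of any analytic function of z = x + iy is harmonic. Below, Re(log(z - z0)) (a point sink at the assumed location z0 = 1 + 2i) and Re(z^3) are written in real form and their Laplacian verified symbolically; the paper's Wirtinger-calculus generalization to the modified Helmholtz equation is not reproduced here.

```python
import sympy as sp

# Verify that two example building blocks are harmonic: Re(log(z - z0)) in
# real form, plus Re(z**3) = x**3 - 3*x*y**2. Example choices are arbitrary.
x, y = sp.symbols("x y", real=True)
phi = sp.log((x - 1)**2 + (y - 2)**2) / 2 + (x**3 - 3*x*y**2)
laplacian = sp.simplify(sp.diff(phi, x, 2) + sp.diff(phi, y, 2))
print(laplacian)  # 0: both terms are harmonic away from z0
```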
Analytical evaluation of current starch methods used in the international sugar industry: Part I
USDA-ARS?s Scientific Manuscript database
Several analytical starch methods currently exist in the international sugar industry that are used to prevent or mitigate starch-related processing challenges as well as assess the quality of traded end-products. These methods use simple iodometric chemistry, mostly potato starch standards, and uti...
Mega-Analysis of School Psychology Blueprint for Training and Practice Domains
ERIC Educational Resources Information Center
Burns, Matthew K.; Kanive, Rebecca; Zaslofsky, Anne F.; Parker, David C.
2013-01-01
Meta-analytic research is an effective method for synthesizing existing research and for informing practice and policy. Hattie (2009) suggested that meta-analytic procedures could be applied to existing meta-analyses to create a mega-analysis. The current mega-analysis examined a sample of 47 meta-analyses according to the "School…
A simulation-based evaluation of methods for inferring linear barriers to gene flow
Christopher Blair; Dana E. Weigel; Matthew Balazik; Annika T. H. Keeley; Faith M. Walker; Erin Landguth; Sam Cushman; Melanie Murphy; Lisette Waits; Niko Balkenhol
2012-01-01
Different analytical techniques used on the same data set may lead to different conclusions about the existence and strength of genetic structure. Therefore, reliable interpretation of the results from different methods depends on the efficacy and reliability of different statistical methods. In this paper, we evaluated the performance of multiple analytical methods to...
Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T
2017-01-01
Measurement of inulin clearance is considered the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR), on the other hand, is easier to calculate using various creatinine- and/or cystatin C (Cys C)-based formulas. However, different and non-interchangeable analytical methods exist for the determination of serum creatinine (Scr) and Cys C. Given that different analytical methods for the determination of creatinine and Cys C were used to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas, using either Scr and Cys C values determined by the analytical method originally employed for validation or values obtained by an alternative analytical method, to evaluate any possible effects on performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference in the classification of patients as having normal or reduced GFR compared with the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared with CrCl. Although clinicians should take care to apply a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of these methods are not designed to protect against the risk of accepting unsuitable methods, and thus have the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or the β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
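As a concrete illustration of the β-content (0.9) comparator described above, the sketch below checks whether an approximate two-sided β-content tolerance interval (Howe's closed-form factor, β = 0.90 at 90% confidence) lies within pre-set total-error acceptance limits. The recovery data and the ±5% limits are hypothetical, and this is a generic tolerance-interval check, not the paper's generalized pivotal quantity procedure.

```python
import numpy as np
from scipy import stats

# "Fit for purpose" total-error check via an approximate beta-content
# tolerance interval: accept only if the interval expected to contain 90% of
# future results (with 90% confidence) sits inside the acceptance limits.
def beta_content_interval(x, beta=0.90, conf=0.90):
    n, nu = len(x), len(x) - 1
    z = stats.norm.ppf((1 + beta) / 2)
    chi2 = stats.chi2.ppf(1 - conf, nu)          # lower chi-square quantile
    k = z * np.sqrt(nu * (1 + 1 / n) / chi2)     # Howe's tolerance factor
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

rng = np.random.default_rng(1)
recoveries = rng.normal(loc=100.5, scale=1.2, size=30)   # % recovery, simulated
lo, hi = beta_content_interval(recoveries)
accept = (95.0 <= lo) and (hi <= 105.0)                  # hypothetical limits
print(f"tolerance interval: ({lo:.2f}, {hi:.2f})  accept: {accept}")
```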
Techniques for Forecasting Air Passenger Traffic
NASA Technical Reports Server (NTRS)
Taneja, N.
1972-01-01
The basic techniques of forecasting the air passenger traffic are outlined. These techniques can be broadly classified into four categories: judgmental, time-series analysis, market analysis and analytical. The differences between these methods exist, in part, due to the degree of formalization of the forecasting procedure. Emphasis is placed on describing the analytical method.
Semi-analytic valuation of stock loans with finite maturity
NASA Astrophysics Data System (ADS)
Lu, Xiaoping; Putri, Endah R. M.
2015-10-01
In this paper we study stock loans of finite maturity with different dividend distributions semi-analytically, using the analytical approximation method in Zhu (2006). Stock loan partial differential equations (PDEs) are established under the Black-Scholes framework. The Laplace transform method is used to solve the PDEs. The optimal exit price and stock loan value are obtained in Laplace space. Values in the original time space are recovered by numerical Laplace inversion. To demonstrate the efficiency and accuracy of our semi-analytic method, several examples are presented, and the results are compared with those calculated using existing methods. We also present a calculation of the fair service fee charged by the lender for different loan parameters.
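Because the final step above is numerical Laplace inversion, a sketch of one standard inversion scheme may help. The Gaver-Stehfest algorithm below is a common generic choice; the abstract does not specify which inversion the authors use, and the test transform here is a textbook pair, not the stock-loan transform.

```python
import math

# Gaver-Stehfest numerical Laplace inversion: approximate f(t) from samples
# of F(s) on the real axis. Check pair: F(s) = 1/(s+1) <-> f(t) = exp(-t).
def stehfest_coeffs(N):
    """Stehfest weights V_1..V_N (N must be even)."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)) / (
                math.factorial(half - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + half) * s)
    return V

def laplace_invert(F, t, N=12):
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(Vk * F((k + 1) * ln2 / t) for k, Vk in enumerate(V))

print(laplace_invert(lambda s: 1.0 / (s + 1.0), 1.0))  # ~ exp(-1) = 0.36788
```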
A simplified analytic form for generation of axisymmetric plasma boundaries
Luce, Timothy C.
2017-02-23
An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.
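A minimal sketch of the superellipse basis function named above, sampling a closed plasma-like boundary quadrant by quadrant. The semi-axes and exponents are hypothetical placeholders, and the paper's nonlinear constraint solve, feasibility test, and nearby-solution fallback are not reproduced; adjacent quadrants share semi-axes so the contour closes continuously.

```python
import numpy as np

def superellipse(a, b, n, theta):
    """Points on |x/a|**n + |y/b|**n = 1 for angles theta."""
    x = a * np.sign(np.cos(theta)) * np.abs(np.cos(theta)) ** (2.0 / n)
    y = b * np.sign(np.sin(theta)) * np.abs(np.sin(theta)) ** (2.0 / n)
    return x, y

# (a, b, n) per quadrant; matching a/b values at the seams keep it closed.
quads = [(2.0, 3.0, 2.5), (1.6, 3.0, 2.0), (1.6, 2.4, 3.0), (2.0, 2.4, 2.2)]
pts = []
for q, (a, b, n) in enumerate(quads):
    theta = np.linspace(q * np.pi / 2, (q + 1) * np.pi / 2, 50, endpoint=False)
    pts.append(np.column_stack(superellipse(a, b, n, theta)))
boundary = np.vstack(pts)   # closed R-Z style contour for an equilibrium solver
print(boundary.shape)       # (200, 2)
```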
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, N.B.; Walker, J.F.
1990-01-01
The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Quantifying construction and demolition waste: an analytical review.
Wu, Zezhou; Yu, Ann T W; Shen, Liyin; Liu, Guiwen
2014-09-01
Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.
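As a concrete illustration of one category identified above, the waste generation rate method reduces to multiplying an activity's size by a per-unit rate; the rates and area below are hypothetical placeholders, not values from the review.

```python
# "Waste generation rate" method: waste mass = activity size x per-unit rate.
GENERATION_RATE_KG_PER_M2 = {   # assumed kg of waste per m^2 gross floor area
    "new_construction": 30.0,
    "renovation": 100.0,
    "demolition": 1300.0,
}

def cd_waste_tonnes(activity: str, floor_area_m2: float) -> float:
    """Estimated C&D waste (tonnes) for one project."""
    return GENERATION_RATE_KG_PER_M2[activity] * floor_area_m2 / 1000.0

print(cd_waste_tonnes("demolition", 5000.0))  # 6500.0 tonnes
```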
The rise of environmental analytical chemistry as an interdisciplinary activity.
Brown, Richard
2009-07-01
Modern scientific endeavour is increasingly delivered within an interdisciplinary framework. Analytical environmental chemistry is a long-standing example of an interdisciplinary approach to scientific research where value is added by the close cooperation of different disciplines. This editorial piece discusses the rise of environmental analytical chemistry as an interdisciplinary activity and outlines the scope of the Analytical Chemistry and the Environmental Chemistry domains of TheScientificWorldJOURNAL (TSWJ), and the appropriateness of TSWJ's domain format in covering interdisciplinary research. All contributions of new data, methods, case studies, and instrumentation, or new interpretations and developments of existing data, case studies, methods, and instrumentation, relating to analytical and/or environmental chemistry, to the Analytical and Environmental Chemistry domains, are welcome and will be considered equally.
The Importance of Method Selection in Determining Product Integrity for Nutrition Research
Mudge, Elizabeth M; Brown, Paula N
2016-01-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823
The Importance of Method Selection in Determining Product Integrity for Nutrition Research.
Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N
2016-03-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.
Snee, Lawrence W.
2002-01-01
40Ar/39Ar geochronology is an experimentally robust and versatile method for constraining time and temperature in geologic processes. The argon method is the most broadly applied in mineral-deposit studies. Standard analytical methods and formulations exist, making the fundamentals of the method well defined. A variety of graphical representations exist for evaluating argon data. A broad range of minerals found in mineral deposits, alteration zones, and host rocks commonly is analyzed to provide age, temporal duration, and thermal conditions for mineralization events and processes. All are discussed in this report. The usefulness of and evolution of the applicability of the method are demonstrated in studies of the Panasqueira, Portugal, tin-tungsten deposit; the Cornubian batholith and associated mineral deposits, southwest England; the Red Mountain intrusive system and associated Urad-Henderson molybdenum deposits; and the Eastern Goldfields Province, Western Australia.
Creating analytically divergence-free velocity fields from grid-based data
NASA Astrophysics Data System (ADS)
Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.
2016-10-01
We present a method, based on B-splines, to calculate a C2 continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories, resulting in more accurate identification of Lagrangian coherent structures.
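The identity the method relies on, that the curl of any sufficiently smooth vector potential is divergence-free by construction, is easy to verify symbolically. The sketch below does so for an arbitrary example potential; the paper's fitted C2 B-spline potential is not reproduced.

```python
import sympy as sp

# div(curl A) = 0 identically: any velocity field built as curl of a smooth
# vector potential is analytically divergence-free. A is an arbitrary choice.
x, y, z = sp.symbols("x y z")
A = sp.Matrix([y * z**2, sp.sin(x) * z, x**2 * y])

v = sp.Matrix([
    sp.diff(A[2], y) - sp.diff(A[1], z),   # (curl A)_x
    sp.diff(A[0], z) - sp.diff(A[2], x),   # (curl A)_y
    sp.diff(A[1], x) - sp.diff(A[0], y),   # (curl A)_z
])
div_v = sp.simplify(sp.diff(v[0], x) + sp.diff(v[1], y) + sp.diff(v[2], z))
print(v.T)
print("div v =", div_v)   # 0
```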
Analytical concepts for health management systems of liquid rocket engines
NASA Technical Reports Server (NTRS)
Williams, Richard; Tulpule, Sharayu; Hawman, Michael
1990-01-01
Substantial improvement in health management systems performance can be realized by implementing advanced analytical methods of processing existing liquid rocket engine sensor data. In this paper, such techniques ranging from time series analysis to multisensor pattern recognition to expert systems to fault isolation models are examined and contrasted. The performance of several of these methods is evaluated using data from test firings of the Space Shuttle main engines.
Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors
Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech
2011-01-01
Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aided sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimates. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without using aided sensors. In this paper, a new method is proposed that employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is the approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
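Of the adaptive filters named above, the BMFLC is compact enough to sketch: a bank of sines and cosines spanning the assumed motion band, with LMS-adapted weights. The sample rate, band edges, gain, and test signal below are hypothetical, and the authors' surrounding drift-removal stages are omitted.

```python
import numpy as np

# Band-limited multiple Fourier linear combiner (BMFLC) with an LMS update,
# tracking a quasi-periodic signal whose frequencies lie in a known band.
fs = 250.0                           # sample rate (Hz), assumed
freqs = np.arange(6.0, 14.25, 0.25)  # assumed motion band (Hz)
mu = 0.01                            # LMS adaptation gain

t = np.arange(0.0, 10.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 9.3 * t) + 0.4 * np.sin(2 * np.pi * 11.1 * t)

w = np.zeros(2 * len(freqs))
est = np.zeros_like(signal)
for k, tk in enumerate(t):
    xk = np.concatenate([np.sin(2 * np.pi * freqs * tk),
                         np.cos(2 * np.pi * freqs * tk)])
    est[k] = w @ xk                      # combiner output
    w += 2 * mu * (signal[k] - est[k]) * xk   # LMS weight update

last = slice(-int(fs), None)
print("RMS error, last second:", np.sqrt(np.mean((signal[last] - est[last])**2)))
```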
NASA Astrophysics Data System (ADS)
Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen
2017-04-01
In this study, analytical solutions for one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed for groundwater and riverine flow using both the Green's Function Method (GFM) and a pertinent coordinate transformation method. The dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and that of the dispersion coefficient is the square of that of the velocity, while the temporal dependence may be linear, exponentially or asymptotically decelerating, or accelerating. Our proposed analytical solutions are derived for three different situations, depending on the variations of the dispersion coefficient and velocity, which can represent real physical processes occurring in groundwater and riverine systems. The first case refers to steady solute transport in steady flow, in which the dispersion coefficient and velocity are only spatially dependent. The second case represents transient solute transport in steady flow, in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case describes transient solute transport in unsteady flow, in which both the dispersion coefficient and velocity are spatially and temporally dependent. The present paper demonstrates the concentration distribution behavior from a point source in realistic flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant's mass is affected by the heterogeneity of the medium as well as by other factors such as velocity fluctuations, while the velocity is influenced by the water table slope and recharge rate. These capabilities make the proposed method applicable to a wider range of hydrological problems than previously existing analytical solutions. In particular, to the authors' knowledge, no other solution exists for simultaneous spatial and temporal variation of both the dispersion coefficient and the velocity. In this study, existing analytical solutions from widely known previous studies are used for comparison, as validation tools, to verify the proposed analytical solutions as well as the numerical code of the Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and the developed 1D finite difference (FDM) code. All such solutions show a perfect match with the respective proposed solutions.
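For readers who want a numerical baseline of the kind the authors compare against (a 1D finite difference code is mentioned above), the sketch below is a bare-bones explicit solver for the 1-D advection-dispersion equation with the abstract's coefficient structure, dispersion proportional to the square of a linearly varying velocity. All constants are hypothetical; neither the authors' FDM code nor their analytical solutions are reproduced.

```python
import numpy as np

# Explicit FDM for c_t = (D c_x)_x - (u c)_x, u linear in x and D ~ u**2.
L, T, nx = 10.0, 2.0, 201
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
u = lambda x, t: 0.5 * (1.0 + 0.1 * x)         # velocity, linear in x
D = lambda x, t: 0.05 * (1.0 + 0.1 * x) ** 2   # dispersion ~ velocity squared
dt = 0.2 * dx**2 / D(L, 0.0)                   # conservative explicit time step

c = np.zeros(nx)
c[20] = 1.0 / dx                               # instantaneous point source
t = 0.0
while t < T:
    flux = D(x, t) * np.gradient(c, dx) - u(x, t) * c   # dispersive - advective
    c = c + dt * np.gradient(flux, dx)
    c[0] = c[-1] = 0.0                         # far-field boundaries
    t += dt
print("total mass ~", c.sum() * dx)            # ~1 until the plume reaches x = L
```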
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
Life cycle management of analytical methods.
Parr, Maria Kristina; Schmidt, Alexander H
2018-01-05
In modern process management, the life cycle concept is gaining more and more importance. It focusses on the total costs of the process, from investment to operation and finally retirement. In recent years, interest in this concept has also been growing for analytical procedures. The life cycle of an analytical method consists of design, development, validation (including instrumental qualification, continuous method performance verification and method transfer) and finally retirement of the method. Regulatory bodies, too, have increased their awareness of life cycle management for analytical methods. Thus, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), as well as the United States Pharmacopeial Forum, are discussing the adoption of new guidelines that include life cycle management of analytical methods. The US Pharmacopeia (USP) Validation and Verification expert panel has already proposed a new General Chapter 〈1220〉 "The Analytical Procedure Lifecycle" for integration into the USP. Furthermore, growing interest in life cycle management is also seen in the non-regulated environment. Quality-by-design based method development results in increased method robustness, thereby reducing the effort needed for method performance verification and post-approval changes, as well as minimizing the risk of method-related out-of-specification results. This strongly contributes to reduced costs of the method during its life cycle. Copyright © 2017 Elsevier B.V. All rights reserved.
USDA-ARS?s Scientific Manuscript database
A quantitative answer cannot exist in an analysis without a qualitative component to give enough confidence that the result meets the analytical needs for the analysis (i.e. the result relates to the analyte and not something else). Just as a quantitative method must typically undergo an empirical ...
PARTNERING TO IMPROVE HUMAN EXPOSURE METHODS
Methods development research is an application-driven scientific area that addresses programmatic needs. The goals are to reduce measurement uncertainties, address data gaps, and improve existing analytical procedures for estimating human exposures. Partnerships have been develop...
Modified harmonic balance method for the solution of nonlinear jerk equations
NASA Astrophysics Data System (ADS)
Rahman, M. Saifur; Hasan, A. S. M. Z.
2018-03-01
In this paper, a second approximation to the solution of nonlinear jerk equations (third-order differential equations) is obtained using a modified harmonic balance method. The method is simpler and easier to carry out because fewer nonlinear algebraic equations must be solved than in the classical harmonic balance method. The results obtained from this method are compared with those obtained from other existing analytical methods available in the literature and with numerical results. The solution shows good agreement with the numerical solution as well as with the analytical methods of the available literature.
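To make the balancing idea concrete, the sketch below applies classical first-order harmonic balance to the Duffing oscillator x'' + x + εx³ = 0, projecting the residual of a one-term ansatz onto the fundamental harmonic. This is a stand-in illustration only: the paper's modified method and its third-order jerk equations are not treated here.

```python
import sympy as sp

# Classical one-term harmonic balance on the Duffing oscillator.
t = sp.symbols("t", real=True)
w, A, eps = sp.symbols("omega A epsilon", positive=True)

x = A * sp.cos(w * t)                          # one-harmonic ansatz
residual = sp.diff(x, t, 2) + x + eps * x**3   # plug into x'' + x + eps*x**3

# Balance the fundamental harmonic: project the residual onto cos(omega*t).
c1 = sp.integrate(residual * sp.cos(w * t), (t, 0, 2 * sp.pi / w))
print(sp.solve(sp.Eq(sp.simplify(c1), 0), w**2))   # [3*A**2*epsilon/4 + 1]
```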
Benhammouda, Brahim; Vazquez-Leal, Hector
2016-01-01
This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and can be combined with any analytical method other than the DTM, such as the homotopy perturbation method.
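The method of steps itself is easy to demonstrate: over each interval of one delay length, the delayed term is a known function from the previous interval, so each step is an ordinary initial value problem. The sketch below does the stepping numerically for the textbook constant-delay problem x'(t) = -x(t-1) with unit history; the paper instead solves each step analytically by DTM with Laplace-Padé resummation and handles variable delays.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of steps for x'(t) = -x(t - 1), x(t) = 1 on [-1, 0]: on each
# interval [k, k+1] the delayed term comes from the previous segment.
tau = 1.0
prev = lambda t: 1.0          # history segment on [-tau, 0]
x0 = 1.0
for k in range(5):
    sol = solve_ivp(lambda t, x, f=prev: [-f(t - tau)],   # f: previous segment
                    (k * tau, (k + 1) * tau), [x0],
                    dense_output=True, rtol=1e-9, atol=1e-12)
    prev = lambda t, s=sol: float(s.sol(t)[0])
    x0 = prev((k + 1) * tau)
    print(f"x({(k + 1) * tau:.0f}) = {x0:.6f}")   # exact: x(1) = 0, x(2) = -0.5
```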
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
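A toy version of the prediction-correction idea on an unconstrained time-varying quadratic, where the optimizer is a drifting point: predict by extrapolating the drift, then correct with a few gradient steps, and compare with correction only. Everything here (trajectory, gains, horizon) is hypothetical, and the paper's constrained first-order prediction step is not reproduced.

```python
import numpy as np

# Track min_x f(x; t) with f(x; t) = 0.5 * ||x - a(t)||**2, so x*(t) = a(t).
h, steps, gamma = 0.1, 200, 0.5               # sampling period, horizon, step size
a = lambda t: np.array([np.sin(t), np.cos(2 * t)])   # drifting optimum

x_pc, x_c = np.zeros(2), np.zeros(2)          # prediction-correction vs correction-only
err_pc = err_c = 0.0
for k in range(steps):
    t = k * h
    x_pc = x_pc + (a(t + h) - a(t))           # prediction: follow estimated drift
    for _ in range(2):                        # correction: gradient steps at t + h
        x_pc -= gamma * (x_pc - a(t + h))
        x_c -= gamma * (x_c - a(t + h))
    err_pc += np.linalg.norm(x_pc - a(t + h))
    err_c += np.linalg.norm(x_c - a(t + h))

print("avg tracking error, prediction-correction:", err_pc / steps)
print("avg tracking error, correction-only:      ", err_c / steps)
```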
Bao, Yijun; Gaylord, Thomas K
2016-11-01
Multifilter phase imaging with partially coherent light (MFPI-PC) is a promising new quantitative phase imaging method. However, the existing MFPI-PC method is based on the paraxial approximation. In the present work, an analytical nonparaxial partially coherent phase optical transfer function is derived. This enables the MFPI-PC to be extended to the realistic nonparaxial case. Simulations over a wide range of test phase objects as well as experimental measurements on a microlens array verify higher levels of imaging accuracy compared to the paraxial method. Unlike the paraxial version, the nonparaxial MFPI-PC with obliquity factor correction exhibits no systematic error. In addition, due to its analytical expression, the increase in computation time compared to the paraxial version is negligible.
Cantrill, Richard C
2008-01-01
Methods of analysis for products of modern biotechnology are required for national and international trade in seeds, grain and food in order to meet the labeling or import/export requirements of different nations and trading blocks. Although many methods were developed by the originators of transgenic events, governments, universities, and testing laboratories, trade is less complicated if there exists a set of international consensus-derived analytical standards. In any analytical situation, multiple methods may exist for testing for the same analyte. These methods may be supported by regional preferences and regulatory requirements. However, tests need to be sensitive enough to determine low levels of these traits in commodity grain for regulatory purposes and also to indicate purity of seeds containing these traits. The International Organization for Standardization (ISO) and its European counterpart have worked to produce a suite of standards through open, balanced and consensus-driven processes. Presently, these standards are approaching the time for their first review. In fact, ISO 21572, the "protein standard" has already been circulated for systematic review. In order to expedite the review and revision of the nucleic acid standards an ISO Technical Specification (ISO/TS 21098) was drafted to set the criteria for the inclusion of precision data from collaborative studies into the annexes of these standards.
Determining a carbohydrate profile for Hansenula polymorpha
NASA Technical Reports Server (NTRS)
Petersen, G. R.
1985-01-01
The determination of the levels of carbohydrates in the yeast Hansenula polymorpha required the development of new analytical procedures. Existing fractionation and analytical methods were adapted to deal with the problems involved with the lysis of whole cells. Using these new procedures, the complete carbohydrate profiles of H. polymorpha and selected mutant strains were determined and shown to correlate favourably with previously published results.
Conservative Analytical Collision Probabilities for Orbital Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2004-01-01
The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.
Conservative Analytical Collision Probability for Design of Orbital Formations
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2004-01-01
The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.
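One generic way to build a conservative bound of the kind described in the two abstracts above: the probability mass of a 2-D Gaussian relative-position density over a hard-body disk can never exceed the disk area times the peak density on the disk. The sketch below compares that bound with a Monte Carlo estimate for hypothetical geometry; it illustrates the bounding idea only and is not the paper's specific approximation.

```python
import numpy as np

# Conservative bound: P(collision) <= (disk area) * (max density on disk).
def pc_upper_bound(miss, sigma, R):
    d = max(np.linalg.norm(miss) - R, 0.0)    # mean-to-nearest-disk-point distance
    peak = np.exp(-0.5 * (d / sigma) ** 2) / (2.0 * np.pi * sigma**2)
    return np.pi * R**2 * peak

def pc_monte_carlo(miss, sigma, R, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.normal(0.0, sigma, size=(n, 2))
    return np.mean(np.linalg.norm(pts - miss, axis=1) < R)

miss, sigma, R = np.array([300.0, 200.0]), 150.0, 20.0   # metres, hypothetical
print("conservative bound:", pc_upper_bound(miss, sigma, R))
print("Monte Carlo check: ", pc_monte_carlo(miss, sigma, R))
```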
Analytical Methods for Biomass Characterization during Pretreatment and Bioconversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pu, Yunqiao; Meng, Xianzhi; Yoo, Chang Geun
2016-01-01
Lignocellulosic biomass has been introduced as a promising resource for alternative fuels and chemicals because of its abundance and its role as a complement to petroleum resources. Biomass is a complex biopolymer, and its compositional and structural characteristics vary widely depending on species as well as growth environment. Because of the complexity and variety of biomass, understanding its physicochemical characteristics is key to effective biomass utilization. Characterization of biomass not only provides critical information about biomass during pretreatment and bioconversion, but also gives valuable insights into how to utilize the biomass. For a better understanding of biomass characteristics, a good grasp and proper selection of analytical methods are necessary. This chapter introduces existing analytical approaches that are widely employed for biomass characterization during biomass pretreatment and conversion processes. Diverse analytical methods using Fourier transform infrared (FTIR) spectroscopy, gel permeation chromatography (GPC), and nuclear magnetic resonance (NMR) spectroscopy for biomass characterization are reviewed. In addition, biomass accessibility methods based on analyzing surface properties of biomass are also summarized in this chapter.
Quantifying construction and demolition waste: An analytical review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Zezhou; Yu, Ann T.W., E-mail: bsannyu@polyu.edu.hk; Shen, Liyin
2014-09-15
Highlights: • Prevailing C&D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attention should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested.
Analysis of modal behavior at frequency cross-over
NASA Astrophysics Data System (ADS)
Costa, Robert N., Jr.
1994-11-01
The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the optimization algorithm previously used is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's Method. The second structure is the Control of Flexible Structures (COFS). The COFS structure is successfully reproduced and an initial eigenanalysis completed.
Novel asymmetric representation method for solving the higher-order Ginzburg-Landau equation
Wong, Pring; Pang, Lihui; Wu, Ye; Lei, Ming; Liu, Wenjun
2016-01-01
In ultrafast optics, optical pulses are generated with ever shorter durations, which has enormous significance for industrial applications and scientific research. The ultrashort pulse evolution in fiber lasers can be described by the higher-order Ginzburg-Landau (GL) equation. However, analytic soliton solutions for this equation have not been obtained by use of existing methods. In this paper, a novel method is proposed to deal with this equation. The analytic soliton solution is obtained for the first time, and is proved to be stable against amplitude perturbations. Through the split-step Fourier method, the bright soliton solution is studied numerically. The analytic results here may extend the integrable methods, and could be used to study soliton dynamics for some equations in other disciplines. They may also provide another way to obtain two-soliton solutions for higher-order GL equations. PMID:27086841
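Since the abstract verifies its bright soliton numerically with the split-step Fourier method, a minimal split-step propagator may be useful. The sketch below evolves the fundamental bright soliton of the plain cubic NLSE and omits the higher-order Ginzburg-Landau terms the paper actually studies.

```python
import numpy as np

# Split-step Fourier propagation for i u_z + 0.5 u_tt + |u|**2 u = 0.
# The fundamental soliton u = sech(t) exp(i z / 2) keeps its shape.
nt, nz, dz = 1024, 2000, 0.005
t = np.linspace(-20.0, 20.0, nt, endpoint=False)
w = 2.0 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])
u = 1.0 / np.cosh(t)                              # fundamental soliton profile

half_disp = np.exp(-0.25j * w**2 * dz)            # half-step of linear dispersion
for _ in range(nz):
    u = np.fft.ifft(half_disp * np.fft.fft(u))
    u *= np.exp(1j * np.abs(u) ** 2 * dz)         # full nonlinear step
    u = np.fft.ifft(half_disp * np.fft.fft(u))

print("peak amplitude after z = 10:", np.abs(u).max())   # ~1.0 for a true soliton
```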
Writing analytic element programs in Python.
Bakker, Mark; Kelson, Victor A
2009-01-01
The analytic element method is a mesh-free approach for modeling ground water flow at both the local and the regional scale. With the advent of the Python object-oriented programming language, it has become relatively easy to write analytic element programs. In this article, an introduction is given of the basic principles of the analytic element method and of the Python programming language. A simple, yet flexible, object-oriented design is presented for analytic element codes using multiple inheritance. New types of analytic elements may be added without the need for any changes in the existing part of the code. The presented code may be used to model flow to wells (with either a specified discharge or drawdown) and streams (with a specified head). The code may be extended by any hydrogeologist with a healthy appetite for writing computer code to solve more complicated ground water flow problems. Copyright © 2009 The Author(s). Journal Compilation © 2009 National Ground Water Association.
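A stripped-down sketch of the design the article describes: each analytic element contributes a complex potential, and the model superposes the contributions, so new element types can be added without touching existing code. Class and method names here are illustrative, not the authors' code, and single inheritance is used for brevity where the article's design employs multiple inheritance.

```python
import numpy as np

class Element:
    def omega(self, z):
        """Complex potential contribution of this element at z."""
        raise NotImplementedError

class Well(Element):
    def __init__(self, zw, Q):
        self.zw, self.Q = zw, Q          # location (complex) and discharge
    def omega(self, z):
        return self.Q / (2.0 * np.pi) * np.log(z - self.zw)

class UniformFlow(Element):
    def __init__(self, Q0, angle=0.0):
        self.W = Q0 * np.exp(-1j * angle)
    def omega(self, z):
        return -self.W * z

class Model:
    def __init__(self, elements):
        self.elements = elements
    def potential(self, z):
        """Discharge potential: real part of the superposed omegas."""
        return sum(e.omega(z) for e in self.elements).real

m = Model([UniformFlow(1.0), Well(0.0 + 0.0j, 2.0), Well(3.0 + 1.0j, -0.5)])
print(m.potential(1.0 + 1.0j))
```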
Methods for geochemical analysis
Baedecker, Philip A.
1987-01-01
The laboratories for analytical chemistry within the Geologic Division of the U.S. Geological Survey are administered by the Office of Mineral Resources. The laboratory analysts provide analytical support to those programs of the Geologic Division that require chemical information and conduct basic research in analytical and geochemical areas vital to the furtherance of Division program goals. Laboratories for research and geochemical analysis are maintained at the three major centers in Reston, Virginia, Denver, Colorado, and Menlo Park, California. The Division has an expertise in a broad spectrum of analytical techniques, and the analytical research is designed to advance the state of the art of existing techniques and to develop new methods of analysis in response to special problems in geochemical analysis. The geochemical research and analytical results are applied to the solution of fundamental geochemical problems relating to the origin of mineral deposits and fossil fuels, as well as to studies relating to the distribution of elements in varied geologic systems, the mechanisms by which they are transported, and their impact on the environment.
A LITERATURE REVIEW OF WIPE SAMPLING METHODS ...
Wipe sampling is an important technique for the estimation of contaminant deposition in buildings, homes, or outdoor surfaces as a source of possible human exposure. Numerous methods of wipe sampling exist, and each method has its own specification for the type of wipe, wetting solvent, and determinative step to be used, depending upon the contaminant of concern. The objective of this report is to concisely summarize the findings of a literature review that was conducted to identify the state-of-the-art wipe sampling techniques for a target list of compounds. This report describes the methods used to perform the literature review; a brief review of wipe sampling techniques in general; an analysis of physical and chemical properties of each target analyte; an analysis of wipe sampling techniques for the target analyte list; and a summary of the wipe sampling techniques for the target analyte list, including existing data gaps. In general, no overwhelming consensus can be drawn from the current literature on how to collect a wipe sample for the chemical warfare agents, organophosphate pesticides, and other toxic industrial chemicals of interest to this study. Different methods, media, and wetting solvents have been recommended and used by various groups and different studies. For many of the compounds of interest, no specific wipe sampling methodology has been established for their collection. Before a wipe sampling method (or methods) can be established for the co
Review: visual analytics of climate networks
NASA Astrophysics Data System (ADS)
Nocke, T.; Buschmann, S.; Donges, J. F.; Marwan, N.; Schulz, H.-J.; Tominski, C.
2015-09-01
Network analysis has become an important approach in studying complex spatiotemporal behaviour within geophysical observation and simulation data. This new field produces increasing numbers of large geo-referenced networks to be analysed. Particular focus lies currently on the network analysis of the complex statistical interrelationship structure within climatological fields. The standard procedure for such network analyses is the extraction of network measures in combination with static standard visualisation methods. Existing interactive visualisation methods and tools for geo-referenced network exploration are often either not known to the analyst or their potential is not fully exploited. To fill this gap, we illustrate how interactive visual analytics methods in combination with geovisualisation can be tailored for visual climate network investigation. Therefore, the paper provides a problem analysis relating the multiple visualisation challenges to a survey undertaken with network analysts from the research fields of climate and complex systems science. Then, as an overview for the interested practitioner, we review the state-of-the-art in climate network visualisation and provide an overview of existing tools. As a further contribution, we introduce the visual network analytics tools CGV and GTX, providing tailored solutions for climate network analysis, including alternative geographic projections, edge bundling, and 3-D network support. Using these tools, the paper illustrates the application potentials of visual analytics for climate networks based on several use cases including examples from global, regional, and multi-layered climate networks.
Review: visual analytics of climate networks
NASA Astrophysics Data System (ADS)
Nocke, T.; Buschmann, S.; Donges, J. F.; Marwan, N.; Schulz, H.-J.; Tominski, C.
2015-04-01
Network analysis has become an important approach in studying complex spatiotemporal behaviour within geophysical observation and simulation data. This new field produces increasing amounts of large geo-referenced networks to be analysed. Particular focus lies currently on the network analysis of the complex statistical interrelationship structure within climatological fields. The standard procedure for such network analyses is the extraction of network measures in combination with static standard visualisation methods. Existing interactive visualisation methods and tools for geo-referenced network exploration are often either not known to the analyst or their potential is not fully exploited. To fill this gap, we illustrate how interactive visual analytics methods in combination with geovisualisation can be tailored for visual climate network investigation. Therefore, the paper provides a problem analysis, relating the multiple visualisation challenges with a survey undertaken with network analysts from the research fields of climate and complex systems science. Then, as an overview for the interested practitioner, we review the state-of-the-art in climate network visualisation and provide an overview of existing tools. As a further contribution, we introduce the visual network analytics tools CGV and GTX, providing tailored solutions for climate network analysis, including alternative geographic projections, edge bundling, and 3-D network support. Using these tools, the paper illustrates the application potentials of visual analytics for climate networks based on several use cases including examples from global, regional, and multi-layered climate networks.
NASA Astrophysics Data System (ADS)
Nunes, Josane C.
1991-02-01
This work quantifies the changes effected in electron absorbed dose to a soft-tissue equivalent medium when part of this medium is replaced by a material that is not soft-tissue equivalent. That is, heterogeneous dosimetry is addressed. Radionuclides which emit beta particles are the electron sources of primary interest. They are used in brachytherapy and in nuclear medicine: for example, beta-ray applicators made with strontium-90 are employed in certain ophthalmic treatments and iodine-131 is used to test thyroid function. More recent medical procedures under development and which involve beta radionuclides include radioimmunotherapy and radiation synovectomy; the first is a cancer modality and the second deals with the treatment of rheumatoid arthritis. In addition, the possibility of skin surface contamination exists whenever there is handling of radioactive material. Determination of absorbed doses in the examples of the preceding paragraph requires considering boundaries of interfaces. Whilst the Monte Carlo method can be applied to boundary calculations, for routine work such as in clinical situations, or in other circumstances where doses need to be determined quickly, analytical dosimetry would be invaluable. Unfortunately, few analytical methods for boundary beta dosimetry exist. Furthermore, the accuracy of results from both Monte Carlo and analytical methods has to be assessed. Although restricted to one radionuclide, phosphorus-32, the experimental data obtained in this work serve several purposes, one of which is to provide standards against which calculated results can be tested. The experimental data also contribute to the relatively sparse set of published boundary dosimetry data. At the same time, they may be useful in developing analytical boundary dosimetry methodology. The first application of the experimental data is demonstrated. Results from two Monte Carlo codes and two analytical methods, which were developed elsewhere, are compared with experimental data. Monte Carlo results compare satisfactorily with experimental results for the boundaries considered. The agreement with experimental results for air interfaces is of particular interest because of discrepancies reported previously by another investigator who used data obtained from a different experimental technique. Results from one of the analytical methods differ significantly from the experimental data obtained here. The second analytical method provided data which approximate experimental results to within 30%. This is encouraging but it remains to be determined whether this method performs equally well for other source energies.
An Analytical Method for Measuring Competence in Project Management
ERIC Educational Resources Information Center
González-Marcos, Ana; Alba-Elías, Fernando; Ordieres-Meré, Joaquín
2016-01-01
The goal of this paper is to present a competence assessment method in project management that is based on participants' performance and value creation. It seeks to close an existing gap in competence assessment in higher education. The proposed method relies on information and communication technology (ICT) tools and combines Project Management…
Mudumbai, Seshadri; Ayer, Ferenc; Stefanko, Jerry
2017-08-01
Health care facilities are implementing analytics platforms as a way to document quality of care. However, few gap analyses exist on platforms specifically designed for patients treated in the Operating Room, Post-Anesthesia Care Unit, and Intensive Care Unit (ICU). As part of a quality improvement effort, we undertook a gap analysis of an existing analytics platform within the Veterans Healthcare Administration. The objectives were to identify themes associated with 1) current clinical use cases and stakeholder needs; 2) information flow and pain points; and 3) recommendations for future analytics development. Methods consisted of semi-structured interviews in 2 phases with a diverse set (n = 9) of support personnel and end users from five facilities across a Veterans Integrated Service Network. Phase 1 identified underlying needs and previous experiences with the analytics platform across various roles and operational responsibilities. Phase 2 validated preliminary feedback, lessons learned, and recommendations for improvement. Emerging themes suggested that the existing system met a small pool of national reporting requirements. However, pain points were identified with accessing data in several information system silos and performing multiple manual validation steps of data content. Notable recommendations included enhancing systems integration to create "one-stop shopping" for data, and developing a capability to perform trends analysis. Our gap analysis suggests that analytics platforms designed for surgical and ICU patients should employ approaches similar to those being used for primary care patients.
Vincent, Ursula; Serano, Federica; von Holst, Christoph
2017-08-01
Carotenoids are used in animal nutrition mainly as sensory additives that favourably affect the colour of fish, birds and food of animal origin. Various analytical methods exist for their quantification in compound feed, reflecting the different physico-chemical characteristics of the carotenoid and the corresponding feed additives. They may be natural products or specific formulations containing the target carotenoids produced by chemical synthesis. In this study a multi-analyte method was developed that can be applied to the determination of all 10 carotenoids currently authorised within the European Union for compound feedingstuffs. The method functions regardless of whether the carotenoids have been added to the compound feed via natural products or specific formulations. It is comprised of three steps: (1) digestion of the feed sample with an enzyme; (2) pressurised liquid extraction; and (3) quantification of the analytes by reversed-phase HPLC coupled to a photodiode array detector in the visible range. The method was single-laboratory validated for poultry and fish feed covering a mass fraction range of the target analyte from 2.5 to 300 mg kg⁻¹. The following method performance characteristics were obtained: the recovery rate varied from 82% to 129% and precision expressed as the relative standard deviation of intermediate precision varied from 1.6% to 15%. Based on the acceptable performance obtained in the validation study, the multi-analyte method is considered fit for the intended purpose.
Wind-induced vibration of stay cables : brief
DOT National Transportation Integrated Search
2005-02-01
The objectives of this project were to: : Identify gaps in current knowledge base : Conduct analytical and experimental research in critical areas : Study performance of existing cable-stayed bridges : Study current mitigation methods...
How to conduct External Quality Assessment Schemes for the pre-analytical phase?
Kristensen, Gunn B B; Aakre, Kristin Moberg; Kristoffersen, Ann Helen; Sandberg, Sverre
2014-01-01
In laboratory medicine, several studies have described the most frequent errors in the different phases of the total testing process, and a large proportion of these errors occur in the pre-analytical phase. Schemes for registration of errors and subsequent feedback to the participants have been conducted for decades concerning the analytical phase by External Quality Assessment (EQA) organizations operating in most countries. The aim of the paper is to present an overview of different types of EQA schemes for the pre-analytical phase, and give examples of some existing schemes. So far, very few EQA organizations have focused on the pre-analytical phase, and most EQA organizations do not offer pre-analytical EQA schemes (EQAS). It is more difficult to perform and standardize pre-analytical EQAS and also, accreditation bodies do not ask the laboratories for results from such schemes. However, some ongoing EQA programs for the pre-analytical phase do exist, and some examples are given in this paper. The methods used can be divided into three different types; collecting information about pre-analytical laboratory procedures, circulating real samples to collect information about interferences that might affect the measurement procedure, or register actual laboratory errors and relate these to quality indicators. These three types have different focus and different challenges regarding implementation, and a combination of the three is probably necessary to be able to detect and monitor the wide range of errors occurring in the pre-analytical phase.
Kim, Dalho; Han, Jungho; Choi, Yongwook
2013-01-01
A method using on-line solid-phase microextraction (SPME) on a carbowax-templated fiber followed by liquid chromatography (LC) with ultraviolet (UV) detection was developed for the determination of triclosan in environmental water samples. Along with triclosan, other selected phenolic compounds, bisphenol A, and acidic pharmaceuticals were studied. Previous SPME/LC and stir-bar sorptive extraction/LC-UV methods for polar analytes showed a lack of sensitivity. In this study, the calculated octanol-water distribution coefficient (log D) values of the target analytes at different pH values were used to estimate the polarity of the analytes. The lack of sensitivity observed in earlier studies is attributed to incomplete desorption caused by strong polar-polar interactions between analyte and solid phase; calculated log D values were useful for understanding and predicting this interaction. Under the optimized conditions, the method detection limits of the selected analytes ranged from 5 to 33 ng L(-1), except for the very polar 3-chlorophenol and 2,4-dichlorophenol, which were obscured in wastewater samples by an interfering substance. This level of detection represents a remarkable improvement over conventional existing methods. The on-line SPME-LC-UV method, which does not require derivatization of analytes, was applied to the determination of triclosan, phenolic compounds, and acidic pharmaceuticals in tap water, river water, and municipal wastewater samples.
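The pH dependence behind this log D reasoning can be written down directly. The sketch below evaluates log D for a monoprotic acid from log P and pKa via the standard ionization correction; the triclosan values used (log P ≈ 4.8, pKa ≈ 7.9) are approximate literature figures and should be treated as assumptions, not the paper's calculations.

```python
import math

def log_d_acid(log_p, pka, ph):
    """Distribution coefficient of a monoprotic acid:
    log D = log P - log10(1 + 10**(pH - pKa))."""
    return log_p - math.log10(1.0 + 10.0 ** (ph - pka))

# Illustrative triclosan-like parameters (assumed values).
for ph in (3.0, 7.0, 9.0):
    print(f"pH {ph}: log D = {log_d_acid(4.8, 7.9, ph):.2f}")
```

As the pH rises above the pKa, log D drops sharply, which is the polarity shift used above to rationalize the sorption and desorption behaviour.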
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tukey, J.W.; Bloomfield, P.
In its most general terms, the work carried out under the contract consists of the development of new data analytic methods and the improvement of existing methods, their implementation on computer, especially minicomputers, and the development of non-statistical, systems-level software to support these activities. The work reported or completed is reviewed. (GHT)
NASA Astrophysics Data System (ADS)
El Boudouti, E. H.; El Hassouani, Y.; Djafari-Rouhani, B.; Aynaou, H.
2007-08-01
We demonstrate analytically and experimentally the existence and behavior of two types of modes in finite-size one-dimensional coaxial photonic crystals made of N cells with vanishing magnetic field on both sides. We highlight the existence of N-1 confined modes in each band and one mode per gap associated with either one or the other of the two surfaces surrounding the structure. The latter modes are independent of N. These results generalize our previous findings on the existence of surface modes in two semi-infinite superlattices obtained from the cleavage of an infinite superlattice between two cells. The analytical results are obtained by means of the Green's function method, whereas the experiments are carried out using coaxial cables in the radio-frequency regime.
Stationary and moving solitons in spin-orbit-coupled spin-1 Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Li, Yu-E.; Xue, Ju-Kui
2018-04-01
We investigate the matter-wave solitons in a spin-orbit-coupled spin-1 Bose-Einstein condensate using a multiscale perturbation method. Beginning with the one-dimensional spin-orbit-coupled three-component Gross-Pitaevskii equations, we derive a single nonlinear Schrödinger equation, which allows determination of the analytical soliton solutions of the system. Stationary and moving solitons in the system are derived. In particular, a parameter space for the different soliton types is provided. It is shown that only dark or bright solitons exist when the spin-orbit coupling is weak, with the soliton type depending on the atomic interactions. However, when the spin-orbit coupling is strong, both dark and bright solitons exist, determined by the Raman coupling. Our analytical solutions are confirmed by direct numerical simulations.
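As a generic illustration of the reduced model (the paper's effective coefficients are not reproduced here), a scalar nonlinear Schrödinger equation supports exactly the two soliton families mentioned, bright for the focusing sign and dark for the defocusing sign:

```latex
i\,\partial_t\psi + \tfrac{1}{2}\,\partial_x^{2}\psi \pm |\psi|^{2}\psi = 0, \qquad
\psi_{\mathrm{bright}} = \eta\,\operatorname{sech}\!\left[\eta(x-vt)\right]
  e^{\,i\left[vx+(\eta^{2}-v^{2})t/2\right]}, \qquad
\psi_{\mathrm{dark}} = \sqrt{n_{0}}\,\tanh\!\left[\sqrt{n_{0}}\,x\right] e^{-i n_{0} t}.
```

In the paper's setting, the sign and magnitude of the effective nonlinearity are controlled by the atomic interactions and, at strong spin-orbit coupling, by the Raman coupling, which is what selects between the two families.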
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of the system performance as a function of the foil parameters. The new method, while being computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
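To illustrate the scan-based design loop in miniature, the sketch below sweeps a single foil thickness against a target scattering angle, using the PDG Highland formula as a cheap stand-in for a full Monte Carlo run; the material constant, beam momentum, and target angle are illustrative assumptions, and the actual method scans both foils with detailed transport physics.

```python
import math

X0_TA_CM = 0.41          # radiation length of tantalum, cm (assumed value)
P_MEV = 10.0             # electron momentum, MeV/c (illustrative)
TARGET_MRAD = 60.0       # desired r.m.s. scattering angle (illustrative)

def theta0_mrad(thickness_cm):
    """Highland estimate of the r.m.s. multiple-scattering angle; a toy
    surrogate standing in for the Monte Carlo run in this sketch."""
    x = thickness_cm / X0_TA_CM
    return 1e3 * (13.6 / P_MEV) * math.sqrt(x) * (1 + 0.038 * math.log(x))

# Systematic scan of the primary-foil thickness, mirroring the automated
# parameter scan described above.
candidates = [0.001 * i for i in range(1, 200)]  # 10 µm ... 2 mm
best = min(candidates, key=lambda t: abs(theta0_mrad(t) - TARGET_MRAD))
print(f"best primary foil: {best*1e4:.0f} µm, theta0 = {theta0_mrad(best):.1f} mrad")
```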
Sandstrom, Mark W.; Stroppel, Max E.; Foreman, William T.; Schroeder, Michael P.
2001-01-01
A method for the isolation and analysis of 21 parent pesticides and 20 pesticide degradates in natural-water samples is described. Water samples are filtered to remove suspended particulate matter and then are pumped through disposable solid-phase-extraction columns that contain octadecyl-bonded porous silica to extract the analytes. The columns are dried by using nitrogen gas, and adsorbed analytes are eluted with ethyl acetate. Extracted analytes are determined by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring of three characteristic ions. The upper concentration limit is 2 micrograms per liter (µg/L) for most analytes. Single-operator method detection limits in reagent-water samples range from 0.001 to 0.057 µg/L. Validation data also are presented for 14 parent pesticides and 20 degradates that were determined to have greater bias or variability, or shorter holding times than the other compounds. The estimated maximum holding time for analytes in pesticide-grade water before extraction was 4 days. The estimated maximum holding time for analytes after extraction on the dry solid-phase-extraction columns was 7 days. An optional on-site extraction procedure allows for samples to be collected and processed at remote sites where it is difficult to ship samples to the laboratory within the recommended pre-extraction holding time. The method complements existing U.S. Geological Survey Method O-1126-95 (NWQL Schedules 2001 and 2010) by using identical sample preparation and comparable instrument analytical conditions so that sample extracts can be analyzed by either method to expand the range of analytes determined from one water sample.
A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses
Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert
2011-01-01
Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325
Patel, Chirag J
2017-01-01
Mixtures, or combinations and interactions between multiple environmental exposures, are hypothesized to be causally linked with disease and health-related phenotypes. Established and emerging molecular measurement technologies to assay the exposome, the comprehensive battery of exposures encountered from birth to death, promise a new way of identifying mixtures in disease in the epidemiological setting. In this opinion, we describe the analytic complexity and challenges in identifying mixtures associated with phenotype and disease. Existing and emerging machine-learning methods and data analytic approaches (e.g., "environment-wide association studies" [EWASs]), as well as large cohorts, may enhance possibilities to identify mixtures of correlated exposures associated with phenotypes; however, the analytic complexity of identifying mixtures is immense. If the exposome concept is realized, new analytical methods and large sample sizes will be required to ascertain how mixtures are associated with disease. The author recommends documenting prevalent correlated exposures and replicated main effects prior to identifying mixtures.
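As a toy version of the "one exposure at a time" screen performed in EWAS-style analyses, the sketch below regresses a synthetic phenotype on each of 40 synthetic exposures and applies a false-discovery-rate correction; all data and names are fabricated for illustration, and a real mixtures analysis must additionally confront correlated exposures and interactions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

# Synthetic exposome matrix: 500 subjects x 40 exposures.
rng = np.random.default_rng(0)
n, p = 500, 40
exposures = pd.DataFrame(rng.normal(size=(n, p)),
                         columns=[f"exposure_{j}" for j in range(p)])
phenotype = 0.4 * exposures["exposure_3"] + rng.normal(size=n)  # one true signal

# EWAS-style screen: one regression per exposure, then FDR correction.
pvals = []
for col in exposures:
    X = sm.add_constant(exposures[[col]])
    pvals.append(sm.OLS(phenotype, X).fit().pvalues[col])
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
hits = [c for c, r in zip(exposures.columns, reject) if r]
print("replication candidates:", hits)
```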
The science of visual analysis at extreme scale
NASA Astrophysics Data System (ADS)
Nowell, Lucy T.
2011-01-01
Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high-performance computing systems will have as many as a million cores by 2020 and support 10-billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially, and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms, and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.
NASA Astrophysics Data System (ADS)
Wu, Linqin; Xu, Sheng; Jiang, Dezhi
2015-12-01
Industrial wireless networked control systems have been widely used, and evaluating the performance of the wireless network is of great significance. In this paper, considering the shortcomings of existing performance evaluation methods, a comprehensive network performance evaluation method, the multi-index fuzzy analytic hierarchy process (MFAHP), which combines fuzzy mathematics with the traditional analytic hierarchy process (AHP), is presented. The method overcomes the incompleteness and subjectivity of existing performance evaluations. Experiments show that the method reflects real network conditions. It provides direct guidance for protocol selection, network cabling, and node placement, and can meet the requirements of different scenarios by modifying the underlying parameters.
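The crisp AHP weighting step at the core of such a method can be sketched compactly: priority weights come from the principal eigenvector of a pairwise comparison matrix, with a consistency check on the judgments. The pairwise judgments below are invented for illustration, and the fuzzy-membership layer of MFAHP is not shown.

```python
import numpy as np

# Saaty-style pairwise comparison of three network indexes
# (delay, packet loss, throughput); illustrative judgments only.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # priority weights of the indexes

# Consistency: CI = (lambda_max - n)/(n - 1); random index RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58                                # acceptably consistent if < 0.1
print("weights:", w.round(3), " consistency ratio:", round(cr, 3))
```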
NASA Astrophysics Data System (ADS)
Vinh, T.
1980-08-01
There is a need for better and more effective lightning protection for transmission and switching substations. In the past, a number of empirical methods were utilized to design systems to protect substations and transmission lines from direct lightning strokes, and the need exists for convenient analytical lightning models adequate for engineering usage. In this study, analytical lightning models were developed, along with a method for improved analysis of the physical properties of lightning through their use. This method of analysis is based upon the most recent statistical field data. The result is an improved method for predicting the occurrence of shielding failure and for designing more effective protection of high and extra-high-voltage substations from direct strokes.
Experimental and CFD evidence of multiple solutions in a naturally ventilated building.
Heiselberg, P; Li, Y; Andersen, A; Bjerre, M; Chen, Z
2004-02-01
This paper considers the existence of multiple solutions to natural ventilation of a simple one-zone building, driven by combined thermal and opposing wind forces. The present analysis is an extension of an earlier analytical study of natural ventilation in a fully mixed building, and includes the effect of thermal stratification. Both computational and experimental investigations were carried out in parallel with an analytical investigation. When flow is dominated by thermal buoyancy, it was found experimentally that there is thermal stratification. When the flow is wind-dominated, the room is fully mixed. Results from all three methods have shown that the hysteresis phenomena exist. Under certain conditions, two different stable steady-state solutions are found to exist by all three methods for the same set of parameters. As shown by both the computational fluid dynamics (CFD) and experimental results, one of the solutions can shift to another when there is a sufficient perturbation. These results have probably provided the strongest evidence so far for the conclusion that multiple states exist in natural ventilation of simple buildings. Different initial conditions in the CFD simulations led to different solutions, suggesting that caution must be taken when adopting the commonly used 'zero initialization'.
Design optimization of piezoresistive cantilevers for force sensing in air and water
Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.
2009-01-01
Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to strongly depend on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material which can be modeled in a similar fashion, or extended to other microelectromechanical systems. PMID:19865512
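The structure of such a design loop, minimizing a noise-over-sensitivity objective under fabrication bounds, can be sketched as below. The scaling laws and prefactors are placeholders, not the authors' calibrated model; the point is only that a bound-constrained optimizer drives the design to its constraints, echoing the dependence on constraints noted above.

```python
import numpy as np
from scipy.optimize import minimize

KB_T = 4.11e-21   # J, thermal energy at ~300 K
BW_HZ = 1.0e3     # measurement bandwidth, Hz

def min_detectable_force(x_um):
    """Toy objective standing in for the full analytic cantilever model:
    Johnson-noise-limited force resolution with placeholder scaling laws."""
    l, w, t = (v * 1e-6 for v in x_um)                 # µm -> m
    resistance = 1e-5 * l / (w * t)                    # ~ rho * l / (w t), ohm
    sensitivity = 1e-10 * l / (w * t**2)               # V per N, placeholder
    noise_v = np.sqrt(4 * KB_T * resistance * BW_HZ)   # Johnson noise, V rms
    return noise_v / sensitivity                       # minimum detectable force, N

x0 = np.array([200.0, 20.0, 2.0])                      # l, w, t in µm
bounds = [(50.0, 1000.0), (5.0, 100.0), (0.5, 10.0)]   # fabrication constraints
res = minimize(min_detectable_force, x0, method="L-BFGS-B", bounds=bounds)
print("optimal (l, w, t) [µm]:", res.x.round(2), " F_min [N]:", f"{res.fun:.2e}")
```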
Stakeholder prioritization of zoonoses in Japan with analytic hierarchy process method.
Kadohira, M; Hill, G; Yoshizaki, R; Ota, S; Yoshikawa, Y
2015-05-01
There exists an urgent need to develop iterative risk assessment strategies of zoonotic diseases. The aim of this study is to develop a method of prioritizing 98 zoonoses derived from animal pathogens in Japan and to involve four major groups of stakeholders: researchers, physicians, public health officials, and citizens. We used a combination of risk profiling and analytic hierarchy process (AHP). Profiling risk was accomplished with semi-quantitative analysis of existing public health data. AHP data collection was performed by administering questionnaires to the four stakeholder groups. Results showed that researchers and public health officials focused on case fatality as the chief important factor, while physicians and citizens placed more weight on diagnosis and prevention, respectively. Most of the six top-ranked diseases were similar among all stakeholders. Transmissible spongiform encephalopathy, severe acute respiratory syndrome, and Ebola fever were ranked first, second, and third, respectively.
Borai, Anwar; Ichihara, Kiyoshi; Al Masaud, Abdulaziz; Tamimi, Waleed; Bahijri, Suhad; Armbuster, David; Bawazeer, Ali; Nawajha, Mustafa; Otaibi, Nawaf; Khalil, Haitham; Kawano, Reo; Kaddam, Ibrahim; Abdelaal, Mohamed
2016-05-01
This study is part of the IFCC global study to derive reference intervals (RIs) for 28 chemistry analytes in Saudis. Healthy individuals (n=826) aged ≥18 years were recruited using the global study protocol. All specimens were measured using an Architect analyzer. RIs were derived by both parametric and non-parametric methods for comparison. The need for secondary exclusion of reference values based on the latent abnormal values exclusion (LAVE) method was examined. The magnitude of variation attributable to gender, age, and region was calculated by the standard deviation ratio (SDR). Sources of variation (age, BMI, physical exercise, and smoking level) were investigated using multiple regression analysis. SDRs for gender, age, and regional differences were significant for 14, 8, and 2 analytes, respectively. BMI-related changes in test results were noted conspicuously for CRP. For some metabolism-related parameters, the RIs derived by the non-parametric method were wider than those by the parametric method, and RIs derived using the LAVE method differed significantly from those derived without it. RIs were derived with and without gender partition (BMI, drugs, and supplements were considered). RIs applicable to Saudis were established for the majority of chemistry analytes, whereas gender, regional, and age partitioning was required for some analytes. The elevated upper limits of metabolic analytes reflect the high prevalence of metabolic syndrome in the Saudi population.
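The parametric versus non-parametric contrast noted above reduces to two estimators applied to the same reference sample. The sketch below shows both on synthetic log-normal data (826 values to mirror the cohort size); the data are fabricated, and the LAVE secondary-exclusion step is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
values = rng.lognormal(mean=1.6, sigma=0.25, size=826)  # synthetic analyte results

# Non-parametric RI: central 95% interval from empirical percentiles.
np_ri = np.percentile(values, [2.5, 97.5])

# Parametric RI: Gaussian mean +/- 1.96 SD after a log transformation
# (skewed analytes are typically transformed first, then back-transformed).
logs = np.log(values)
p_ri = np.exp(logs.mean() + np.array([-1.96, 1.96]) * logs.std(ddof=1))

print("non-parametric RI:", np_ri.round(2), " parametric RI:", p_ri.round(2))
```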
Recent advances in immunosensor for narcotic drug detection
Gandhi, Sonu; Suman, Pankaj; Kumar, Ashok; Sharma, Prince; Capalash, Neena; Suri, C. Raman
2015-01-01
Introduction: Immunosensors for illicit drugs have gained immense interest and found several applications in drug-abuse monitoring. This technology offers low-cost detection of narcotics, thereby providing a confirmatory platform to complement existing analytical methods. Methods: In this minireview, we define the basic concept of a transducer for immunosensor development that utilizes antibodies and low-molecular-mass hapten (opiate) molecules. Results: This article emphasizes recent advances in immunoanalytical techniques for the monitoring of opiate drugs. Our results demonstrate that high-quality antibodies can be used for immunosensor development against a target analyte with greater sensitivity, specificity, and precision than other available analytical methods. Conclusion: In this review we highlight the fundamentals of different transducer technologies and their applications in the immunosensors currently being developed in our laboratory: rapid screening via an immunochromatographic kit, and label-free optical detection via enzyme-, fluorescence-, gold-nanoparticle- and carbon-nanotube-based immunosensing for sensitive and specific monitoring of opiates. PMID:26929925
Analytical Methods of Decoupling the Automotive Engine Torque Roll Axis
NASA Astrophysics Data System (ADS)
JEONG, TAESEOK; SINGH, RAJENDRA
2000-06-01
This paper analytically examines the multi-dimensional mounting schemes of an automotive engine-gearbox system when excited by oscillating torques. In particular, the issue of torque roll axis decoupling is analyzed in significant detail since it is poorly understood. New dynamic decoupling axioms are presented and compared with the conventional elastic axis mounting and focalization methods. A linear time-invariant system assumption is made, in addition to a proportionally damped system. Only rigid-body modes of the powertrain are considered and the chassis elements are assumed to be rigid. Several simplified physical systems are considered and new closed-form solutions for symmetric and asymmetric engine-mounting systems are developed. These clearly explain the design concepts for the 4-point mounting scheme. Our analytical solutions match the existing design formulations that are only applicable to symmetric geometries. Spectra for all six rigid-body motions are predicted using the alternate decoupling methods and the closed-form solutions are verified. Also, our method is validated by comparing modal solutions with prior experimental and analytical studies. Parametric design studies are carried out to illustrate the methodology. Chief contributions of this research include the development of new or refined analytical models and closed-form solutions, along with improved design strategies for torque roll axis decoupling.
Krishna P. Poudel; Temesgen. Hailemariam
2015-01-01
Performance of three groups of methods to estimate total and/or component aboveground biomass was evaluated using data collected from destructively sampled trees in different parts of Oregon. The first group of methods used an analytical approach to estimate total and component biomass from existing equations, and produced biased estimates for our dataset. The second...
Computing the Evans function via solving a linear boundary value ODE
NASA Astrophysics Data System (ADS)
Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn
2015-11-01
Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, Norwood B.; Walker, J.F.
1992-01-01
Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations, where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Dynamics of a parametrically excited simple pendulum
NASA Astrophysics Data System (ADS)
Depetri, Gabriela I.; Pereira, Felipe A. C.; Marin, Boris; Baptista, Murilo S.; Sartorelli, J. C.
2018-03-01
The dynamics of a parametric simple pendulum subjected to an arbitrary angle of excitation ϕ was investigated experimentally, numerically, and analytically. Analytical calculations for the loci of saddle-node bifurcations corresponding to the creation of resonant orbits were performed by applying Melnikov's method. However, this powerful perturbative method cannot predict the existence of odd resonances for a vertical excitation within first-order corrections. Yet, we showed that period-3 resonances indeed exist in such a configuration. Two degenerate attractors of different phases, associated with the same loci of saddle-node bifurcations in parameter space, are reported. For tilted excitation, the degeneracy is broken due to an extra torque, which was confirmed by the calculation of two distinct loci of saddle-node bifurcations for each attractor. This behavior persists up to ϕ ≈ 7π/180, and for inclinations larger than this, only one attractor is observed. Bifurcation diagrams were constructed experimentally for ϕ = π/8 to demonstrate the existence of self-excited resonances (periods smaller than three) and hidden oscillations (periods greater than three).
NASA Astrophysics Data System (ADS)
Gen, Masao; Kakuta, Hideo; Kamimoto, Yoshihito; Wuled Lenggoro, I.
2011-06-01
A detection method for organic molecules of a model analyte (a pesticide) is proposed, based on a surface-enhanced Raman spectroscopy (SERS)-active substrate derived from aerosol nanoparticles combined with a colloidal suspension. The approach can detect the analyte molecules from solution at ppb-level concentrations. For substrate fabrication, a gas-phase method is used to deposit Ag nanoparticles directly onto a silicon substrate having pyramidal structures. By mixing the target analyte with a suspension of Ag colloids purchased in advance, the clothianidin analyte can localize at junctions of co-aggregated Ag colloids. Using (i) a nanostructured substrate made from aerosol nanoparticles and (ii) a colloidal suspension can increase the number of active spots.
Prioritizing pesticide compounds for analytical methods development
Norman, Julia E.; Kuivila, Kathryn; Nowell, Lisa H.
2012-01-01
The U.S. Geological Survey (USGS) has a periodic need to re-evaluate pesticide compounds in terms of priorities for inclusion in monitoring and studies and, thus, must also assess the current analytical capabilities for pesticide detection. To meet this need, a strategy has been developed to prioritize pesticides and degradates for analytical methods development. Screening procedures were developed to separately prioritize pesticide compounds in water and sediment. The procedures evaluate pesticide compounds in existing USGS analytical methods for water and sediment and compounds for which recent agricultural-use information was available. Measured occurrence (detection frequency and concentrations) in water and sediment, predicted concentrations in water and predicted likelihood of occurrence in sediment, potential toxicity to aquatic life or humans, and priorities of other agencies or organizations, regulatory or otherwise, were considered. Several existing strategies for prioritizing chemicals for various purposes were reviewed, including those that identify and prioritize persistent, bioaccumulative, and toxic compounds, and those that determine candidates for future regulation of drinking-water contaminants. The systematic procedures developed and used in this study rely on concepts common to many previously established strategies. The evaluation of pesticide compounds resulted in the classification of compounds into three groups: Tier 1 for high priority compounds, Tier 2 for moderate priority compounds, and Tier 3 for low priority compounds. For water, a total of 247 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods for monitoring and studies. Of these, about three-quarters are included in some USGS analytical method; however, many of these compounds are included on research methods that are expensive and for which there are few data on environmental samples. The remaining quarter of Tier 1 compounds are high priority as new analytes. The objective for analytical methods development is to design an integrated analytical strategy that includes as many of the Tier 1 pesticide compounds as possible in a relatively few, cost-effective methods. More than 60 percent of the Tier 1 compounds are high priority because they are anticipated to be present at concentrations approaching levels that could be of concern to human health or aquatic life in surface water or groundwater. An additional 17 percent of Tier 1 compounds were frequently detected in monitoring studies, but either were not measured at levels potentially relevant to humans or aquatic organisms, or do not have benchmarks available with which to compare concentrations. The remaining 21 percent are pesticide degradates that were included because their parent pesticides were in Tier 1. Tier 1 pesticide compounds for water span all major pesticide use groups and a diverse range of chemical classes, with herbicides and their degradates composing half of compounds. Many of the high priority pesticide compounds also are in several national regulatory programs for water, including those that are regulated in drinking water by the U.S. Environmental Protection Agency under the Safe Drinking Water Act and those that are on the latest Contaminant Candidate List. For sediment, a total of 175 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods available for monitoring and studies. 
More than 60 percent of these compounds are included in some USGS analytical method; however, some are spread across several research methods that are expensive to perform, and monitoring data are not extensive for many compounds. The remaining Tier 1 compounds for sediment are high priority as new analytes. The objective for analytical methods development for sediment is to enhance an existing analytical method that currently includes nearly half of the pesticide compounds in Tier 1 by adding as many additional Tier 1 compounds as are analytically compatible. About 35 percent of the Tier 1 compounds for sediment are high priority on the basis of measured occurrence. A total of 74 compounds, or 42 percent, are high priority on the basis of predicted likelihood of occurrence according to physical-chemical properties, and have potential toxicity to aquatic life, high pesticide usage, or both. The remaining 22 percent of Tier 1 pesticide compounds were either degradates of Tier 1 parent compounds or included for other reasons. As with water, the Tier 1 pesticide compounds for sediment are distributed across the major pesticide-use groups; insecticides and their degradates are the largest fraction, making up 45 percent of Tier 1. In contrast to water, organochlorines, at 17 percent, are the largest chemical class for Tier 1 in sediment, which is to be expected because there is continued widespread detection in sediments of persistent organochlorine pesticides and their degradates at concentrations high enough for potential effects on aquatic life. Compared to water, there are fewer available benchmarks with which to compare contaminant concentrations in sediment, but a total of 19 Tier 1 compounds have at least one sediment benchmark or screening value for aquatic organisms. Of the 175 compounds in Tier 1, 77 percent have high aquatic-life toxicity, as defined for this process. This evaluation of pesticides and degradates resulted in two lists of compounds that are priorities for USGS analytical methods development, one for water and one for sediment. These lists will be used as the basis for redesigning and enhancing USGS analytical capabilities for pesticides in order to capture as many high-priority pesticide compounds as possible using an economically feasible approach.
Instrumentation development for drug detection on the breath
DOT National Transportation Integrated Search
1972-09-01
Based on a survey of candidate analytical methods, mass spectrometry was identified as a promising technique for drug detection on the breath. To demonstrate its capabilities, an existing laboratory mass spectrometer was modified by the addition of a...
Analytical N beam position monitor method
NASA Astrophysics Data System (ADS)
Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.
2017-11-01
Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
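For orientation, one commonly quoted form of the underlying three-BPM "beta from phase" relation is shown below; conventions differ between papers, so this should be read as a textbook form rather than the paper's exact expression:

```latex
\beta_{1} \;=\; \beta_{1}^{\mathrm{mdl}}\,
\frac{\cot\varphi_{12} - \cot\varphi_{13}}
     {\cot\varphi_{12}^{\mathrm{mdl}} - \cot\varphi_{13}^{\mathrm{mdl}}},
```

where \varphi_{ij} is the measured phase advance between BPMs i and j and the superscript mdl denotes model values. The N-BPM family of methods generalizes this by combining many BPM combinations with error-weighted averaging, which is where the analytical error treatment described above enters.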
Remane, Daniela; Grunwald, Soeren; Hoeke, Henrike; Mueller, Andrea; Roeder, Stefan; von Bergen, Martin; Wissenbach, Dirk K
2015-08-15
During the last decades, exposure science and epidemiological studies have attracted more attention as means to unravel the mechanisms underlying the development of chronic diseases. Accordingly, an existing HPLC-DAD method for the determination of creatinine in urine samples was extended to seven analytes and validated. Creatinine, uric acid, homovanillic acid, niacinamide, hippuric acid, indole-3-acetic acid, and 2-methylhippuric acid were separated by gradient elution (formate buffer/methanol) using an Eclipse Plus C18 Rapid Resolution column (4.6 mm × 100 mm). No interfering signals were detected in mobile phase. After injection of blank urine samples, signals for the endogenous compounds but no interferences were detected. All analytes were linear in the selected calibration range and a non-weighted calibration model was chosen. Bias, intra-day and inter-day precision for all analytes were below 20% for quality control (QC) low and below 10% for QC medium and high. The limits of quantification in mobile phase were in line with reported reference values but had to be adjusted in urine for homovanillic acid (45 mg/L), niacinamide (58.5 mg/L), and indole-3-acetic acid (63 mg/L). Comparison of creatinine data obtained by the existing method with those of the developed method showed differences from -120 mg/L to +110 mg/L, with a mean difference of 29.0 mg/L for 50 authentic urine samples. In these 50 authentic urine samples, uric acid, creatinine, hippuric acid, and 2-methylhippuric acid were detected in (nearly) all samples; homovanillic acid was detected in 40%, niacinamide in 4%, and indole-3-acetic acid in none. Copyright © 2015 Elsevier B.V. All rights reserved.
Analytic theory of photoacoustic wave generation from a spheroidal droplet.
Li, Yong; Fang, Hui; Min, Changjun; Yuan, Xiaocong
2014-08-25
In this paper, we develop an analytic theory describing photoacoustic wave generation from a spheroidal droplet and derive the first complete analytic solution. Our derivation is based on solving the photoacoustic Helmholtz equation in spheroidal coordinates with the separation-of-variables method. As verification, besides carrying out asymptotic analyses that recover the standard solutions for a sphere, an infinite cylinder, and an infinite layer, we also confirm that the partial transmission and reflection model previously demonstrated for these three geometries still stands. We expect this analytic solution to find broad practical use in interpreting experimental results, considering that its building blocks, the spheroidal wave functions (SWFs), can be numerically calculated by existing computer programs.
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed, ranging from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations, and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Numerical methods are not always practical, as the calculations take time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate, and generally feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or development of new ones, to enable accurate determination of thawing time within a wide range of practical heat-transfer conditions during processing. PMID:27904387
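Many of the simple analytical equations in this family descend from Plank-type relations. A classical form for the freezing or thawing time of a regularly shaped product, under the usual constant-property assumptions, is

```latex
t \;=\; \frac{\rho\,L}{T_{f} - T_{a}}
\left( \frac{P\,d}{h} + \frac{R\,d^{2}}{\lambda} \right),
```

where ρ is the product density, L the latent heat, T_f the initial freezing point, T_a the medium temperature, d the characteristic thickness, h the surface heat-transfer coefficient, λ the thermal conductivity of the phase through which heat flows (the thawed layer during thawing), and P, R shape factors (1/2 and 1/8 for an infinite slab, 1/4 and 1/16 for an infinite cylinder, 1/6 and 1/24 for a sphere). The empirical models reviewed here largely correct this expression for sensible heat and variable properties.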
ERIC Educational Resources Information Center
White, Charles E., Jr.
The purpose of this study was to develop and implement a hypertext documentation system in an industrial laboratory and to evaluate its usefulness by participative observation and a questionnaire. Existing word-processing test method documentation was converted directly into a hypertext format or "hyperdocument." The hyperdocument was designed and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cong, Yongzheng; Rausch, Sarah J.; Geng, Tao
2014-10-27
Here we show that a closed pneumatic microvalve on a PDMS chip can serve as a semipermeable membrane under an applied potential, enabling current to pass through while blocking the passage of charged analytes. Enrichment of both anionic and cationic species has been demonstrated, and concentration factors of ~70 have been achieved in just 8 s. Once analytes are concentrated, the valve is briefly opened and the sample is hydrodynamically injected onto an integrated microchip or capillary electrophoresis (CE) column. In contrast to existing preconcentration approaches, the membrane-based method described here enables both rapid analyte concentration and high-resolution separations.
Recent Progresses in Nanobiosensing for Food Safety Analysis
Yang, Tao; Huang, Huifen; Zhu, Fang; Lin, Qinlu; Zhang, Lin; Liu, Junwen
2016-01-01
With increasing adulteration, food safety analysis has become an important research field. Nanomaterials-based biosensing holds great potential in designing highly sensitive and selective detection strategies necessary for food safety analysis. This review summarizes various function types of nanomaterials, the methods of functionalization of nanomaterials, and recent (2014–present) progress in the design and development of nanobiosensing for the detection of food contaminants including pathogens, toxins, pesticides, antibiotics, metal contaminants, and other analytes, which are sub-classified according to various recognition methods of each analyte. The existing shortcomings and future perspectives of the rapidly growing field of nanobiosensing addressing food safety issues are also discussed briefly. PMID:27447636
Davis, Mark D; Wade, Erin L; Restrepo, Paula R; Roman-Esteva, William; Bravo, Roberto; Kuklenyik, Peter; Calafat, Antonia M
2013-06-15
Organophosphate and pyrethroid insecticides and phenoxyacetic acid herbicides represent important classes of pesticides applied in commercial and residential settings. Interest in assessing the extent of human exposure to these pesticides exists because of their widespread use and their potential adverse health effects. An analytical method for measuring 12 biomarkers of several of these pesticides in urine has been developed. The target analytes were extracted from one milliliter of urine by a semi-automated solid-phase extraction technique, separated from each other and from other urinary biomolecules by reversed-phase high-performance liquid chromatography, and detected using tandem mass spectrometry with isotope-dilution quantitation. This method can be used to measure all the target analytes in one injection, with repeatability and detection limits similar to those of previous methods that required more than one injection. Each step of the procedure was optimized to produce a robust, reproducible, accurate, precise, and efficient method. The selectivity and sensitivity required for trace-level analysis (e.g., limits of detection below 0.5 ng/mL) were achieved using a narrow-diameter analytical column, higher than unit mass resolution for certain analytes, and stable-isotope-labeled internal standards. The method was applied to the analysis of 55 samples collected from adult anonymous donors with no known exposure to the target pesticides. This efficient and cost-effective method is adequate to handle the large number of samples required for national biomonitoring surveys. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Wahls, Richard A.
1990-01-01
The method presented is designed to improve the accuracy and computational efficiency of existing numerical methods for the solution of flows with compressible turbulent boundary layers. A compressible defect stream function formulation of the governing equations assuming an arbitrary turbulence model is derived. This formulation is advantageous because it has a constrained zero-order approximation with respect to the wall shear stress and the tangential momentum equation has a first integral. Previous problems with this type of formulation near the wall are eliminated by using empirically based analytic expressions to define the flow near the wall. The van Driest law of the wall for velocity and the modified Crocco temperature-velocity relationship are used. The associated compressible law of the wake is determined and it extends the valid range of the analytical expressions beyond the logarithmic region of the boundary layer. The need for an inner-region eddy viscosity model is completely avoided. The near-wall analytic expressions are patched to numerically computed outer region solutions at a point determined during the computation. A new boundary condition on the normal derivative of the tangential velocity at the surface is presented; this condition replaces the no-slip condition and enables numerical integration to the surface with a relatively coarse grid using only an outer region turbulence model. The method was evaluated for incompressible and compressible equilibrium flows and was implemented into an existing Navier-Stokes code using the assumption of local equilibrium flow with respect to the patching. The method has proven to be accurate and efficient.
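The two wall relations invoked above take the following standard forms (with the usual constants κ ≈ 0.41 and C ≈ 5.0):

```latex
u_{\mathrm{eff}}^{+} \;=\; \frac{1}{\kappa}\,\ln y^{+} + C,
\qquad
u_{\mathrm{eff}} \;=\; \int_{0}^{u} \sqrt{\rho/\rho_{w}}\;\mathrm{d}u',
```

that is, the van Driest effective velocity, obtained by density-weighting the local velocity, obeys the incompressible logarithmic law of the wall in the inner region; this is what allows the near-wall analytic patch to replace an inner-region eddy viscosity model.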
ERIC Educational Resources Information Center
Coulson, Dale M.; And Others
The purpose of this study is to evaluate existing manual methods for analyzing asbestos, beryllium, lead, cadmium, selenium, and mercury, and from this evaluation to provide the best and most practical set of analytical methods for measuring emissions of these elements from stationary sources. The work in this study was divided into two phases.…
A general method for computing the total solar radiation force on complex spacecraft structures
NASA Technical Reports Server (NTRS)
Chan, F. K.
1981-01-01
The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.
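A common flat-plate element model underlying such surface integrals (a standard textbook form, not necessarily this report's exact formulation) writes the force on an illuminated area element dA with outward normal n̂ and sun direction ŝ (cos θ = ŝ·n̂ > 0) as

```latex
\mathrm{d}\mathbf{F} \;=\; -\,\frac{\Phi}{c}\,\cos\theta
\left[ (1-\rho_{s})\,\hat{\mathbf{s}}
 \;+\; 2\!\left( \rho_{s}\cos\theta + \tfrac{1}{3}\rho_{d} \right) \hat{\mathbf{n}} \right]
\mathrm{d}A,
```

with Φ the solar flux, c the speed of light, and ρ_s, ρ_d the specular and diffuse reflectivities. The total force follows by integrating over the illuminated, unshadowed portion of the structure, which is the surface integral whose evaluation the method above simplifies.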
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
Analytical methods for determination of mycotoxins: a review.
Turner, Nicholas W; Subrahmanyam, Sreenath; Piletsky, Sergey A
2009-01-26
Mycotoxins are small (MW approximately 700), toxic chemical products formed as secondary metabolites by a few fungal species that readily colonise crops and contaminate them with toxins in the field or after harvest. Ochratoxins and aflatoxins are mycotoxins of major significance, and hence there has been significant research on a broad range of analytical and detection techniques that could be useful and practical. Due to the variety of structures of these toxins, it is impossible to use one standard technique for analysis and/or detection. Practical requirements for high-sensitivity analysis and the need for a specialist laboratory setting create challenges for routine analysis. Several existing analytical techniques, which offer flexible and broad-based methods of analysis and in some cases detection, are discussed in this manuscript. Many of the methods in use are lab-based, but to our knowledge no single technique stands out above the rest, although analytical liquid chromatography, commonly coupled with mass spectrometry, is likely to be the most popular. This review discusses (a) sample pre-treatment methods such as liquid-liquid extraction (LLE), supercritical fluid extraction (SFE), and solid-phase extraction (SPE); (b) separation methods such as thin-layer chromatography (TLC), high-performance liquid chromatography (HPLC), gas chromatography (GC), and capillary electrophoresis (CE); and (c) others such as ELISA. Current trends, advantages and disadvantages, and future prospects of these methods are also discussed.
Historically, risk assessment has relied upon toxicological data to obtain hazard-based reference levels, which are subsequently compared to exposure estimates to determine whether an unacceptable risk to public health may exist. Recent advances in analytical methods, biomarker ...
Multiplexed biosensors for detection of mycotoxins
USDA-ARS?s Scientific Manuscript database
As analytical methods have improved it has become apparent that mycotoxins exist in many forms within a commodity or food. For the established toxins there has been increased interest in the presence of metabolites that might also harbor toxicity. These include biosynthetic precursors as well as pro...
Existence of a coupled system of fractional differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Rabha W.; Siri, Zailan
2015-10-22
We establish the existence and uniqueness of solutions for a fractional coupled system containing Schrödinger equations; such a system appears in quantum mechanics. We confirm that the fractional system under consideration admits a global solution in appropriate functional spaces, and the solution is shown to be unique. The method is based on analytic techniques from fixed point theory. The fractional differential operator is taken in the sense of the Riemann-Liouville differential operator.
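For reference, the Riemann-Liouville fractional derivative of order α used in such systems is defined by

```latex
D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,
\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}
\int_{0}^{t} (t-s)^{\,n-\alpha-1} f(s)\,\mathrm{d}s,
\qquad n-1 < \alpha \le n,\; n \in \mathbb{N},
```

and the fixed-point argument proceeds by showing that the equivalent integral formulation defines a contraction on a suitable function space.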
Development and application of a gradient method for solving differential games
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Montgomery, R. C.
1971-01-01
A technique for solving n-dimensional games is developed and applied to two pursuit-evasion games. The first is a two-dimensional game similar to the homicidal chauffeur but modified to resemble an airplane-helicopter engagement. The second is a five-dimensional game of two airplanes at constant altitude and with thrust and turning controls. The performance function to be optimized by the pursuer and evader was the distance between the evader and a given target point in front of the pursuer. The analytic solution to the first game reveals that both unique and nonunique solutions exist. A comparison between the gradient results and the analytic solution shows a dependence on the nominal controls in regions where nonunique solutions exist. In the unique solution region, the results from the two methods agree closely. The results for the five-dimensional two-airplane game are also shown to be dependent on the nominal controls selected and indicate that initial conditions are in a region of nonunique solutions.
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prentice, H. J.; Proud, W. G.
2006-07-28
A technique has been developed to determine experimentally the three-dimensional displacement field on the rear surface of a dynamically deforming plate. The technique combines speckle analysis with stereoscopy, using a modified angular-lens method: this incorporates split-frame photography and a simple method by which the effective lens separation can be adjusted and calibrated in situ. Whilst several analytical models exist to predict deformation in extended or semi-infinite targets, the non-trivial nature of the wave interactions complicates the generation and development of analytical models for targets of finite depth. By interrogating specimens experimentally to acquire three-dimensional strain data points, both analytical and numerical model predictions can be verified more rigorously. The technique is applied to the quasi-static deformation of a rubber sheet and dynamically to Mild Steel sheets of various thicknesses.
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1987-01-01
This document discusses the determination of caustic surfaces in terms of rays, reflectors, and wavefronts. Analytical caustics are obtained as a family of lines, a set of points, and several types of equations for geometries encountered in optics and microwave applications. Standard methods of differential geometry are applied under different approaches: directly to reflector surfaces, and alternatively, to wavefronts, to obtain analytical caustics of two sheets or branches. Gauss/Seidel aberrations are introduced into the wavefront approach, forcing the retention of all three coefficients of both the first- and the second-fundamental forms of differential geometry. An existing method for obtaining caustic surfaces through exploitation of the singularities in flux density is examined, and several constant-intensity contour maps are developed using only the intrinsic Gaussian, mean, and normal curvatures of the reflector. Numerous references are provided for extending the material of the present document to the morphologies of caustics and their associated diffraction patterns.
High-Contrast Gratings based Spoof Surface Plasmons
NASA Astrophysics Data System (ADS)
Li, Zhuo; Liu, Liangliang; Xu, Bingzheng; Ning, Pingping; Chen, Chen; Xu, Jia; Chen, Xinlei; Gu, Changqing; Qing, Quan
2016-02-01
In this work, we explore the existence of spoof surface plasmons (SSPs) supported by deep-subwavelength high-contrast gratings (HCGs) on a perfect electric conductor plane. The dispersion relation of the HCGs-based SSPs is derived analytically by combining multimode network theory with a rigorous mode matching method; it has nearly the same form as, and can be degenerated into, that of the SSPs arising from deep-subwavelength metallic gratings (MGs). Numerical simulations validate the analytical dispersion relation, and an effective medium approximation is also presented that yields the same analytical dispersion formula. This work sets up a unified theoretical framework for SSPs and opens up new vistas in surface plasmon optics.
Maghrabi, Mufeed; Al-Abdullah, Tariq; Khattari, Ziad
2018-03-24
The two heating rates method (originally developed for first-order glow peaks) was used for the first time to evaluate the activation energy (E) from glow peaks obeying mixed-order (MO) kinetics. The derived expression for E has an insignificant additional term (on the scale of a few meV) compared with the first-order case. Hence, the original two heating rates expression for E can be used with excellent accuracy for MO glow peaks. In addition, we derived a simple analytical expression for the MO parameter. The present procedure has the advantage that the MO parameter can now be evaluated from an analytical expression instead of the graphical representation between the geometrical factor and the MO parameter given by the existing peak shape methods. The applicability of the derived expressions to real samples was demonstrated on the glow curve of a Li2B4O7:Mn single crystal. The obtained parameters compare very well with those obtained by glow curve fitting and with the available published data.
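For reference, the classical two-heating-rates estimate of the activation energy for first-order glow peaks, which the abstract reports carries over to mixed-order peaks with only a meV-scale correction, can be evaluated as in this hedged sketch (the numbers are illustrative, not data from the paper):

```python
# Classical two-heating-rates formula: E = k*Tm1*Tm2/(Tm1 - Tm2)
#   * ln[(beta1/beta2) * (Tm2/Tm1)**2], Tm = peak temperature at rate beta.
import math

K_B = 8.617333262e-5          # Boltzmann constant, eV/K

def activation_energy(beta1, Tm1, beta2, Tm2):
    """E (eV) from peak temperatures Tm1, Tm2 (K) at heating rates beta1, beta2 (K/s)."""
    return (K_B * Tm1 * Tm2 / (Tm1 - Tm2)) * math.log((beta1 / beta2) * (Tm2 / Tm1) ** 2)

# Illustrative numbers only:
print(activation_energy(2.0, 480.0, 0.5, 460.0))   # ~1.24 eV
```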
Selection and authentication of botanical materials for the development of analytical methods.
Applequist, Wendy L; Miller, James S
2013-05-01
Herbal products, for example botanical dietary supplements, are widely used. Analytical methods are needed to ensure that botanical ingredients used in commercial products are correctly identified and that research materials are of adequate quality and are sufficiently characterized to enable research to be interpreted and replicated. Adulteration of botanical material in commerce is common for some species. The development of analytical methods for specific botanicals, and accurate reporting of research results, depend critically on correct identification of test materials. Conscious efforts must therefore be made to ensure that the botanical identity of test materials is rigorously confirmed and documented through preservation of vouchers, and that their geographic origin and handling are appropriate. Use of material with an associated herbarium voucher that can be botanically identified is always ideal. Indirect methods of authenticating bulk material in commerce, for example use of organoleptic, anatomical, chemical, or molecular characteristics, are not always acceptable for the chemist's purposes. Familiarity with botanical and pharmacognostic literature is necessary to determine what potential adulterants exist and how they may be distinguished.
NASA Astrophysics Data System (ADS)
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to the numerous planning and executive challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for estimating it. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted values with the actual instrumentation data was used to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
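As a concrete illustration of the empirical route, here is a minimal sketch of a Peck-type Gaussian settlement trough. The parameter values and the trough-width rule i = K*z0 are illustrative assumptions, not the Qom tunnel inputs:

```python
# Hedged sketch: empirical Gaussian surface settlement trough
# S(x) = S_max * exp(-x^2 / (2 i^2)), with S_max set by the ground-loss volume.
import numpy as np

def peck_settlement(x, V_loss, D, z0, K=0.5):
    """Surface settlement S(x) in m for ground-loss ratio V_loss, tunnel
    diameter D (m), axis depth z0 (m), trough-width parameter K."""
    i = K * z0                               # trough width (m)
    V_s = V_loss * np.pi * D**2 / 4.0        # settlement trough volume per m of tunnel
    S_max = V_s / (np.sqrt(2 * np.pi) * i)   # maximum settlement above the axis
    return S_max * np.exp(-x**2 / (2 * i**2))

x = np.linspace(-30, 30, 7)
print(peck_settlement(x, V_loss=0.01, D=9.0, z0=15.0))  # illustrative values
```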
On analyticity of linear waves scattered by a layered medium
NASA Astrophysics Data System (ADS)
Nicholls, David P.
2017-10-01
The scattering of linear waves by periodic structures is a crucial phenomenon in many branches of applied physics and engineering. In this paper we establish rigorous analytic results necessary for the proper numerical analysis of a class of High-Order Perturbation of Surfaces methods for simulating such waves. More specifically, we prove a theorem on existence and uniqueness of solutions to a system of partial differential equations which model the interaction of linear waves with a multiply layered periodic structure in three dimensions. This result provides hypotheses under which a rigorous numerical analysis could be conducted for recent generalizations to the methods of Operator Expansions, Field Expansions, and Transformed Field Expansions.
A general statistical test for correlations in a finite-length time series.
Hanson, Jeffery A; Yang, Haw
2008-06-07
The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform across all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
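A minimal sketch of the test idea, under the assumption that sample autocorrelations of an i.i.d. series at nonzero lags are approximately N(0, 1/N): compute the autocorrelation by the Fourier-transform route and flag lags outside a z/sqrt(N) band. The lag window and threshold are illustrative choices, not the paper's exact test:

```python
import numpy as np

def acf_fft(x):
    """Biased sample autocorrelation via zero-padded FFT."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.fft(x, 2 * n)
    acov = np.fft.ifft(f * np.conj(f)).real[:n] / n
    return acov / acov[0]

def has_correlations(x, max_lag=20, z=3.0):
    """Flag correlations if any low-lag ACF value leaves the i.i.d. null band."""
    r = acf_fft(x)[1:max_lag + 1]
    return bool(np.any(np.abs(r) > z / np.sqrt(len(x))))

rng = np.random.default_rng(0)
print(has_correlations(rng.normal(size=4096)))           # False with high probability
print(has_correlations(np.sin(np.arange(4096) / 10.0)))  # True: strongly correlated
```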
Analyzing chromatographic data using multilevel modeling.
Wiczling, Paweł
2018-06-01
It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P. CI credible interval, PSA polar surface area.
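The following is a deliberately simplified stand-in for the multilevel idea, not the paper's Stan model: analyte-specific retention parameters are partially pooled toward a prediction from a single physicochemical covariate (a made-up log P trend plays the role of the QSRR relationship), with within- and between-analyte variance components estimated from the data by empirical Bayes rather than full Bayesian inference:

```python
# Hedged sketch of partial pooling across analytes (all values simulated).
import numpy as np

rng = np.random.default_rng(1)
n_analytes, n_rep = 50, 4
logP = rng.uniform(0.0, 5.0, n_analytes)
theta = 1.0 + 0.8 * logP + rng.normal(0.0, 0.3, n_analytes)      # analyte level
y = theta[:, None] + rng.normal(0.0, 0.5, (n_analytes, n_rep))   # replicate level

ybar = y.mean(axis=1)
var_within = y.var(axis=1, ddof=1).mean() / n_rep    # variance of each analyte mean
beta = np.polyfit(logP, ybar, 1)                     # covariate ("QSRR"-style) trend
mu = np.polyval(beta, logP)
var_between = max(np.var(ybar - mu, ddof=2) - var_within, 1e-9)
shrink = var_within / (var_within + var_between)
theta_hat = shrink * mu + (1.0 - shrink) * ybar      # partially pooled estimates

print("rmse unpooled:", np.sqrt(np.mean((ybar - theta) ** 2)))
print("rmse pooled:  ", np.sqrt(np.mean((theta_hat - theta) ** 2)))
```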
The Water-Energy-Food Nexus: A systematic review of methods for nexus assessment
NASA Astrophysics Data System (ADS)
Albrecht, Tamee R.; Crootof, Arica; Scott, Christopher A.
2018-04-01
The water-energy-food (WEF) nexus is rapidly expanding in scholarly literature and policy settings as a novel way to address complex resource and development challenges. The nexus approach aims to identify tradeoffs and synergies of water, energy, and food systems, internalize social and environmental impacts, and guide development of cross-sectoral policies. However, while the WEF nexus offers a promising conceptual approach, the use of WEF nexus methods to systematically evaluate water, energy, and food interlinkages or support development of socially and politically-relevant resource policies has been limited. This paper reviews WEF nexus methods to provide a knowledge base of existing approaches and promote further development of analytical methods that align with nexus thinking. The systematic review of 245 journal articles and book chapters reveals that (a) use of specific and reproducible methods for nexus assessment is uncommon (less than one-third); (b) nexus methods frequently fall short of capturing interactions among water, energy, and food—the very linkages they conceptually purport to address; (c) assessments strongly favor quantitative approaches (nearly three-quarters); (d) use of social science methods is limited (approximately one-quarter); and (e) many nexus methods are confined to disciplinary silos—only about one-quarter combine methods from diverse disciplines and less than one-fifth utilize both quantitative and qualitative approaches. To help overcome these limitations, we derive four key features of nexus analytical tools and methods—innovation, context, collaboration, and implementation—from the literature that reflect WEF nexus thinking. By evaluating existing nexus analytical approaches based on these features, we highlight 18 studies that demonstrate promising advances to guide future research. This paper finds that to address complex resource and development challenges, mixed-methods and transdisciplinary approaches are needed that incorporate social and political dimensions of water, energy, and food; utilize multiple and interdisciplinary approaches; and engage stakeholders and decision-makers.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
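A minimal sketch of a generic (unconstrained, scalar) prediction-correction tracker, not the authors' constrained algorithm: the prediction step advances the iterate by a finite-difference estimate of the optimizer drift, and the correction step takes a gradient step on the newly sampled objective. The objective and step sizes are illustrative assumptions:

```python
# Hedged sketch: track the minimizer of f(x; t) = 0.5*(x - r(t))**2.
import math

def r(t):                          # drifting optimum (assumed trajectory)
    return math.sin(t)

h, alpha = 0.05, 0.8               # sampling period, correction step size
x, r_prev = 0.0, r(0.0)
for k in range(400):
    cur = r(k * h)
    x += cur - r_prev              # prediction: drift estimate; here taken from r
    r_prev = cur                   # directly, in practice from successive gradients
    t_next = (k + 1) * h
    x -= alpha * (x - r(t_next))   # correction: gradient step on f(.; t_next)
print("tracking error:", abs(x - r(400 * h)))
```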
Prediction of thermal cycling induced matrix cracking
NASA Technical Reports Server (NTRS)
Mcmanus, Hugh L.
1992-01-01
Thermal fatigue has been observed to cause matrix cracking in laminated composite materials. A method is presented to predict transverse matrix cracks in composite laminates subjected to cyclic thermal load. Shear lag stress approximations and a simple energy-based fracture criterion are used to predict crack densities as a function of temperature. Prediction of crack densities as a function of thermal cycling is accomplished by assuming that fatigue degrades the material's inherent resistance to cracking. The method is implemented as a computer program. A simple experiment provides data on progressive cracking of a laminate with decreasing temperature. Existing data on thermal fatigue are also used. Correlations of the analytical predictions to the data are very good. A parametric study using the analytical method is presented which provides insight into material behavior under cyclical thermal loads.
Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud
2017-01-01
Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and materials related to food, e.g. food packaging and biological samples following metabolization of food. The challenge for the analytical sciences is that the characterization of NMs requires chemical as well as physical information. This article offers a comprehensive analysis of methods available for the detection and characterization of NMs in food and related products. Special attention was paid to the crucial role of sample preparation methods since these have been partially neglected in the scientific literature so far. The currently available instrumental methods are grouped as fractionation, counting and ensemble methods, and their advantages and limitations are discussed. We conclude that much progress has been made over the last 5 years but that many challenges still exist. Future perspectives and priority research needs are pointed out. Graphical Abstract Two possible analytical strategies for the sizing and quantification of Nanoparticles: Asymmetric Flow Field-Flow Fractionation with multiple detectors (allows the determination of true size and mass-based particle size distribution); Single Particle Inductively Coupled Plasma Mass Spectrometry (allows the determination of a spherical equivalent diameter of the particle and a number-based particle size distribution).
NASA Astrophysics Data System (ADS)
O'Neill, N. T.
2010-10-01
It is pointed out that the graphical aerosol classification method of Gobbi et al. (2007) can be interpreted as a manifestation of fundamental analytical relations whose existence depends on the simple assumption that the optical effects of aerosols are essentially bimodal in nature. The families of contour lines in their "Ada" curvature space are essentially empirical and discretized illustrations of analytical parabolic forms in (α, α') space (the space formed by the continuously differentiable Angstrom exponent and its spectral derivative).
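For concreteness, the two coordinates of that space can be estimated from multi-wavelength optical depths by a quadratic fit of ln(tau) against ln(lambda); the wavelengths and optical depths below are illustrative, and sign conventions for the spectral derivative vary between authors:

```python
# Hedged sketch: Angstrom exponent alpha = -d ln(tau)/d ln(lambda) and its
# spectral derivative from a quadratic fit at three wavelengths.
import numpy as np

lam = np.array([440.0, 675.0, 870.0])    # nm (typical sun-photometer channels)
tau = np.array([0.42, 0.26, 0.19])       # aerosol optical depth (made up)

a2, a1, a0 = np.polyfit(np.log(lam), np.log(tau), 2)   # ln(tau) ~ a0 + a1*L + a2*L^2
ln_ref = np.log(675.0)
alpha = -(a1 + 2 * a2 * ln_ref)          # Angstrom exponent at 675 nm
alpha_prime = -2 * a2                    # d(alpha)/d(ln lambda)
print(alpha, alpha_prime)
```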
Elastic properties of rigid fiber-reinforced composites
NASA Astrophysics Data System (ADS)
Chen, J.; Thorpe, M. F.; Davis, L. C.
1995-05-01
We study the elastic properties of rigid fiber-reinforced composites with perfect bonding between fibers and matrix, and also with sliding boundary conditions. In the dilute region, there exists an exact analytical solution. Around the rigidity threshold we find the elastic moduli and Poisson's ratio by decomposing the deformation into a compression mode and a rotation mode. For perfect bonding, both modes are important, whereas only the compression mode is operative for sliding boundary conditions. We employ the digital-image-based method and a finite element analysis to perform computer simulations which confirm our analytical predictions.
Semantic Interaction for Visual Analytics: Toward Coupling Cognition and Computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander
2014-07-01
The dissertation discussed in this article [1] was written in the midst of an era of digitization. The world is becoming increasingly instrumented with sensors, monitoring, and other methods for generating data describing social, physical, and natural phenomena. Thus, data exist with the potential of being analyzed to uncover, or discover, the phenomena from which they were created. However, as the analytic models leveraged to analyze these data continue to increase in complexity and computational capability, how can visualizations and user interaction methodologies adapt and evolve to continue to foster discovery and sensemaking?
NASA Astrophysics Data System (ADS)
Brandstetter, Gerd; Govindjee, Sanjay
2012-03-01
Existing analytical and numerical methodologies are discussed and then extended in order to calculate critical contamination-particle sizes which will result in deleterious effects during EUVL E-chucking, in the face of an error budget on the image-placement error (IPE). The enhanced analytical models include a gap-dependent clamping pressure formulation, the consideration of a general material law for realistic particle crushing, and the influence of frictional contact. We present a discussion of the defects of the classical de-coupled modeling approach, where particle crushing and mask/chuck indentation are separated from the global computation of mask bending. To repair this defect we present a new analytic approach based on an exact Hankel transform method which allows a fully coupled solution. This captures the contribution of the mask indentation to the image-placement error (estimated IPE increase of 20%). A fully coupled finite element model is used to validate the analytical models and to further investigate the impact of a mask back-side CrN layer. The models are applied to existing experimental data with good agreement. For a standard material combination, a given IPE tolerance of 1 nm and a 15 kPa closing pressure, we derive bounds for single particles of cylindrical shape (radius × height < 44 μm) and spherical shape (diameter < 12 μm).
The Occurrence of Veterinary Pharmaceuticals in the Environment: A Review
Kaczala, Fabio; Blum, Shlomo E.
2016-01-01
It is well known that there is a widespread use of veterinary pharmaceuticals and consequent release into different ecosystems such as freshwater bodies and groundwater systems. Furthermore, the use of organic fertilizers produced from animal waste manure has also been responsible for the occurrence of veterinary pharmaceuticals in agricultural soils. This article is a review of different studies focused on the detection and quantification of such compounds in environmental compartments using different analytical techniques. Furthermore, this paper reports the main challenges regarding veterinary pharmaceuticals in terms of analytical methods, detection/quantification of parent compounds and metabolites, and risks/toxicity to human health and aquatic ecosystems. Based on the existing literature, it is clear that only limited data are available regarding veterinary compounds, and there are still considerable gaps to be bridged in order to remediate existing problems and prevent future ones. In terms of analytical methods, there are still considerable challenges to overcome considering the large number of existing compounds and their respective metabolites. A number of studies highlight the lack of attention given to the detection and quantification of transformation products and metabolites. Furthermore, more attention needs to be given to the toxic effects and potential risks that veterinary compounds pose to environmental and human health. To conclude, the more research is focused on these subjects in the near future, the more rapidly we will gain a better understanding of the behavior of these compounds, the real risks they pose to aquatic and terrestrial environments, and how to properly tackle them. PMID:28579931
NASA Technical Reports Server (NTRS)
Mcmillan, O. J.; Mendenhall, M. R.; Perkins, S. C., Jr.
1984-01-01
Work is described dealing with two areas which are dominated by the nonlinear effects of vortex flows. The first area concerns the stall/spin characteristics of a general aviation wing with a modified leading edge. The second area concerns the high-angle-of-attack characteristics of high performance military aircraft. For each area, the governing phenomena are described as identified with the aid of existing experimental data. Existing analytical methods are reviewed, and the most promising method for each area used to perform some preliminary calculations. Based on these results, the strengths and weaknesses of the methods are defined, and research programs recommended to improve the methods as a result of better understanding of the flow mechanisms involved.
Analytical evaluation of current starch methods used in the international sugar industry: Part I.
Cole, Marsha; Eggleston, Gillian; Triplett, Alexa
2017-08-01
Several analytical starch methods exist in the international sugar industry to mitigate starch-related processing challenges and assess the quality of traded end-products. These methods use iodometric chemistry, mostly potato starch standards, and similar solubilization strategies, but had not been comprehensively compared. In this study, industrial starch methods were compared to the USDA Starch Research method using simulated raw sugars. Type of starch standard, solubilization approach, iodometric reagents, and wavelength detection all affected total starch determination in simulated raw sugars. Simulated sugars containing potato starch were more accurately detected by the industrial methods, whereas those containing corn starch, a better model for sugarcane starch, were only accurately measured by the USDA Starch Research method. Use of a potato starch standard curve over-estimated starch concentrations. Among the variables studied, starch standard, solubilization approach, and wavelength detection most affected the sensitivity, accuracy/precision, and detection/quantification limits of the current industry starch methods. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, William A., E-mail: wadawson@ucdavis.edu
2013-08-01
Merging galaxy clusters have become one of the most important probes of dark matter, providing evidence for dark matter over modified gravity and even constraints on the dark matter self-interaction cross-section. To properly constrain the dark matter cross-section it is necessary to understand the dynamics of the merger, as the inferred cross-section is a function of both the velocity of the collision and the observed time since collision. While the best understanding of merging system dynamics comes from N-body simulations, these are computationally intensive and often explore only a limited volume of the merger phase space allowed by observed parameter uncertainty. Simple analytic models exist, but the assumptions of these methods invalidate their results near the collision time, and error propagation of the highly correlated merger parameters is unfeasible. To address these weaknesses I develop a Monte Carlo method to discern the properties of dissociative mergers and propagate the uncertainty of the measured cluster parameters in an accurate and Bayesian manner. I introduce this method, verify it against an existing hydrodynamic N-body simulation, and apply it to two known dissociative mergers: 1ES 0657-558 (Bullet Cluster) and DLSCL J0916.2+2951 (Musket Ball Cluster). I find that this method surpasses existing analytic models, providing accurate (10% level) dynamic parameter and uncertainty estimates throughout the merger history. This, coupled with minimal required a priori information (subcluster mass, redshift, and projected separation) and relatively fast computation (~6 CPU hours), makes this method ideal for large samples of dissociative merging clusters.
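The following toy sketch conveys the Monte Carlo error-propagation idea only, not Dawson's model: observed merger parameters are drawn from their uncertainty distributions and pushed through a simple point-mass free-fall relation, with percentiles of the derived collision speed reported. All numbers and the dynamical relation itself are illustrative assumptions:

```python
# Hedged sketch: Bayesian-style propagation of correlated input uncertainty
# by Monte Carlo sampling through a toy two-body infall relation.
import numpy as np

G = 4.30091e-6                          # kpc * (km/s)^2 / Msun
rng = np.random.default_rng(3)
n = 100000
M = rng.normal(1.5e15, 2e14, n)         # total mass (Msun), illustrative
d_max = rng.normal(4500.0, 500.0, n)    # turnaround separation (kpc), illustrative
d_col = 1000.0                          # assumed separation at "collision" (kpc)
v = np.sqrt(2.0 * G * M * (1.0 / d_col - 1.0 / d_max))
print(np.percentile(v, [16, 50, 84]))   # km/s interval for the collision speed
```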
NASA Astrophysics Data System (ADS)
Botha, J. D. M.; Shahroki, A.; Rice, H.
2017-12-01
This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data and time dependent Computational Fluid Dynamics (CFD) calculations are carried out to solve for the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised and the location and mechanism of the primary sources is determined, inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions when compared to the analytic flow solution as well as showing that, for inflow-turbulence noise sources, blade generated turbulence dominates the atmospheric inflow turbulence.
Extending existing structural identifiability analysis methods to mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2018-01-01
The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
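A minimal sketch of the Taylor series route for the underlying (non-mixed-effects) problem, on a toy one-compartment model rather than the paper's examples: successive output derivatives at t = 0 form an exhaustive summary from which identifiability of the parameters can be read off.

```python
# Hedged sketch of Taylor-series structural identifiability analysis.
import sympy as sp

t, k, V, x0 = sp.symbols('t k V x0', positive=True)
x = x0 * sp.exp(-k * t)          # solution of dx/dt = -k*x
y = x / V                        # observation function
coeffs = [sp.diff(y, t, n).subs(t, 0) for n in range(3)]
print(coeffs)                    # [x0/V, -k*x0/V, k**2*x0/V]
# With x0 known, (k, V) are uniquely recovered: V = x0/c0, k = -c1/c0.
# With x0 unknown, only k and the ratio x0/V are structurally identifiable.
```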
Suitability of analytical methods to measure solubility for the purpose of nanoregulation.
Tantra, Ratna; Bouwmeester, Hans; Bolea, Eduardo; Rey-Castro, Carlos; David, Calin A; Dogné, Jean-Michel; Jarman, John; Laborda, Francisco; Laloy, Julie; Robinson, Kenneth N; Undas, Anna K; van der Zande, Meike
2016-01-01
Solubility is an important physicochemical parameter in nanoregulation. If a nanomaterial is completely soluble, then from a risk assessment point of view its disposal can be treated much in the same way as "ordinary" chemicals, which will simplify testing and characterisation regimes. This review assesses potential techniques for the measurement of nanomaterial solubility and evaluates their performance against a set of analytical criteria (based on satisfying the requirements governed by the cosmetic regulation as well as the need to quantify the concentration of free (hydrated) ions). Our findings show that no universal method exists. A complementary approach is thus recommended, comprising an atomic spectrometry-based method in conjunction with an electrochemical (or colorimetric) method. This article shows that although some techniques are more commonly used than others, a huge research gap remains, related to the need to ensure data reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xibing; Dong, Longjun, E-mail: csudlj@163.com; Australian Centre for Geomechanics, The University of Western Australia, Crawley, 6009
This paper presents an efficient closed-form solution (ECS) for acoustic emission (AE) source location in three-dimensional structures using time difference of arrival (TDOA) measurements from N receivers, N ≥ 6. The nonlinear TDOA location equations are simplified to linear equations. The unique analytical solution for AE sources in an unknown-velocity system is obtained by solving the linear equations. The proposed ECS method successfully solves the problems of location errors resulting from measured deviations of velocity, as well as the existence and multiplicity of solutions induced by calculations of square roots in existing closed-form methods.
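One standard way to obtain such a linearization is sketched below, under the assumption that it mirrors (but is not necessarily identical to) the paper's ECS algebra: subtracting a reference receiver's squared range equation makes the system linear in the source coordinates s, w = v^2 and u = v^2*t0, five unknowns in total, which is consistent with the requirement N ≥ 6:

```python
# Hedged sketch: linear least-squares TDOA location with unknown wave speed.
import numpy as np

rng = np.random.default_rng(2)
R = rng.uniform(0, 10, (8, 3))            # receiver coordinates (m), illustrative
s_true, v_true, t0 = np.array([4.0, 5.0, 3.0]), 3000.0, 0.01
t = t0 + np.linalg.norm(R - s_true, axis=1) / v_true   # simulated arrival times (s)

# From ||r_i - s||^2 = v^2 (t_i - t0)^2, subtracting the i = 0 equation:
# -2 (r_i - r_0).s - w (t_i^2 - t_0^2) + 2 u (t_i - t_0) = ||r_0||^2 - ||r_i||^2
A = np.column_stack([
    -2 * (R[1:] - R[0]),                  # coefficients of s
    -(t[1:] ** 2 - t[0] ** 2),            # coefficient of w = v^2
    2 * (t[1:] - t[0]),                   # coefficient of u = v^2 * t0
])
b = np.sum(R[0] ** 2) - np.sum(R[1:] ** 2, axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
s_hat, w, u = sol[:3], sol[3], sol[4]
print("source:", s_hat, " speed:", np.sqrt(w), " onset time:", u / w)
```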
An Integrative Wave Model for the Marginal Ice Zone Based on a Rheological Parameterization
2015-09-30
(2015) Characterizing the behavior of gravity wave propagation into a floating or submerged viscous layer, 2015 AGU Joint Assembly Meeting, May 3–7. ... are the PI and a PhD student. Task 1: Use an analytical method to determine the propagation of waves through a floating viscoelastic mat for a wide ... and Ben Holt. Task 3: Assemble all existing laboratory and field data of wave propagation in ice covers. Task 4: Determine if all existing
ERIC Educational Resources Information Center
Edwards, Mark Evan; Weber, Bruce; Bernell, Stephanie
2007-01-01
An existing measure of food insecurity with hunger in the United States may serve as an effective indicator of quality of life. State level differences in that measure can reveal important differences in quality of life across places. In this study, we advocate and demonstrate two simple methods by which analysts can explore state-specific…
Thermal/structural design verification strategies for large space structures
NASA Technical Reports Server (NTRS)
Benton, David
1988-01-01
Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This requires a combination of analytical and testing methods, pursued through two approaches. The first is to limit thermal testing to sub-elements of the total system only in a compact configuration (i.e., not fully deployed). The second is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.
Recent developments in urinalysis of metabolites of new psychoactive substances using LC-MS.
Peters, Frank T
2014-08-01
In the last decade, an ever-increasing number of new psychoactive substances (NPSs) have appeared on the recreational drug market. To account for this development, analytical toxicologists have to continuously adapt their methods to encompass the latest NPSs. Urine is the preferred biological matrix for screening analysis in different areas of analytical toxicology. However, the development of urinalysis procedures for NPSs is complicated by the fact that generally little or no information on urinary excretion patterns of such drugs exists when they first appear on the market. Metabolism studies are therefore a prerequisite in the development of urinalysis methods for NPSs. In this article, the literature on the urinalysis of NPS metabolites will be reviewed, focusing on articles published after 2008.
On Establishing Big Data Wave Breakwaters with Analytics (Invited)
NASA Astrophysics Data System (ADS)
Riedel, M.
2013-12-01
The Research Data Alliance Big Data Analytics (RDA-BDA) Interest Group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs of utilizing large quantities of data. RDA-BDA seeks to analyze different scientific domain applications and their potential use of various big data analytics techniques. A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. These combinations are complex since a wide variety of different data analysis algorithms exist (e.g. specific algorithms using GPUs for analyzing brain images) that need to work together with multiple analytical tools, reaching from simple (iterative) map-reduce methods (e.g. with Apache Hadoop or Twister) to sophisticated higher-level frameworks that leverage machine learning algorithms (e.g. Apache Mahout). These computational analysis techniques are often augmented with visual analytics techniques (e.g. computational steering on large-scale high performance computing platforms) to put human judgement into the analysis loop, or with new approaches to databases that are designed to support new forms of unstructured or semi-structured data as opposed to traditional relational databases. More recently, data analysis and the underpinning analytics frameworks also have to consider the energy footprints of underlying resources. To sum up, the aim of this talk is to provide pieces of information to understand big data analytics in the context of science and engineering, using the aforementioned classification as the lighthouse and as the frame of reference for a systematic approach. This talk will provide insights about big data analytics methods in the context of science within various communities and offers different views of how approaches of correlation and causality offer complementary methods to advance science and engineering today. The RDA Big Data Analytics Group seeks to understand what approaches are not only technically feasible, but also scientifically feasible. The lighthouse goal of the RDA Big Data Analytics Group is a classification of clever combinations of various technologies and scientific applications in order to provide clear recommendations to the scientific community on which approaches are technically and scientifically feasible.
Passive Magnetic Bearing With Ferrofluid Stabilization
NASA Technical Reports Server (NTRS)
Jansen, Ralph; DiRusso, Eliseo
1996-01-01
A new class of magnetic bearings is shown to exist analytically and is demonstrated experimentally. This class of magnetic bearings utilizes a ferrofluid/solid magnet interaction to stabilize the axial degree of freedom of a permanent magnet radial bearing. Twenty-six permanent magnet bearing designs and twenty-two ferrofluid stabilizer designs are evaluated. Two types of radial bearing designs are tested to determine their force and stiffness, using two methods. The first method is based on the use of frequency measurements to determine stiffness by utilizing an analytical model. The second method consists of loading the system and measuring displacement in order to measure stiffness. Two ferrofluid stabilizers are tested and force-displacement curves are measured. Two experimental test fixtures are designed and constructed in order to conduct the stiffness testing. Polynomial models of the data are generated and used to design the bearing prototype. The prototype was constructed, tested, and shown to be stable. Further testing shows the possibility of using this technology for vibration isolation. The project successfully demonstrated the viability of the passive magnetic bearing with ferrofluid stabilization both experimentally and analytically.
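For a single-degree-of-freedom model, the first stiffness-measurement route reduces to inferring stiffness from the measured natural frequency; a minimal sketch under that assumption (values are illustrative), while the second route is simply a slope fit of the measured force-displacement curve:

```python
# Hedged sketch: stiffness from a measured small-oscillation natural frequency,
# assuming a simple mass-spring model so that k = m * (2*pi*f)**2.
import math

def stiffness_from_frequency(m_kg, f_hz):
    """Radial stiffness (N/m) for suspended mass m_kg oscillating at f_hz."""
    return m_kg * (2 * math.pi * f_hz) ** 2

print(stiffness_from_frequency(0.5, 12.0))   # illustrative values
```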
Development of a point-kinetic verification scheme for nuclear reactor applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demazière, C., E-mail: demaz@chalmers.se; Dykin, V.; Jareteg, K.
In this paper, a new method that can be used for checking the proper implementation of time- or frequency-dependent neutron transport models and for verifying their ability to recover some basic reactor physics properties is proposed. This method makes use of the application of a stationary perturbation to the system at a given frequency and extraction of the point-kinetic component of the system response. Even for strongly heterogeneous systems for which an analytical solution does not exist, the point-kinetic component follows, as a function of frequency, a simple analytical form. The comparison between the extracted point-kinetic component and its expected analytical form provides an opportunity to verify and validate neutron transport solvers. The proposed method is tested on two diffusion-based codes, one working in the time domain and the other working in the frequency domain. As long as the applied perturbation has a non-zero reactivity effect, it is demonstrated that the method can be successfully applied to verify and validate time- or frequency-dependent neutron transport solvers. Although the method is demonstrated in the present paper in a diffusion theory framework, higher order neutron transport methods could be verified based on the same principles.
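In standard point kinetics, the simple analytical frequency dependence referred to above is the zero-power reactor transfer function; a hedged sketch with six delayed-neutron groups and illustrative (not plant-specific) parameters:

```python
# Hedged sketch: zero-power transfer function
# G0(w) = 1 / (i*w * (Lambda + sum_k beta_k / (i*w + lambda_k))).
import numpy as np

beta = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])  # group fractions
lam = np.array([0.0125, 0.0308, 0.114, 0.307, 1.19, 3.19])         # decay const., 1/s
Lambda = 2.0e-5                                                    # prompt generation time, s

def G0(omega):
    jw = 1j * omega
    return 1.0 / (jw * (Lambda + np.sum(beta / (jw + lam))))

for f in (0.01, 0.1, 1.0, 10.0):        # Hz
    w = 2 * np.pi * f
    print(f, abs(G0(w)), np.angle(G0(w)))
```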
Wu, Chung-Shu; Liu, Fu-Ken; Ko, Fu-Hsiang
2011-01-01
Nanoparticle-based materials are a revolutionary scientific and engineering venture that will invariably impact existing analytical separation and preconcentration methods for a variety of analytes. Nanoparticles can be regarded as a hybrid between small molecules and bulk material. A material on the nanoscale exhibits considerable changes in various properties, making them size- and shape-dependent. Gold nanoparticles (Au NPs), one of the wide variety of core materials available, coupled with tunable surface properties in the form of inorganic or inorganic-organic hybrids, have been reported as an excellent platform for a broad range of analytical methods. This review aims to introduce the basic principles, examples, and descriptions of methods for the characterization of Au NPs by using chromatography, electrophoresis, and self-assembly strategies for separation science. Some of the latest important applications of Au NPs as stationary phases in open-tubular capillary electrochromatography, gas chromatography, and liquid chromatography, as well as their roles as run buffer additives to enhance separation and preconcentration in chromatographic, electrophoretic and chip-based systems, are reviewed. Additionally, we review Au NPs-assisted state-of-the-art techniques involving the use of micellar electrokinetic chromatography, an online diode array detector, solid-phase extraction, and mass spectrometry for the preconcentration of some chemical compounds and biomolecules.
ERIC Educational Resources Information Center
Bechler, Kent L.
2013-01-01
In the book "Moneyball: The Art of Winning an Unfair Game," author Michael Lewis describes how the Oakland Athletics, one of the poorest teams in Major League Baseball, became one of the most successful teams by using simple analytical methods that had existed for years but had been largely ignored by the baseball executives. "Moneyball"…
2005-07-01
approach for measuring the return on Information Technology (IT) investments. A review of existing methods suggests the difficulty in adequately...measuring the returns of IT at various levels of analysis (e.g., firm or process level). To address this issue, this study aims to develop a method for...view (KBV), this paper proposes an analytic method for measuring the historical revenue and cost of IT investments by estimating the amount of
One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.
Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz
2009-07-15
The existing solid-phase microextraction (SPME) kinetic calibration technique, using the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic the variability of the environment, such as temperature, turbulence, and the concentration of the analytes, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated for by the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
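A minimal sketch of the symmetry underlying kinetic calibration, assuming first-order absorption/desorption with a common rate constant so that the extracted fraction of the analyte equals one minus the remaining fraction of the calibrant (the one-calibrant method generalizes this across analytes; the numbers and the negligible-depletion equilibrium relation are illustrative assumptions):

```python
# Hedged sketch: n(t)/n_e = 1 - exp(-a*t) for analyte uptake and
# q(t)/q0 = exp(-a*t) for calibrant desorption imply n/n_e = 1 - q/q0.
import math

def water_concentration(n_extracted, q_frac_remaining, Kfs, Vf):
    """n_extracted: moles of analyte on the fiber; q_frac_remaining: q/q0 of
    the calibrant; Kfs: fiber/sample distribution coefficient; Vf: fiber
    coating volume (L). Returns the water concentration (mol/L)."""
    extracted_fraction = 1.0 - q_frac_remaining       # symmetry assumption
    n_equilibrium = n_extracted / extracted_fraction  # extrapolate to equilibrium
    return n_equilibrium / (Kfs * Vf)                 # C = n_e / (Kfs * Vf)

print(water_concentration(2.0e-12, 0.6, 5000.0, 6.0e-7))  # illustrative values
```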
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depth is required.
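A hedged sketch of the individual-based mean under rarefaction: a branch subtending n of the N individuals survives a subsample of size m with probability 1 - C(N-n, m)/C(N, m), so the expected PD is the branch-length-weighted sum of these probabilities. The toy tree below is illustrative, and the variance follows analogously from pairwise branch co-retention probabilities:

```python
# Hedged sketch: exact mean of PD under rarefaction vs. Monte Carlo subsampling.
from math import comb
import random

# branches: (length, indices of individuals subtended by the branch)
branches = [(1.0, {0, 1, 2}), (0.5, {0, 1}), (0.5, {2}),
            (0.7, {3, 4}), (0.3, {3}), (0.3, {4})]
N, m = 5, 3

def p_retained(n, N, m):
    """Probability a branch subtending n of N individuals survives a subsample of m."""
    return 1.0 - comb(N - n, m) / comb(N, m)

mean_exact = sum(L * p_retained(len(tips), N, m) for L, tips in branches)

draws, acc = 20000, 0.0
for _ in range(draws):
    sub = set(random.sample(range(N), m))
    acc += sum(L for L, tips in branches if tips & sub)  # PD of the subsample
print(mean_exact, acc / draws)   # the two estimates should agree closely
```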
NASA Astrophysics Data System (ADS)
Petrova, N.; Zagidullin, A.; Nefedyev, Y.; Kosulin, V.; Andreev, A.
2017-11-01
Observing physical librations of celestial bodies and the Moon represents one of the astronomical methods of remotely assessing the internal structure of a celestial body without conducting expensive space experiments. The paper contains a review of recent advances in studying the Moon's structure using various methods of obtaining and applying the lunar physical librations (LPhL) data. In this article LPhL simulation methods of assessing viscoelastic and dissipative properties of the lunar body and lunar core parameters, whose existence has been recently confirmed during the seismic data reprocessing of the "Apollo" space mission, are described. Much attention is paid to physical interpretation of the free librations phenomenon and the methods for its determination. In the paper the practical application of the most accurate analytical LPhL tables (Rambaux and Williams, 2011) is discussed. The tables were built on the basis of complex analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421. In the paper an efficiency analysis of two approaches to LPhL theory is conducted: the numerical and the analytical ones. It has been shown that in lunar investigation both approaches complement each other in various aspects: the numerical approach provides high accuracy of the theory, which is required for the proper processing of modern observations; the analytical approach allows to comprehend the essence of the phenomena in the lunar rotation, predict and interpret new effects in the observations of lunar body and lunar core parameters.
Tiered Approach to Resilience Assessment.
Linkov, Igor; Fox-Lent, Cate; Read, Laura; Allen, Craig R; Arnott, James C; Bellini, Emanuele; Coaffee, Jon; Florin, Marie-Valentine; Hatfield, Kirk; Hyde, Iain; Hynes, William; Jovanovic, Aleksandar; Kasperson, Roger; Katzenberger, John; Keys, Patrick W; Lambert, James H; Moss, Richard; Murdoch, Peter S; Palma-Oliveira, Jose; Pulwarty, Roger S; Sands, Dale; Thomas, Edward A; Tye, Mari R; Woods, David
2018-04-25
Regulatory agencies have long adopted a three-tier framework for risk assessment. We build on this structure to propose a tiered approach for resilience assessment that can be integrated into the existing regulatory processes. Comprehensive approaches to assessing resilience at appropriate and operational scales, reconciling analytical complexity as needed with stakeholder needs and resources available, and ultimately creating actionable recommendations to enhance resilience are still lacking. Our proposed framework consists of tiers by which analysts can select resilience assessment and decision support tools to inform associated management actions relative to the scope and urgency of the risk and the capacity of resource managers to improve system resilience. The resilience management framework proposed is not intended to supplant either risk management or the many existing efforts of resilience quantification method development, but instead provide a guide to selecting tools that are appropriate for the given analytic need. The goal of this tiered approach is to intentionally parallel the tiered approach used in regulatory contexts so that resilience assessment might be more easily and quickly integrated into existing structures and with existing policies. Published 2018. This article is a U.S. government work and is in the public domain in the USA.
Maneuver Planning for Conjunction Risk Mitigation with Ground-track Control Requirements
NASA Technical Reports Server (NTRS)
McKinley, David
2008-01-01
The planning of conjunction Risk Mitigation Maneuvers (RMM) in the presence of ground-track control requirements is analyzed. Past RMM planning efforts on the Aqua, Aura, and Terra spacecraft have demonstrated that only small maneuvers are available when ground-track control requirements are maintained. Assuming small maneuvers, analytical expressions for the effect of a given maneuver on conjunction geometry are derived. The analytical expressions are used to generate a large trade space for initial RMM design. This trade space represents a significant improvement in initial maneuver planning over existing methods that employ high fidelity maneuver models and propagation.
NASA Astrophysics Data System (ADS)
Black, S.; Hynek, B. M.; Kierein-Young, K. S.; Avard, G.; Alvarado-Induni, G.
2015-12-01
Proper characterization of mineralogy is an essential part of geologic interpretation. This process becomes even more critical when attempting to interpret the history of a region remotely, via satellites and/or landed spacecraft. Orbiters and landed missions to Mars carry with them a wide range of analytical tools to aid in the interpretation of Mars' geologic history. However, many instruments make a single type of measurement (e.g., APXS: elemental chemistry; XRD: mineralogy), and multiple data sets must be utilized to develop a comprehensive understanding of a sample. Hydrothermal alteration products often exist in intimate mixtures, and vary widely across a site due to changing pH, temperature, and fluid/gas chemistries. These characteristics require that we develop a detailed understanding regarding the possible mineral mixtures that may exist, and their detectability in different instrument data sets. This comparative analysis study utilized several analytical methods on existing or planned Mars rovers (XRD, Raman, LIBS, Mössbauer, and APXS) combined with additional characterization (thin section, VNIR, XRF, SEM-EMP) to develop a comprehensive suite of data for hydrothermal alteration products collected from Poás and Turrialba volcanoes in Costa Rica. Analyzing the same samples across a wide range of instruments allows for direct comparisons of results, and identification of instrumentation "blind spots." This provides insight into the ability of in-situ analyses to comprehensively characterize sites on Mars exhibiting putative hydrothermal characteristics, such as the silica and sulfate deposits at Gusev crater [e.g., Squyres et al., 2008], as well as valuable information for future mission planning and data interpretation. References: Squyres et al. (2008), Detection of Silica-Rich Deposits on Mars, Science, 320, 1063-1067, doi:10.1126/science.1155429.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Peng; Fan, Zheng, E-mail: ZFAN@ntu.edu.sg; Zhou, Yu
2016-07-15
Nonlinear guided waves have been investigated widely in simple geometries, such as plates, pipes and shells, where analytical solutions have been developed. This paper extends the application of nonlinear guided waves to waveguides with arbitrary cross sections. The criteria for the existence of nonlinear guided waves were summarized based on the finite deformation theory and nonlinear material properties. Numerical models were developed for the analysis of nonlinear guided waves in complex geometries, including a nonlinear Semi-Analytical Finite Element (SAFE) method to identify internal resonant modes in complex waveguides, and Finite Element (FE) models to simulate the nonlinear wave propagation at resonant frequencies. Two examples, an aluminum plate and a steel rectangular bar, were studied using the proposed numerical model, demonstrating the existence of nonlinear guided waves in such structures and the energy transfer from primary to secondary modes.
da Trindade, Mariana Teixeira; Kogawa, Ana Carolina; Salgado, Hérida Regina Nunes
2018-01-02
Diabetes mellitus (DM) is considered a public health problem. The initial treatment consists of improving lifestyle and making changes in the diet. When these changes are not enough, the use of medication becomes necessary. Metformin aims to reduce the hepatic production of glucose and is the preferred treatment for type 2 diabetes. The objective of this work is to survey the characteristics and properties of metformin, as well as to discuss the existing analytical methods in relation to green chemistry and their impacts on both the operator and the environment. For the survey, data searches were conducted in scientific papers in the literature as well as in official compendia. The characteristics and properties are shown, along with methods using liquid chromatography techniques, titration, and absorption spectrophotometry in the ultraviolet and infrared regions. Most of the methods presented are not oriented toward green chemistry. Awareness among everyone involved is necessary to optimize the methods applied to the determination of metformin through the implementation of green chemistry.
Selectivity in analytical chemistry: two interpretations for univariate methods.
Dorkó, Zsanett; Verbić, Tatjana; Horvai, George
2015-01-01
Selectivity is extremely important in analytical chemistry but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall into two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and contradictions between them unveiled. The error-based selectivity is very general and very safe, but its application to a range of samples (as opposed to a single sample) requires the knowledge of some constraint on the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration-independent measure of selectivity. In contrast, the error-based selectivity concept allows only a yes/no type decision about selectivity. Copyright © 2014 Elsevier B.V. All rights reserved.
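A minimal numeric sketch of the linear response-surface case may help; the sensitivity names and values below are invented for illustration and are not taken from the paper.

# Hypothetical linear univariate response surface: y = k_A*c_A + k_I*c_I,
# with c_A the analyte and c_I the interferent concentration.
k_A, k_I = 2.0, 0.05   # assumed sensitivities (signal per unit concentration)

def response(c_A, c_I):
    # the "response surface" of the abstract, here simply a plane
    return k_A * c_A + k_I * c_I

# For a linear surface the sensitivity ratio is a concentration-independent
# selectivity measure: the interferent needs k_A/k_I times the analyte's
# concentration to produce the same signal.
print("selectivity k_A/k_I =", k_A / k_I)   # 40.0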
Probability theory versus simulation of petroleum potential in play analysis
Crovelli, R.A.
1987-01-01
An analytic probabilistic methodology for resource appraisal of undiscovered oil and gas resources in play analysis is presented. This play-analysis methodology is a geostochastic system for petroleum resource appraisal in explored as well as frontier areas. An objective was to replace an existing Monte Carlo simulation method in order to increase the efficiency of the appraisal process. Underlying the two methods is a single geologic model which considers both the uncertainty of the presence of the assessed hydrocarbon and its amount if present. The results of the model are resource estimates of crude oil, nonassociated gas, dissolved gas, and gas for a geologic play in terms of probability distributions. The analytic method is based upon conditional probability theory and a closed form solution of all means and standard deviations, along with the probabilities of occurrence. © 1987 J.C. Baltzer A.G., Scientific Publishing Company.
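The contrast between the two computational routes can be sketched with a toy version of such a model (all numbers hypothetical): the play either contains hydrocarbons (probability p) or not, and the amount if present is lognormal, so the mean and standard deviation of the resource follow in closed form and can be checked against Monte Carlo.

import numpy as np

p = 0.3                # probability the play contains hydrocarbons (assumed)
mu, sigma = 3.0, 0.8   # lognormal parameters of the amount if present (assumed)

# Closed-form moments of R = B*A with B ~ Bernoulli(p) independent of A
mean_A = np.exp(mu + sigma**2 / 2)
var_A = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
mean_R = p * mean_A
sd_R = np.sqrt(p * var_A + p * (1 - p) * mean_A**2)

# Monte Carlo replication of the same estimate
rng = np.random.default_rng(0)
sample = rng.binomial(1, p, 1_000_000) * rng.lognormal(mu, sigma, 1_000_000)
print(f"analytic:  mean={mean_R:.2f}  sd={sd_R:.2f}")
print(f"simulated: mean={sample.mean():.2f}  sd={sample.std():.2f}")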
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
ERIC Educational Resources Information Center
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately…
Laborda, Francisco; Bolea, Eduardo; Cepriá, Gemma; Gómez, María T; Jiménez, María S; Pérez-Arantegui, Josefina; Castillo, Juan R
2016-01-21
The increasing demand for analytical information related to inorganic engineered nanomaterials requires the adaptation of existing techniques and methods, or the development of new ones. The challenge for the analytical sciences has been to consider the nanoparticles as a new sort of analyte, involving both chemical (composition, mass and number concentration) and physical information (e.g. size, shape, aggregation). Moreover, information about the species derived from the nanoparticles themselves and their transformations must also be supplied. Whereas techniques commonly used for nanoparticle characterization, such as light scattering techniques, show serious limitations when applied to complex samples, other well-established techniques, like electron microscopy and atomic spectrometry, can provide useful information in most cases. Furthermore, separation techniques, including flow field flow fractionation, capillary electrophoresis and hydrodynamic chromatography, are moving to the nano domain, mostly hyphenated to inductively coupled plasma mass spectrometry as an element-specific detector. Emerging techniques based on the detection of single nanoparticles, using ICP-MS but also coulometry, are on their way to gaining a position. Chemical sensors selective to nanoparticles are in their early stages, but they are very promising considering their portability and simplicity. Although the field is in continuous evolution, at this moment it is moving from proofs-of-concept in simple matrices to methods dealing with matrices of higher complexity and relevant analyte concentrations. To achieve this goal, sample preparation methods are essential to manage such complex situations. Apart from size fractionation methods, matrix digestion, extraction and concentration methods capable of preserving the nature of the nanoparticles are being developed. This review presents and discusses the state-of-the-art analytical techniques and sample preparation methods suitable for dealing with complex samples. Single- and multi-method approaches applied to solve the nanometrological challenges posed by a variety of stakeholders are also presented. Copyright © 2015 Elsevier B.V. All rights reserved.
Boundary cooled rocket engines for space storable propellants
NASA Technical Reports Server (NTRS)
Kesselring, R. C.; Mcfarland, B. L.; Knight, R. M.; Gurnitz, R. N.
1972-01-01
An evaluation of an existing analytical heat transfer model was made to extend the technology of boundary film/conduction cooled rocket thrust chambers to the space storable propellant combination oxygen difluoride/diborane. Critical design parameters were identified and their importance determined. Test reduction methods were developed to enable data obtained from short duration hot firings with a thin walled (calorimeter) chamber to be used to quantitatively evaluate the heat absorbing capability of the vapor film. The modification of the existing like-doublet injector was based on the results obtained from the calorimeter firings.
Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam
2017-01-18
While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analytes within cellular environments, the non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on the relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where the relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of sensor as well as its relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by a spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of the relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract the spatial distribution of analyte in heterogeneous systems. The proposed method would be especially relevant for fluorescent probes that undergo a relatively nominal shift in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in cellular media due to strong cross-talk between energetically separated detection channels.
Curran, Patrick J.; Howard, Andrea L.; Bainter, Sierra; Lane, Stephanie T.; McGinley, James S.
2014-01-01
Objective: Although recent statistical and computational developments allow for the empirical testing of psychological theories in ways not previously possible, one particularly vexing challenge remains: how to optimally model the prospective, reciprocal relations between two constructs as they developmentally unfold over time. Several analytic methods currently exist that attempt to model these types of relations, and each approach is successful to varying degrees. However, none provide the unambiguous separation of between-person and within-person components of stability and change over time, components that are often hypothesized to exist in the psychological sciences. The goal of our paper is to propose and demonstrate a novel extension of the multivariate latent curve model to allow for the disaggregation of these effects. Method: We begin with a review of the standard latent curve models and describe how these primarily capture between-person differences in change. We then extend this model to allow for regression structures among the time-specific residuals to capture within-person differences in change. Results: We demonstrate this model using an artificial data set generated to mimic the developmental relation between alcohol use and depressive symptomatology spanning five repeated measures. Conclusions: We obtain a specificity of results from the proposed analytic strategy that is not available from other existing methodologies. We conclude with potential limitations of our approach and directions for future research.
A Machine-Learning-Driven Sky Model.
Satylmys, Pynar; Bashford-Rogers, Thomas; Chalmers, Alan; Debattista, Kurt
2017-01-01
Sky illumination is responsible for much of the lighting in a virtual environment. A machine-learning-based approach can compactly represent sky illumination from both existing analytic sky models and from captured environment maps. The proposed approach can approximate the captured lighting at a significantly reduced memory cost and enable smooth transitions of sky lighting to be created from a small set of environment maps captured at discrete times of day. The authors' results demonstrate accuracy close to the ground truth for both analytical and capture-based methods. The approach has a low runtime overhead, so it can be used as a generic approach for both offline and real-time applications.
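The core idea can be sketched in a few lines; the toy radiance function and network size below are our assumptions, not the paper's architecture.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
sun_zen = rng.uniform(0, np.pi / 2, 5000)    # sun zenith angle (training inputs)
view_zen = rng.uniform(0, np.pi / 2, 5000)   # view zenith angle
gamma = np.abs(sun_zen - view_zen)           # crude sun-view separation

# Toy stand-in for an analytic sky model: bright near the sun and the horizon
radiance = np.exp(-3 * gamma) + 0.2 / (0.1 + np.cos(view_zen))

# A small network becomes a compact, queryable representation of the sky
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(np.column_stack([sun_zen, view_zen]), radiance)
print("fit R^2:", model.score(np.column_stack([sun_zen, view_zen]), radiance))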
NASA Astrophysics Data System (ADS)
Bisegna, Paolo; Caselli, Federica
2008-06-01
This paper presents a simple analytical expression for the effective complex conductivity of a periodic hexagonal arrangement of conductive circular cylinders embedded in a conductive matrix, with interfaces exhibiting a capacitive impedance. This composite material may be regarded as an idealized model of a biological tissue comprising tubular cells, such as skeletal muscle. The asymptotic homogenization method is adopted, and the corresponding local problem is solved by resorting to Weierstrass elliptic functions. The effectiveness of the present analytical result is proved by convergence analysis and comparison with finite-element solutions and existing models.
Zimmerman, Christian E.; Nielsen, Roger L.
2003-01-01
The use of strontium-to-calcium (Sr/Ca) ratios in otoliths is becoming a standard method to describe life history type and the chronology of migrations between freshwater and seawater habitats in teleosts (e.g. Kalish, 1990; Radtke et al., 1990; Secor, 1992; Rieman et al., 1994; Radtke, 1995; Limburg, 1995; Tzeng et al. 1997; Volk et al., 2000; Zimmerman, 2000; Zimmerman and Reeves, 2000, 2002). This method provides critical information concerning the relationship and ecology of species exhibiting phenotypic variation in migratory behavior (Kalish, 1990; Secor, 1999). Methods and procedures, however, vary among laboratories because a standard method or protocol for measurement of Sr in otoliths does not exist. In this note, we examine the variations in analytical conditions in an effort to increase precision of Sr/Ca measurements. From these findings we argue that precision can be maximized with higher beam current (although there is specimen damage) than previously recommended by Gunn et al. (1992).
Species-specific detection of processed animal proteins in feed by Raman spectroscopy.
Mandrile, Luisa; Amato, Giuseppina; Marchis, Daniela; Martra, Gianmario; Rossi, Andrea Mario
2017-08-15
The existing European Regulation (EC n° 51/2013) prohibits the use of animal meals in feedstuffs in order to prevent Bovine Spongiform Encephalopathy infection and diffusion; however, the legislation is rapidly moving towards a partial lifting of the "feed ban", and the competent control bodies are urged to develop suitable analytical methods able to avoid food safety incidents related to products of animal origin. The limitations of the official methods (i.e. light microscopy and Polymerase Chain Reaction) suggest exploring new analytical ways to get reliable results in a short time. The combination of spectroscopic techniques with optical microscopy allows the development of an individual-particle method able to meet both selectivity and sensitivity requirements (0.1% w/w). A spectroscopic method based on Fourier Transform micro-Raman spectroscopy coupled with Discriminant Analysis is presented here. This approach could be very useful for in-situ applications, such as customs inspections, since it drastically reduces the time and costs of analysis. Copyright © 2017. Published by Elsevier Ltd.
Assessment of technological level of stem cell research using principal component analysis.
Do Cho, Sung; Hwan Hyun, Byung; Kim, Jae Kyeom
2016-01-01
In general, technological levels have been assessed based on specialists' opinions through methods such as Delphi. But in such cases, results could be significantly biased by study design and individual experts. In this study, therefore, scientific literature and patents were selected by means of analytic indexes for a statistical approach to the technical assessment of stem cell fields. The analytic indexes, the numbers and impact indexes of scientific publications and patents, were weighted based on principal component analysis and then summed into a single value. Technological obsolescence was calculated through the cited half-life of patents issued by the United States Patent and Trademark Office and was reflected in the technological level assessment. As a result, each nation's rank with respect to technological level was rated by the proposed method. Furthermore, we were able to evaluate strengths and weaknesses thereof. Although our empirical research presents faithful results, in further study there is a need to compare existing methods with the suggested method.
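The weighting-and-summation step reads like a standard first-principal-component composite; a sketch under that reading, with made-up input numbers, could look as follows.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical indicator matrix; rows are nations, columns are the analytic
# indexes (paper count, paper impact, patent count, patent impact).
X = np.array([[1200, 8.1, 300, 5.2],
              [800, 9.4, 450, 6.1],
              [300, 5.0, 90, 3.3],
              [1500, 7.2, 520, 4.8]], dtype=float)

Z = StandardScaler().fit_transform(X)
pc1 = PCA(n_components=1).fit(Z).components_[0]
weights = np.abs(pc1) / np.abs(pc1).sum()     # PCA loadings as index weights

scores = Z @ weights                          # single summated value per nation
print("composite scores:", np.round(scores, 3))
print("ranking (best first):", list(np.argsort(scores)[::-1]))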
Developing Healthcare Data Analytics APPs with Open Data Science Tools.
Hao, Bibo; Sun, Wen; Yu, Yiqin; Xie, Guotong
2017-01-01
Recent advances in big data analytics provide more flexible, efficient, and open tools for researchers to gain insight from healthcare data. However, many tools require researchers to develop programs with programming languages like Python or R, a skill set not grasped by many researchers in the healthcare data analytics area. To make data science more approachable, we explored existing tools and developed a practice that can help data scientists convert existing analytics pipelines to user-friendly analytics APPs with rich interactions and features of real-time analysis. With this practice, data scientists can develop customized analytics pipelines as APPs in Jupyter Notebook and disseminate them to other researchers easily, and researchers can benefit from the shared notebook to perform analysis tasks or reproduce research results much more easily.
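A minimal sketch of the practice, assuming a notebook environment with ipywidgets installed; the pipeline function and widget layout are our invention, not the authors' toolchain.

import numpy as np
import ipywidgets as widgets
from IPython.display import display

def pipeline(threshold):
    # stand-in for an existing analytics pipeline over loaded healthcare data
    data = np.random.default_rng(0).normal(size=1000)
    return (data > threshold).mean()

slider = widgets.FloatSlider(value=1.0, min=-3.0, max=3.0, step=0.1,
                             description="threshold")
out = widgets.Output()

def on_change(change):
    # rerun the pipeline whenever the user moves the slider
    with out:
        out.clear_output()
        print(f"fraction above threshold: {pipeline(change['new']):.3f}")

slider.observe(on_change, names="value")
display(slider, out)   # the notebook cell now acts as a small interactive APP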
The impact of capillary backpressure on spontaneous counter-current imbibition in porous media
NASA Astrophysics Data System (ADS)
Foley, Amir Y.; Nooruddin, Hasan A.; Blunt, Martin J.
2017-09-01
We investigate the impact of capillary backpressure on spontaneous counter-current imbibition. For such displacements in strongly water-wet systems, the non-wetting phase is forced out through the inlet boundary as the wetting phase imbibes into the rock, creating a finite capillary backpressure. Under the assumption that capillary backpressure depends on the water saturation applied at the inlet boundary of the porous medium, its impact is determined using the continuum modelling approach by varying the imposed inlet saturation in the analytical solution. We present analytical solutions for the one-dimensional incompressible horizontal displacement of a non-wetting phase by a wetting phase in a porous medium. There exists an inlet saturation value above which any change in capillary backpressure has a negligible impact on the solutions. Above this threshold value, imbibition rates and front positions are largely invariant. A method for identifying this inlet saturation is proposed using an analytical procedure and we explore how varying multiphase flow properties affects the analytical solutions and this threshold saturation. We show the value of this analytical approach through the analysis of previously published experimental data.
An automated protocol for performance benchmarking a widefield fluorescence microscope.
Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T
2014-11-01
Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day to day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.
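The benchmarking logic can be sketched with synthetic numbers (all values hypothetical; a real run would use images of the reference material at increasing exposure):

import numpy as np

rng = np.random.default_rng(2)
exposure = np.linspace(1, 100, 25)                    # ms, assumed exposure series
true = np.clip(40 * exposure, 0, 3500)                # detector saturates near 3500 counts
measured = true + rng.normal(0, 25, exposure.size)    # reference-material intensities
dark_sd = 25.0                                        # noise level from dark frames

detection_threshold = 3 * dark_sd                     # conventional 3-sigma criterion
linear = measured < 0.9 * measured.max()              # crude pre-saturation mask
slope, intercept = np.polyfit(exposure[linear], measured[linear], 1)

print(f"detection threshold ~ {detection_threshold:.0f} counts")
print(f"linear range: {slope:.1f} counts/ms up to ~{measured[linear].max():.0f} counts")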
Designing stellarator coils by a modified Newton method using FOCUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao
To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
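The iteration described, a Newton step with an analytic Hessian stabilized by a modified Cholesky factorization, can be sketched generically (an illustration of the scheme, not FOCUS code):

import numpy as np

def modified_newton(grad, hess, x0, tol=1e-10, max_iter=50):
    # Newton iteration; the Hessian is shifted by tau*I until its Cholesky
    # factorization succeeds (a simple stand-in for modified Cholesky).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        tau = 0.0
        while True:
            try:
                np.linalg.cholesky(H + tau * np.eye(x.size))
                break
            except np.linalg.LinAlgError:
                tau = max(2 * tau, 1e-8)
        x = x + np.linalg.solve(H + tau * np.eye(x.size), -g)
    return x

# Toy objective f(x) = 0.5*(x0**2 - x1)**2 + 0.5*(x0 - 1)**2, minimum at (1, 1)
grad = lambda x: np.array([2*x[0]*(x[0]**2 - x[1]) + (x[0] - 1), x[1] - x[0]**2])
hess = lambda x: np.array([[6*x[0]**2 - 2*x[1] + 1, -2*x[0]], [-2*x[0], 1.0]])
print(modified_newton(grad, hess, [-1.5, 2.0]))   # converges to approx [1, 1]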
NASA Technical Reports Server (NTRS)
Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.
1989-01-01
The results of a NASA investigation of a claimed overlap between two gust response analysis methods, the Statistical Discrete Gust (SDG) method and the Power Spectral Density (PSD) method, are presented. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented for several different airplanes at several different flight conditions indicate that such an overlap does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.
Defining the Field of Existence of Shrouded Blades in High-Speed Gas Turbines
NASA Astrophysics Data System (ADS)
Belousov, Anatoliy I.; Nazdrachev, Sergeiy V.
2018-01-01
This work provides a method for determining the region of existence of shrouded blades of gas turbines for aircraft engines, based on the analytical evaluation of tensile stresses in specific characteristic sections of the blade. This region is determined by the set of values of the parameter that defines the law of distribution of cross-sectional area along the height of the airfoil. By varying seven independent parameters (gas-dynamic, structural, and strength), the best option can be chosen at the early design stage. As an example, the influence of the dimension of a turbine on the domain of existence of shrouded blades is shown.
Morbioli, Giorgio Gianini; Mazzu-Nascimento, Thiago; Milan, Luis Aparecido; Stockton, Amanda M; Carrilho, Emanuel
2017-05-02
Paper-based devices are a portable, user-friendly, and affordable technology, and are among the best analytical tools for inexpensive diagnostic devices. Three-dimensional microfluidic paper-based analytical devices (3D-μPADs) are an evolution of single-layer devices and permit effective sample dispersion, individual layer treatment, and multiplexed analytical assays. Here, we present the rational design of a wax-printed 3D-μPAD that enables more homogeneous permeation of fluids along the cellulose matrix than other existing designs in the literature. Moreover, we show the importance of the rational design of channels on these devices using glucose oxidase, peroxidase, and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) reactions. We present an alternative method for layer stacking using a magnetic apparatus, which facilitates fluidic dispersion and improves the reproducibility of tests performed on 3D-μPADs. We also provide the optimized designs for printing, facilitating further studies using 3D-μPADs.
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm
2014-01-01
Monitoring the misuse of drugs and the abuse of substances and methods potentially or evidently improving athletic performance by analytical chemistry strategies is one of the main pillars of modern anti-doping efforts. Owing to the continuously growing knowledge in medicine, pharmacology, and (bio)chemistry, new chemical entities are frequently established and developed, many of which present a temptation for sportsmen and women due to assumed/attributed beneficial effects of such substances and preparations on, for example, endurance, strength, and regeneration. By means of new technologies, expanded existing test protocols, and new insights into the metabolism, distribution, and elimination of compounds prohibited by the World Anti-Doping Agency (WADA), analytical assays have been further improved in agreement with the content of the 2013 Prohibited List. In this annual banned-substance review, literature concerning human sports drug testing that was published between October 2012 and September 2013 is summarized and reviewed, with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2013 John Wiley & Sons, Ltd.
Analytical approximations for the oscillators with anti-symmetric quadratic nonlinearity
NASA Astrophysics Data System (ADS)
Alal Hosen, Md.; Chowdhury, M. S. H.; Yeakub Ali, Mohammad; Faris Ismail, Ahmad
2017-12-01
A second-order ordinary differential equation involving an anti-symmetric quadratic nonlinearity that changes sign is considered. Oscillators with an anti-symmetric quadratic nonlinearity are assumed to oscillate differently in the positive and negative directions. For this reason, the Harmonic Balance Method (HBM) cannot be directly applied. The main purpose of the present paper is to propose an analytical approximation technique based on the HBM for obtaining approximate angular frequencies and the corresponding periodic solutions of oscillators with anti-symmetric quadratic nonlinearity. After applying the HBM, a set of complicated nonlinear algebraic equations is found. An analytical approach is not always fruitful for solving such nonlinear algebraic equations. In this article, two small parameters are found for which the power series solution produces the desired results. Moreover, the amplitude-frequency relationship has also been determined in a novel analytical way. The presented technique gives excellent results as compared with the corresponding numerical results and is better than the existing ones.
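For orientation, the flavor of the HBM result can be reproduced on the classical anti-symmetric quadratic oscillator x'' + x|x| = 0, a special case where first-order harmonic balance goes through directly; this sketch is our illustration, not the paper's two-parameter technique.

import numpy as np
from scipy.integrate import solve_ivp

# First-order HBM for x'' + x*|x| = 0: with x ~ A*cos(w*t), the first Fourier
# coefficient of cos(t)*|cos(t)| is 8/(3*pi), giving w**2 = 8*A/(3*pi).
A = 1.0
w_hbm = np.sqrt(8 * A / (3 * np.pi))

# Numerical check: integrate and measure the period from downward zero crossings
sol = solve_ivp(lambda t, y: [y[1], -y[0] * abs(y[0])], [0, 50], [A, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0, 50, 200_000)
x = sol.sol(t)[0]
down = t[np.where(np.diff(np.sign(x)) < 0)[0]]       # one crossing per period
w_num = 2 * np.pi / np.mean(np.diff(down))

print(f"HBM:       w = {w_hbm:.4f}")   # 0.9213
print(f"numerical: w = {w_num:.4f}")   # ~0.915, within about 1%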
Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods
NASA Astrophysics Data System (ADS)
Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii
2016-10-01
Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods for remotely evaluating the internal structure of a celestial body without expensive space experiments. A review of the results obtained from studies of the physical libration is presented in the report. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on methods of simulating the physical libration. As a result, estimates of the viscoelastic and dissipative properties of the lunar body and of the lunar core parameters were obtained. The core's existence was confirmed by the recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and to methods of its determination. A significant part of the report is devoted to describing the practical application of the most accurate analytical tables of lunar libration to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the basic outline of the report reflects the effectiveness of two approaches in libration theory, the numerical and the analytical solution. It is shown that the two approaches complement each other in studying the Moon in different aspects: the numerical approach provides the high accuracy of the theory necessary for adequate treatment of modern high-accuracy observations, while the analytical approach allows one to see the essence of the various manifestations in the lunar rotation and to predict and interpret new effects in observations of the physical libration [2]. [1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85-100. [2] Petrova, N., A. Zagidullin, Yu. Nefediev, Analysis of long-periodic variations of lunar libration parameters on the basis of analytical theory, The Russian-Japanese Workshop, 20-25 October, Tokyo (Mitaka) - Mizusawa, Japan, 2014.
NASA Astrophysics Data System (ADS)
Chen, Li-Chieh; Huang, Mei-Jiau
2017-02-01
A 2D simulation method for a rigid body moving in an incompressible viscous fluid is proposed. It combines one of the immersed-boundary methods, the DFFD (direct forcing fictitious domain) method, with the spectral element method; the former is employed for efficiently capturing the two-way FSI (fluid-structure interaction), while the geometric flexibility of the latter is utilized for any possibly co-existing stationary and complicated solid or flow boundary. A pseudo body force is imposed within the solid domain to enforce the rigid body motion, and a Lagrangian mesh composed of triangular elements is employed for tracing the rigid body. In particular, a so-called sub-cell scheme is proposed to smooth the discontinuity at the fluid-solid interface and to execute integrations involving Eulerian variables over the moving-solid domain. The accuracy of the proposed method is verified through the observed agreement of the simulation results for some typical flows with analytical solutions or the existing literature.
Magnuson, Matthew; Campisano, Romy; Griggs, John; Fitz-James, Schatzi; Hall, Kathy; Mapp, Latisha; Mullins, Marissa; Nichols, Tonya; Shah, Sanjiv; Silvestri, Erin; Smith, Terry; Willison, Stuart; Ernst, Hiba
2014-11-01
Catastrophic incidents can generate a large number of samples of analytically diverse types, including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for sample analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to acceptable levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illustrates the result of applying this principle, in the form of a compendium of analytical methods for contaminants of interest. The compendium is based on experience with actual incidents, where appropriate and available. This paper also discusses efforts aimed at adaptation of existing methods to increase fitness-for-purpose and development of innovative methods when necessary. The contaminants of interest are primarily those potentially released through catastrophes resulting from malicious activity. However, the same techniques discussed could also have application to catastrophes resulting from other incidents, such as natural disasters or industrial accidents. Further, the high sample throughput enabled by the techniques discussed could be employed for conventional environmental studies and compliance monitoring, potentially decreasing costs and/or increasing the quantity of data available to decision-makers. Published by Elsevier Ltd.
2010-08-01
...students conducting the data capture and data entry, an analytical method known as the Task Load Index (NASA TLX Version 2.0) was used. This method was ... published by the NASA Ames Research Center in December 2003. The entire report can be found at: http://humansystems.arc.nasa.gov/groups/TLX ... The ... completion of each task in the survey process, surveyors were required to complete a NASA TLX form to report their assessment of the workload for
Modal method for Second Harmonic Generation in nanostructures
NASA Astrophysics Data System (ADS)
Héron, S.; Pardo, F.; Bouchon, P.; Pelouard, J.-L.; Haïdar, R.
2015-05-01
Nanophotonic devices show interesting features for nonlinear response enhancement, but numerical tools are mandatory to fully determine their behaviour. To address this need, we present a numerical modal method dedicated to nonlinear optics calculations under the undepleted pump approximation. It is briefly explained in the frame of Second Harmonic Generation for both plane waves and focused beams. The nonlinear behaviour of selected nanostructures is then investigated for comparison with existing analytical results and to study the convergence of the code.
On Chaotic Behavior of Temperature Distribution in a Heat Exchanger
NASA Astrophysics Data System (ADS)
Bagyalakshmi, Morachan; Gangadharan, Saisundarakrishnan; Ganesh, Madhu
The objective of this paper is to introduce the notion of fractional derivatives in the energy equations and to study the chaotic nature of the temperature distribution in a heat exchanger with variation of temperature-dependent transport properties. The governing fractional partial differential equations are transformed to a set of recurrence relations using the fractional differential transform method and solved using the inverse transform. The approximate analytical solution obtained by the proposed method is in good agreement with existing results.
Altürk, Ahmet
2016-01-01
Mean value theorems for both derivatives and integrals are very useful tools in mathematics. They can be used to obtain very important inequalities and to prove basic theorems of mathematical analysis. In this article, a semi-analytical method based on the weighted mean-value theorem is introduced for obtaining solutions to a wide class of Fredholm integral equations of the second kind. Illustrative examples are provided to show the significant advantage of the proposed method over some existing techniques.
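As a numerical baseline against which such a semi-analytical solution can be checked, a standard Nyström discretization of a second-kind Fredholm equation takes only a few lines; the kernel and right-hand side below are chosen so the exact solution is u(x) = x, and this is a generic method, not the article's.

import numpy as np

# Solve u(x) = f(x) + lam * integral_0^1 K(x,t) u(t) dt with K(x,t) = x*t, lam = 1.
# With f(x) = 2x/3 the exact solution is u(x) = x.
nodes, wts = np.polynomial.legendre.leggauss(20)
t = 0.5 * (nodes + 1.0)            # Gauss nodes mapped from [-1,1] to [0,1]
w = 0.5 * wts

lam = 1.0
K = np.outer(t, t)                 # kernel evaluated on the node grid
f = 2.0 * t / 3.0

# Nystrom system: (I - lam * K_ij * w_j) u_i = f_i
u = np.linalg.solve(np.eye(t.size) - lam * K * w, f)
print("max error vs exact u(x)=x:", np.abs(u - t).max())   # ~1e-16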
Cho, Il-Hoon; Ku, Seockmo
2017-09-30
The development of novel and high-tech solutions for rapid, accurate, and non-laborious microbial detection methods is imperative to improve the global food supply. Such solutions have begun to address the need for microbial detection that is faster and more sensitive than existing methodologies (e.g., classic culture enrichment methods). Multiple reviews report the technical functions and structures of conventional microbial detection tools. These tools, used to detect pathogens in food and food homogenates, were designed via qualitative analysis methods. The inherent disadvantage of these analytical methods is the necessity for specimen preparation, which is a time-consuming process. While some literature describes the challenges and opportunities to overcome the technical issues related to food industry legal guidelines, there is a lack of reviews of the current trials to overcome technological limitations related to sample preparation and microbial detection via nano and micro technologies. In this review, we primarily explore current analytical technologies, including metallic and magnetic nanomaterials, optics, electrochemistry, and spectroscopy. These techniques rely on the early detection of pathogens via enhanced analytical sensitivity and specificity. In order to introduce the potential combination and comparative analysis of various advanced methods, we also reference a novel sample preparation protocol that uses microbial concentration and recovery technologies. This technology has the potential to expedite the pre-enrichment step that precedes the detection process.
Addressing unmeasured confounding in comparative observational research.
Zhang, Xiang; Faries, Douglas E; Li, Hu; Stamey, James D; Imbens, Guido W
2018-04-01
Observational pharmacoepidemiological studies can provide valuable information on the effectiveness or safety of interventions in the real world, but one major challenge is the existence of unmeasured confounder(s). While many analytical methods have been developed for dealing with this challenge, they appear under-utilized, perhaps due to the complexity and varied requirements for implementation. Thus, there is an unmet need to improve understanding the appropriate course of action to address unmeasured confounding under a variety of research scenarios. We implemented a stepwise search strategy to find articles discussing the assessment of unmeasured confounding in electronic literature databases. Identified publications were reviewed and characterized by the applicable research settings and information requirements required for implementing each method. We further used this information to develop a best practice recommendation to help guide the selection of appropriate analytical methods for assessing the potential impact of unmeasured confounding. Over 100 papers were reviewed, and 15 methods were identified. We used a flowchart to illustrate the best practice recommendation which was driven by 2 critical components: (1) availability of information on the unmeasured confounders; and (2) goals of the unmeasured confounding assessment. Key factors for implementation of each method were summarized in a checklist to provide further assistance to researchers for implementing these methods. When assessing comparative effectiveness or safety in observational research, the impact of unmeasured confounding should not be ignored. Instead, we suggest quantitatively evaluating the impact of unmeasured confounding and provided a best practice recommendation for selecting appropriate analytical methods. Copyright © 2018 John Wiley & Sons, Ltd.
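One widely used option for such a quantitative evaluation is the E-value of VanderWeele and Ding, shown here purely as an illustration of the idea (the risk ratio below is hypothetical, and we do not claim this is one of the paper's 15 catalogued methods):

import numpy as np

def e_value(rr):
    # Minimum strength of association (on the risk-ratio scale) an unmeasured
    # confounder would need with both treatment and outcome to fully explain
    # away the observed risk ratio (VanderWeele & Ding formula).
    rr = max(rr, 1.0 / rr)           # handle protective estimates symmetrically
    return rr + np.sqrt(rr * (rr - 1.0))

observed_rr = 1.8                    # hypothetical observational estimate
print(f"E-value = {e_value(observed_rr):.2f}")   # 3.00: a confounder weaker than
# RR ~ 3 with both exposure and outcome cannot fully explain the association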
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y. B.; Zhu, X. W., E-mail: xiaowuzhu1026@znufe.edu.cn; Dai, H. H.
Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and recent study has demonstrated its differences from the integral model, which then implies the necessity of using the latter model. In this paper, an analytical study is undertaken to analyze the static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M
2008-01-01
Background: Kulldorff's spatial scan statistic and its software implementation, SaTScan, are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results: We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal for identifying clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. Conclusion: The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. Method: We analyzed cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to those produced by other independent techniques, including the Empirical Bayes Smoothing and Kafadar spatial smoother methods.
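Our reading of the reliability score admits a very small sketch; the cluster indicator below is randomly generated, whereas a real analysis would take it from the fifty SaTScan runs.

import numpy as np

rng = np.random.default_rng(3)
n_runs, n_counties = 50, 3000

# in_cluster[i, j] = True if county j falls inside a significant cluster in run i
prop = rng.beta(0.5, 3.0, n_counties)              # per-county propensity (made up)
in_cluster = rng.random((n_runs, n_counties)) < prop

reliability = in_cluster.mean(axis=0)              # 0 = never flagged, 1 = always
stable = np.flatnonzero(reliability > 0.9)         # stable, homogeneous candidates
print(f"{stable.size} counties flagged in >90% of {n_runs} parameter settings")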
Analytical Fuselage and Wing Weight Estimation of Transport Aircraft
NASA Technical Reports Server (NTRS)
Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.
1996-01-01
A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and corresponding actual weights were determined.
Kinematic Determination of an Unmodeled Serial Manipulator by Means of an IMU
NASA Astrophysics Data System (ADS)
Ciarleglio, Constance A.
Kinematic determination for an unmodeled manipulator is usually done through a priori knowledge of the manipulator's physical characteristics or external sensor information. The mathematics of the kinematic estimation, often based on the Denavit-Hartenberg convention, is complex and has high computation requirements, in addition to being unique to the manipulator for which the method is developed. Analytical methods that can compute kinematics on the fly have the potential to be highly beneficial in dynamic environments where different configurations and variable manipulator types are often required. This thesis derives a new screw-theory-based method of kinematic determination, using a single inertial measurement unit (IMU), for use with any serial, revolute manipulator. The method allows the expansion of reconfigurable manipulator design and simplifies the kinematic process for existing manipulators. A simulation is presented in which the theory of the method is verified and characterized with error. The method is then implemented on an existing manipulator as a verification of functionality.
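One fragment of the idea can be sketched: with the IMU mounted distal to a single moving revolute joint, the gyroscope's mean angular-velocity direction estimates that joint's axis. This is our simplified illustration with synthetic data; the thesis's full screw-theory formulation goes well beyond this.

import numpy as np

rng = np.random.default_rng(5)
true_axis = np.array([0.0, 0.6, 0.8])                     # unit joint axis (assumed)
rates = rng.uniform(0.5, 2.0, (200, 1))                   # joint rates, rad/s
gyro = rates * true_axis + rng.normal(0, 0.01, (200, 3))  # noisy gyro samples

axis = gyro.mean(axis=0)
axis /= np.linalg.norm(axis)                              # estimated axis in IMU frame
print("estimated joint axis:", np.round(axis, 3))         # ~ [0, 0.6, 0.8]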
New method to calculate back-reflected radiance for isotropic scattering
NASA Astrophysics Data System (ADS)
Rinzema, Kees; ten Bosch, Jaap J.; Ferwerda, Hedzer A.; Hoenders, Bernhard J.
1996-04-01
We present a method to determine the back-reflected radiance from an isotropically scattering half-space with a matched boundary. The bonus of this method lies in the fact that it is capable, in principle, of handling the case of narrow beams, something which, to our knowledge, no other analytic method can do. Essentially, the method derives from a mathematical criterion that effectively forbids the existence of solutions to the transport equation which grow exponentially as one moves away from the surface and deeper into the medium. Preliminary calculations for infinitely wide beams yield results which agree well with what is found in the literature.
Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.
Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J
2017-12-01
Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
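The contrast the authors draw can be made concrete with the two textbook estimators; the study data below are hypothetical, and the improved fixed-effect-framework estimators the authors advocate (such as IVhet) are not reproduced here.

import numpy as np

effects = np.array([0.30, 0.10, 0.45, 0.22, 0.60])    # hypothetical study effects
variances = np.array([0.010, 0.020, 0.015, 0.040, 0.030])

# Inverse-variance fixed-effect pooling
w = 1.0 / variances
fe = np.sum(w * effects) / np.sum(w)

# DerSimonian-Laird random effects: estimate between-study variance tau^2
Q = np.sum(w * (effects - fe) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(effects) - 1)) / C)

w_re = 1.0 / (variances + tau2)
re = np.sum(w_re * effects) / np.sum(w_re)

print(f"fixed effect:   {fe:.3f}")
print(f"random effects: {re:.3f}  (tau^2 = {tau2:.4f})")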
A calibration method for fringe reflection technique based on the analytical phase-slope description
NASA Astrophysics Data System (ADS)
Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong
2018-05-01
The fringe reflection technique (FRT) has been one of the most popular methods for measuring the shape of specular surfaces in recent years. The existing system calibration methods for FRT usually contain two parts: camera calibration and geometric calibration. In geometric calibration, the liquid crystal display (LCD) screen position calibration is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the deduction of an analytical phase-slope description of the FRT, we present a novel calibration method with no requirement to calibrate the position of the LCD screen. Moreover, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a sphere mirror with a 5000 mm radius, the proposed calibration method achieves 2.5 times smaller measurement error than the geometric calibration method. In the wafer surface measuring experiment, the measurement result with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.
Boundary enhanced effects on the existence of quadratic solitons
NASA Astrophysics Data System (ADS)
Chen, Manna; Zhang, Ting; Li, Wenjie; Lu, Daquan; Guo, Qi; Hu, Wei
2018-05-01
We investigate, both analytically and numerically, the boundary enhanced effects exerted on quadratic solitons consisting of fundamental waves and oscillatory second harmonics in the presence of boundary conditions. The nonlocal analogy predicts that the soliton for the fundamental wave is supported by the balance between equivalent nonlinear confinement and diffraction (or dispersion). Under Snyder and Mitchell's strongly nonlocal approximation, we obtain the analytical soliton solutions both with and without the boundary conditions to show the impact of boundary conditions. We can distinguish explicitly the nonlinear confinement due to the second harmonic mutual interaction from the enhanced effects caused by remote boundaries. Those boundary enhanced effects on the existence of solitons can be positive or negative, depending on both the sample size and the nonlocal parameter. The piecewise existence regime of solitons can be explained analytically. The analytical soliton solutions are verified by the numerical ones and the discrepancy between them is also discussed.
X-ray optics simulation and beamline design for the APS upgrade
NASA Astrophysics Data System (ADS)
Shi, Xianbo; Reininger, Ruben; Harder, Ross; Haeffner, Dean
2017-08-01
The upgrade of the Advanced Photon Source (APS) to a Multi-Bend Achromat (MBA) will increase the brightness of the APS by two to three orders of magnitude. The APS Upgrade (APS-U) project includes a list of feature beamlines that will take full advantage of the new machine. Many of the existing beamlines will also be upgraded to profit from this significant machine enhancement. Optics simulations are essential in the design and optimization of these new and existing beamlines. In this contribution, the simulation tools used and developed at the APS, ranging from analytical to numerical methods, are summarized. Three general optical layouts are compared in terms of their coherence control and focusing capabilities. The concept of zoom optics, where two sets of focusing elements (e.g., CRLs and KB mirrors) are used to provide variable beam sizes at a fixed focal plane, is optimized analytically. The effects of figure errors on the vertical spot size and on the local coherence along the vertical direction of the optimized design are investigated.
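The zoom-optics idea can be illustrated with a thin-lens toy calculation; all focal lengths and distances below are invented, and a real design would solve for element pairs that keep the image plane fixed while varying the spot size.

def two_lens(s_obj, f1, f2, d):
    # Gaussian thin-lens imaging through two elements separated by d.
    s1 = 1.0 / (1.0 / f1 - 1.0 / s_obj)   # image distance after lens 1
    m1 = -s1 / s_obj
    s_obj2 = d - s1                        # object distance for lens 2
    s2 = 1.0 / (1.0 / f2 - 1.0 / s_obj2)
    m2 = -s2 / s_obj2
    return s2, m1 * m2

# Sweeping the first element's focal length trades image position against
# magnification (i.e., focused beam size); hypothetical values in meters.
for f1 in (10.0, 15.0, 20.0):
    s2, m = two_lens(s_obj=30.0, f1=f1, f2=2.0, d=40.0)
    print(f"f1 = {f1:4.1f} m -> image {s2:5.2f} m after lens 2, magnification {m:+.3f}")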
Cuddy, Michael F; Poda, Aimee R; Moser, Robert D; Weiss, Charles A; Cairns, Carolyn; Steevens, Jeffery A
2016-01-01
Nanoscale ingredients in commercial products represent a point of emerging environmental concern due to recent findings that correlate toxicity with small particle size. A weight-of-evidence (WOE) approach based upon multiple lines of evidence (LOE) is developed here to assess nanomaterials as they exist in consumer product formulations, providing a qualitative assessment regarding the presence of nanomaterials, along with a baseline estimate of nanoparticle concentration if nanomaterials do exist. Electron microscopy, analytical separations, and X-ray detection methods were used to identify and characterize nanomaterials in sunscreen formulations. The WOE/LOE approach as applied to four commercial sunscreen products indicated that all four contained at least 10% dispersed primary particles having at least one dimension <100 nm in size. Analytical measurements confirmed that these constituents were composed of zinc oxide (ZnO) or titanium dioxide (TiO2). The screening approaches developed herein offer a streamlined, facile means to identify potentially hazardous nanomaterial constituents with minimal abrasive processing of the raw material.
Cheng, Jian; Deriche, Rachid; Jiang, Tianzi; Shen, Dinggang; Yap, Pew-Thian
2014-11-01
Spherical Deconvolution (SD) is commonly used for estimating fiber Orientation Distribution Functions (fODFs) from diffusion-weighted signals. Existing SD methods can be classified into two categories: 1) Continuous Representation based SD (CR-SD), where typically a Spherical Harmonic (SH) representation is used for convenient analytical solutions, and 2) Discrete Representation based SD (DR-SD), where the signal profile is represented by a discrete set of basis functions uniformly oriented on the unit sphere. A feasible fODF should be non-negative and should integrate to unity over the unit sphere S(2). However, to our knowledge, most existing SH-based SD methods enforce non-negativity only on discretized points and not on the whole continuum of S(2). Maximum Entropy SD (MESD) and Cartesian Tensor Fiber Orientation Distributions (CT-FOD) are the only SD methods that ensure non-negativity throughout the unit sphere. They are, however, computationally intensive and susceptible to errors caused by numerical spherical integration. Existing SD methods are also known to overestimate the number of fiber directions, especially in regions with low anisotropy. DR-SD introduces additional error in peak detection owing to the angular discretization of the unit sphere. This paper proposes an SD framework, called Non-Negative SD (NNSD), to overcome all the limitations above. NNSD is significantly less susceptible to false-positive peaks, uses the SH representation for efficient analytical spherical deconvolution, and allows accurate peak detection over the whole unit sphere. We further show that NNSD and most existing SD methods can be extended to work on multi-shell data by introducing a three-dimensional fiber response function. We evaluated NNSD in comparison with Constrained SD (CSD), a quadratic programming variant of CSD, MESD, and an L1-norm regularized non-negative least-squares DR-SD. Experiments on synthetic and real single-/multi-shell data indicate that NNSD improves estimation performance in terms of mean difference of angles, peak detection consistency, and anisotropy contrast between isotropic and anisotropic regions. Copyright © 2014 Elsevier Inc. All rights reserved.
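To make the DR-SD category above concrete, deconvolution over a discrete set of directions with non-negativity can be posed as a non-negative least-squares problem. The sketch below assumes a generic tensor-like single-fiber response and hypothetical sampling sizes; it illustrates plain DR-SD, not the NNSD framework, which instead enforces non-negativity over the whole continuum of S(2).

```python
import numpy as np
from scipy.optimize import nnls

def fiber_response(u, v, b=3000.0, lam_par=1.7e-3, lam_perp=0.2e-3):
    """Assumed single-fiber signal for gradient direction u and fiber direction v."""
    c = u @ v
    return np.exp(-b * (lam_perp + (lam_par - lam_perp) * c**2))

rng = np.random.default_rng(0)
grads = rng.normal(size=(64, 3)); grads /= np.linalg.norm(grads, axis=1, keepdims=True)
dirs = rng.normal(size=(321, 3)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Dictionary of single-fiber responses; columns index candidate fODF directions.
A = np.array([[fiber_response(u, v) for v in dirs] for u in grads])
signal = A[:, 0]                 # synthetic voxel with one fiber along dirs[0]
fodf, _ = nnls(A, signal)        # non-negativity enforced on discrete points only
```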
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; Maceachren, Alan M
2008-11-07
Kulldorff's spatial scan statistic and its software implementation - SaTScan - are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents composed of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is optimal for identifying clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach for proceeding through the analysis of SaTScan results. The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit.
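The reliability score described above reduces, in its simplest form, to counting how often each location falls inside a significant cluster across the parameter sweep. A minimal sketch, assuming each SaTScan run has been reduced to a boolean membership vector over counties; the array shapes and the 0.9 stability threshold are hypothetical.

```python
import numpy as np

# membership[r, c] = True if county c fell in a significant cluster in run r.
n_runs, n_counties = 50, 3109
rng = np.random.default_rng(1)
membership = rng.random((n_runs, n_counties)) < 0.3   # stand-in for 50 SaTScan runs

reliability = membership.mean(axis=0)   # fraction of runs including each county
stable = reliability > 0.9              # counties robust to scaling-parameter choices
print(f"{stable.sum()} counties lie in stable clusters")
```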
NASA Astrophysics Data System (ADS)
Miquel, Benjamin
The dynamic or seismic behavior of hydraulic structures is, as for conventional structures, essential to assure the protection of human lives. These analyses also aim at limiting structural damage caused by an earthquake to prevent rupture or collapse of the structure. The particularity of hydraulic structures is that internal displacements are caused not only by the earthquake but also by the hydrodynamic loads resulting from fluid-structure interaction. This thesis reviews the existing complex and simplified methods for performing such dynamic analyses of hydraulic structures. For the complex existing methods, attention is placed on the difficulties arising from their use. In particular, this work examines the use of transmitting boundary conditions to simulate the semi-infinity of reservoirs. A procedure has been developed to estimate the error that these boundary conditions can introduce in finite element dynamic analysis. Depending on their formulation and location, we showed that they can considerably affect the response of such fluid-structure systems. For practical engineering applications, simplified procedures are still needed to evaluate the dynamic behavior of structures in contact with water. A review of the existing simplified procedures showed that these methods are based on numerous simplifications that can affect the prediction of the dynamic behavior of such systems. One of the main objectives of this thesis has been to develop new simplified methods that are more accurate than the existing ones. First, a new spectral analysis method has been proposed. Expressions for the fundamental frequency of fluid-structure systems, the key parameter of spectral analysis, have been developed. We show that this new technique can easily be implemented in a spreadsheet or program, and that its calculation time is nearly instantaneous. When compared to more complex analytical or numerical methods, this new procedure yields excellent predictions of the dynamic behavior of fluid-structure systems. Spectral analyses ignore the transient and oscillatory nature of vibrations. When such dynamic analyses show that some areas of the studied structure undergo excessive stresses, time history analyses allow a better estimate of the extent of these zones as well as of the timing of these excessive stresses. Furthermore, the existing spectral analysis methods for fluid-structure systems account only for the static effect of higher modes. Though this can generally be sufficient for dams, for flexible structures the dynamic effect of these modes should be accounted for. New methods have been developed for fluid-structure systems to account for these observations as well as for the flexibility of foundations. A first method was developed to study structures in contact with one or two finite or infinite water domains. This new technique includes the flexibility of structures and foundations as well as the dynamic effect of higher vibration modes and variations of the levels of the water domains. This method was then extended to study beam structures in contact with fluids. These new developments have also allowed extending existing analytical formulations of the dynamic properties of a dry beam to a new formulation that includes the effect of fluid-structure interaction. The method yields a very good estimate of the dynamic behavior of beam-fluid systems or beam-like structures in contact with fluid.
Finally, a Modified Accelerogram Method (MAM) has been developed to transform the design earthquake into a new accelerogram that directly accounts for the effect of fluid-structure interaction. This new accelerogram can therefore be applied directly to the dry structure (i.e. without water) in order to calculate the dynamic response of the fluid-structure system. This original technique can include the numerous parameters that influence the dynamic response of such systems and makes it possible to treat the fluid-structure interaction analytically while keeping the advantages of finite element modeling.
ERIC Educational Resources Information Center
Lopez Flores, Emily
2014-01-01
Research has been conducted to identify and analyze how schools are determining that the activities of their Professional Learning Community (PLC) are directly tied to student achievement as there is currently a gap in the existing literature with regards to this topic. For the purpose of this study, a "successful" PLC was defined as one…
Public Policy on the Status of Women: Agenda and Strategy for the 70s.
ERIC Educational Resources Information Center
Murphy, Irene L.
The book, using analytical methods of political science, provides an initial overall study of the formation of national policy on the status of women. It also focuses on factors most likely to influence the future course of the women's rights movement. Concentration is on existing policy from the end of the Johnson presidency on into the women's…
Atkinson, David A.
2002-01-01
Methods and apparatus for ion mobility spectrometry and an analyte detection and identification verification system are disclosed. The apparatus is configured to be used in an ion mobility spectrometer and includes a plurality of reactant reservoirs configured to contain a plurality of reactants which can be reacted with the sample to form adducts having varying ion mobilities. A carrier fluid, such as air or nitrogen, is used to carry the sample into the spectrometer. The plurality of reactants are configured to be selectively added to the carrier stream by use of inlet and outlet manifolds in communication with the reagent reservoirs, the reservoirs being selectively isolatable by valves. The invention further includes a spectrometer having the reagent system described. In the method, a first reactant is used with the sample. Following a positive result, a second reactant is used to determine whether a predicted response occurs. The occurrence of the second predicted response tends to verify the existence of a component of interest within the sample. A third reactant can also be used to provide further verification of the existence of a component of interest. A library can be established of known responses of compounds of interest with various reactants, and the results of a specific multi-reactant survey of a sample can be compared against the library to determine whether a component detected in the sample is likely to be a specific component of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.; Miki, K.; Uzawa, K.
2006-11-30
During the past years the understanding of multi-scale interaction problems has increased significantly. However, at present there exists a plethora of different analytical models for investigating multi-scale interactions, and hardly any specific comparisons have been performed among these models. In this work, two different models for the generation of zonal flows from ion-temperature-gradient (ITG) background turbulence are discussed and compared. The methods used are the coherent mode coupling model and the wave kinetic equation (WKE) model. It is shown that the two models give qualitatively the same results, even though the assumption on the spectral difference is used in the WKE approach.
Pressure and wall shear stress in blood hammer - Analytical theory.
Mei, Chiang C; Jing, Haixiao
2016-10-01
We describe an analytical theory of blood hammer in a long and stiffened artery due to sudden blockage. Based on the model of a viscous fluid in laminar flow, we derive explicit expressions of oscillatory pressure and wall shear stress. To examine the effects on local plaque formation we also allow the blood vessel radius to be slightly nonuniform. Without resorting to discrete computation, the asymptotic method of multiple scales is utilized to deal with the sharp contrast of time scales. The effects of plaque and blocking time on blood pressure and wall shear stress are studied. The theory is validated by comparison with existing water hammer experiments. Copyright © 2016. Published by Elsevier Inc.
Numerical studies of the Bethe-Salpeter equation for a two-fermion bound state
NASA Astrophysics Data System (ADS)
de Paula, W.; Frederico, T.; Salmè, G.; Viviani, M.
2018-03-01
Some recent advances on the solution of the Bethe-Salpeter equation (BSE) for a two-fermion bound system directly in Minkowski space are presented. The calculations are based on the expression of the Bethe-Salpeter amplitude in terms of the so-called Nakanishi integral representation and on the light-front projection (i.e. the integration over the light-front variable k^- = k^0 - k^3). The latter technique allows for the analytically exact treatment of the singularities plaguing the two-fermion BSE in Minkowski space. The good agreement observed between our results and those obtained using other existing numerical methods, based on both Minkowski and Euclidean space techniques, fully corroborates our analytical treatment.
Prediction of light aircraft interior noise
NASA Technical Reports Server (NTRS)
Howlett, J. T.; Morales, D. A.
1976-01-01
At the present time, predictions of aircraft interior noise depend heavily on empirical correction factors derived from previous flight measurements. However, to design for acceptable interior noise levels and to optimize acoustic treatments, analytical techniques which do not depend on empirical data are needed. This paper describes a computerized interior noise prediction method for light aircraft. An existing analytical program (developed for commercial jets by Cockburn and Jolly in 1968) forms the basis of some modal analysis work which is described. The accuracy of this modal analysis technique for predicting low-frequency coupled acoustic-structural natural frequencies is discussed along with trends indicating the effects of varying parameters such as fuselage length and diameter, structural stiffness, and interior acoustic absorption.
From empirical data to time-inhomogeneous continuous Markov processes.
Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G
2016-03-01
We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridge between rigorous mathematical results on the existence of generators and their computational implementation is presented. Our detection algorithm proves effective in more than 60% of tested matrices, typically 80% to 90%, and for those cases an estimate of the (non-homogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, possible applications of our framework to problems in different fields are briefly addressed.
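For the time-homogeneous baseline that the abstract generalizes, a generator test can be run by checking whether the matrix logarithm of the transition matrix satisfies the generator conditions (zero row sums, non-negative off-diagonal entries). A minimal sketch using only the principal branch of the logarithm, which is a simplification of the full embedding problem; all names are hypothetical.

```python
import numpy as np
from scipy.linalg import logm

def has_valid_generator(P, tol=1e-8):
    """Test whether stochastic matrix P admits a Markov generator Q = log(P).

    A valid time-homogeneous generator has rows summing to zero and
    non-negative off-diagonal entries. Only the principal logarithm is
    checked here; the general embedding problem requires more care.
    """
    Q = logm(P).real
    rows_ok = np.allclose(Q.sum(axis=1), 0.0, atol=tol)
    offdiag_ok = all(Q[i, j] >= -tol
                     for i in range(len(Q))
                     for j in range(len(Q)) if i != j)
    return rows_ok and offdiag_ok, Q

P = np.array([[0.9, 0.1], [0.2, 0.8]])
ok, Q = has_valid_generator(P)
print(ok)   # True: this 2x2 chain is embeddable
```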
Leung, Elvis M K; Tang, Phyllis N Y; Ye, Yuran; Chan, Wan
2013-10-16
2-Alkylcyclobutanones (2-ACBs) have long been considered unique radiolytic products that can be used as indicators for irradiated food identification. A recent report on the natural existence of 2-ACB in non-irradiated nutmeg and cashew nut samples aroused worldwide concern because it contradicts the general belief that 2-ACBs are specific to irradiated food. The goal of this study is to test for the natural existence of 2-ACBs in nut samples using our newly developed liquid chromatography-tandem mass spectrometry (LC-MS/MS) method with enhanced analytical sensitivity and selectivity (Ye, Y.; Liu, H.; Horvatovich, P.; Chan, W. Liquid chromatography-electrospray ionization tandem mass spectrometric analysis of 2-alkylcyclobutanones in irradiated chicken by precolumn derivatization with hydroxylamine. J. Agric. Food Chem. 2013, 61, 5758-5763). The validated method was applied to identify 2-dodecylcyclobutanone (2-DCB) and 2-tetradecylcyclobutanone (2-TCB) in nutmeg, cashew nut, pine nut, and apricot kernel samples (n = 22) of different origins. Our study reveals that 2-DCB and 2-TCB either do not exist naturally or exist at concentrations below the detection limit of the existing method. Thus, 2-DCB and 2-TCB remain valid biomarkers for identifying irradiated food.
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using a data set recorded from an analyte concentration determination experiment using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
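The augmentation step can be sketched as follows: selected spectral signals stand in for the unknown component concentrations, and calibration proceeds as ordinary CLS on the augmented matrix. A minimal sketch with synthetic data; the channel indices and all names are hypothetical stand-ins, not the paper's procedure in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_channels = 30, 200
C_known = rng.random((n_samples, 2))           # known analyte concentrations
K_true = rng.random((4, n_channels))           # 4 pure-component spectra (2 unknown)
C_all = np.hstack([C_known, rng.random((n_samples, 2))])
A = C_all @ K_true + 0.01 * rng.normal(size=(n_samples, n_channels))

# Augment the known concentrations with selected spectral signals (chosen for
# low correlation with the analytes) as substitutes for the unknown components.
aug = A[:, [10, 50]]                           # hypothetical selected channels
C_aug = np.hstack([C_known, aug])

K_hat, *_ = np.linalg.lstsq(C_aug, A, rcond=None)   # CLS calibration step
C_pred = A @ np.linalg.pinv(K_hat)                  # predicted (augmented) concentrations
```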
Reed, M S; Podesta, G; Fazey, I; Geeson, N; Hessel, R; Hubacek, K; Letson, D; Nainggolan, D; Prell, C; Rickenbach, M G; Ritsema, C; Schwilch, G; Stringer, L C; Thomas, A D
2013-10-01
Experts working on behalf of international development organisations need better tools to assist land managers in developing countries in maintaining their livelihoods, as climate change puts pressure on the ecosystem services that they depend upon. However, current understanding of livelihood vulnerability to climate change is based on a fractured and disparate set of theories and methods. This review therefore combines theoretical insights from sustainable livelihoods analysis with other analytical frameworks (including the ecosystem services framework, diffusion theory, social learning, adaptive management and transitions management) to assess the vulnerability of rural livelihoods to climate change. This integrated analytical framework helps diagnose vulnerability to climate change, whilst identifying and comparing adaptation options that could reduce vulnerability, following four broad steps: i) determine likely level of exposure to climate change, and how climate change might interact with existing stresses and other future drivers of change; ii) determine the sensitivity of stocks of capital assets and flows of ecosystem services to climate change; iii) identify factors influencing decisions to develop and/or adopt different adaptation strategies, based on innovation or the use/substitution of existing assets; and iv) identify and evaluate potential trade-offs between adaptation options. The paper concludes by identifying interdisciplinary research needs for assessing the vulnerability of livelihoods to climate change.
Mutually unbiased bases and semi-definite programming
NASA Astrophysics Data System (ADS)
Brierley, Stephen; Weigert, Stefan
2010-11-01
A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Gröbner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.
Nonlinear analysis for dual-frequency concurrent energy harvesting
NASA Astrophysics Data System (ADS)
Yan, Zhimiao; Lei, Hong; Tan, Ting; Sun, Weipeng; Huang, Wenhu
2018-05-01
The dual-frequency responses of a hybrid energy harvester undergoing base excitation and galloping were analyzed numerically. In this work, an approximate dual-frequency analytical method is proposed for the nonlinear analysis of such a system. To obtain the approximate analytical solutions of the fully coupled distributed-parameter model, the forcing interactions are first neglected. Then, the electromechanically decoupled governing equation is developed using the equivalent structure method. The hybrid mechanical response is finally separated into self-excited and forced responses for deriving the analytical solutions, which are confirmed by numerical simulations of the fully coupled model. The forced response has a great impact on the self-excited response. The boundary of the Hopf bifurcation is analytically determined by the onset wind speed of galloping, which increases linearly with the electrical damping. A quenching phenomenon appears when the increasing base excitation suppresses the galloping. The theoretical quenching boundary depends on the forced mode velocity. The quenching region increases with the base acceleration and electrical damping, but decreases with the wind speed. Unlike the base-excitation-alone case, the presence of the aerodynamic force protects the hybrid energy harvester at resonance from damage caused by excessively large displacement. From the viewpoint of harvested power, the hybrid system surpasses both the base-excitation-alone system and the galloping-alone system. This study advances our knowledge of the intrinsic nonlinear dynamics of dual-frequency energy harvesting systems by taking advantage of the analytical solutions.
Robust volcano plot: identification of differential metabolites in the presence of outliers.
Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro
2018-04-11
The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel-weight-based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites, whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites. Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to the analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano.
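For orientation, the conventional (outlier-sensitive) volcano plot that the proposed method robustifies combines a per-metabolite fold change with a significance test. A minimal sketch with synthetic data and hypothetical cutoffs; the kernel-weighting step of the actual method is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
control = rng.lognormal(0.0, 0.3, size=(20, 100))   # 20 samples x 100 metabolites
case = control * rng.lognormal(0.1, 0.3, size=(20, 100))

log2_fc = np.log2(case.mean(axis=0) / control.mean(axis=0))
pvals = stats.ttest_ind(case, control, axis=0).pvalue

# Flag metabolites beyond hypothetical fold-change and significance cutoffs.
differential = (np.abs(log2_fc) > 1.0) & (pvals < 0.05)
print(differential.sum(), "differential metabolites")
```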
Aptamer-Based Analysis: A Promising Alternative for Food Safety Control
Amaya-González, Sonia; de-los-Santos-Álvarez, Noemí; Miranda-Ordieres, Arturo J.; Lobo-Castañón, Maria Jesús
2013-01-01
Ensuring food safety is nowadays a top priority of authorities and professional players in the food supply chain. One of the key challenges to determine the safety of food and guarantee a high level of consumer protection is the availability of fast, sensitive and reliable analytical methods to identify specific hazards associated to food before they become a health problem. The limitations of existing methods have encouraged the development of new technologies, among them biosensors. Success in biosensor design depends largely on the development of novel receptors with enhanced affinity to the target, while being stable and economical. Aptamers fulfill these characteristics, and thus have surfaced as promising alternatives to natural receptors. This Review describes analytical strategies developed so far using aptamers for the control of pathogens, allergens, adulterants, toxins and other forbidden contaminants to ensure food safety. The main progresses to date are presented, highlighting potential prospects for the future. PMID:24287543
Querying and Extracting Timeline Information from Road Traffic Sensor Data
Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen
2016-01-01
The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system—a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset. PMID:27563900
ERIC Educational Resources Information Center
Buerck, John P.; Mudigonda, Srikanth P.
2014-01-01
Academic analytics and learning analytics have been increasingly adopted by academic institutions of higher learning for improving student performance and retention. While several studies have reported the implementation details and the successes of specific analytics initiatives, relatively fewer studies exist in literature that describe the…
NASA Technical Reports Server (NTRS)
Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.
1989-01-01
This paper presents the results of a NASA investigation of a claimed 'Overlap' between two gust response analysis methods: the Statistical Discrete Gust (SDG) method and the Power Spectral Density (PSD) method. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented in this paper for several different airplanes at several different flight conditions indicate that such an 'Overlap' does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.
Paradigms for machine learning
NASA Technical Reports Server (NTRS)
Schlimmer, Jeffrey C.; Langley, Pat
1991-01-01
Five paradigms are described for machine learning: connectionist (neural network) methods, genetic algorithms and classifier systems, empirical methods for inducing rules and decision trees, analytic learning methods, and case-based approaches. Some dimensions along which these paradigms vary in their approach to learning are considered, and the basic methods used within each framework are reviewed, together with open research issues. It is argued that the similarities among the paradigms are more important than their differences, and that future work should attempt to bridge the existing boundaries. Finally, some recent developments in the field of machine learning are discussed, and their impact on both research and applications is examined.
NASA Astrophysics Data System (ADS)
Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.
We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not considered in the training set during model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.
Realistic Analytical Polyhedral MRI Phantoms
Ngo, Tri M.; Fung, George S. K.; Han, Shuo; Chen, Min; Prince, Jerry L.; Tsui, Benjamin M. W.; McVeigh, Elliot R.; Herzka, Daniel A.
2015-01-01
Purpose: Analytical phantoms have closed form Fourier transform expressions and are used to simulate MRI acquisitions. Existing 3D analytical phantoms are unable to accurately model shapes of biomedical interest. It is demonstrated that polyhedral analytical phantoms have closed form Fourier transform expressions and can accurately represent 3D biomedical shapes. Theory: The derivations of the Fourier transform of a polygon and polyhedron are presented. Methods: The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D/2D MRI acquisitions was described. Results: Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D/2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high frequency edge aliasing as compared to equivalent voxelized/rasterized phantoms. Conclusion: Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. PMID:26479724
The PAC-MAN model: Benchmark case for linear acoustics in computational physics
NASA Astrophysics Data System (ADS)
Ziegelwanger, Harald; Reiter, Paul
2017-10-01
Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well-known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to that of a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufmann, Ralph M., E-mail: rkaufman@math.purdue.edu; Khlebnikov, Sergei, E-mail: skhleb@physics.purdue.edu; Wehefritz-Kaufmann, Birgit, E-mail: ebkaufma@math.purdue.edu
2012-11-15
Motivated by the Double Gyroid nanowire network, we develop methods to detect Dirac points and classify level crossings, also known as singularities, in the spectrum of a family of Hamiltonians. The approach we use is singularity theory. Using this language, we obtain a characterization of Dirac points and also show that the branching behavior of the level crossings is given by an unfolding of A_n type singularities. Which type of singularity occurs can be read off a characteristic region inside the miniversal unfolding of an A_k singularity. We then apply these methods in the setting of families of graph Hamiltonians, such as those for wire networks. In the particular case of the Double Gyroid, we analytically classify its singularities and show that it has Dirac points. This indicates that nanowire systems of this type should have very special physical properties. Highlights: • New method for analytically finding Dirac points. • Novel relation of level crossings to singularity theory. • More precise version of the von Neumann-Wigner theorem for arbitrary smooth families of Hamiltonians of fixed size. • Analytical proof of the existence of Dirac points for the Gyroid wire network.
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
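A simplified version of the greedy selection loop can be sketched as below. Here the NAS scalar is approximated by projecting each spectrum onto a least-squares regression vector, a stand-in for the paper's projection-matrix construction; all names and data are hypothetical.

```python
import numpy as np

def nas_scalars(X, b):
    """Scalar net-analyte-signal values via a regression-vector projection.

    Simplified: the length of each spectrum's component along b plays the
    role of the NAS scalar from the projection-matrix formulation.
    """
    return (X @ b) / np.linalg.norm(b)

def select_representative(scalars_selected, scalars_candidates, n_add):
    """Greedily add candidates farthest (in NAS scalar) from the selected set."""
    selected = list(scalars_selected)
    cands = dict(enumerate(scalars_candidates))
    order = []
    for _ in range(n_add):
        idx = max(cands, key=lambda i: min(abs(cands[i] - s) for s in selected))
        order.append(idx)
        selected.append(cands.pop(idx))
    return order

rng = np.random.default_rng(4)
X_cal, y_cal = rng.random((20, 50)), rng.random(20)   # measured calibration set
X_new = rng.random((100, 50))                         # unmeasured candidates
b, *_ = np.linalg.lstsq(X_cal, y_cal, rcond=None)
picks = select_representative(nas_scalars(X_cal, b), nas_scalars(X_new, b), n_add=5)
```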
Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model
NASA Astrophysics Data System (ADS)
Ovsyannikov, I. I.; Turaev, D. V.
2017-01-01
We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.
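The Shimizu-Morioka form of the Lorenz model mentioned above can also be explored numerically alongside the analytic results. A minimal sketch assuming the commonly quoted equations dx/dt = y, dy/dt = x - λy - xz, dz/dt = -αz + x², with illustrative parameter values that are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def shimizu_morioka(t, s, lam=0.85, alpha=0.5):
    # Commonly quoted form of the model; parameter values are illustrative only.
    x, y, z = s
    return [y, x - lam * y - x * z, -alpha * z + x * x]

sol = solve_ivp(shimizu_morioka, (0.0, 200.0), [0.1, 0.1, 0.1],
                max_step=0.01)
x, y, z = sol.y   # trajectory for inspection/plotting
```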
NASA Astrophysics Data System (ADS)
Tuckness, D. G.; Jost, B.
1995-08-01
Current knowledge of the lunar gravity field is presented. The various methods used in determining these gravity fields are investigated and analyzed. It is shown that weaknesses exist in the current models of the lunar gravity field. The dominant part of this weakness is caused by the lack of lunar tracking data (farside, polar regions), which makes modeling the total lunar potential difficult. Comparisons of the various lunar models reveal agreement in the low-order coefficients of the Legendre polynomial expansions. However, substantial differences between the models can exist in the higher-order harmonics. The main purpose of this study is to assess today's lunar gravity field models for use in tomorrow's lunar mission designs and operations.
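For context, the zonal part of such a Legendre-polynomial expansion of the gravitational potential can be evaluated directly. A minimal sketch; the GM, reference radius and J coefficients below are illustrative placeholders, not values from any actual lunar gravity model.

```python
import numpy as np
from scipy.special import eval_legendre

def zonal_potential(r, lat, gm, r_ref, J):
    """Zonal-harmonic gravitational potential U(r, lat).

    U = (GM/r) * (1 - sum_n J_n (R/r)^n P_n(sin lat)); J maps degree -> J_n.
    """
    s = np.sin(lat)
    series = sum(Jn * (r_ref / r) ** n * eval_legendre(n, s)
                 for n, Jn in J.items())
    return gm / r * (1.0 - series)

# Illustrative (not real) low-degree zonal coefficients.
U = zonal_potential(r=1.8e6, lat=np.deg2rad(30.0),
                    gm=4.9028e12, r_ref=1.7374e6,
                    J={2: 2.0e-4, 3: 8.0e-6})
```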
A novel finite element analysis of three-dimensional circular crack
NASA Astrophysics Data System (ADS)
Ping, X. C.; Wang, C. G.; Cheng, L. P.
2018-06-01
A novel singular element containing a part of the circular crack front is established to solve the singular stress fields of circular cracks by using numerical series eigensolutions of the singular stress fields. The element is derived from the Hellinger-Reissner variational principle and can be directly incorporated into existing 3D brick elements. The singular stress fields are determined as system unknowns appearing as displacement nodal values. Numerical studies are conducted to demonstrate the simplicity of the proposed technique in handling fracture problems of circular cracks. The use of the novel singular element avoids mesh refinement near the crack front without loss of calculation accuracy or speed of convergence. Compared with conventional finite element methods and existing analytical methods, the present method is more suitable for dealing with complicated structures with a large number of elements.
Enantioresolution of (RS)-baclofen by liquid chromatography: A review.
Batra, Sonika; Bhushan, Ravi
2017-01-01
Baclofen is a commonly used racemic drug and has a simple chemical structure in terms of the presence of only one stereogenic center. Since the desirable pharmacological effect resides in only one enantiomer, several possibilities exist for the other enantiomer when evaluating the disposition of the racemic mixture of the drug. This calls for the development of enantioselective analytical methodology. This review summarizes and evaluates different methods of enantioseparation of (RS)-baclofen using both direct and indirect approaches, applying certain chiral reagents and chiral stationary phases (though the latter are very expensive). Methods for separating diastereomers of (RS)-baclofen, prepared with different chiral derivatizing reagents (easily and quickly under microwave irradiation), on reversed-phase achiral columns or via a ligand exchange approach, providing high-sensitivity detection by the relatively inexpensive methods of TLC and HPLC, are discussed. The methods may be helpful for the determination of enantiomers in biological samples and in pharmaceutical formulations for control of enantiomeric purity, and can be practiced both in analytical laboratories and in industry for routine analysis and R&D activities. Copyright © 2016 John Wiley & Sons, Ltd.
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-24
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method can effectively avoid the friction torque and additional inertial moment present in conventional approaches. A curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with the conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy can be improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators.
Hanson, Jeffery A; Yang, Haw
2008-11-06
The statistical properties of the cross correlation between two time series have been studied. An analytical expression for the variance of the cross correlation function has been derived. On the basis of these results, a statistically robust method has been proposed to detect the existence and determine the direction of cross correlation between two time series. The proposed method has been characterized by computer simulations. Applications to single-molecule fluorescence spectroscopy are discussed. The results may also find immediate applications in fluorescence correlation spectroscopy (FCS) and its variants.
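Such a detection test is often operationalized by comparing the sample cross-correlation at each lag with approximate white-noise confidence bounds. A minimal sketch using the standard ±1.96/√N large-sample bound rather than the variance expression derived in the paper; all names and data are synthetic.

```python
import numpy as np

def crosscorr(a, b, max_lag):
    """Normalized sample cross-correlation of two series for lags 0..max_lag."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    return np.array([np.dot(a[:n - k], b[k:]) / n for k in range(max_lag + 1)])

rng = np.random.default_rng(5)
x = rng.normal(size=2000)
y = np.roll(x, 3) + 0.5 * rng.normal(size=2000)   # y lags x by 3 samples

r = crosscorr(x, y, max_lag=10)
bound = 1.96 / np.sqrt(len(x))                    # approximate 95% bound
lags = np.flatnonzero(np.abs(r) > bound)          # detected significant lags
print(lags)                                       # expected to include lag 3
```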
[Determination of 10-HDA in honeybee body by HPLC].
Fan, H; He, C; Han, H
1999-05-01
In the present work we found that the honeybee body contains an unsaturated fatty acid, trans-10-hydroxy-2-decenoic acid (10-HDA), which was previously known only to be present in royal jelly. We established an HPLC method for the analysis of 10-HDA in the honeybee body and simplified the 10-HDA extraction method. Under optimal conditions, the linear range of detection was 10-1,000 ng, the correlation coefficient was 0.9998, the recovery was 96.5%-99.2%, and the detection limit was 0.53 microgram/g.
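The quoted linearity and recovery figures come from a standard external-calibration workflow, which can be sketched as below; the numbers are synthetic stand-ins in the stated 10-1,000 ng range, not the paper's data.

```python
import numpy as np

# Synthetic calibration standards across the stated 10-1000 ng range.
amount = np.array([10, 50, 100, 250, 500, 1000], dtype=float)   # ng injected
rng = np.random.default_rng(6)
area = 3.2 * amount + 5.0 + rng.normal(0, 4, size=amount.size)  # detector response

slope, intercept = np.polyfit(amount, area, 1)    # linear calibration fit
r = np.corrcoef(amount, area)[0, 1]               # correlation coefficient

spiked, measured = 200.0, 3.2 * 200 + 5.0 - 2.0   # hypothetical spike-recovery test
recovery = 100 * ((measured - intercept) / slope) / spiked
print(f"r = {r:.4f}, recovery = {recovery:.1f}%")
```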
2013-01-01
Background: Healthcare delivery is largely accomplished in and through conversations between people, and healthcare quality and effectiveness depend enormously upon the communication practices employed within these conversations. An important body of evidence about these practices has been generated by conversation analysis and related discourse analytic approaches, but there has been very little systematic reviewing of this evidence. Methods: We developed an approach to reviewing evidence from conversation analytic and related discursive research through the following procedures: i) reviewing existing systematic review methods and our own prior experience of applying these; ii) clarifying distinctive features of conversation analytic and related discursive work which must be taken into account when reviewing; iii) holding discussions within a review advisory team that included members with expertise in healthcare research, conversation analytic research, and systematic reviewing; and iv) attempting and then refining procedures through conducting an actual review which examined evidence about how people talk about difficult future issues including illness progression and dying. Results: We produced a step-by-step guide which we describe here in terms of eight stages, and which we illustrate from our 'Review of Future Talk'. The guide incorporates both established procedures for systematic reviewing, and new techniques designed for working with conversation analytic evidence. Conclusions: The guide is designed to inform systematic reviews of conversation analytic and related discursive evidence on specific domains and topics. Whilst we designed it for reviews that aim at informing healthcare practice and policy, it is flexible and could be used for reviews with other aims, for instance those aiming to underpin research programmes and projects. We advocate systematically reviewing conversation analytic and related discursive findings using this approach in order to translate them into a form that is credible and useful to healthcare practitioners, educators and policy-makers. PMID:23721181
Charles H. Luce; Daniele Tonina; Frank Gariglio; Ralph Applebee
2013-01-01
Work over the last decade has documented methods for estimating fluxes between streams and streambeds from time series of temperature at two depths in the streambed. We present substantial extension to the existing theory and practice of using temperature time series to estimate streambed water fluxes and thermal properties, including (1) a new explicit analytical...
Predoi, Mihai Valentin
2014-09-01
The dispersion curves for hollow multilayered cylinders are prerequisites for any practical guided wave application on such structures. The equations for homogeneous isotropic materials were established more than 120 years ago. The difficulties in finding numerical solutions to the analytic expressions remain considerable, especially if the materials are orthotropic and visco-elastic, as in the composites used for pipes in recent decades. Among other numerical techniques, the semi-analytical finite element method has proven its capability of solving this problem. Two possibilities exist for modeling the finite element eigenvalue problem: a two-dimensional cross-section model of the pipe, or a radial segment model intersecting the layers between the inner and the outer radius of the pipe. The latter is adopted here, and distinct differential problems are deduced for the longitudinal L(0,n), torsional T(0,n) and flexural F(m,n) modes. Eigenvalue problems are deduced for the three mode classes, offering explicit forms of each coefficient for the matrices used in an available general-purpose finite element code. Comparisons with existing solutions for pipes filled with non-linear viscoelastic fluid or with visco-elastic coatings, as well as for a fully orthotropic hollow cylinder, all prove the reliability and ease of use of this method. Copyright © 2014 Elsevier B.V. All rights reserved.
Covaci, Adrian; Voorspoels, Stefan; Abdallah, Mohamed Abou-Elwafa; Geens, Tinne; Harrad, Stuart; Law, Robin J
2009-01-16
The present article reviews the available literature on the analytical and environmental aspects of tetrabromobisphenol-A (TBBP-A), a currently intensively used brominated flame retardant (BFR). Analytical methods, including sample preparation, chromatographic separation, detection techniques, and quality control, are discussed. An important recent development in the analysis of TBBP-A is the growing tendency towards liquid chromatographic techniques. At the detection stage, mass spectrometry is a well-established and reliable technology for the identification and quantification of TBBP-A. Although interlaboratory exercises for BFRs have grown in popularity in the last 10 years, only a few participating laboratories report concentrations for TBBP-A. Environmental levels of TBBP-A in abiotic and biotic matrices are low, probably due to the major use of TBBP-A as a reactive FR. As a consequence, the expected human exposure is low. This is in agreement with the EU risk assessment, which concluded that there is no risk to humans concerning TBBP-A exposure. Much less analytical and environmental information exists for the various groups of TBBP-A derivatives, which are largely used as additive flame retardants.
NASA Astrophysics Data System (ADS)
Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas
2016-08-01
Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus data exist only from a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are thus desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g, operated at -45 °C. The small sample size allows for high resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous flow isotope ratio mass spectrometry (IRMS). In combination with thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09 ‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11 ‰ for δ13C. These numbers are good, though rather conservative, estimates of the overall analytical precision achieved for single ice sample measurements. Facilitated by the small sample requirement, replicate measurements are feasible, potentially allowing the method precision to be improved further. Furthermore, new analytical approaches are introduced for the accurate correction of the procedural blank and for consistent detection of measurement outliers, based on δ18O-CO2 and the exchange of oxygen between CO2 and the surrounding ice (H2O).
NASA Astrophysics Data System (ADS)
Wakif, Abderrahim; Boulahia, Zoubair; Sehaqui, Rachid
2018-06-01
The main aim of the present analysis is to examine the electroconvection phenomenon that takes place in a dielectric nanofluid under the influence of a perpendicularly applied alternating electric field. In this investigation, we assume that the nanofluid has a Newtonian rheological behavior and obeys Buongiorno's mathematical model, in which the effects of thermophoretic and Brownian diffusions are incorporated explicitly in the governing equations. Moreover, the nanofluid layer is taken to be confined horizontally between two parallel plate electrodes, heated from below and cooled from above. In a fast pulse electric field, the onset of electroconvection is due principally to the buoyancy forces and the dielectrophoretic forces. Within the framework of the Oberbeck-Boussinesq approximation and the linear stability theory, the governing stability equations are solved semi-analytically by means of the power series method for isothermal, no-slip and non-penetrability conditions. In addition, the computational implementation of the impermeability condition implies that no nanoparticle mass flux exists at the electrodes. On the other hand, the obtained analytical solutions are validated by comparing them to those available in the literature for the limiting case of dielectric fluids. In order to check the accuracy of our semi-analytical results obtained for the case of dielectric nanofluids, we perform further numerical and semi-analytical computations by means of the Runge-Kutta-Fehlberg method, the Chebyshev-Gauss-Lobatto spectral method, the Galerkin weighted residuals technique, the polynomial collocation method and the Wakif-Galerkin weighted residuals technique. In this analysis, the electro-thermo-hydrodynamic stability of the studied nanofluid is controlled through the critical AC electric Rayleigh number Rec, whose value depends on several physical parameters. Furthermore, the effects of various pertinent parameters on the electro-thermo-hydrodynamic stability of the nanofluidic system are discussed in more detail through graphical and tabular illustrations.
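The notion of a critical Rayleigh number as the minimum of a marginal-stability curve can be illustrated with the classical free-free Rayleigh-Bénard problem, for which a closed-form curve exists. This is only a stand-in for the paper's AC electric Rayleigh number Rec, not its electro-thermo-hydrodynamic equations:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Marginal-stability curve of the classical free-free Rayleigh-Benard
# problem: Ra(a) = (pi^2 + a^2)^3 / a^2, with a the horizontal wavenumber.
Ra = lambda a: (np.pi**2 + a**2) ** 3 / a**2

res = minimize_scalar(Ra, bounds=(0.5, 10.0), method="bounded")
print(f"a_c  = {res.x:.4f} (exact: pi/sqrt(2) = {np.pi/np.sqrt(2):.4f})")
print(f"Ra_c = {res.fun:.2f} (exact: 27*pi^4/4 = {27*np.pi**4/4:.2f})")
```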
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
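For reference, the analytical (Gaussian) line-source solution mentioned above has a simple crosswind-integrated form. A minimal sketch for an infinite ground-level line source, with an illustrative power-law vertical spread; the coefficients are assumptions, not taken from AERMOD, ADMS, or the paper:

```python
import numpy as np

def line_source_conc(x, z, Q_L=1.0, u=5.0, H=0.0, a=0.08, b=0.9):
    """Crosswind-integrated Gaussian concentration downwind of an infinite
    line source of strength Q_L (g/m/s) at height H, with a simple
    power-law vertical spread sigma_z = a*x**b (illustrative coefficients,
    not from any specific stability scheme)."""
    sigma_z = a * x**b
    coeff = Q_L / (np.sqrt(2.0 * np.pi) * sigma_z * u)
    # The image term reflects the plume in the ground (z = 0).
    return coeff * (np.exp(-((z - H) ** 2) / (2 * sigma_z**2))
                    + np.exp(-((z + H) ** 2) / (2 * sigma_z**2)))

print(line_source_conc(x=100.0, z=1.5))  # g/m^3 at 100 m downwind, 1.5 m up
```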
NASA Technical Reports Server (NTRS)
Corrigan, J. C.; Cronkhite, J. D.; Dompka, R. V.; Perry, K. S.; Rogers, J. P.; Sadler, S. G.
1989-01-01
Under a research program designated Design Analysis Methods for VIBrationS (DAMVIBS), existing analytical methods are used for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM), which has been developed, extensively documented, and correlated with ground vibration tests. One procedure that was used for predicting coupled rotor-fuselage vibrations using the advanced Rotorcraft Flight Simulation Program C81 and NASTRAN is summarized. Detailed descriptions of the analytical formulation of rotor dynamics equations, fuselage dynamic equations, coupling between the rotor and fuselage, and solutions to the total system of equations in C81 are included. Analytical predictions of hub shears for main rotor harmonics 2p, 4p, and 6p generated by C81 are used in conjunction with 2p OLS measured control loads and a 2p lateral tail rotor gearbox force, representing downwash impingement on the vertical fin, to excite the NASTRAN model. NASTRAN is then used to correlate with measured OLS flight test vibrations. Blade load comparisons predicted by C81 showed good agreement. In general, the fuselage vibration correlations show good agreement between analysis and test in vibration response through 15 to 20 Hz.
Recent Advances in Paper-Based Sensors
Liana, Devi D.; Raguse, Burkhard; Gooding, J. Justin; Chow, Edith
2012-01-01
Paper-based sensors are a new alternative technology for fabricating simple, low-cost, portable and disposable analytical devices for many application areas including clinical diagnosis, food quality control and environmental monitoring. The unique properties of paper which allow passive liquid transport and compatibility with chemicals/biochemicals are the main advantages of using paper as a sensing platform. Depending on the main goal to be achieved in paper-based sensors, the fabrication methods and the analysis techniques can be tuned to fulfill the needs of the end-user. Current paper-based sensors are focused on microfluidic delivery of solution to the detection site, whereas more advanced designs involve complex 3-D geometries based on the same microfluidic principles. Although paper-based sensors are very promising, they still suffer from certain limitations such as accuracy and sensitivity. However, it is anticipated that in the future, with advances in fabrication and analytical techniques, there will be more new and innovative developments in paper-based sensors. These sensors could better meet the current objectives of a viable low-cost and portable device in addition to offering high sensitivity and selectivity, and multiple analyte discrimination. This paper is a review of recent advances in paper-based sensors and covers the following topics: existing fabrication techniques, analytical methods and application areas. Finally, the present challenges and future outlooks are discussed. PMID:23112667
Existence and analyticity of eigenvalues of a two-channel molecular resonance model
NASA Astrophysics Data System (ADS)
Lakaev, S. N.; Latipov, Sh. M.
2011-12-01
We consider a family of operators H_{γμ}(k), k ∈ 𝕋^d := (−π, π]^d, associated with the Hamiltonian of a system consisting of at most two particles on a d-dimensional lattice ℤ^d, interacting via both a pair contact potential (μ > 0) and creation and annihilation operators (γ > 0). We prove the existence of a unique eigenvalue of H_{γμ}(k), or its absence, depending on both the interaction parameters γ, μ ≥ 0 and the system quasimomentum k ∈ 𝕋^d. We show that the corresponding eigenvector is analytic. We establish that the eigenvalue and eigenvector are analytic functions of the quasimomentum k ∈ 𝕋^d in the existence domain G ⊂ 𝕋^d.
NASA Astrophysics Data System (ADS)
Wang, Xin; Gao, Jun; Fan, Zhiguo; Roberts, Nicholas W.
2016-06-01
We present a computationally inexpensive analytical model for simulating celestial polarization patterns in variable conditions. We combine both the singularity theory of Berry et al (2004 New J. Phys. 6 162) and the intensity model of Perez et al (1993 Sol. Energy 50 235-245) such that our single model describes three key sets of data: (1) the overhead distribution of the degree of polarization as well as the existence of neutral points in the sky; (2) the change in sky polarization as a function of the turbidity of the atmosphere; and (3) sky polarization patterns as a function of wavelength, calculated in this work from the ultra-violet to the near infra-red. To verify the performance of our model we generate accurate reference data using a numerical radiative transfer model and statistical comparisons between these two methods demonstrate no significant difference in almost all situations. The development of our analytical model provides a novel method for efficiently calculating the overhead skylight polarization pattern. This provides a new tool of particular relevance for our understanding of animals that use the celestial polarization pattern as a source of visual information.
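The single-scattering Rayleigh model underlying such analytical descriptions gives the degree of polarization as a function of the scattering angle from the sun. A minimal sketch; the depolarization ceiling dop_max is an illustrative assumption, and this is not the Berry et al. singularity model itself:

```python
import numpy as np

def rayleigh_dop(gamma, dop_max=0.8):
    """Degree of polarization at scattering angle gamma (radians) in the
    single-scattering Rayleigh sky model; dop_max < 1 accounts
    empirically for atmospheric depolarization."""
    return dop_max * np.sin(gamma) ** 2 / (1.0 + np.cos(gamma) ** 2)

# DoP peaks 90 degrees from the sun and vanishes toward/away from it.
for deg in (0, 45, 90, 135, 180):
    print(deg, round(rayleigh_dop(np.radians(deg)), 3))
```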
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not amenable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
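The analytical signal model behind variable flip angle T1 mapping is the spoiled gradient-echo (SPGR) equation. A minimal sketch of simulating that model and recovering T1 by least squares; all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

TR = 5e-3  # repetition time (s), illustrative

def spgr(alpha, M0, T1):
    """Spoiled gradient-echo signal vs flip angle alpha (radians)."""
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

alphas = np.radians([2, 5, 10, 15, 20])
truth = (100.0, 1.2)                      # M0, T1 = 1.2 s (hypothetical)
rng = np.random.default_rng(0)
signal = spgr(alphas, *truth) + rng.normal(0, 0.05, alphas.size)

(M0_fit, T1_fit), _ = curve_fit(spgr, alphas, signal, p0=(50.0, 0.5))
print(f"fitted T1 = {T1_fit*1e3:.0f} ms")
```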
NASA Astrophysics Data System (ADS)
El-Nour, K. M. A.; Salam, E. T. A.; Soliman, H. M.; Orabi, A. S.
2017-03-01
A new optical sensor was developed for rapid screening with high sensitivity for the existence of biogenic amines (BAs) in poultry meat samples. Gold nanoparticles (GNPs) with particle size 11-19 nm function as a fast and sensitive biosensor for detection of histamine resulting from bacterial decarboxylation of histidine, a spoilage marker for stored poultry meat. Upon reaction with histamine, the red color of the GNPs converts to deep blue. The appearance of the blue color coincides with BA concentrations that can induce symptoms of poisoning. This biosensor enables a semi-quantitative detection of the analyte in real samples by the naked eye. Quality evaluation is carried out by measuring histamine and histidine using different analytical techniques such as UV-vis, FTIR, and fluorescence spectroscopy as well as TEM. A rapid quantitative readout of samples by UV-vis and fluorescence methods with standard instrumentation was achieved in a short time, unlike chromatographic and electrophoretic methods. A sensitivity and limit of detection (LOD) of 6.59 × 10-4 and 0.6 μM, respectively, are determined for histamine as a spoilage marker, with a correlation coefficient (R²) of 0.993.
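LOD figures of this kind are commonly derived from a linear calibration curve via the 3.3·σ/slope criterion. A minimal sketch with hypothetical absorbance data, not the paper's measurements:

```python
import numpy as np

# Hypothetical absorbance readings vs histamine concentration (uM).
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
absb = np.array([0.012, 0.051, 0.093, 0.168, 0.330])

slope, intercept = np.polyfit(conc, absb, 1)
resid = absb - (slope * conc + intercept)
sigma = resid.std(ddof=2)          # SD of the regression residuals

lod = 3.3 * sigma / slope          # ICH-style detection limit
print(f"slope = {slope:.4f} AU/uM, LOD = {lod:.2f} uM")
```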
NASA Astrophysics Data System (ADS)
Wang, Y. B.; Zhu, X. W.; Dai, H. H.
2016-08-01
Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and recent studies have demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
Zhao, Yue; Liu, Guowen; Angeles, Aida; Christopher, Lisa J; Wang, Zhaoqing; Arnold, Mark E; Shen, Jim X
2016-10-01
Creatinine is an endogenous compound generated from creatine by normal muscular metabolism. It is an important indicator of renal function, and the serum level is routinely monitored in clinical labs. Results & methodology: A surrogate analyte (d3-creatinine) was used for calibration standard and quality control preparation, and the relative instrument response ratio between creatinine and d3-creatinine was used to calculate the endogenous creatinine concentrations. A fit-for-purpose strategy of using a surrogate analyte and authentic matrix was adopted for this validation. The assay was the first human plasma assay using such a strategy and was successfully applied to a clinical study to confirm a transient elevation of creatinine observed using an existing clinical assay.
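The surrogate-analyte idea is that the stable-isotope-labeled compound builds the calibration curve, and the endogenous concentration is read back from the analyte/surrogate response ratio. A minimal sketch with hypothetical numbers, with no connection to the study's validation data:

```python
import numpy as np

# Calibration: known d3-creatinine spikes (ug/mL) vs measured response
# ratios (analyte response / internal-standard response) -- hypothetical.
spiked = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
ratio = np.array([0.052, 0.101, 0.205, 0.498, 1.010])

slope, intercept = np.polyfit(spiked, ratio, 1)

# Unknown plasma sample: measured creatinine/d3-creatinine response ratio.
sample_ratio = 0.760
endogenous = (sample_ratio - intercept) / slope
print(f"endogenous creatinine ~ {endogenous:.2f} ug/mL")
```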
Qualitative dynamical analysis of chaotic plasma perturbations model
NASA Astrophysics Data System (ADS)
Elsadany, A. A.; Elsonbaty, Amr; Agiza, H. N.
2018-06-01
In this work, an analytical framework to understand nonlinear dynamics of a plasma perturbations model is introduced. In particular, we analyze the model presented by Constantinescu et al. [20], which consists of three coupled ODEs and contains three parameters. The basic dynamical properties of the system are first investigated by way of bifurcation diagrams, phase portraits and Lyapunov exponents. Then, the normal form technique and perturbation methods are applied so that the different types of bifurcations that exist in the model can be investigated. It is proved that pitchfork, Bogdanov-Takens, Andronov-Hopf, degenerate Hopf and homoclinic bifurcations can occur in the phase space of the model. Also, the model can exhibit quasiperiodicity and chaotic behavior. Numerical simulations confirm our theoretical and analytical results.
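The model's equations live in Constantinescu et al. [20] and are not reproduced in the abstract, so as a stand-in the following sketch runs the same largest-Lyapunov-exponent diagnostic (Benettin two-trajectory renormalization) on the Lorenz system, another three-ODE chaotic model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

# Largest Lyapunov exponent via two nearby trajectories, renormalized
# after every short integration step.
d0, lam, steps, dt = 1e-8, 0.0, 2000, 0.01
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([d0, 0.0, 0.0])
for _ in range(steps):
    a = solve_ivp(lorenz, (0, dt), a, rtol=1e-9, atol=1e-12).y[:, -1]
    b = solve_ivp(lorenz, (0, dt), b, rtol=1e-9, atol=1e-12).y[:, -1]
    d = np.linalg.norm(b - a)
    lam += np.log(d / d0)
    b = a + (b - a) * d0 / d          # renormalize the separation
print(f"largest Lyapunov exponent ~ {lam/(steps*dt):.2f}")  # ~0.9 for Lorenz
```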
NASA Technical Reports Server (NTRS)
Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)
2001-01-01
The rapid increase in digital data volumes from new and existing sensors creates the need for efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates the three fractal dimension measurement methods: isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, that have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces with higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
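The variogram estimator's principle is to read the fractal dimension off the slope of a log-log variogram, γ(h) ∝ h^{2H}. A minimal sketch on a 1-D Brownian profile, where the expected answer (H = 0.5, D = 1.5) is known; this illustrates the estimator only and is not ICAMS code:

```python
import numpy as np

rng = np.random.default_rng(1)
profile = np.cumsum(rng.normal(size=65536))   # Brownian profile, H = 0.5

lags = np.unique(np.logspace(0, 3, 20).astype(int))
gamma = [0.5 * np.mean((profile[h:] - profile[:-h]) ** 2) for h in lags]

# gamma(h) ~ h^(2H); for a profile, fractal dimension D = 2 - H.
slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
H = slope / 2
print(f"H ~ {H:.2f}, D ~ {2 - H:.2f}   (expected D = 1.5)")
```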
NASA Astrophysics Data System (ADS)
Tazik, E.; Jahantab, Z.; Bakhtiari, M.; Rezaei, A.; Kazem Alavipanah, S.
2014-10-01
Landslides are among the most important natural hazards that lead to modification of the environment. Therefore, studying this phenomenon is important in many areas. Because of the climate conditions and the geologic and geomorphologic characteristics of the region, the purpose of this study was landslide hazard assessment using Fuzzy Logic, frequency ratio and the Analytical Hierarchy Process (AHP) method in the Dozein basin, Iran. First, landslides that had occurred in the Dozein basin were identified using aerial photos and field studies. The landslide-influencing parameters used in this study, including slope, aspect, elevation, lithology, precipitation, land cover, distance from fault, distance from road and distance from river, were obtained from different sources and maps. Using these factors and the identified landslides, the fuzzy membership values were calculated by frequency ratio. Then, to account for the importance of each of the factors in landslide susceptibility, weights of each factor were determined based on a questionnaire and the AHP method. Finally, the fuzzy map of each factor was multiplied by its weight obtained using the AHP method. To compute prediction accuracy, the produced map was verified by comparison with existing landslide locations. These results indicate that combining the three methods (Fuzzy Logic, Frequency Ratio and Analytical Hierarchy Process) yields relatively good estimates of landslide susceptibility in the study area. According to the landslide susceptibility map, about 51% of the occurred landslides fall into the high and very high susceptibility zones, but approximately 26% of them are located in the low and very low susceptibility zones.
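AHP derives factor weights from the principal eigenvector of a pairwise comparison matrix, with a consistency check. A minimal sketch for three factors; the Saaty-scale judgments are purely illustrative, not the study's questionnaire results:

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty scale) for slope, lithology,
# and land cover -- illustrative values only.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                               # priority weights

n = A.shape[0]
CI = (vals[k].real - n) / (n - 1)          # consistency index
RI = 0.58                                  # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " CR =", round(CI / RI, 3))  # CR < 0.1 is acceptable
```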
Reference materials for cellular therapeutics.
Bravery, Christopher A; French, Anna
2014-09-01
The development of cellular therapeutics (CTP) takes place over many years, and, where successful, the developer will anticipate the product to be in clinical use for decades. Successful demonstration of manufacturing and quality consistency is dependent on the use of complex analytical methods; thus, the risk of process and method drift over time is high. The use of reference materials (RM) is an established scientific principle and as such also a regulatory requirement. The various uses of RM in the context of CTP manufacturing and quality are discussed, along with why they are needed for living cell products and the analytical methods applied to them. Relatively few consensus RM exist that are suitable for even common methods used by CTP developers, such as flow cytometry. Others have also identified this need and made proposals; however, great care will be needed to ensure any consensus RM that result are fit for purpose. Such consensus RM probably will need to be applied to specific standardized methods, and the idea that a single RM can have wide applicability is challenged. Written standards, including standardized methods, together with appropriate measurement RM are probably the most appropriate way to define specific starting cell types. The characteristics of a specific CTP will to some degree deviate from those of the starting cells; consequently, a product RM remains the best solution where feasible. Each CTP developer must consider how and what types of RM should be used to ensure the reliability of their own analytical measurements. Copyright © 2014 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics
2016-01-01
Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304
Nonlinear resonances in the ABC-flow
NASA Astrophysics Data System (ADS)
Didov, A. A.; Uleysky, M. Yu.
2018-01-01
In this paper, we study resonances of the ABC-flow in the near-integrable case (C ≪ 1). This is an interesting example of a Hamiltonian system with 3/2 degrees of freedom in which simultaneous existence of two resonances of the same order is possible. Analytical conditions for the existence of the resonances are derived. It is shown numerically that the largest n:1 (n = 1, 2, 3) resonances exist, and their energies are equal to the theoretical energies in the near-integrable case. We provide analytical and numerical evidence for the existence of two branches of the two largest n:1 (n = 1, 2) resonances in the region of finite motion.
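The ABC (Arnold-Beltrami-Childress) velocity field itself is standard, so a tracer trajectory in the near-integrable regime can be sketched directly; the parameter values and initial condition below are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, C = 1.0, 0.7, 0.01   # near-integrable case, C << 1

def abc(t, r):
    # Steady ABC velocity field on the 2*pi-periodic torus.
    x, y, z = r
    return [A*np.sin(z) + C*np.cos(y),
            B*np.sin(x) + A*np.cos(z),
            C*np.sin(y) + B*np.cos(x)]

sol = solve_ivp(abc, (0, 500), [0.1, 0.2, 0.3], max_step=0.05)
print(sol.y[:, -1] % (2*np.pi))   # final tracer position on the torus
```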
Wille, Klaas; Claessens, Michiel; Rappé, Karen; Monteyne, Els; Janssen, Colin R; De Brabander, Hubert F; Vanhaecke, Lynn
2011-12-23
The presence of both pharmaceuticals and pesticides in the aquatic environment has become a well-known environmental issue during the last decade. However, an increasing demand still exists for sensitive and reliable monitoring tools for these rather polar contaminants in the marine environment. In recent years, the great potential of passive samplers or equilibrium-based sampling techniques for evaluating the fate of these contaminants has been shown in the literature. Therefore, we developed a new analytical method for the quantification of a high number of pharmaceuticals and pesticides in passive sampling devices. The analytical procedure consisted of extraction using 1:1 methanol/acetonitrile followed by detection with ultra-high performance liquid chromatography coupled to high resolution and high mass accuracy Orbitrap mass spectrometry. Validation of the analytical method resulted in limits of quantification and recoveries ranging between 0.2 and 20 ng per sampler sheet and between 87.9 and 105.2%, respectively. Determination of the sampler-water partition coefficients of all compounds demonstrated that several pharmaceuticals and most pesticides exhibit a high affinity for the polydimethylsiloxane passive samplers. Finally, the developed analytical methods were used to measure the time-weighted average (TWA) concentrations of the targeted pollutants in passive samplers deployed at eight stations in the Belgian coastal zone. Propranolol, carbamazepine and seven pesticides were found to be very abundant in the passive samplers. These obtained long-term and large-scale TWA concentrations will contribute to assessing the environmental and human health risk of these emerging pollutants. Copyright © 2011 Elsevier B.V. All rights reserved.
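In the kinetic regime of passive sampling, the TWA water concentration follows from the mass accumulated in the sampler over the deployment. A minimal sketch of that arithmetic; all numbers are hypothetical:

```python
# Kinetic-regime passive sampling: time-weighted average water
# concentration from the mass accumulated in the sampler.
M_s = 120.0   # ng analyte extracted from the sampler sheet (hypothetical)
R_s = 0.05    # sampling rate, L/day (hypothetical, compound-specific)
t = 42.0      # deployment time, days

C_twa = M_s / (R_s * t)   # ng/L
print(f"TWA concentration ~ {C_twa:.1f} ng/L")
```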
Beyramysoltan, Samira; Abdollahi, Hamid; Rajkó, Róbert
2014-05-27
Analytical self-modeling curve resolution (SMCR) methods resolve data sets to a range of feasible solutions using only non-negativity constraints. The Lawton-Sylvestre method was the first direct method to analyze a two-component system. It was generalized as a Borgen plot for determining the feasible regions in three-component systems. A geometrical view seems to be required for considering curve resolution methods, because the complicated, purely algebraic concepts stalled the general study of Borgen's work for 20 years. Rajkó and István revised and elucidated the principles of the existing theory of SMCR methods and subsequently introduced computational geometry tools for developing an algorithm to draw Borgen plots in three-component systems. These developments are theoretical inventions, and the formulations cannot always be given in closed form or regularized formalism, especially for geometric descriptions; that is why several algorithms had to be developed and provided even for the theoretical deductions and determinations. In this study, analytical SMCR methods are revised and described using simple concepts. The details of a drawing algorithm for a developmental type of Borgen plot are given. Additionally, for the first time in the literature, equality and unimodality constraints are successfully implemented in the Lawton-Sylvestre method. To this end, a new state-of-the-art procedure is proposed to impose an equality constraint in Borgen plots. Two- and three-component HPLC-DAD data sets were simulated and analyzed by the new analytical curve resolution methods with and without additional constraints. Detailed descriptions and explanations are given based on the obtained abstract spaces. Copyright © 2014 Elsevier B.V. All rights reserved.
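The core of the two-component Lawton-Sylvestre idea can be shown in a few lines: after a rank-2 SVD, every candidate spectrum lies on the line v1 + t·v2 in abstract space, and non-negativity carves out a feasible t-interval. A minimal sketch on simulated data, checking only the spectral non-negativity condition; the Gaussian pure spectra and random concentrations are assumptions for illustration:

```python
import numpy as np

# Simulate a two-component bilinear system: D = C @ S.T, all non-negative.
x = np.linspace(0, 1, 200)
g = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2)
S = np.column_stack([g(0.35, 0.08), g(0.6, 0.1)])        # pure spectra
C = np.random.default_rng(2).uniform(0.1, 1.0, (30, 2))  # concentrations
D = C @ S.T

# Rank-2 SVD; candidate spectra are s(t) = v1 + t*v2 (up to scale).
U, sv, Vt = np.linalg.svd(D, full_matrices=False)
v1, v2 = Vt[0], Vt[1]
if v1.sum() < 0:
    v1 = -v1          # fix the SVD sign so v1 is the non-negative direction

ts = np.linspace(-5, 5, 20001)
feasible = [t for t in ts if np.all(v1 + t * v2 >= -1e-10)]
print(f"non-negative spectral band: t in [{min(feasible):.3f}, {max(feasible):.3f}]")
```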
Jin, Chunfen; Viidanoja, Jyrki
2017-01-15
An existing liquid chromatography-mass spectrometry method for the analysis of short-chain carboxylic acids was expanded and validated to cover also the measurement of glycerol from oils and fats. The method employs chloride anion attachment and two ions, [glycerol+35Cl]− and [glycerol+37Cl]−, as alternative quantifiers for improved selectivity of the glycerol measurement. The averaged within-run precision, between-run precision and accuracy ranged between 0.3-7%, 0.4-6% and 94-99%, respectively, depending on the analyte ion and sample matrix. Selected renewable diesel feedstocks were analyzed with the method. Copyright © 2016 Elsevier B.V. All rights reserved.
Methods of 14CO2, 13CO2 and 12CO2 detection in gaseous media in real time
NASA Astrophysics Data System (ADS)
Kireev, S. V.; Kondrashov, A. A.; Shnyrev, S. L.; Simanovsky, I. G.
2017-10-01
A comparative analytical review of the existing methods and techniques for measuring 13CO2 and 14CO2 mixed with 12CO2 in gases is provided. It shows that one of the most promising approaches is infrared laser spectroscopy using a frequency-tunable diode laser operating near the wavelengths of 4.3 or 2 µm. Measuring near the wavelength of 4.3 µm provides the most accurate results for 13CO2 and 14CO2, but requires more expensive equipment and is more complex to operate.
Michael, Costas; Bayona, Josep Maria; Lambropoulou, Dimitra; Agüera, Ana; Fatta-Kassinos, Despo
2017-06-01
Occurrence and effects of contaminants of emerging concern pose a special challenge to environmental scientists. The investigation of these effects requires reliable, valid, and comparable analytical data. To this effect, two critical aspects are raised herein concerning the limitations of the produced analytical data. The first relates to the inherent difficulty that exists in the analysis of environmental samples, which is related to the lack of knowledge (information), in many cases, of the form(s) in which the contaminant is present in the sample. Thus, the produced analytical data can only refer to the amount of the free contaminant, ignoring the amount that may be present in other forms, e.g., chelated or conjugated. The other important aspect refers to the way in which the spiking procedure is generally performed to determine the recovery of the analytical method. Spiking environmental samples, in particular solid samples, with standard solution followed by immediate extraction, as is the common practice, can lead to an overestimation of the recovery. This is so because no time is given to the system to establish possible equilibria between the solid matter (inorganic and/or organic) and the contaminant. Therefore, the spiking procedure needs to be reconsidered by including a study of the extractable amount of the contaminant versus the time elapsed between spiking and the extraction of the sample. This study can become an element of the validation package of the method.
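The proposed check amounts to tabulating recovery against the spike-to-extraction contact time. A minimal sketch of that tabulation; the data are hypothetical and only mimic the qualitative trend the authors describe:

```python
import numpy as np

# Hypothetical recoveries of a spiked contaminant from a sediment sample
# extracted after increasing spike-to-extraction contact times.
contact_h = np.array([0, 1, 6, 24, 72, 168])                # hours
measured = np.array([98.0, 95.0, 88.0, 79.0, 71.0, 68.0])   # ng found
spiked_amount = 100.0                                       # ng added

recovery = 100.0 * measured / spiked_amount
for h, r in zip(contact_h, recovery):
    print(f"t = {h:4d} h  recovery = {r:5.1f} %")
# Immediate extraction (t = 0) overestimates recovery relative to
# equilibrated samples -- the bias the authors caution against.
```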
An integrative framework for sensor-based measurement of teamwork in healthcare
Rosen, Michael A; Dietz, Aaron S; Yang, Ting; Priebe, Carey E; Pronovost, Peter J
2015-01-01
There is a strong link between teamwork and patient safety. Emerging evidence supports the efficacy of teamwork improvement interventions. However, the availability of reliable, valid, and practical measurement tools and strategies is commonly cited as a barrier to long-term sustainment and spread of these teamwork interventions. This article describes the potential value of sensor-based technology as a methodology to measure and evaluate teamwork in healthcare. The article summarizes the teamwork literature within healthcare, including team improvement interventions and measurement. Current applications of sensor-based measurement of teamwork are reviewed to assess the feasibility of employing this approach in healthcare. The article concludes with a discussion highlighting current application needs and gaps and relevant analytical techniques to overcome the challenges to implementation. Compelling studies exist documenting the feasibility of capturing a broad array of team input, process, and output variables with sensor-based methods. Implications of this research are summarized in a framework for development of multi-method team performance measurement systems. Sensor-based measurement within healthcare can unobtrusively capture information related to social networks, conversational patterns, physical activity, and an array of other meaningful information without having to directly observe or periodically survey clinicians. However, trust and privacy concerns present challenges that need to be overcome through engagement of end users in healthcare. Initial evidence exists to support the feasibility of sensor-based measurement to drive feedback and learning across individual, team, unit, and organizational levels. Future research is needed to refine methods, technologies, theory, and analytical strategies. PMID:25053579
NASA Technical Reports Server (NTRS)
Pines, S.
1981-01-01
The methods used to compute the mass, structural stiffness, and aerodynamic forces in the form of influence coefficient matrices, as applied to a flutter analysis of the Drones for Aerodynamic and Structural Testing (DAST) Aeroelastic Research Wing, are described. The DAST wing was chosen because wind tunnel flutter test data and zero speed vibration data of the modes and frequencies exist and are available for comparison. A derivation of the equations of motion that can be used to apply the modal method for flutter suppression is included. A comparison of the open loop flutter predictions with both wind tunnel data and other analytical methods is presented.
Theory of ground state factorization in quantum cooperative systems.
Giampaolo, Salvatore M; Adesso, Gerardo; Illuminati, Fabrizio
2008-05-16
We introduce a general analytic approach to the study of factorization points and factorized ground states in quantum cooperative systems. The method allows us to determine rigorously the existence, location, and exact form of separable ground states in a large variety of, generally nonexactly solvable, spin models belonging to different universality classes. The theory applies to translationally invariant systems, irrespective of spatial dimensionality, and for spin-spin interactions of arbitrary range.
A Study of Natural and Restored Wetland Hydrology
Bayless, E. Randall; Arihood, Leslie D.; Sidle, William C.; Pavlovic, Noel B.
1999-01-01
The U.S. Geological Survey and the U.S. Environmental Protection Agency are jointly studying the hydrology of a long-existing natural wetland and a recently restored wetland in the Kankakee River Valley in northwestern Indiana. In characterizing the two wetlands, project investigators are testing innovative methods to identify the analytical tools best suited for evaluating the success of wetland restoration. Investigators also are examining and comparing the relations between hydrology and restored wetland vegetation.
Molecular imaging of cannabis leaf tissue with MeV-SIMS method
NASA Astrophysics Data System (ADS)
Jenčič, Boštjan; Jeromel, Luka; Ogrinc Potočnik, Nina; Vogel-Mikuš, Katarina; Kovačec, Eva; Regvar, Marjana; Siketić, Zdravko; Vavpetič, Primož; Rupnik, Zdravko; Bučar, Klemen; Kelemen, Mitja; Kovač, Janez; Pelicon, Primož
2016-03-01
To broaden our analytical capabilities with molecular imaging in addition to the existing elemental imaging with micro-PIXE, a linear Time-Of-Flight mass spectrometer for MeV Secondary Ion Mass Spectrometry (MeV-SIMS) was constructed and added to the existing nuclear microprobe at the Jožef Stefan Institute. We measured absolute molecular yields and damage cross-sections of reference materials, without significant alteration of the fragile biological samples during the measurements in the mapping mode. We explored the analytical capability of the MeV-SIMS technique for chemical mapping of the plant tissue of medicinal cannabis leaves. A series of hand-cut plant tissue slices was prepared by a standard shock-freezing and freeze-drying protocol and deposited on a Si wafer. The measured MeV-SIMS spectra show a series of peaks in the mass region of cannabinoids, together with their corresponding maps. The indicated molecular distributions at masses of 345.5 u and 359.4 u may be attributed to the protonated THCA-C4 and THCA acids, respectively, and show enhancement in the areas with opened trichome morphology.
Modulational instability and discrete breathers in a nonlinear helicoidal lattice model
NASA Astrophysics Data System (ADS)
Ding, Jinmin; Wu, Tianle; Chang, Xia; Tang, Bing
2018-06-01
We investigate the problem of the discrete modulational instability of plane waves and discrete breather modes in a nonlinear helicoidal lattice model, which is described by a discrete nonlinear Schrödinger equation with first-, second-, and third-neighbor coupling. By means of the linear stability analysis, we present an analytical expression for the instability growth rate and identify the regions of modulational instability of plane waves. It is shown that the introduction of the third-neighbor coupling affects the shape of the areas of modulational instability significantly. Based on the results obtained by the modulational instability analysis, we predict the existence conditions for the stationary breather modes. Furthermore, by making use of the semidiscrete multiple-scale method, we obtain analytical solutions of discrete breather modes and analyze their properties for different types of nonlinearities. Our results show that the discrete breathers obtained are stable for a long time only when the system exhibits the repulsive nonlinearity. In addition, it is found that the existence of the stable bright discrete breather is closely related to the presence of the third-neighbor coupling.
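Modulational instability of this kind can be demonstrated numerically: seed a plane wave of a DNLS chain with up-to-third-neighbor coupling with weak noise and watch the perturbation grow. The equation form and all parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# DNLS with first-, second-, and third-neighbor coupling:
# i dpsi/dt + sum_m C_m (psi_{n+m} + psi_{n-m} - 2 psi_n)
#           + g |psi|^2 psi = 0, periodic boundary conditions.
N, g = 256, 1.0
Cm = [1.0, 0.3, 0.1]   # coupling constants (illustrative)

def rhs(psi):
    acc = np.zeros_like(psi)
    for m, c in enumerate(Cm, start=1):
        acc += c * (np.roll(psi, m) + np.roll(psi, -m) - 2.0 * psi)
    return 1j * (acc + g * np.abs(psi) ** 2 * psi)

rng = np.random.default_rng(3)
psi = (1.0 + 1e-6 * rng.normal(size=N)).astype(complex)  # plane wave + noise
dt = 1e-3
for _ in range(40000):   # fixed-step RK4 time stepping
    k1 = rhs(psi); k2 = rhs(psi + 0.5*dt*k1)
    k3 = rhs(psi + 0.5*dt*k2); k4 = rhs(psi + dt*k3)
    psi += dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Growth of |psi| well above the initial unit amplitude signals MI.
print("max |psi| =", np.abs(psi).max())
```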
NASA Astrophysics Data System (ADS)
Ma, Xiyue; Chen, Kean; Ding, Shaohu; Yu, Haoxin
2016-06-01
This paper presents an analytical investigation of the physical mechanisms of actively controlling sound transmission through a rib-stiffened double-panel structure using a point source in the cavity. The combined modal expansion and vibro-acoustic coupling methods are applied to establish the theoretical model of such an active structure. Under the condition of minimizing the radiated power of the radiating ribbed plate, the physical mechanisms are interpreted in detail from the point of view of modal couplings, similar to that used in the existing literature. Results obtained demonstrate that the pattern of sound energy transmission and the physical mechanisms for the rib-stiffened double-panel structure are changed and affected by the coupling effects of the rib when compared with the analytical results obtained for the unribbed double-panel case. By taking the coupling effects of the rib into consideration, the cavity modal suppression and rearrangement mechanisms obtained in existing investigations are modified and supplemented for the ribbed plate case, which gives a clear interpretation of the physical nature involved in the active rib-stiffened double-panel structure.
NASA Astrophysics Data System (ADS)
Donahue, William; Newhauser, Wayne D.; Ziegler, James F.
2016-09-01
Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
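A classic example of a simple, broadly applicable model of this kind is the Bragg-Kleeman power law, in which range follows from integrating the reciprocal stopping power. A minimal sketch, not the paper's six-parameter model; the exponent is the familiar ~1.77 for protons in water, and the prefactor is an illustrative assumption:

```python
import numpy as np
from scipy.integrate import quad

# Bragg-Kleeman power law: R(E) = alpha * E**p. Equivalently the stopping
# power is S(E) = dE/dx = E**(1-p) / (alpha * p), so R = integral dE / S.
alpha, p = 0.0022, 1.77            # cm MeV^-p (illustrative), dimensionless

S = lambda E: E ** (1.0 - p) / (alpha * p)     # stopping power, MeV/cm
R_numeric, _ = quad(lambda E: 1.0 / S(E), 0.0, 150.0)
R_closed = alpha * 150.0 ** p                  # closed-form range

print(f"range (numeric)  = {R_numeric:.2f} cm")   # ~15.7 cm at 150 MeV
print(f"range (analytic) = {R_closed:.2f} cm")
```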
Calculation of Sensitivity Derivatives in an MDAO Framework
NASA Technical Reports Server (NTRS)
Moore, Kenneth T.
2012-01-01
During gradient-based optimization of a system, it is necessary to generate the derivatives of each objective and constraint with respect to each design parameter. If the system is multidisciplinary, it may consist of a set of smaller "components" with some arbitrary data interconnection and process workflow. Analytical derivatives in these components can be used to improve the speed and accuracy of the derivative calculation over a purely numerical calculation; however, a multidisciplinary system may include both components for which derivatives are available and components for which they are not. Three methods to calculate the sensitivity of a mixed multidisciplinary system are presented: the finite difference method, where the derivatives are calculated numerically; the chain rule method, where the derivatives are successively cascaded along the system's network graph; and the analytic method, where the derivatives come from the solution of a linear system of equations. Some improvements to these methods, to accommodate mixed multidisciplinary systems, are also presented; in particular, a new method is introduced to allow existing derivatives to be used inside of finite difference. All three methods are implemented and demonstrated in the open-source MDAO framework OpenMDAO. It was found that there are advantages to each of them depending on the system being solved.
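The contrast between the first two methods can be shown on a toy two-component chain y = g(f(x)). The functions and values are hypothetical, and this is plain Python, not OpenMDAO API code:

```python
import math

# Two "components" wired in series: y = g(f(x)).
f = lambda x: x**2
g = lambda u: math.sin(u)

def dy_dx_chain(x):
    # Chain rule method: cascade the component partials along the
    # dataflow graph, dy/dx = dg/du * df/dx.
    return math.cos(f(x)) * 2.0 * x

def dy_dx_fd(x, h=1e-6):
    # Finite difference method: perturb the input and difference outputs.
    return (g(f(x + h)) - g(f(x - h))) / (2.0 * h)

x0 = 1.3
print(dy_dx_chain(x0), dy_dx_fd(x0))   # should agree to ~1e-9
```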
Samanipour, Saer; Dimitriou-Christidis, Petros; Gros, Jonas; Grange, Aureline; Samuel Arey, J
2015-01-02
Comprehensive two-dimensional gas chromatography (GC×GC) is used widely to separate and measure organic chemicals in complex mixtures. However, approaches to quantify analytes in real, complex samples have not been critically assessed. We quantified 7 PAHs in a certified diesel fuel using GC×GC coupled to a flame ionization detector (FID), and we quantified 11 target chlorinated hydrocarbons in a lake water extract using GC×GC with an electron capture detector (μECD), further confirmed qualitatively by GC×GC with an electron capture negative chemical ionization time-of-flight mass spectrometer (ENCI-TOFMS). Target analyte peak volumes were determined using several existing baseline correction algorithms and peak delineation algorithms. Analyte quantifications were conducted using external standards and also using standard additions, enabling us to diagnose matrix effects. We then applied several chemometric tests to these data. We find that the choice of baseline correction algorithm and peak delineation algorithm strongly influences the reproducibility of analyte signal, the error of the calibration offset, the proportionality of integrated signal response, and the accuracy of quantifications. Additionally, the choice of baseline correction and peak delineation algorithm is essential for correctly discriminating analyte signal from unresolved complex mixture signal, and this is the chief consideration for controlling matrix effects during quantification. The diagnostic approaches presented here provide guidance for analyte quantification using GC×GC. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
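Standard additions, the second quantification route above, extrapolates the response-vs-added-amount line back to zero response. A minimal sketch with hypothetical peak volumes, not the study's data:

```python
import numpy as np

# Standard additions: measure peak volume vs amount of analyte added to
# the sample; extrapolating to zero response gives the original amount.
added = np.array([0.0, 5.0, 10.0, 20.0])           # ng added (hypothetical)
response = np.array([210.0, 305.0, 402.0, 598.0])  # peak volumes

slope, intercept = np.polyfit(added, response, 1)
c0 = intercept / slope      # analyte present before any addition, ng
print(f"analyte in sample ~ {c0:.1f} ng")
# A slope differing from the external-standard calibration slope
# indicates a matrix effect.
```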
NASA Technical Reports Server (NTRS)
Dobrinskaya, Tatiana
2015-01-01
This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations, as well as for other activities. When maneuver optimization is used, large maneuvers, which were previously performed on thrusters, can be performed either using control moment gyroscopes (CMGs) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads - an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of the critical elements of the vehicle structure, such as solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained. A yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the obtained optimized case, the torques are significantly reduced. This torque reduction was compared to the existing optimization method, which utilizes a computational solution. It was shown that the attitude profiles and the torque reduction agree well for these two methods of optimization. Simulations using the ISS flight software showed similar propellant consumption for both methods. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, thus making the maneuver execution automatic. The automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground. It also reduces the probability of command errors. The suggested analytical solution provides a new method of maneuver optimization which is less complicated, automatic and more universal. The maneuver optimization approach presented in this paper can be used not only for the ISS, but for other orbiting space vehicles.
Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards
Smith, Justin D.
2013-01-01
This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874
Kumar, B. Vinodh; Mohan, Thuthi
2018-01-01
OBJECTIVE: Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma scale by calculating the sigma metrics for individual parameters, and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. MATERIALS AND METHODS: This is a retrospective study, and the data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC - coefficient of variation percentage and External Quality Assurance Scheme (EQAS) - Bias% for 16 biochemical parameters. RESULTS: For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes as in level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes <6 sigma level, the quality goal index (QGI) was <0.8, indicating the area requiring improvement to be imprecision, except cholesterol, whose QGI >1.2 indicated inaccuracy. CONCLUSION: This study shows that sigma metrics is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes. PMID:29692587
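The two quantities driving such a study are sigma = (TEa − bias)/CV and the quality goal index QGI = bias/(1.5·CV). A minimal sketch; the TEa, bias, and CV values are illustrative, not the hospital's data:

```python
# Sigma metrics for laboratory analytes: sigma = (TEa - bias) / CV,
# quality goal index QGI = bias / (1.5 * CV). TEa depends on the chosen
# quality specification (e.g., CLIA); all numbers here are illustrative.
analytes = {
    #            TEa%   bias%  CV%
    "cholesterol": (10.0, 4.0, 2.0),
    "potassium":   (5.8,  1.5, 1.6),
    "ALP":         (30.0, 3.0, 4.0),
}
for name, (tea, bias, cv) in analytes.items():
    sigma = (tea - bias) / cv
    qgi = bias / (1.5 * cv)
    issue = ("imprecision" if qgi < 0.8 else
             "inaccuracy" if qgi > 1.2 else "both")
    print(f"{name:12s} sigma = {sigma:4.1f}  QGI = {qgi:4.2f} ({issue})")
```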
High precision analytical description of the allowed β spectrum shape
NASA Astrophysics Data System (ADS)
Hayen, Leendert; Severijns, Nathal; Bodek, Kazimierz; Rozpedzik, Dagmara; Mougeot, Xavier
2018-01-01
A fully analytical description of the allowed β spectrum shape is given in view of ongoing and planned measurements. Its study forms an invaluable tool in the search for physics beyond the standard electroweak model and the weak magnetism recoil term. Contributions stemming from finite size corrections, mass effects, and radiative corrections are reviewed. Particular focus is placed on atomic and chemical effects, where the existing description is extended and provided analytically. The effects of QCD-induced recoil terms are discussed, and cross-checks were performed for different theoretical formalisms. Special attention was given to a comparison of the treatment of nuclear structure effects in different formalisms. Corrections were derived for both Fermi and Gamow-Teller transitions, and methods of analytical evaluation are thoroughly discussed. In its integrated form, calculated f values were in agreement with the most precise numerical results within the aimed-for precision. The need for an accurate evaluation of weak magnetism contributions is stressed, and the possible significance of the oft-neglected induced pseudoscalar interaction is noted. Together with improved atomic corrections, an analytical description is presented of the allowed β spectrum shape, accurate to a few parts in 10⁻⁴ down to 1 keV for low to medium Z nuclei, thereby extending the work of previous authors by nearly an order of magnitude.
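At leading order, before the corrections catalogued above, the allowed spectrum is the phase-space factor times the Fermi function. A minimal sketch using the nonrelativistic point-charge Fermi function; Z and the endpoint energy are illustrative, and the paper's high-precision description refines this substantially:

```python
import numpy as np

ALPHA = 1 / 137.036           # fine-structure constant

def allowed_spectrum(T_keV, Z=20, Q_keV=300.0):
    """Leading-order allowed beta- spectrum vs kinetic energy (keV):
    phase space p*W*(W0 - W)^2 times the nonrelativistic Fermi function.
    Energies in electron-mass units; normalization arbitrary."""
    W = 1.0 + T_keV / 511.0              # total energy
    W0 = 1.0 + Q_keV / 511.0             # endpoint energy
    p = np.sqrt(W**2 - 1.0)
    eta = ALPHA * Z * W / p              # Sommerfeld parameter (beta-)
    F = 2 * np.pi * eta / (1.0 - np.exp(-2 * np.pi * eta))
    return p * W * (W0 - W)**2 * F

T = np.linspace(1.0, 299.0, 5)
print(np.round(allowed_spectrum(T), 4))
```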
Applications of Raman Spectroscopy in Biopharmaceutical Manufacturing: A Short Review.
Buckley, Kevin; Ryder, Alan G
2017-06-01
The production of active pharmaceutical ingredients (APIs) is currently undergoing its biggest transformation in a century. The changes are based on the rapid and dramatic introduction of protein- and macromolecule-based drugs (collectively known as biopharmaceuticals) and can be traced back to the huge investment in biomedical science (in particular in genomics and proteomics) that has been ongoing since the 1970s. Biopharmaceuticals (or biologics) are manufactured using biological-expression systems (such as mammalian, bacterial, insect cells, etc.) and have spawned a large (>€35 billion sales annually in Europe) and growing biopharmaceutical industry (BioPharma). The structural and chemical complexity of biologics, combined with the intricacy of cell-based manufacturing, imposes a huge analytical burden to correctly characterize and quantify both processes (upstream) and products (downstream). In small molecule manufacturing, advances in analytical and computational methods have been extensively exploited to generate process analytical technologies (PAT) that are now used for routine process control, leading to more efficient processes and safer medicines. In the analytical domain, biologic manufacturing is considerably behind and there is both a huge scope and need to produce relevant PAT tools with which to better control processes, and better characterize product macromolecules. Raman spectroscopy, a vibrational spectroscopy with a number of useful properties (nondestructive, non-contact, robustness) has significant potential advantages in BioPharma. Key among them are intrinsically high molecular specificity, the ability to measure in water, the requirement for minimal (or no) sample pre-treatment, the flexibility of sampling configurations, and suitability for automation. Here, we review and discuss a representative selection of the more important Raman applications in BioPharma (with particular emphasis on mammalian cell culture). The review shows that the properties of Raman have been successfully exploited to deliver unique and useful analytical solutions, particularly for online process monitoring. However, it also shows that its inherent susceptibility to fluorescence interference and the weakness of the Raman effect mean that it can never be a panacea. In particular, Raman-based methods are intrinsically limited by the chemical complexity and wide analyte-concentration-profiles of cell culture media/bioprocessing broths which limit their use for quantitative analysis. Nevertheless, with appropriate foreknowledge of these limitations and good experimental design, robust analytical methods can be produced. In addition, new technological developments such as time-resolved detectors, advanced lasers, and plasmonics offer potential of new Raman-based methods to resolve existing limitations and/or provide new analytical insights.
Single-Cell Detection of Secreted Aβ and sAPPα from Human IPSC-Derived Neurons and Astrocytes.
Liao, Mei-Chen; Muratore, Christina R; Gierahn, Todd M; Sullivan, Sarah E; Srikanth, Priya; De Jager, Philip L; Love, J Christopher; Young-Pearse, Tracy L
2016-02-03
Secreted factors play a central role in normal and pathological processes in every tissue in the body. The brain is composed of a highly complex milieu of different cell types and few methods exist that can identify which individual cells in a complex mixture are secreting specific analytes. By identifying which cells are responsible, we can better understand neural physiology and pathophysiology, more readily identify the underlying pathways responsible for analyte production, and ultimately use this information to guide the development of novel therapeutic strategies that target the cell types of relevance. We present here a method for detecting analytes secreted from single human induced pluripotent stem cell (iPSC)-derived neural cells and have applied the method to measure amyloid β (Aβ) and soluble amyloid precursor protein-alpha (sAPPα), analytes central to Alzheimer's disease pathogenesis. Through these studies, we have uncovered the dynamic range of secretion profiles of these analytes from single iPSC-derived neuronal and glial cells and have molecularly characterized subpopulations of these cells through immunostaining and gene expression analyses. In examining Aβ and sAPPα secretion from single cells, we were able to identify previously unappreciated complexities in the biology of APP cleavage that could not otherwise have been found by studying averaged responses over pools of cells. This technique can be readily adapted to the detection of other analytes secreted by neural cells, which would have the potential to open new perspectives into human CNS development and dysfunction. We have established a technology that, for the first time, detects secreted analytes from single human neurons and astrocytes. We examine secretion of the Alzheimer's disease-relevant factors amyloid β (Aβ) and soluble amyloid precursor protein-alpha (sAPPα) and present novel findings that could not have been observed without a single-cell analytical platform. First, we identify a previously unappreciated subpopulation that secretes high levels of Aβ in the absence of detectable sAPPα. Further, we show that multiple cell types secrete high levels of Aβ and sAPPα, but cells expressing GABAergic neuronal markers are overrepresented. Finally, we show that astrocytes are competent to secrete high levels of Aβ and therefore may be a significant contributor to Aβ accumulation in the brain. Copyright © 2016 the authors.
NOTE: Solving the ECG forward problem by means of a meshless finite element method
NASA Astrophysics Data System (ADS)
Li, Z. S.; Zhu, S. A.; He, Bin
2007-07-01
The conventional numerical computational techniques, such as the finite element method (FEM) and the boundary element method (BEM), require laborious and time-consuming model meshing. The meshless FEM uses only the boundary description and the node distribution; no meshing of the model is required. This paper presents the fundamentals and implementation of the meshless FEM, which is then adapted to solve the electrocardiography (ECG) forward problem. The method is evaluated on a single-layer torso model, for which an analytical solution exists, and tested on a homogeneous torso model with realistic geometry, with satisfactory results being obtained. The present results suggest that the meshless FEM may provide an alternative for ECG forward solutions.
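The paper's exact shape-function construction is not reproduced in the abstract; as a generic illustration of a mesh-free approximation built from nodes alone, here is a one-dimensional moving-least-squares (MLS) sketch, with node layout, weight function, and support radius chosen arbitrarily.

    # Generic 1D moving-least-squares approximation: a local weighted linear fit
    # around the evaluation point, using only scattered nodes (no mesh).
    import numpy as np

    def mls_value(x, nodes, u, radius=0.5):
        """Approximate u(x) with linear basis p = [1, x] and Gaussian weights."""
        p = lambda xi: np.array([1.0, xi])
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for xi, ui in zip(nodes, u):
            w = np.exp(-((x - xi) / radius) ** 2)   # weight decays away from x
            A += w * np.outer(p(xi), p(xi))
            b += w * ui * p(xi)
        coeff = np.linalg.solve(A, b)               # local linear fit at x
        return p(x) @ coeff

    nodes = np.linspace(0.0, 1.0, 11)               # node distribution (uniform here)
    u = np.sin(np.pi * nodes)                       # nodal data
    print(round(mls_value(0.37, nodes, u), 4))      # smooth, mesh-free evaluation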
Research study on high energy radiation effect and environment solar cell degradation methods
NASA Technical Reports Server (NTRS)
Horne, W. E.; Wilkinson, M. C.
1974-01-01
The most detailed and comprehensively verified analytical model was used to evaluate the effects of simplifying assumptions on the accuracy of predictions made by the external damage coefficient method. The most serious discrepancies were found in heavily damaged cells, particularly proton-damaged cells, in which a gradient in damage existed across the cell. In general, the current damage coefficient method tends to underestimate damage at high fluences. An exception to this rule was thick cover-slipped cells experiencing heavy degradation due to omnidirectional electrons; in such cases, the damage coefficient method overestimates the damage. Comparisons of the degradation predictions made by the two methods with measured flight data confirmed the above findings.
Dawson, Verdel K.; Meinertz, Jeffery R.; Schmidt, Larry J.; Gingerich, William H.
2003-01-01
Concentrations of chloramine-T must be monitored during experimental treatments of fish when studying the effectiveness of the drug for controlling bacterial gill disease. A surrogate analytical method for analysis of chloramine-T to replace the existing high-performance liquid chromatography (HPLC) method is described. A surrogate method was needed because the existing HPLC method is expensive, requires a specialist to use, and is not generally available at fish hatcheries. Criteria for selection of a replacement method included ease of use, analysis time, cost, safety, sensitivity, accuracy, and precision. The most promising approach was to use the determination of chlorine concentrations as an indicator of chloramine-T. Of the currently available methods for analysis of chlorine, the DPD (N,N-diethyl-p-phenylenediamine) colorimetric method best fit the established criteria. The surrogate method was evaluated under a variety of water quality conditions. Regression analysis of all DPD colorimetric analyses against the HPLC values produced a linear model (Y = 0.9602X + 0.1259) with an r² value of 0.9960. The average accuracy (percent recovery) of the DPD method relative to the HPLC method for the combined set of water quality data was 101.5%. The surrogate method was also evaluated with chloramine-T solutions that contained various concentrations of fish feed or selected densities of rainbow trout. When samples were analyzed within 2 h, the results of the surrogate method were consistent with those of the HPLC method. When samples with high concentrations of organic material were allowed to age more than 2 h before being analyzed, the DPD method seemed to be susceptible to interference, possibly from the development of other chloramine compounds. However, even after aging samples 6 h, the accuracy of the surrogate DPD method relative to the HPLC method was within the range of 80–120%. Based on the data comparing the two methods, the U.S. Food and Drug Administration has concluded that the DPD colorimetric method is appropriate for measuring chloramine-T in water during pivotal efficacy trials designed to support the approval of chloramine-T for use in fish culture.
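As a sketch of the statistical comparison described above, the calibration line and percent recovery can be computed as follows; the values below are made up for illustration, while the study's reported fit was Y = 0.9602X + 0.1259 with r² = 0.9960.

    # Regress DPD colorimetric readings (Y) on HPLC reference values (X) and
    # compute relative recovery. Concentrations are hypothetical.
    import numpy as np

    hplc = np.array([5.0, 10.0, 15.0, 20.0])      # reference chloramine-T, mg/L
    dpd = np.array([4.9, 9.8, 14.6, 19.4])        # surrogate DPD readings

    slope, intercept = np.polyfit(hplc, dpd, 1)   # least-squares line Y = a*X + b
    r = np.corrcoef(hplc, dpd)[0, 1]
    recovery = 100.0 * dpd / hplc                 # percent recovery vs. reference

    print(f"Y = {slope:.4f}X + {intercept:.4f}, r^2 = {r**2:.4f}")
    print("mean recovery: %.1f%%" % recovery.mean())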
NASA Astrophysics Data System (ADS)
Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.
2017-09-01
Continuous fiber-reinforced composites are among the existing advanced materials with the highest potential for commercialization in the near future. Despite their wide use and value, their theoretical mechanisms have not been fully established, owing to the complexity of their compositions and their as-yet-unrevealed failure mechanisms. This study proposes an effective three-dimensional damage model of a fibrous composite that combines analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered to be the most influential factors on the toughness and failure behavior of composites, and a constitutive equation accounting for these factors was explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of model parameters in the analytical model was found using a modified evolutionary computation that accounts for human-induced error. The effectiveness of the proposed formulation was validated by comparing a series of numerical simulations with experimental data from available studies.
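The abstract does not specify the evolutionary algorithm; as a stand-in, the sketch below fits two parameters of a toy damaged-elasticity law with SciPy's differential evolution, a standard evolutionary optimizer. The model form, parameter values, and bounds are all hypothetical.

    # Identify model parameters by minimizing a misfit with an evolutionary
    # optimizer, in the spirit of the micromechanics + evolutionary-computation
    # pipeline described above.
    import numpy as np
    from scipy.optimize import differential_evolution

    strain = np.linspace(0.0, 0.02, 30)
    true_E, true_d = 70e3, 40.0                        # hypothetical stiffness / damage rate
    data = true_E * strain * np.exp(-true_d * strain)  # synthetic "experiment"

    def misfit(params):
        E, d = params
        model = E * strain * np.exp(-d * strain)       # toy damaged-elasticity law
        return np.sum((model - data) ** 2)

    result = differential_evolution(misfit, bounds=[(1e3, 2e5), (0.0, 200.0)], seed=1)
    print(result.x)  # recovers ~[70000, 40]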
Stokes, Caroline S; Lammert, Frank; Volmer, Dietrich A
2018-02-01
A plethora of contradictory research surrounds vitamin D and its influence on health and disease. This may, in part, result from analytical difficulties with regard to measuring vitamin D metabolites in serum. Indeed, variation exists between the analytical techniques and assays used for the determination of serum 25-hydroxyvitamin D. Research studies into the effects of vitamin D on clinical endpoints rely heavily on the accurate assessment of vitamin D status. This has important implications, as findings from vitamin D-related studies to date may potentially have been hampered by the quantification techniques used. Likewise, healthcare professionals are increasingly incorporating vitamin D testing and supplementation regimens into their practice, and measurement errors may also be confounding the clinical decisions. Importantly, the Vitamin D Standardisation Programme is an initiative that aims to standardise the measurement of vitamin D metabolites. Such a programme is anticipated to eliminate the inaccuracies surrounding vitamin D quantification. Copyright © 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm
2015-01-01
Within the mosaic display of international anti-doping efforts, analytical strategies based on up-to-date instrumentation as well as most recent information about physiology, pharmacology, metabolism, etc., of prohibited substances and methods of doping are indispensable. The continuous emergence of new chemical entities and the identification of arguably beneficial effects of established or even obsolete drugs on endurance, strength, and regeneration necessitate frequent and adequate adaptations of sports drug testing procedures. These largely rely on exploiting new technologies, extending the substance coverage of existing test protocols, and generating new insights into metabolism, distribution, and elimination of compounds prohibited by the World Anti-Doping Agency (WADA). With reference to the content of the 2014 Prohibited List, literature concerning human sports drug testing that was published between October 2013 and September 2014 is summarized and reviewed in this annual banned-substance review, with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2014 John Wiley & Sons, Ltd.
The Human is the Loop: New Directions for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Hossain, Shahriar H.; Ramakrishnan, Naren
2014-01-28
Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.
Deployment of Analytics into the Healthcare Safety Net: Lessons Learned
Hartzband, David; Jacobs, Feygele
2016-01-01
Background As payment reforms shift healthcare reimbursement toward value-based payment programs, providers need the capability to work with data of greater complexity, scope and scale. This will in many instances necessitate a change in understanding of the value of data, and the types of data needed for analysis to support operations and clinical practice. It will also require the deployment of different infrastructure and analytic tools. Community health centers, which serve more than 25 million people and together form the nation's largest single source of primary care for medically underserved communities and populations, are expanding and will need to optimize their capacity to leverage data as new payer and organizational models emerge. Methods To better understand existing capacity and help organizations plan for the strategic and expanded uses of data, a project was initiated that deployed contemporary, Hadoop-based, analytic technology into several multi-site community health centers (CHCs) and a primary care association (PCA) with an affiliated data warehouse supporting health centers across the state. An initial data quality exercise was carried out after deployment, in which a number of analytic queries were executed using both the existing electronic health record (EHR) applications and, in parallel, the analytic stack. Each organization carried out the EHR analysis using the definitions typically applied for routine reporting. The analysis deploying the analytic stack was carried out using the common definitions established for the Uniform Data System (UDS) by the Health Resources and Services Administration. In addition, interviews with health center leadership and staff were completed to understand the context for the findings. Results The analysis uncovered many challenges and inconsistencies with respect to the definition of core terms (patient, encounter, etc.), data formatting, and missing, incorrect and unavailable data. At a population level, apparent underreporting of a number of diagnoses, specifically obesity and heart disease, was also evident in the results of the data quality exercise, for both the EHR-derived and stack analytic results. Conclusion Data awareness, that is, an appreciation of the importance of data integrity, data hygiene and the potential uses of data, needs to be prioritized and developed by health centers and other healthcare organizations if analytics are to be used in an effective manner to support strategic objectives. While this analysis was conducted exclusively with community health center organizations, its conclusions and recommendations may be more broadly applicable. PMID:28210424
On the critical forcing amplitude of forced nonlinear oscillators
NASA Astrophysics Data System (ADS)
Febbo, Mariano; Ji, Jinchen C.
2013-12-01
The steady-state response of forced single degree-of-freedom weakly nonlinear oscillators under primary resonance conditions can exhibit saddle-node bifurcations and jump and hysteresis phenomena if the amplitude of the excitation exceeds a certain value. This critical value of the excitation amplitude, or critical forcing amplitude, plays an important role in determining the occurrence of saddle-node bifurcations in the frequency-response curve. This work develops an alternative method for determining the critical forcing amplitude of single degree-of-freedom nonlinear oscillators. Based on a Lagrange multiplier approach, the proposed method treats the calculation of the critical forcing amplitude as an optimization problem with constraints imposed by the existence of locations of vertical tangency. In comparison with the Gröbner basis method, the proposed approach is more straightforward and thus easier to apply for finding the critical forcing amplitude both analytically and numerically. Three examples are given to confirm the validity of the theoretical predictions: the first two give the critical forcing amplitude in analytical form, and the third is an example of a numerically computed solution.
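To make the vertical-tangency condition concrete: for the textbook Duffing frequency response (not necessarily the oscillators treated in the paper), tangency points satisfy F = 0 and dF/da = 0, and the critical forcing sits at the cusp where d²F/da² = 0 as well. A numerical sketch with arbitrary parameter values:

    import sympy as sp

    a, s, f = sp.symbols('a s f', positive=True)     # amplitude, detuning, forcing
    mu, alpha, w = 0.1, 1.0, 1.0                     # assumed damping/nonlinearity/frequency

    # Frequency-response relation of the primary-resonance Duffing approximation.
    F = a**2 * (mu**2 + (s - 3*alpha*a**2/(8*w))**2) - f**2/(4*w**2)

    # Tangency: F = dF/da = 0. Critical forcing: the cusp, where d2F/da2 = 0 too.
    crit = sp.nsolve([F, sp.diff(F, a), sp.diff(F, a, 2)],
                     (a, s, f), (0.6, 0.2, 0.1))
    print(crit)   # ~ (0.555, 0.173, 0.128): forcing above f_c produces jumps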
Role of short-range correlation in facilitation of wave propagation in a long-range ladder chain
NASA Astrophysics Data System (ADS)
Farzadian, O.; Niry, M. D.
2018-09-01
We extend a method for generating a random chain that has a kind of short-range correlation, induced by a repeated sequence, while retaining long-range correlation. Three distinct methods are used to study the localization-delocalization transition of mechanical waves in one-dimensional disordered media in which short- and long-range correlations coexist. First, a transfer-matrix method is used to calculate numerically the localization length of a wave in a binary chain. We find that the existence of short-range correlation in a long-range correlated chain can increase the localization length at the resonance frequency Ωc. Then, we carry out an analytical study of the delocalization properties of the waves in correlated disordered media around Ωc. Finally, we apply a dynamical method, based on direct numerical simulation of the wave equation, to study the propagation of waves in the correlated chain. Imposing short-range correlation on the long-range background leads to super-diffusive transport. The results obtained with all three methods are in agreement with each other.
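For readers unfamiliar with the first technique, here is a textbook-style transfer-matrix estimate of the localization length for a 1D chain; the disorder below is uncorrelated random masses, not the paper's correlated binary chain.

    # Localization length from the Lyapunov exponent of a product of 2x2 transfer
    # matrices for a harmonic chain with unit springs and random masses:
    # psi_{n+1} = (2 - m_n * omega^2) * psi_n - psi_{n-1}.
    import numpy as np

    rng = np.random.default_rng(0)

    def localization_length(omega, n_sites=100_000):
        masses = rng.uniform(0.8, 1.2, n_sites)      # uncorrelated disorder here
        v = np.array([1.0, 0.0])
        log_growth = 0.0
        for m in masses:
            T = np.array([[2.0 - m * omega**2, -1.0],
                          [1.0, 0.0]])
            v = T @ v
            norm = np.linalg.norm(v)
            log_growth += np.log(norm)               # accumulate Lyapunov exponent
            v /= norm                                # renormalize to avoid overflow
        gamma = log_growth / n_sites
        return 1.0 / gamma                           # localization length in sites

    print(round(localization_length(omega=1.0), 1))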
Selection of remedial alternatives for mine sites: a multicriteria decision analysis approach.
Betrie, Getnet D; Sadiq, Rehan; Morin, Kevin A; Tesfamariam, Solomon
2013-04-15
The selection of remedial alternatives for mine sites is a complex task because it involves multiple criteria, often with conflicting objectives. However, the existing framework used to select remedial alternatives lacks multicriteria decision analysis (MCDA) aids and does not consider uncertainty in the selection of alternatives. The objective of this paper is to improve the existing framework by introducing deterministic and probabilistic MCDA methods. The Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods have been implemented in this study. The MCDA analysis involves preparing the inputs to the PROMETHEE methods, namely identifying the alternatives, defining the criteria, defining the criteria weights using the analytical hierarchy process (AHP), defining the probability distributions of the criteria weights, and conducting Monte Carlo simulation (MCS); running the PROMETHEE methods on these inputs; and conducting a sensitivity analysis. A case study was presented to demonstrate the improved framework at a mine site. The results showed that the improved framework provides a reliable way of selecting remedial alternatives, as well as quantifying the impact of different criteria on the selection. Copyright © 2013 Elsevier Ltd. All rights reserved.
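A minimal deterministic PROMETHEE II sketch (linear preference function, net outranking flows) is given below; the alternatives, criterion directions, preference threshold, and AHP-style weights are all hypothetical.

    import numpy as np

    scores = np.array([[7.0, 2.0, 5.0],    # alternative A: [cost, risk, benefit]
                       [5.0, 4.0, 6.0],    # alternative B
                       [8.0, 3.0, 4.0]])   # alternative C
    weights = np.array([0.5, 0.3, 0.2])    # AHP-derived criterion weights (assumed)
    maximize = np.array([False, False, True])
    p = 3.0                                # preference threshold (linear function)

    n = len(scores)
    pi = np.zeros((n, n))                  # aggregated preference of a over b
    for i in range(n):
        for j in range(n):
            d = scores[i] - scores[j]
            d[~maximize] *= -1.0           # flip smaller-is-better criteria
            pref = np.clip(d / p, 0.0, 1.0)
            pi[i, j] = weights @ pref

    phi = (pi.sum(axis=1) - pi.sum(axis=0)) / (n - 1)   # net outranking flows
    print(phi.argsort()[::-1])             # alternative indices, best first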
Determination of vertical pressures on running wheels of freight trolleys of bridge type cranes
NASA Astrophysics Data System (ADS)
Goncharov, K. A.; Denisov, I. A.
2018-03-01
Design issues of bridge-type crane trolleys connected with ensuring uniform load distribution among the running wheels are considered. The shortcomings of the existing methods for calculating support pressures are described. The results of the analytical calculation of the support-wheel pressures are compared with the results of the numerical solution of this problem for various layouts of trolley supporting frames. Conclusions are given on the applicability of the various methods for calculating vertical pressures, depending on the type of metal structure used in the trolley.
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
2014-01-01
Background As a part of the longitudinal Chronic Obstructive Pulmonary Disease (COPD) study, Subpopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS), blood samples are being collected from 3200 subjects with the goal of identifying blood biomarkers for sub-phenotyping patients and predicting disease progression. To determine the most reliable sample type for measuring specific blood analytes in the cohort, a pilot study was performed on a subset of 24 subjects, comparing serum, ethylenediaminetetraacetic acid (EDTA) plasma, and EDTA plasma with proteinase inhibitors (P100™). Methods 105 analytes, chosen for potential relevance to COPD and arranged in 12 multiplex platforms and one simplex platform (Myriad-RBM), were evaluated in duplicate for the three sample types from the 24 subjects. The reliability coefficient and the coefficient of variation (CV) were calculated. The performance of each analyte and mean analyte levels were evaluated across sample types. Results 20% of analytes were not consistently detectable in any sample type. Higher reliability and/or smaller CV were determined for 12 analytes in EDTA plasma compared to serum, and for 11 analytes in serum compared to EDTA plasma. While reliability measures were similar for EDTA plasma and P100 plasma for a majority of analytes, CV was modestly increased in P100 plasma for eight analytes. Each analyte within a multiplex produced independent measurement characteristics, complicating selection of sample type for individual multiplexes. Conclusions There were notable detectability and measurability differences between serum and plasma. Multiplexing may not be ideal if large reliability differences exist across analytes measured within the multiplex, especially if values differ based on sample type. For some analytes, the large CV should be considered during experimental design, and the use of duplicate and/or triplicate samples may be necessary. These results should prove useful for studies selecting sample types for the evaluation of potential blood biomarkers. PMID:24397870
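The two pilot metrics can be computed from duplicate measurements as sketched below; the reliability estimator shown is one common ICC-style definition (the study's exact estimator may differ), and the data are synthetic.

    # Per-subject CV from duplicate pairs, and a reliability coefficient as the
    # between-subject share of total variance (one-way ANOVA with k = 2 replicates).
    import numpy as np

    rng = np.random.default_rng(2)
    true_level = rng.lognormal(mean=2.0, sigma=0.5, size=24)        # 24 subjects
    dup = true_level[:, None] + rng.normal(0.0, 0.4, size=(24, 2))  # duplicates

    cv = 100.0 * dup.std(axis=1, ddof=1) / dup.mean(axis=1)     # per-subject CV, %

    ms_between = 2.0 * dup.mean(axis=1).var(ddof=1)             # k = 2 replicates
    ms_within = ((dup[:, 0] - dup[:, 1]) ** 2 / 2.0).mean()
    reliability = (ms_between - ms_within) / (ms_between + ms_within)

    print(f"median CV = {np.median(cv):.1f}%, reliability = {reliability:.2f}")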
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-01-01
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it improves the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured values and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the magnetic flux density. The experimental results show that the proposed method measures the rotor orientation precisely and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators. PMID:25342000
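The sensing idea reduces to inverting a fitted field model. As a generic illustration (the flux model below is a stand-in, not the paper's fitted 3D magnet-array field), rotor angles can be recovered from sensor readings by nonlinear least squares:

    import numpy as np
    from scipy.optimize import least_squares

    sensor_angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)

    def flux_model(theta, phi):
        # hypothetical smooth field seen by 8 sensors for rotor orientation (theta, phi)
        return np.cos(sensor_angles - theta) * (1.0 + 0.5 * np.sin(phi))

    true = (0.7, 0.3)
    measured = flux_model(*true) + np.random.default_rng(3).normal(0, 0.01, 8)

    # Fit the orientation that best reproduces the measured flux densities.
    fit = least_squares(lambda x: flux_model(x[0], x[1]) - measured, x0=(0.0, 0.0))
    print(np.round(fit.x, 3))  # ~ [0.7, 0.3]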
NASA Astrophysics Data System (ADS)
Heuzé, Thomas
2017-10-01
We present in this work two finite volume methods for the simulation of one-dimensional impact problems, both for bars and for plane waves, in elastic-plastic solid media within the small-strain framework. First, an extension of Lax-Wendroff to elastic-plastic constitutive models with linear and nonlinear hardening is presented. Second, a high-order TVD method based on flux-difference splitting [1] and the Superbee flux limiter [2] is coupled with an approximate elastic-plastic Riemann solver for nonlinear hardening, and follows that of Fogarty [3] for linear hardening. Thermomechanical coupling is accounted for through dissipative heating and thermal softening, and adiabatic conditions are assumed. This paper focuses on one-dimensional problems, since analytical solutions exist or can easily be developed. Accordingly, these two numerical methods are compared with analytical solutions and with the explicit finite element method on test cases involving discontinuous and continuous solutions. This allows their respective performance during the loading, unloading and reloading stages to be studied in detail. Particular emphasis is also placed on the accuracy of the computed plastic strains, some differences being found according to the numerical method used. A Lax-Wendroff two-dimensional discretization of a one-dimensional problem is also appended at the end to demonstrate the extensibility of the scheme to multidimensional problems.
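For reference, the Superbee limiter named above has the standard form phi(r) = max(0, min(2r, 1), min(r, 2)); a one-liner sketch:

    def superbee(r: float) -> float:
        """Superbee flux limiter applied to the ratio r of consecutive slopes."""
        return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

    # phi = 0 at extrema (r <= 0), up to 2 on steep gradients: compressive but TVD.
    print([superbee(r) for r in (-1.0, 0.5, 1.0, 3.0)])  # [0.0, 1.0, 1.0, 2.0]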
NASA Astrophysics Data System (ADS)
Renno, A. D.; Merchel, S.; Michalak, P. P.; Munnik, F.; Wiedenbeck, M.
2010-12-01
Recent economic trends regarding the supply of rare metals readily justify scientific research into non-conventional raw materials, where a particular need is a better understanding of the relationship between mineralogy, microstructure and the distribution of key metals within ore deposits (geometallurgy). Achieving these goals will require extensive use of in-situ microanalytical techniques capable of spatially resolving material heterogeneities, which can be key to better resource utilization. The availability of certified reference materials (CRMs) is an essential prerequisite for (1) validating new analytical methods, (2) demonstrating data quality to contracting authorities, (3) supporting method development and instrument calibration, and (4) establishing traceability between new analytical approaches and existing data sets. This need has led to the granting of funding by the European Union and the German Free State of Saxony for a program to develop such reference materials. The effort will apply the following strategies during the selection of the phases: (1) use exclusively synthetic minerals, thereby providing large volumes of homogeneous starting material; (2) focus on matrices capable of incorporating many 'important' elements while avoiding exotic compositions that would not be optimal matrix matches; and (3) emphasise phases that remain stable during the various microanalytical procedures. The initiative will assess the homogeneity of the reference materials at sampling sizes ranging between 50 and 1 µm; it is also intended to document crystal-structural homogeneity, as this may also impact specific analytical methods. As far as possible, both definitive methods and methods involving matrix corrections will be used for determining the compositions of the individual materials. A critical challenge will be the validation of the determination of analyte concentrations at sub-µg sampling masses. It is planned to cooperate with those who are interested in the development of such reference materials, and we invite them to take part in round-robin exercises.
Learning Analytics for Online Discussions: Embedded and Extracted Approaches
ERIC Educational Resources Information Center
Wise, Alyssa Friend; Zhao, Yuting; Hausknecht, Simone Nicole
2014-01-01
This paper describes an application of learning analytics that builds on an existing research program investigating how students contribute and attend to the messages of others in asynchronous online discussions. We first overview the E-Listening research program and then explain how this work was translated into analytics that students and…
Big data analytics as a service infrastructure: challenges, desired properties and solutions
NASA Astrophysics Data System (ADS)
Martín-Márquez, Manuel
2015-12-01
CERN's accelerator complex generates a very large amount of data: a large volume of heterogeneous data is constantly produced by control equipment and monitoring agents, and these data must be stored and analysed. Over the decades, CERN's research and engineering teams have applied different approaches, techniques and technologies for this purpose. This situation has minimised the necessary collaboration and, more importantly, the cross-domain data analytics. These two factors are essential to unlock hidden insights and correlations between the underlying processes, which enable better and more efficient day-to-day accelerator operations and more informed decisions. The proposed Big Data Analytics as a Service Infrastructure aims to: (1) integrate the existing developments; (2) centralise and standardise the complex data analytics needs of CERN's research and engineering community; (3) deliver real-time and batch data analytics and information discovery capabilities; and (4) provide transparent access and Extract, Transform and Load (ETL) mechanisms to the various mission-critical existing data repositories. This paper presents the desired objectives and properties resulting from the analysis of CERN's data analytics requirements; the main challenges (technological, collaborative and educational); and potential solutions.
Solitary waves and double layers in a dusty electronegative plasma.
Mamun, A A; Shukla, P K; Eliasson, B
2009-10-01
A dusty electronegative plasma containing Boltzmann electrons, Boltzmann negative ions, cold mobile positive ions, and negatively charged stationary dust has been considered. The basic features of arbitrary amplitude solitary waves (SWs) and double layers (DLs), which have been found to exist in such a dusty electronegative plasma, have been investigated by the pseudopotential method. The small amplitude limit has also been considered in order to study the small amplitude SWs and DLs analytically. It has been shown that under certain conditions, DLs do not exist, which is in good agreement with the experimental observations of Ghim and Hershkowitz [Y. Ghim (Kim) and N. Hershkowitz, Appl. Phys. Lett. 94, 151503 (2009)].
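For context, the pseudopotential (Sagdeev) method reduces the fully nonlinear wave problem to an energy-like integral; the generic existence conditions below are standard and not specific to this paper's dusty electronegative composition.

    % Sagdeev pseudopotential energy integral and the standard existence
    % conditions for solitary waves (SWs) and double layers (DLs).
    \[
      \tfrac{1}{2}\Bigl(\frac{d\psi}{d\xi}\Bigr)^{2} + V(\psi; M) = 0,
    \]
    SWs exist if $V(0)=V'(0)=0$, $V''(0)<0$, and there is a $\psi_m \neq 0$ with
    $V(\psi_m)=0$ and $V(\psi)<0$ for $\psi$ between $0$ and $\psi_m$;
    DLs additionally require $V'(\psi_m)=0$.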
Klous, Miriam; Klous, Sander
2010-07-01
The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from the noisy measured motion of skin markers. Existing kinematic models for the reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account, and numerical global optimization methods that do take joint constraints into account but require numerical optimization over a large number of degrees of freedom, especially as the number of segments increases. In this study, a new and largely analytical method is presented for a chain of rigid bodies interconnected by spherical joints (the chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (the Veldpaus-method, of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (the Lu-method, of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that the accuracy of the chain-method is higher than that of the Veldpaus-method and similar to that of the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor of 5 lower than for the Lu-method. However, the Lu-method performs slightly better than the chain-method: the RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method, compared with 59% for the chain-method.
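The analytical building block behind Veldpaus-style methods is the closed-form least-squares rigid transformation between two marker clouds; a compact SVD-based (Kabsch-type) sketch follows. The chain-method adds spherical-joint constraints on top of steps like this; that machinery is not reproduced here.

    import numpy as np

    def rigid_fit(A, B):
        """Return R (3x3), t (3,) minimizing sum ||R @ a + t - b||^2."""
        cA, cB = A.mean(axis=0), B.mean(axis=0)
        H = (A - cA).T @ (B - cB)                 # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cB - R @ cA

    A = np.random.default_rng(4).normal(size=(6, 3))       # model marker cloud
    true_R, _ = np.linalg.qr(np.random.default_rng(5).normal(size=(3, 3)))
    true_R = true_R * np.sign(np.linalg.det(true_R))       # force a proper rotation
    B = A @ true_R.T + np.array([0.1, -0.2, 0.3])          # rotated + translated markers
    R, t = rigid_fit(A, B)
    print(np.allclose(R, true_R), np.round(t, 3))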
Saito, I; Shibata, E; Huang, J; Hisanaga, N; Ono, Y; Takeuchi, Y
1991-01-01
2,5-Hexanedione is a main metabolite of n-hexane and is considered to be the cause of n-hexane polyneuropathy. It is therefore useful to measure 2,5-hexanedione for biological monitoring of exposure to n-hexane. The existing analytical methods for n-hexane metabolites, however, were controversial and not sufficiently established. Hence, a simple and precise method for the determination of urinary 2,5-hexanedione has been developed. Five ml of urine was acidified to pH 0.5 with concentrated hydrochloric acid and heated for 30 minutes at 90-100 degrees C. After cooling in water, sodium chloride and dichloromethane containing an internal standard were added. The sample was shaken and centrifuged. The 2,5-hexanedione concentration in an aliquot of the dichloromethane extract was quantified by gas chromatography using a widebore column (DB-1701). The urinary concentration of 2,5-hexanedione showed a good correlation with exposure to n-hexane (n = 50, r = 0.973, p < 0.001). This method is simple and precise for the analysis of urinary 2,5-hexanedione as an index of exposure to n-hexane. PMID:1878315
Spatially-explicit models of global tree density.
Glick, Henry B; Bettigole, Charlie; Maynard, Daniel S; Covey, Kristofer R; Smith, Jeffrey R; Crowther, Thomas W
2016-08-16
Remote sensing and geographic analysis of woody vegetation provide means of evaluating the distribution of natural resources, patterns of biodiversity and ecosystem structure, and socio-economic drivers of resource utilization. While these methods bring geographic datasets with global coverage into our day-to-day analytic spheres, many of the studies that rely on these strategies do not capitalize on the extensive collection of existing field data. We present the methods and maps associated with the first spatially-explicit models of global tree density, which relied on over 420,000 forest inventory field plots from around the world. This research is the result of a collaborative effort engaging over 20 scientists and institutions, and capitalizes on an array of analytical strategies. Our spatial data products offer precise estimates of the number of trees at global and biome scales, but should not be used for local-level estimation. At larger scales, these datasets can contribute valuable insight into resource management, ecological modelling efforts, and the quantification of ecosystem services.
Comparison of Gluten Extraction Protocols Assessed by LC-MS/MS Analysis.
Fallahbaghery, Azadeh; Zou, Wei; Byrne, Keren; Howitt, Crispin A; Colgrave, Michelle L
2017-04-05
The efficiency of gluten extraction is of critical importance to the results derived from any analytical method for gluten detection and quantitation, whether it employs reagent-based technology (antibodies) or analytical instrumentation (mass spectrometry). If the target proteins are not efficiently extracted, the end result will be an underestimation of the gluten content, posing a health risk to people affected by conditions such as celiac disease (CD) and nonceliac gluten sensitivity (NCGS). Five different extraction protocols were investigated using LC-MRM-MS for their ability to efficiently and reproducibly extract gluten. The rapid and simple "IPA/DTT" protocol and the related "two-step" protocol were enriched for gluten proteins, at 55/86% (trypsin/chymotrypsin) and 41/68% of all protein identifications, respectively, with both methods showing high reproducibility (CV < 15%). When using multistep protocols, it was critical to examine all fractions, as co-extraction of proteins occurred across fractions, with significant levels of proteins appearing in unexpected fractions and not all proteins within a particular gluten class behaving the same.
NASA Astrophysics Data System (ADS)
Macías-Díaz, J. E.
In the present manuscript, we introduce a finite-difference scheme to approximate solutions of the two-dimensional version of Fisher's equation from population dynamics, which is a model for which the existence of traveling-wave fronts bounded within (0,1) is a well-known fact. The method presented here is a nonstandard technique which, in the linear regime, approximates the solutions of the original model with a consistency of second order in space and first order in time. The theory of M-matrices is employed here in order to elucidate conditions under which the method is able to preserve the positivity and the boundedness of solutions. In fact, our main result establishes relatively flexible conditions under which the preservation of the positivity and the boundedness of new approximations is guaranteed. Some simulations of the propagation of a traveling-wave solution confirm the analytical results derived in this work; moreover, the experiments evince a good agreement between the numerical result and the analytical solutions.
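To make the positivity/boundedness mechanism concrete, here is a Mickens-type nonstandard finite-difference step for Fisher's equation, shown in 1D for brevity (the paper treats the 2D case, and its exact scheme may differ): treating the loss term implicitly makes each update a ratio of nonnegative quantities, so values stay within [0, 1].

    # NSFD step for u_t = D u_xx + r u (1 - u): the -r u^2 term and the 2u_i of
    # the Laplacian are taken at the new time level, giving a positive ratio.
    import numpy as np

    def nsfd_step(u, D=1.0, r=1.0, dt=0.01, dx=0.1):
        lap = np.roll(u, 1) + np.roll(u, -1)           # periodic u_{i-1} + u_{i+1}
        num = u + (D * dt / dx**2) * lap + r * dt * u
        den = 1.0 + 2.0 * D * dt / dx**2 + r * dt * u  # implicit terms
        return num / den                                # stays in [0, 1] if u does

    x = np.linspace(0.0, 10.0, 101)
    u = 1.0 / (1.0 + np.exp(x - 5.0))                  # front-like data in (0, 1)
    for _ in range(100):
        u = nsfd_step(u)
    print(float(u.min()) >= 0.0, float(u.max()) <= 1.0)  # True True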
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, however, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis from computational data alone. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied within a specified tolerance or bias error, and the difference in the results is used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared with heat transfer correlations, and conclusions are drawn about the feasibility of using CFD without experimental data. The results provide a tactic for analytically estimating the uncertainty in a CFD model when experimental data are unavailable.
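The statistical step reduces to a small-sample confidence interval; a sketch with hypothetical perturbed-input runs:

    # Student-t confidence interval for a heat-transfer coefficient predicted by
    # a handful of CFD runs whose inputs were perturbed within their tolerances.
    import numpy as np
    from scipy import stats

    h = np.array([102.0, 98.5, 101.2, 99.8, 100.6])   # W/(m^2 K), perturbed-input runs

    mean, s, n = h.mean(), h.std(ddof=1), len(h)
    t95 = stats.t.ppf(0.975, df=n - 1)                # two-sided 95% quantile
    half_width = t95 * s / np.sqrt(n)

    print(f"h = {mean:.1f} +/- {half_width:.1f} W/(m^2 K) (95% CI, n={n})")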
Haller, Toomas; Leitsalu, Liis; Fischer, Krista; Nuotio, Marja-Liisa; Esko, Tõnu; Boomsma, Dorothea Irene; Kyvik, Kirsten Ohm; Spector, Tim D; Perola, Markus; Metspalu, Andres
2017-01-01
Ancestry information at the individual level can be a valuable resource for personalized medicine, medical, demographic and historical research, as well as for tracing back personal history. We report a new method for quantitatively determining personal genetic ancestry based on genome-wide data. Numerical ancestry component scores are assigned to individuals based on comparisons with reference populations. These comparisons are conducted with an existing analytical pipeline making use of genotype phasing and similarity matrix computation, together with our addition: multidimensional best fitting by MixFit. The method is demonstrated by studying the Estonian and Finnish populations in geographical context. We show the main differences in the genetic composition of these otherwise close European populations and how they have influenced each other. The components of our analytical pipeline are freely available computer programs and scripts, one of which was developed in house (available at: www.geenivaramu.ee/en/tools/mixfit).
Xin, F X; Lu, T J
2009-03-01
The air-borne sound insulation performance of a rectangular double-panel partition clamp-mounted on an infinite acoustic rigid baffle is investigated both analytically and experimentally and compared with that of a simply supported one. With the clamped (or simply supported) boundary accounted for by the method of modal functions, a double-series solution for the sound transmission loss (STL) of the structure is obtained by employing the weighted-residual (Galerkin) method. Experimental measurements with Al double-panel partitions having an air cavity are subsequently carried out to validate the theoretical model for both types of boundary condition, and good overall agreement is achieved. A consistency check of the two different models (based separately on clamped modal functions and simply supported modal functions) is performed by extending the panel dimensions to infinity, where no boundaries exist. The significant discrepancies between the two boundary conditions are demonstrated in terms of STL-versus-frequency plots as well as panel deflection mode shapes.
Maximal analytic extension and hidden symmetries of the dipole black ring
NASA Astrophysics Data System (ADS)
Armas, Jay
2011-12-01
We construct analytic extensions across the Killing horizons of non-extremal and extremal dipole black rings in Einstein-Maxwell theory using different methods. We show that these extensions are non-globally hyperbolic, have multiple asymptotically flat regions and, in the non-extremal case, are also maximal and timelike complete. Moreover, we find that in both cases the causal structure of the maximally extended spacetime resembles that of the four-dimensional Reissner-Nordström black hole. Furthermore, motivated by the physical interpretation of one of these extensions, we find a separable solution to the Hamilton-Jacobi equation corresponding to zero-energy null geodesics and relate it to the existence of a conformal Killing tensor and a conformal Killing-Yano tensor in a specific dimensionally reduced spacetime.
2016-01-01
Family Policy’s SECO program, which reviewed existing SECO metrics and data sources, as well as analytic methods of previous research, to determine ... process that requires an iterative cycle of assessment of collected data (typically, but not solely, quantitative data) to determine whether SECO ... RAND suggests five steps to develop and implement the SECO internal monitoring system: Step 1. Describe the logic or theory of how activities are
Mechanics of fiber reinforced materials
NASA Astrophysics Data System (ADS)
Sun, Huiyu
This dissertation is dedicated to the mechanics of fiber-reinforced materials and woven reinforcements and comprises four parts: analytical characterization of the interfaces in laminated composites; micromechanics of braided composites; shear deformation of woven fabric reinforcements; and Poisson's ratios of woven fabrics. A new approach to evaluating the mechanical characteristics of interfaces between composite laminae, based on a modified laminate theory, is proposed. By including an interface as a special lamina, termed the "bonding layer," in the analysis, the mechanical properties of the interfaces are obtained; a numerical illustration is given. For the micromechanical properties of three-dimensionally braided composite materials, a new method via homogenization theory and incompatible multivariable FEM is developed. Results from the hybrid stress element approach compare more favorably with the experimental data than those of other widely used numerical methods. To evaluate the shearing properties of woven fabrics, a new mechanical model is proposed for the initial slip region. Analytical results show that this model agrees better with experiments, for both the initial shear modulus and the slipping angle, than the existing models. Finally, another mechanical model, for a woven fabric made of extensible yarns, is employed to calculate the fabric Poisson's ratios. Theoretical results are compared with the available experimental data. A thorough examination of the influence of the various mechanical properties of yarns and structural parameters of fabrics on the Poisson's ratios of a woven fabric is given at the end.
Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem
NASA Astrophysics Data System (ADS)
Minesaki, Yukitaka
2018-04-01
We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.
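A hedged sketch of the basic second-order logH leapfrog, in the time-transformed drift-kick-drift form commonly attributed to Mikkola-Tanikawa and Preto-Tremaine, is given below for the two-body problem; the paper's higher-order multicompositions and N-body details are omitted, and the orbit parameters are arbitrary.

    # Drifts advance physical time by dt = ds/(T + B) and the kick by dt = ds/U,
    # where T is kinetic energy, U > 0 the potential-energy magnitude, and
    # B = -E the (constant) binding energy of the relative motion.
    import numpy as np

    mu = 1.0                                             # gravitational parameter
    q = np.array([1.0, 0.0])                             # relative position
    p = np.array([0.0, 1.2])                             # relative velocity (unit mass)
    E0 = 0.5 * p @ p - mu / np.linalg.norm(q)            # conserved energy (< 0: bound)
    B = -E0                                              # binding energy, held fixed

    def logh_step(q, p, ds):
        q = q + (0.5 * ds / (0.5 * p @ p + B)) * p       # half drift, dt = (ds/2)/(T+B)
        r = np.linalg.norm(q)
        p = p - ds * q / r**2                            # kick: dt = ds/U times -mu q/r^3
        q = q + (0.5 * ds / (0.5 * p @ p + B)) * p       # half drift
        return q, p

    for _ in range(5000):
        q, p = logh_step(q, p, ds=0.01)
    print(abs(0.5 * p @ p - mu / np.linalg.norm(q) - E0))  # energy error stays small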
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for quality-by-design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results, owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and pharmaceutical analytical technology (PAT). PMID:25722723
Analytical Method to Evaluate Failure Potential During High-Risk Component Development
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Clancy, Daniel (Technical Monitor)
2001-01-01
Communicating failure mode information during design and manufacturing is a crucial task for failure prevention. Most processes use Failure Modes and Effects types of analyses, as well as prior knowledge and experience, to determine the potential modes of failures a product might encounter during its lifetime. When new products are being considered and designed, this knowledge and information is expanded upon to help designers extrapolate based on their similarity with existing products and the potential design tradeoffs. This paper makes use of similarities and tradeoffs that exist between different failure modes based on the functionality of each component/product. In this light, a function-failure method is developed to help the design of new products with solutions for functions that eliminate or reduce the potential of a failure mode. The method is applied to a simplified rotating machinery example in this paper, and is proposed as a means to account for helicopter failure modes during design and production, addressing stringent safety and performance requirements for NASA applications.
Reevaluation of air surveillance station siting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, K.; Jannik, T.
2016-07-06
DOE Technical Standard HDBK-1216-2015 (DOE 2015) recommends evaluating air-monitoring station placement using the analytical method developed by Waite. The technique utilizes wind rose and population distribution data to determine a weighting factor for each directional sector surrounding a nuclear facility. Based on the available resources (number of stations) and a scaling factor, this weighting factor is used to determine the number of stations recommended for each sector considered. An assessment utilizing this method was performed in 2003 to evaluate the effectiveness of the existing SRS air-monitoring program, and the resulting recommended distribution of air-monitoring stations was then compared to that of the existing site perimeter surveillance program. The assessment demonstrated that the distribution of air-monitoring stations at the time generally agreed with the results obtained using the Waite method; however, new stations were established at the time in Barnwell and in Williston in order to meet requirements of DOE guidance document EH-0173T.
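A sketch of a Waite-style allocation consistent with the description above (the handbook's exact weighting formula may differ; all inputs are hypothetical):

    # Weight each of 16 compass sectors by wind-rose frequency times population,
    # then apportion the available stations in proportion to the weights.
    import numpy as np

    rng = np.random.default_rng(6)
    wind_freq = rng.dirichlet(np.ones(16))        # fraction of time wind blows into sector
    population = rng.integers(100, 20_000, 16)    # residents per sector

    weight = wind_freq * population
    weight = weight / weight.sum()

    n_stations = 12
    alloc = np.floor(weight * n_stations).astype(int)
    # hand out any remaining stations to the largest fractional remainders
    remainder = weight * n_stations - alloc
    for i in np.argsort(remainder)[::-1][: n_stations - alloc.sum()]:
        alloc[i] += 1
    print(alloc, alloc.sum())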
Minimum maximum temperature gradient coil design.
While, Peter T; Poole, Michael S; Forbes, Larry K; Crozier, Stuart
2013-08-01
Ohmic heating is a serious problem in gradient coil operation. A method is presented for redesigning cylindrical gradient coils to operate at minimum peak temperature, while maintaining field homogeneity and coil performance. To generate these minimaxT coil windings, an existing analytic method for simulating the spatial temperature distribution of single layer gradient coils is combined with a minimax optimization routine based on sequential quadratic programming. Simulations are provided for symmetric and asymmetric gradient coils that show considerable improvements in reducing maximum temperature over existing methods. The winding patterns of the minimaxT coils were found to be heavily dependent on the assumed thermal material properties and generally display an interesting "fish-eye" spreading of windings in the dense regions of the coil. Small prototype coils were constructed and tested for experimental validation and these demonstrate that with a reasonable estimate of material properties, thermal performance can be improved considerably with negligible change to the field error or standard figures of merit. © 2012 Wiley Periodicals, Inc.
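Minimax problems like this are typically fed to an SQP solver via the standard epigraph reformulation, minimize t subject to T_i(x) <= t plus the design constraints; a toy sketch follows (the temperature model and constraint are stand-ins, not the paper's thermal model or field-homogeneity constraints).

    import numpy as np
    from scipy.optimize import minimize

    def temps(x):
        # hypothetical hot-spot temperatures vs. two winding parameters
        return np.array([x[0] ** 2 + 1.0,
                         (x[0] - x[1]) ** 2 + 0.5,
                         x[1] ** 2 + 0.8])

    def objective(z):          # z = [x0, x1, t]: minimize the epigraph variable t
        return z[2]

    cons = [{"type": "ineq", "fun": lambda z, i=i: z[2] - temps(z[:2])[i]}
            for i in range(3)]                                   # t >= T_i(x)
    cons.append({"type": "eq", "fun": lambda z: z[0] + z[1] - 1.0})  # stand-in design constraint

    res = minimize(objective, x0=[0.5, 0.5, 5.0], method="SLSQP", constraints=cons)
    print(np.round(res.x, 3), round(res.fun, 3))   # minimized peak temperature t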
Bourget, P; Amin, A; Vidal, F; Merlette, C; Troude, P; Corriol, O
2013-09-01
In France, central IV admixture of chemotherapy (CT) treatments at the hospital is now required by law. We have previously shown that the preparation of Therapeutic Objects (TOs) can benefit from Analytical Quality Assurance (AQA) closely linked to batch release, covering three key parameters: identity, purity, and initial concentration of the compound of interest. In recent and diversified work, we showed the technical superiority of non-intrusive Raman spectroscopy (RS) over the other analytical options, in particular over both HPLC and a vibrational method using UV/visible-FTIR coupling. An interconnected qualitative and economic assessment helps to enrich this work. The present study compares, in an operational setting, the performance of three analytical methods used for the AQC of TOs. We used: a) a set of evaluation criteria; b) the depreciation tables of the machinery; c) the cost of disposables; d) the weight of equipment and technical installations; and e) the basic accounting unit (unit of work) and its composite costs (in Euros), which vary according to the technical options and the weight of both human resources and disposables; finally, different combinations are described. The unit of work can take 12 different values between 1 and 5.5 Euros, and we provide various recommendations. A qualitative evaluation grid consistently places RS technology as superior or equal to the two other techniques currently available. Our results demonstrated: a) the major interest of non-intrusive AQC performed by RS, especially when it is not possible to analyze a TO with existing methods, e.g., elastomeric portable pumps; and b) the high potential for this technique to be a strong contributor to the security of the medication circuit and to the fight against the iatrogenic effects of drugs, especially in the hospital. It also contributes to the protection of all actors in healthcare and of their working environment.
NASA Technical Reports Server (NTRS)
McManus, Hugh L.; Chamis, Christos C.
1996-01-01
This report describes analytical methods for calculating stresses and damage caused by degradation of the matrix constituent in polymer matrix composite materials. Laminate geometry, material properties, and matrix degradation states are specified as functions of position and time. Matrix shrinkage and property changes are modeled as functions of the degradation states. The model is incorporated into an existing composite mechanics computer code. Stresses, strains, and deformations at the laminate, ply, and micro levels are calculated, and from these calculations it is determined if there is failure of any kind. The rationale for the model (based on published experimental work) is presented, its integration into the laminate analysis code is outlined, and example results are given, with comparisons to existing material and structural data. The mechanisms behind the changes in properties and in surface cracking during long-term aging of polyimide matrix composites are clarified. High-temperature-material test methods are also evaluated.
Literature Review of the Extraction and Analysis of Trace Contaminants in Food
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Audrey Martin; Alcaraz, Armando
2010-06-15
There exists a serious concern that chemical warfare agents (CWA) may be used in a terrorist attack against military or civilian populations. While many precautions have been taken on the military front (e.g. protective clothing, gas masks), such precautions are not suited for widespread application to civilian populations. Thus, defense of the civilian population, also applicable to the military population, has focused on prevention and early detection. Early detection relies on accurate and sensitive analytical methods to detect and identify CWA in a variety of matrices. Once a CWA is detected, the analytical needs take on a forensic application: are there any chemical signatures present in the sample that could indicate its source? These signatures could include byproducts of the reaction, unreacted starting materials, degradation products, or impurities. Therefore, it is important that the analytical method used can accurately identify such signatures, as well as the CWA itself. Contained herein is a review of the open literature describing the detection of CWA in various matrices and the detection of trace toxic chemicals in food. Several relevant reviews have been published in the literature [1-5], including a review of analytical separation techniques for CWA by Hooijschuur et al. [1]. The current review is not meant to reiterate the published manuscripts; it focuses mainly on extraction procedures, as well as the detection of VX and its hydrolysis products, as VX is closely related to Russian VX, which is not prevalent in the literature.
Cretini, Kari F.; Visser, Jenneke M.; Krauss, Ken W.; Steyer, Gregory D.
2011-01-01
This document identifies the main objectives of the Coastwide Reference Monitoring System (CRMS) vegetation analytical team, which are to provide (1) collection and development methods for vegetation response variables and (2) the ways in which these response variables will be used to evaluate restoration project effectiveness. The vegetation parameters (that is, response variables) collected in CRMS and other coastal restoration projects funded under the Coastal Wetlands Planning, Protection and Restoration Act (CWPPRA) are identified, and the field collection methods for these parameters are summarized. Existing knowledge on community and plant responses to changes in environmental drivers (for example, flooding and salinity), drawn from published literature and from the CRMS and CWPPRA monitoring dataset, is used to develop a suite of indices to assess wetland condition in coastal Louisiana. Two indices, the floristic quality index (FQI) and a productivity index, are described for herbaceous and forested vegetation. The FQI for herbaceous vegetation is tested with a long-term dataset from a CWPPRA marsh creation project. Example graphics for this index are provided and discussed. The other indices, an FQI for forest vegetation (that is, trees and shrubs) and productivity indices for herbaceous and forest vegetation, are proposed but not tested. New response variables may be added or current response variables removed as data become available and as our understanding of restoration success indicators develops. Once the indices are fully developed, each will be used by the vegetation analytical team to assess and evaluate CRMS/CWPPRA project and program effectiveness. The vegetation analytical team plans to summarize its results in the form of written reports and/or graphics and present these items to CRMS Federal and State sponsors, restoration project managers, landowners, and other data users for their input.
NASA Astrophysics Data System (ADS)
Smith, Katharine A.; Schlag, Zachary; North, Elizabeth W.
2018-07-01
Coupled three-dimensional circulation and biogeochemical models predict changes in water properties that can be used to define fish habitat, including physiologically important parameters such as temperature, salinity, and dissolved oxygen. However, methods for calculating the volume of habitat defined by the intersection of multiple water properties are not well established for coupled three-dimensional models. The objectives of this research were to examine multiple methods for calculating habitat volume from three-dimensional model predictions, select the most robust approach, and provide an example application of the technique. Three methods were assessed: the "Step", "Ruled Surface", and "Pentahedron" methods, the latter of which was developed as part of this research. Results indicate that the analytical Pentahedron method is exact, computationally efficient, and preserves continuity in water properties between adjacent grid cells. As an example application, the Pentahedron method was implemented within the Habitat Volume Model (HabVol) using output from a circulation model with an Arakawa C-grid and physiological tolerances of juvenile striped bass (Morone saxatilis). This application demonstrates that the analytical Pentahedron method can be successfully applied to calculate habitat volume using output from coupled three-dimensional circulation and biogeochemical models, and it indicates that the Pentahedron method has wide application to aquatic and marine systems for which these models exist and physiological tolerances of organisms are known.
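As a point of reference for the methods compared above, the simplest of the three, the "Step" approach, amounts to summing the volumes of whole grid cells whose predicted properties all fall inside the organism's tolerance ranges. The sketch below illustrates only that idea; the array shapes, property fields, and tolerance values are invented for the example, and the paper's exact Pentahedron interpolation is not reproduced.

```python
import numpy as np

# Minimal sketch of the simple "Step" approach: count whole grid cells whose
# predicted water properties all fall inside an organism's tolerance ranges.
rng = np.random.default_rng(0)
shape = (20, 30, 10)                          # (x, y, z) grid cells
temp = rng.uniform(10, 30, shape)             # temperature, deg C
salt = rng.uniform(0, 35, shape)              # salinity
oxy = rng.uniform(0, 10, shape)               # dissolved oxygen, mg/L
cell_volume = rng.uniform(1e4, 5e4, shape)    # m^3 per grid cell

# Hypothetical tolerances for a juvenile fish
suitable = ((temp >= 18) & (temp <= 26) &
            (salt >= 5) & (salt <= 20) &
            (oxy >= 3))

print(f"Step-method habitat volume: {cell_volume[suitable].sum():.3e} m^3")
```

The Pentahedron method refines this by interpolating where the tolerance surfaces cut through each cell, which is what makes it exact rather than a staircase approximation.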
Optimization of the Determination Method for Dissolved Cyanobacterial Toxin BMAA in Natural Water.
Yan, Boyin; Liu, Zhiquan; Huang, Rui; Xu, Yongpeng; Liu, Dongmei; Lin, Tsair-Fuh; Cui, Fuyi
2017-10-17
There is a serious dispute about the existence of β-N-methylamino-L-alanine (BMAA) in water; BMAA is a neurotoxin that may cause amyotrophic lateral sclerosis/parkinsonism-dementia complex (ALS/PDC) and Alzheimer's disease. A reliable and sensitive analytical method for the determination of BMAA is therefore urgently required to resolve this dispute. In the present study, the solid phase extraction (SPE) procedure and the analytical method for dissolved BMAA in water were investigated and optimized. The results showed that both derivatized and underivatized methods were suitable for the measurement of BMAA and its isomer in natural water, and that the limit of detection and the precision of the two methods were comparable. Cartridge characteristics and SPE conditions can greatly affect SPE performance, and competition from natural organic matter is the primary factor causing the low recovery of BMAA, which was reduced from approximately 90% in pure water to 38.11% in natural water. The optimized SPE method for BMAA combined rinsed SPE cartridges, controlled loading/elution rates and elution solution, evaporation at 55 °C, reconstitution in a solution mixture, and filtration through a polyvinylidene fluoride membrane. This optimized method achieved > 88% recovery of BMAA in both algal solution and river water. The developed method provides an efficient way to evaluate the actual concentration levels of BMAA in real water environments and drinking water systems.
Luebker, Stephen A; Wojtkiewicz, Melinda; Koepsell, Scott A
2015-11-01
Formalin-fixed paraffin-embedded (FFPE) tissue is a rich source of clinically relevant material that can yield important translational biomarker discovery using proteomic analysis. Protocols for analyzing FFPE tissue by LC-MS/MS exist, but standardization of procedures and critical analysis of data quality is limited. This study compared and characterized data obtained from FFPE tissue using two methods: a urea in-solution digestion method (UISD) versus a commercially available Qproteome FFPE Tissue Kit method (Qkit). Each method was performed independently three times on serial sections of homogenous FFPE tissue to minimize pre-analytical variations and analyzed with three technical replicates by LC-MS/MS. Data were evaluated for reproducibility and physiochemical distribution, which highlighted differences in the ability of each method to identify proteins of different molecular weights and isoelectric points. Each method replicate resulted in a significant number of new protein identifications, and both methods identified significantly more proteins using three technical replicates as compared to only two. UISD was cheaper, required less time, and introduced significant protein modifications as compared to the Qkit method, which provided more precise and higher protein yields. These data highlight significant variability among method replicates and type of method used, despite minimizing pre-analytical variability. Utilization of only one method or too few replicates (both method and technical) may limit the subset of proteomic information obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Visualizing dispersive features in 2D image via minimum gradient method
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Yu; Wang, Yan; Shen, Zhi-Xun
2017-07-24
Here, we developed a minimum gradient based method to track ridge features in a 2D image plot, which is a typical data representation in many momentum resolved spectroscopy experiments. Through both analytic formulation and numerical simulation, we compare this new method with existing DC (distribution curve) based and higher order derivative based analyses. We find that the new method has good noise resilience and enhanced contrast, especially for weak intensity features, and meanwhile preserves the quantitative local maxima information from the raw image. An algorithm is proposed to extract 1D ridge dispersion from the 2D image plot, whose quantitative application to angle-resolved photoemission spectroscopy measurements on high temperature superconductors is demonstrated.
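The core idea lends itself to a compact illustration: along a ridge in a 2D intensity map the gradient magnitude is locally minimal, so an inverse-gradient map enhances ridge contrast. The sketch below is a minimal reading of that idea in NumPy, not the authors' published algorithm; the synthetic "band" image and the naive column-wise extraction are simplifications.

```python
import numpy as np

def minimum_gradient_map(image, eps=1e-6):
    """Ridge enhancer based on the minimum-gradient idea: along a ridge the
    gradient magnitude is locally minimal, so 1/|grad I| boosts ridge
    contrast. A bare illustration, not the published algorithm."""
    gy, gx = np.gradient(image.astype(float))
    return 1.0 / (np.hypot(gx, gy) + eps)

# Synthetic "band": a parabolic ridge in an (energy, momentum) map plus noise
k = np.linspace(-1.0, 1.0, 200)
e = np.linspace(-1.0, 0.2, 150)
K, E = np.meshgrid(k, e)
image = np.exp(-((E - (0.5 * K**2 - 0.8)) / 0.05) ** 2)
image += 0.05 * np.random.default_rng(1).standard_normal(image.shape)

ridge_map = minimum_gradient_map(image)
band_rows = ridge_map.argmax(axis=0)   # naive per-column ridge extraction
print("extracted dispersion (first 5 momenta):", e[band_rows[:5]])
```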
Data-Driven Astrochemistry: One Step Further within the Origin of Life Puzzle.
Ruf, Alexander; d'Hendecourt, Louis L S; Schmitt-Kopplin, Philippe
2018-06-01
Astrochemistry, meteoritics, and chemical analytics together form a manifold scientific field encompassing various disciplines. In this review, clarifications on astrochemistry, comet chemistry, laboratory astrophysics, and meteoritic research with respect to organic and metalorganic chemistry are given. The seemingly large number of observed astrochemical molecules calls for explanations of molecular complexity and chemical evolution, which are discussed. Special emphasis is placed on data-driven analytical methods, including ultrahigh-resolving instruments, and on their interplay with quantum chemical computations. These methods enable remarkable insights into the complex chemical spaces that exist in meteorites and maximize the level of information on the huge astrochemical molecular diversity. In addition, they allow one to study even as-yet-undescribed chemistry, such as that involving organomagnesium compounds in meteorites. Both targeted and non-targeted analytical strategies are explained and may touch upon epistemological problems. Finally, the implications of (metal)organic matter for prebiotic chemistry leading to the emergence of life are discussed. The precise description of astrochemical organic and metalorganic matter as seeds for life, and of their interactions within various astrophysical environments, appears essential for further study of questions regarding the emergence of life at the most fundamental level, that of the molecular world and its self-organization properties.
NASA Astrophysics Data System (ADS)
Larabi, Mohamed Aziz; Mutschler, Dimitri; Mojtabi, Abdelkader
2016-06-01
Our present work focuses on the coupling between thermal diffusion and convection in order to improve the thermogravitational separation of mixture components. The separation phenomenon was studied in a porous medium contained in vertical columns. We performed analytical and numerical simulations to corroborate the experimental measurements of the thermal diffusion coefficients of the ternary mixture n-dodecane, isobutylbenzene, and tetralin obtained in microgravity on the International Space Station. Our approach corroborates the existing data published in the literature. We show that it is possible to quantify and to optimize the species separation for ternary mixtures, and we checked, for ternary mixtures, the validity of the "forgotten effect" hypothesis established for binary mixtures by Furry, Jones, and Onsager. Two complete and different analytical resolution methods were used to describe the separation in terms of the Lewis numbers, the separation ratios, the cross-diffusion coefficients, and the Rayleigh number. The analytical model is based on the parallel flow approximation. To validate this model, a numerical simulation was performed using the finite element method. From our new approach to vertical separation columns, new relations for the mass fraction gradients and the optimal Rayleigh number for each component of the ternary mixture were obtained.
A systematic investigation of sample diluents in modern supercritical fluid chromatography.
Desfontaine, Vincent; Tarafder, Abhijit; Hill, Jason; Fairchild, Jacob; Grand-Guillaume Perrenoud, Alexandre; Veuthey, Jean-Luc; Guillarme, Davy
2017-08-18
This paper focuses on the possibility of injecting large volumes (up to 10 μL) in ultra-high performance supercritical fluid chromatography (UHPSFC) under generic gradient conditions. Several injection and method parameters were individually evaluated (i.e. analyte concentration, injection volume, initial percentage of co-solvent in the gradient, nature of the weak needle-wash solvent, nature of the sample diluent, nature of the column, and nature of the analyte). The most critical parameters were further investigated using a multivariate approach. The overall results suggested that several aprotic solvents, including methyl tert-butyl ether (MTBE), dichloromethane, acetonitrile, and cyclopentyl methyl ether (CPME), were well adapted for the injection of large volumes in UHPSFC, while MeOH was generally the worst alternative. However, the nature of the stationary phase also had a strong impact, and some of these diluents did not perform equally on each column. This was due to competition between the analyte and the diluent for adsorption on the stationary phase. This observation introduced the idea that the sample diluent should be chosen not only according to the analyte but also according to the column chemistry, to limit interactions between the diluent and the ligands. Other important characteristics of the "ideal" SFC sample diluent were also highlighted: aprotic solvents are preferable to avoid strong solvent effects, and low-viscosity solvents to avoid viscous fingering. In the end, the authors suggest that the choice of the sample diluent should be part of method development, as a function of the analyte and the selected stationary phase. Copyright © 2017 Elsevier B.V. All rights reserved.
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
7 CFR § 94.303 (Poultry and Egg Products; Processed Poultry Products): Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
7 CFR § 94.303 (Poultry and Egg Products; Processed Poultry Products): Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
7 CFR § 94.303 (Poultry and Egg Products; Processed Poultry Products): Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
7 CFR § 94.303 (Poultry and Egg Products; Processed Poultry Products): Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
7 CFR § 94.303 (Poultry and Egg Products; Processed Poultry Products): Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
SAM Radiochemical Methods Query
Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.
Zastepa, Arthur; Pick, Frances R; Blais, Jules M; Saleem, Ammar
2015-05-04
The fate and persistence of microcystin cyanotoxins in aquatic ecosystems remain poorly understood, in part due to the lack of analytical methods for microcystins in sediments. Existing methods have been limited to the extraction of a few extracellular microcystins of similar chemistry. We developed a single analytical method, consisting of accelerated solvent extraction, hydrophilic-lipophilic balance solid phase extraction, and reversed phase high performance liquid chromatography-tandem mass spectrometry, suitable for the extraction and quantitation of both intracellular and extracellular cyanotoxins in sediments as well as pore waters. Recoveries of nine microcystins, representing the chemical diversity of microcystins, and of nodularin (a marine analogue) ranged between 75 and 98%, with one, microcystin-RR (MC-RR), at 50%. Chromatographic separation of these analytes was achieved within 7.5 min, and the method detection limits were between 1.1 and 2.5 ng g⁻¹ dry weight (dw). The robustness of the method was demonstrated on sediment cores collected from seven Canadian lakes of diverse geography and trophic states. Individual microcystin variants reached a maximum concentration of 829 ng g⁻¹ dw on sediment particles and 132 ng mL⁻¹ in pore waters, and could be detected in sediments as deep as 41 cm (>100 years in age). MC-LR, -RR, and -LA were more often detected, while MC-YR, -LY, -LF, and -LW were less common. The analytical method enabled us to estimate sediment-pore water distribution coefficients (Kd): MC-RR had the highest affinity for sediment particles (log Kd = 1.3), while MC-LA had the lowest affinity (log Kd = -0.4), partitioning mainly into pore waters. Our findings confirm that sediments serve as a reservoir for microcystins but suggest that some variants may diffuse into overlying water, thereby constituting a new route of exposure following the dissipation of toxic blooms. The method is well suited to determine the fate and persistence of different microcystins in aquatic systems. Copyright © 2015 Elsevier B.V. All rights reserved.
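For readers unfamiliar with the distribution coefficient quoted above, Kd is simply the ratio of the particle-bound concentration to the pore-water concentration, so its base-10 logarithm separates particle-bound variants (positive) from pore-water-dominated ones (negative). A minimal sketch, with concentrations invented only to land near the reported log Kd values of about 1.3 and -0.4:

```python
import numpy as np

# Kd = C_sediment / C_porewater; with ng g^-1 over ng mL^-1 the units are mL g^-1.
# The concentrations below are hypothetical, chosen to reproduce the paper's
# reported log Kd of ~1.3 (MC-RR) and ~-0.4 (MC-LA).
c_sediment = np.array([829.0, 25.0])    # ng g^-1 dw: MC-RR, MC-LA (illustrative)
c_porewater = np.array([40.0, 63.0])    # ng mL^-1 (illustrative)

log_kd = np.log10(c_sediment / c_porewater)
print(log_kd)   # ~[ 1.3, -0.4]: positive -> particle-bound, negative -> pore water
```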
Student Writing Accepted as High-Quality Responses to Analytic Text-Based Writing Tasks
ERIC Educational Resources Information Center
Wang, Elaine; Matsumura, Lindsay Clare; Correnti, Richard
2018-01-01
Literacy standards increasingly emphasize the importance of analytic text-based writing. Little consensus exists, however, around what high-quality student responses should look like in this genre. In this study, we investigated fifth-grade students' writing in response to analytic text-based writing tasks (15 teachers, 44 writing tasks, 88 pieces…
University of Missouri-St. Louis: Data-Driven Online Course Design and Effective Practices
ERIC Educational Resources Information Center
Grant, Mary Rose
2012-01-01
Analytics has a significant place in the future of higher education by guiding reform and system change. As this case study has shown, analytics can do more than evaluate what students have done and predict what they will do. Learning analytics can be transformative, altering existing pedagogical processes, research, data management, and…
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
7 CFR § 98.4: Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
7 CFR § 93.4 (Processed Fruits and Vegetables; Citrus Juices and Certain Citrus Products): Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
7 CFR § 98.4: Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
7 CFR § 93.4 (Processed Fruits and Vegetables; Citrus Juices and Certain Citrus Products): Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
7 CFR § 93.4 (Processed Fruits and Vegetables; Citrus Juices and Certain Citrus Products): Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
7 CFR § 98.4: Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
7 CFR § 98.4: Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
7 CFR § 98.4: Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
7 CFR § 93.4 (Processed Fruits and Vegetables; Citrus Juices and Certain Citrus Products): Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
7 CFR § 93.4 (Processed Fruits and Vegetables; Citrus Juices and Certain Citrus Products): Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
Kleeberg, K K; Liu, Y; Jans, M; Schlegelmilch, M; Streese, J; Stegmann, R
2005-01-01
A solid-phase microextraction (SPME) method has been developed for the extraction of odorous compounds from waste gas. The enriched compounds were characterized by gas chromatography-mass spectrometry (GC-MS) and by gas chromatography followed by simultaneous flame ionization detection and olfactometry (GC-FID/O). Five different SPME fiber coatings were tested, and the carboxen/polydimethylsiloxane (CAR/PDMS) fiber showed the highest ability to extract odorous compounds from the waste gas. Furthermore, parameters such as exposure time, desorption temperature, and desorption time were optimized. The SPME method was successfully used to characterize an odorous waste gas from a fat refinery before and after waste gas treatment, in order to describe the treatment efficiency of the laboratory-scale plant used, which consisted of a bioscrubber/biofilter combination and an activated carbon adsorber. The developed method is a valuable approach for providing detailed information on waste gas composition and complements existing methods for the determination of odors. However, caution should be exercised if CAR/PDMS fibers are used for the quantification of odorous compounds in multi-component matrices like waste gas emissions, since the relative affinity of each analyte was shown to differ according to the total amount of analytes present in the sample.
Prabhu, Gurpur Rakesh D; Witek, Henryk A; Urban, Pawel L
2018-05-31
Most analytical methods are based on "analogue" inputs from sensors of light, electric potentials, or currents. The signals obtained by such sensors are processed using certain calibration functions to determine concentrations of the target analytes. The signal readouts are normally taken after an optimised and fixed time period, during which an assay mixture is incubated. This minireview covers another, somewhat unusual, analytical strategy, which relies on measuring the time interval between the occurrences of two distinguishable states in the assay reaction. These states manifest themselves via abrupt changes in the properties of the assay mixture (e.g. a change of colour, the appearance or disappearance of luminescence, a change in pH, or variations in optical activity or mechanical properties). In some cases, a correlation between the time of appearance/disappearance of a given property and the analyte concentration can also be observed. An example of an assay based on time measurement is an oscillating reaction in which the period of oscillations is linked to the concentration of the target analyte. A number of chemo-chronometric assays, relying on existing (bio)transformations or artificially designed reactions, have been disclosed in the past few years. They are very attractive from the fundamental point of view, but so far only a few of them have been validated and used to address real-world problems. Can chemo-chronometric assays, then, become a practical tool for chemical analysis? Is there a need for further development of such assays? We aim to answer these questions.
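To make the time-readout idea concrete, suppose (purely hypothetically) that the time t to a visible colour switch varies inversely with analyte concentration c, t = a + b/c. Calibration then reduces to a linear fit of t against 1/c, after which an unknown's concentration is read off from its measured switch time. The functional form and every number below are invented for illustration:

```python
import numpy as np

# Toy chemo-chronometric calibration under the assumed model t = a + b / c.
c_std = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # mM, standard concentrations
t_std = np.array([93.0, 48.0, 25.5, 14.3, 8.6])  # s, measured switch times
b, a = np.polyfit(1.0 / c_std, t_std, 1)         # fit t = a + b * (1/c)

t_unknown = 25.0                                 # s, unknown sample's switch time
print(f"estimated concentration: {b / (t_unknown - a):.2f} mM")
```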
Single-analyte to multianalyte fluorescence sensors
NASA Astrophysics Data System (ADS)
Lavigne, John J.; Metzger, Axel; Niikura, Kenichi; Cabell, Larry A.; Savoy, Steven M.; Yoo, J. S.; McDevitt, John T.; Neikirk, Dean P.; Shear, Jason B.; Anslyn, Eric V.
1999-05-01
The rational design of small molecules for the selective complexation of analytes has reached a level of sophistication such that a high degree of predictability exists. An effective strategy for transforming these hosts into sensors involves covalently attaching a fluorophore to the receptor, which displays some fluorescence modulation when analyte is bound. Competition methods, such as those used with antibodies, are also amenable to these synthetic receptors, yet there are few examples. In our laboratories, the use of common dyes in competition assays with small molecules has proven very effective. For example, an assay for citrate in beverages and an assay for the secondary messenger IP3 in cells have been developed. Another approach we have explored focuses on multi-analyte sensor arrays, in an attempt to mimic the mammalian sense of taste. Our system utilizes polymer resin beads with the desired sensors covalently attached. These functionalized microspheres are then immobilized in micromachined wells on a silicon chip, thereby creating our "taste buds". Exposure of the resin to analyte causes a change in the transmittance of the bead, which can be fluorescent or colorimetric. Optical interrogation of the microspheres, by illuminating from one side of the wafer and collecting the signal on the other, results in an image. These data streams are collected using a CCD camera, which yields red, green, and blue (RGB) patterns that are distinct and reproducible for their environments. Analysis of these data can identify and quantify the analytes present.
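The pattern-recognition step described at the end can be pictured as nearest-reference matching in RGB space. The sketch below is a deliberately simple stand-in for whatever analysis the authors used; the reference vectors and the measured response are invented:

```python
import numpy as np

# Sketch: identify an analyte from a bead's RGB response by nearest reference
# pattern. Reference vectors and the measured response are hypothetical.
references = {
    "citrate": np.array([0.82, 0.35, 0.10]),
    "IP3":     np.array([0.20, 0.65, 0.70]),
    "blank":   np.array([0.50, 0.50, 0.50]),
}
measured = np.array([0.78, 0.38, 0.14])   # RGB response of one microsphere

best_match = min(references,
                 key=lambda name: np.linalg.norm(references[name] - measured))
print(best_match)   # -> citrate
```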
Detection of molecular particles in live cells via machine learning.
Jiang, Shan; Zhou, Xiaobo; Kirchhausen, Tom; Wong, Stephen T C
2007-08-01
Clathrin-coated pits play an important role in removing proteins and lipids from the plasma membrane and transporting them to the endosomal compartment. It is, however, still unclear whether "hot spots" exist for the formation of clathrin-coated pits or whether the pits and arrays form randomly on the plasma membrane. To answer this question, many hundreds of individual pits must first be detected accurately and separated in live-cell microscope movies, to capture and monitor how pits and vesicles are formed. Because of the noisy background and the low contrast of the live-cell movies, existing image analysis methods, such as single thresholding, edge detection, and morphological operations, cannot be used. This paper therefore proposes a machine learning method, based on Haar features, to detect particle positions. Results show that this method can successfully detect most of the particles in the image. In order to obtain accurate boundaries for these particles, several post-processing methods are applied, and signal-to-noise ratio analysis is performed to rule out weak spots. Copyright 2007 International Society for Analytical Cytology.
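Haar-like features of the kind the paper builds on are rectangle-sum contrasts computed cheaply from an integral image; a bright diffraction-limited spot scores highly on a centre-minus-surround feature. The sketch below shows only that primitive, not the paper's trained detector; the window sizes and the synthetic image are arbitrary:

```python
import numpy as np

def integral_image(img):
    # Zero-padded cumulative sums: any rectangle sum then costs four lookups.
    ii = np.cumsum(np.cumsum(img.astype(float), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] from the padded integral image.
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def center_surround(img, r, c, inner=2, outer=4):
    """Haar-like feature for a bright spot: mean of an inner box minus mean
    of the surrounding ring. Large positive values suggest a particle."""
    ii = integral_image(img)
    s_in = rect_sum(ii, r - inner, c - inner, r + inner, c + inner)
    s_out = rect_sum(ii, r - outer, c - outer, r + outer, c + outer)
    a_in = (2 * inner) ** 2
    a_ring = (2 * outer) ** 2 - a_in
    return s_in / a_in - (s_out - s_in) / a_ring

img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0                  # synthetic diffraction-limited spot
print(center_surround(img, 16, 16))      # strongly positive at the spot
print(center_surround(img, 5, 5))        # ~0 on empty background
```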
Summary of AH-1G flight vibration data for validation of coupled rotor-fuselage analyses
NASA Technical Reports Server (NTRS)
Dompka, R. V.; Cronkhite, J. D.
1986-01-01
Under a NASA research program designated DAMVIBS (Design Analysis Methods for VIBrationS), four U. S. helicopter industry participants (Bell Helicopter, Boeing Vertol, McDonnell Douglas Helicopter, and Sikorsky Aircraft) are to apply existing analytical methods for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. Bell Helicopter, as the manufacturer of the AH-1G, was asked to provide pertinent rotor data and to collect the OLS flight vibration data needed to perform the correlations. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM) developed by Bell which has been extensively documented and correlated with ground vibration tests. The AH-1G FEM was provided to each of the participants for use in their coupled rotor-fuselage analyses. This report describes the AH-1G OLS flight test program and provides the flight conditions and measured vibration data to be used by each participant in their correlation effort. In addition, the mechanical, structural, inertial, and aerodynamic data for the AH-1G two-bladed teetering main rotor system are presented. Furthermore, modifications to the NASTRAN FEM of the fuselage structure that are necessary to make it compatible with the OLS test article are described. The AH-1G OLS flight test data were found to be well documented and to provide a sound basis for evaluating currently existing analysis methods used for the calculation of coupled rotor-fuselage vibrations.
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
40 CFR § 161.180 (Data Requirements for Registration of Antimicrobial Pesticides; Product Chemistry Data Requirements): Enforcement analytical method. An analytical method suitable for enforcement purposes must be...
Marshall, Deborah A; Douglas, Patrick R; Drummond, Michael F; Torrance, George W; Macleod, Stuart; Manti, Orlando; Cheruvu, Lokanadha; Corvari, Ron
2008-01-01
Until now, there has been no standardized method of performing and presenting budget impact analyses (BIAs) in Canada. Nevertheless, most drug plan managers have been requiring this economic data to inform drug reimbursement decisions. This paper describes the process used to develop the Canadian BIA Guidelines; describes the Guidelines themselves, including the model template; and compares this guidance with other guidance on BIAs. The intended audience includes those who develop, submit or use BIA models, and drug plan managers who evaluate BIA submissions. The Patented Medicine Prices Review Board (PMPRB) initiated the development of the Canadian BIA Guidelines on behalf of the National Prescription Drug Utilisation Information System (NPDUIS). The findings and recommendations from a needs assessment with respect to BIA submissions were reviewed to inform guideline development. In addition, a literature review was performed to identify existing BIA guidance. The detailed guidance was developed on this basis, and with the input of the NPDUIS Advisory Committee, including drug plan managers from multiple provinces in Canada and a representative from the Canadian Agency for Drugs and Technologies in Health. A Microsoft Excel-based interactive model template was designed to support BIA model development. Input regarding the guidelines and model template was sought from each NPDUIS Advisory Committee member to ensure compatibility with existing drug plan needs. Decisions were made by consensus through multiple rounds of review and discussion. Finally, BIA guidance in Canadian provinces and other countries were compared on the basis of multiple criteria. The BIA guidelines consist of three major sections: Analytic Framework, Inputs and Data Sources, and Reporting Format. The Analytic Framework section contains a discussion of nine general issues surrounding BIAs (model design, analytic perspective, time horizon, target population, costing, scenarios to be compared, the characterisation of uncertainty, discounting, and validation methods). The Inputs and Data Sources section addresses methods for market size estimation, comparator selection, scenario forecasting and drug price estimation. The Reporting Format section describes methods for BIA reporting. The new Canadian BIA Guidelines represent a significant departure from the limited guidance that was previously available from some of the provinces, because they include specific details of the methods of performing BIAs. The Canadian BIA Guidelines differ from the Principles of Good Research Practice for BIAs developed by the International Society for Pharmacoeconomic and Outcomes Research (ISPOR), which provide more general guidance. The Canadian BIA Guidelines and template build upon existing guidance to address the specific requirements of each of the participating drug plans in Canada. Both have been endorsed by the NPDUIS Steering Committee and the PMPRB for the standardization of BIA submissions.
An Overview of Learning Analytics
ERIC Educational Resources Information Center
Zilvinskis, John; Willis, James, III; Borden, Victor M. H.
2017-01-01
The purpose of this chapter is to provide administrators and faculty with an understanding of learning analytics and its relationship to existing roles and functions so better institutional decisions can be made about investments and activities related to these technologies.
Ferranti, Jeffrey M; Langman, Matthew K; Tanaka, David; McCall, Jonathan; Ahmad, Asif
2010-01-01
Healthcare is increasingly dependent upon information technology (IT), but the accumulation of data has outpaced our capacity to use it to improve operating efficiency, clinical quality, and financial effectiveness. Moreover, hospitals have lagged in adopting thoughtful analytic approaches that would allow operational leaders and providers to capitalize upon existing data stores. In this manuscript, we propose a fundamental re-evaluation of strategic IT investments in healthcare, with the goal of increasing efficiency, reducing costs, and improving outcomes through the targeted application of health analytics. We also present three case studies that illustrate the use of health analytics to leverage pre-existing data resources to support improvements in patient safety and quality of care, to increase the accuracy of billing and collection, and support emerging health issues. We believe that such active investment in health analytics will prove essential to realizing the full promise of investments in electronic clinical systems.
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
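As a concrete picture of the two modelling strategies compared above, the two-stage route collapses each cluster to its mean and then analyses those means between arms. The sketch below simulates an unbalanced two-arm cluster-randomized design under the null and applies an unweighted two-sample t-test at stage two; the paper's preferred variant additionally weights the cluster means by the inverse of their estimated theoretical variance, which is omitted here for brevity. All parameters are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def cluster_means(sizes, effect=0.0, cluster_sd=0.5, resid_sd=1.0):
    # Each cluster shares a random effect, inducing intracluster correlation.
    return np.array([
        (effect + rng.normal(0, cluster_sd) + rng.normal(0, resid_sd, int(m))).mean()
        for m in sizes
    ])

sizes_a = rng.integers(5, 30, size=8)    # unbalanced cluster sizes, arm A
sizes_b = rng.integers(5, 30, size=8)    # arm B
t, p = stats.ttest_ind(cluster_means(sizes_a), cluster_means(sizes_b))
print(f"two-stage t = {t:.2f}, p = {p:.3f}")
```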
Xiang, Dongshan; Li, Fengquan; Wu, Chenyi; Shi, Boan; Zhai, Kun
2017-11-01
We designed two double-quenching molecular beacons (MBs) with a simple structure based on guanine (the G base) and Black Hole Quenchers (BHQs), and developed a new analytical method for the sensitive simultaneous detection of two DNAs by synchronous fluorescence analysis. In this method, carboxyfluorescein (FAM) and tetramethyl-6-carboxyrhodamine (TAMRA) were selected as the fluorophores of the two MBs, Black Hole Quencher 1 (BHQ-1) and Black Hole Quencher 2 (BHQ-2) were selected as the organic quenchers, and three continuous G-base nucleotides were connected to each organic quencher. In the presence of the target DNAs, the two MBs hybridize with the corresponding targets, the fluorophores are separated from the organic quenchers and G bases, and the fluorescence of FAM and TAMRA is recovered. Under certain conditions, the fluorescence intensities of FAM and TAMRA both exhibited good linear dependence on the concentrations of their target DNAs (T1 and T2) in the range from 4 × 10⁻¹⁰ to 4 × 10⁻⁸ mol L⁻¹ (M). The detection limit (3σ, n = 13) was 3 × 10⁻¹⁰ M for T1 and 2 × 10⁻¹⁰ M for T2. Compared with existing methods for multiplex DNA analysis with MBs, the proposed method based on double-quenching MBs offers not only a low fluorescence background, short analysis time, and low detection cost, but also easy synthesis and good stability of the MB probes. Copyright © 2017 Elsevier B.V. All rights reserved.
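The 3σ detection limit quoted above follows directly from the calibration slope: LOD = 3 × (standard deviation of blank replicates) / slope. A minimal sketch with invented calibration points and thirteen invented blank readings (echoing the n = 13 in the abstract):

```python
import numpy as np

# Linear calibration of fluorescence recovery vs. target concentration, with
# LOD = 3 * sigma_blank / slope. All readings are invented; the concentration
# range mirrors the abstract (4e-10 to 4e-8 M).
conc = np.array([4e-10, 1e-9, 4e-9, 1e-8, 4e-8])        # M
signal = np.array([12.0, 28.0, 105.0, 260.0, 1040.0])   # a.u.
slope, intercept = np.polyfit(conc, signal, 1)

blanks = np.array([2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 2.1,
                   1.8, 2.5, 2.2, 2.0, 2.1, 2.3])        # 13 blank readings
lod = 3 * blanks.std(ddof=1) / slope
print(f"estimated LOD: {lod:.1e} M")
```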
Collective Phase in Resource Competition in a Highly Diverse Ecosystem.
Tikhonov, Mikhail; Monasson, Remi
2017-01-27
Organisms shape their own environment, which in turn affects their survival. This feedback becomes especially important for communities containing a large number of species; however, few existing approaches allow studying this regime, except in simulations. Here, we use methods of statistical physics to analytically solve a classic ecological model of resource competition introduced by MacArthur in 1969. We show that the nonintuitive phenomenology of highly diverse ecosystems includes a phase where the environment constructed by the community becomes fully decoupled from the outside world.
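MacArthur's resource-competition model, which the paper solves analytically in the high-diversity limit, is easy to explore numerically at small size. The sketch below integrates one common form of the consumer-resource equations with forward Euler; the particular parameter choices, the logistic resource supply, and the mortality rule are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

# Forward-Euler integration of one common form of MacArthur's model:
#   dn_i/dt = n_i * (sum_a c_ia * R_a - m_i)          (consumers)
#   dR_a/dt = R_a * (K_a - R_a) - sum_i n_i c_ia R_a  (self-renewing resources)
rng = np.random.default_rng(3)
S, A = 6, 4                        # number of species, resources
c = rng.uniform(0, 1, (S, A))      # consumption preferences
m = 0.5 * c.sum(axis=1)            # maintenance costs (hypothetical rule)
K = rng.uniform(1, 2, A)           # resource carrying capacities
n, R = np.full(S, 0.1), K.copy()

dt = 1e-3
for _ in range(200_000):
    n = np.clip(n + dt * n * (c @ R - m), 0.0, None)
    R = np.clip(R + dt * (R * (K - R) - (n @ c) * R), 0.0, None)

print("surviving species:", int(np.sum(n > 1e-6)), "of", S)
```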
Thermal acoustic oscillations, volume 2. [cryogenic fluid storage
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Sims, W. H.; Fan, C.
1975-01-01
A number of thermal acoustic oscillation phenomena and their effects on cryogenic systems were studied. The conditions which cause or suppress oscillations, the frequency, amplitude and intensity of oscillations when they exist, and the heat loss they induce are discussed. Methods of numerical analysis utilizing the digital computer were developed for use in cryogenic systems design. In addition, an experimental verification program was conducted to study oscillation wave characteristics and boiloff rate. The data were then reduced and compared with the analytical predictions.
Integrable Time-Dependent Quantum Hamiltonians
NASA Astrophysics Data System (ADS)
Sinitsyn, Nikolai A.; Yuzbashyan, Emil A.; Chernyak, Vladimir Y.; Patra, Aniket; Sun, Chen
2018-05-01
We formulate a set of conditions under which the nonstationary Schrödinger equation with a time-dependent Hamiltonian is exactly solvable analytically. The main requirement is the existence of a non-Abelian gauge field with zero curvature in the space of system parameters. Known solvable multistate Landau-Zener models satisfy these conditions. Our method provides a strategy to incorporate time dependence into various quantum integrable models while maintaining their integrability. We also validate some prior conjectures, including the solution of the driven generalized Tavis-Cummings model.
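The "zero curvature" requirement named in the abstract is, in generic gauge-theory notation, the statement that the field strength of a non-Abelian connection A over the parameter space vanishes; the identification of that connection with the family of Hamiltonians is the paper's construction and is not reproduced here. Schematically:

```latex
% Generic flatness (zero-curvature) condition for a non-Abelian
% connection A_mu over the space of system parameters:
\begin{equation*}
  F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu] = 0 .
\end{equation*}
```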
Study on application of aerospace technology to improve surgical implants
NASA Technical Reports Server (NTRS)
Johnson, R. E.; Youngblood, J. L.
1982-01-01
The areas where aerospace technology could be used to improve the reliability and performance of metallic orthopedic implants were assessed. Specifically, the material controls, design approaches, analytical methods, and inspection approaches used in the implant industry were compared with those used for aerospace hardware. Several areas for possible improvement were noted, such as increased use of finite element stress analysis and of fracture control programs on devices where the need exists for maximum reliability and high structural performance.
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2014 CFR
2014-07-01
40 CFR § 158.355 (Data Requirements for Pesticides; Product Chemistry): Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...
NASA Astrophysics Data System (ADS)
Lovrić, Milivoj
Electrochemical stripping means the oxidative or reductive removal of atoms, ions, or compounds from an electrode surface (or from the electrode body, as in the case of liquid mercury electrodes with dissolved metals) [1-5]. In general, these atoms, ions, or compounds have been preliminarily immobilized on the surface of an inert electrode (or within it) as the result of a preconcentration step, while the products of the electrochemical stripping dissolve in the electrolytic solution. Often the product of the electrochemical stripping is identical to the analyte before the preconcentration; however, there are exceptions to these rules. Electroanalytical stripping methods comprise two steps: first, the accumulation of a dissolved analyte onto, or in, the working electrode, and, second, the subsequent stripping of the accumulated substance by a voltammetric [3, 5], potentiometric [6, 7], or coulometric [8] technique. In stripping voltammetry, the condition is that there are two independent linear relationships: the first between the activity of the accumulated substance and the concentration of the analyte in the sample, and the second between the maximum stripping current and the activity of the accumulated substance. Hence, a cumulative linear relationship between the maximum response and the analyte concentration exists. However, the electrode's capacity for analyte accumulation is limited, and the condition of linearity is satisfied only well below electrode saturation. For this reason, stripping voltammetry is used mainly in trace analysis. The limit of detection depends on the factor of proportionality between the activity of the accumulated substance and the bulk concentration of the analyte. This factor is a constant in the case of chemical accumulation, but for electrochemical accumulation it depends on the electrode potential. The factor of proportionality between the maximum stripping current and the analyte concentration is rarely known exactly; in fact, it is frequently ignored. For the analysis it suffices to establish the linear relationship empirically. The slope of this relationship may vary from one sample to another because of different influences of the matrix. In this case the concentration of the analyte is determined by the method of standard additions [1]. After measuring the response of the sample, the concentration of the analyte is deliberately increased by adding a certain volume of its standard solution. The response is measured again, and this procedure is repeated three or four times. The unknown concentration is determined by extrapolation of the regression line to the concentration axis [9]. However, in many analytical methods, the final measurement is performed in a standard matrix that allows the construction of a calibration plot. Still, the slope of this plot depends on the active area of the working electrode surface. Each solid electrode needs a separate calibration plot, and that plot must be checked from time to time because of possible deterioration of the electrode surface [2].
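The standard-addition procedure described above has a direct numerical reading: fit the measured peak current against the added standard concentration and take the magnitude of the x-intercept as the unknown concentration. A minimal sketch with invented currents and spike levels:

```python
import numpy as np

# Standard-addition evaluation: measure the stripping peak current for the
# sample and after successive spikes of standard, fit a line, and take the
# magnitude of the x-intercept as the unknown concentration.
c_added = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # nM of added standard
i_peak = np.array([21.0, 39.5, 58.0, 77.0, 95.5])   # nA, measured responses

slope, intercept = np.polyfit(c_added, i_peak, 1)
c_unknown = intercept / slope    # regression line crosses zero at -c_unknown
print(f"estimated analyte concentration: {c_unknown:.1f} nM")
```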
ERIC Educational Resources Information Center
Khan, Osama
2017-01-01
This paper depicts a perceptual picture of learning analytics based on the understanding of learners and teachers at the SSU as a case study. The existing literature covers the technical challenges of learning analytics (LA) and how it creates better social constructs for enhanced learning support; however, there has not been adequate research on…
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
7 CFR § 94.103 (Poultry and Egg Products; Voluntary Analyses of Egg Products): Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
7 CFR § 94.103 (Poultry and Egg Products; Voluntary Analyses of Egg Products): Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
7 CFR § 94.103 (Poultry and Egg Products; Voluntary Analyses of Egg Products): Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
7 CFR § 94.103 (Poultry and Egg Products; Voluntary Analyses of Egg Products): Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
7 CFR § 94.103 (Poultry and Egg Products; Voluntary Analyses of Egg Products): Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
Nanoparticle interactions with co-existing contaminants: joint toxicity, bioaccumulation and risk.
Deng, Rui; Lin, Daohui; Zhu, Lizhong; Majumdar, Sanghamitra; White, Jason C; Gardea-Torresdey, Jorge L; Xing, Baoshan
2017-06-01
With their growing production and application, engineered nanoparticles (NPs) are increasingly discharged into the environment. The released NPs can potentially interact with pre-existing contaminants, leading to biological effects (bioaccumulation and/or toxicity) that are poorly understood. Most studies on NPs focus on single analyte exposure; the existing literature on joint toxicity of NPs and co-existing contaminants is rather limited but beginning to develop rapidly. This is the first review paper evaluating the current state of knowledge regarding the joint effects of NPs and co-contaminants. Here, we review: (1) methods for investigating and evaluating joint effects of NPs and co-contaminants; (2) simultaneous toxicities from NPs co-exposed with organic contaminants, metal/metalloid ions, dissolved organic matter (DOM), inorganic ligands and additional NPs; and (3) the influence of NPs co-exposure on the bioaccumulation of organic contaminants and heavy metal ions, as well as the influence of contaminants on NPs bioaccumulation. In addition, future research needs are discussed so as to better understand risk associated with NPs-contaminant co-exposure.
Dimier, Natalie; Todd, Susan
2017-09-01
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been extensively studied for time-to-event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors including strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use. Copyright © 2017 John Wiley & Sons, Ltd.
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.
Ly, Cheng
2015-12-01
Heterogeneity of neural attributes has recently gained much attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, the combination of network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely play significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction are also developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.
NASA Astrophysics Data System (ADS)
Chien, Chih-Chun; Kouachi, Said; Velizhanin, Kirill A.; Dubi, Yonatan; Zwolak, Michael
2017-01-01
We present a method for calculating analytically the thermal conductance of a classical harmonic lattice with both alternating masses and nearest-neighbor couplings when placed between individual Langevin reservoirs at different temperatures. The method utilizes recent advances in analytic diagonalization techniques for certain classes of tridiagonal matrices. It recovers the results from a previous method that was applicable for alternating on-site parameters only, and extends the applicability to realistic systems in which masses and couplings alternate simultaneously. With this analytic result in hand, we show that the thermal conductance is highly sensitive to the modulation of the couplings. This is due to the existence of topologically induced edge modes at the lattice-reservoir interface and is also a reflection of the symmetries of the lattice. We make a connection to a recent work that demonstrates thermal transport is analogous to chemical reaction rates in solution given by Kramers' theory [Velizhanin et al., Sci. Rep. 5, 17506 (2015)], 10.1038/srep17506. In particular, we show that the turnover behavior in the presence of edge modes prevents calculations based on single-site reservoirs from coming close to the natural—or intrinsic—conductance of the lattice. Obtaining the correct value of the intrinsic conductance through simulation of even a small lattice where ballistic effects are important requires quite large extended reservoir regions. Our results thus offer a route for both the design and proper simulation of thermal conductance of nanoscale devices.
NASA Astrophysics Data System (ADS)
Pai, David Z.; Lacoste, Deanna A.; Laux, Christophe O.
2010-05-01
In atmospheric pressure air preheated from 300 to 1000 K, the nanosecond repetitively pulsed (NRP) method has been used to generate corona, glow, and spark discharges. Experiments have been performed to determine the parameter space (applied voltage, pulse repetition frequency, ambient gas temperature, and interelectrode gap distance) of each discharge regime. In particular, the experimental conditions necessary for the glow regime of NRP discharges have been determined, with the notable result that there exist a minimum and a maximum gap distance for its existence at a given ambient gas temperature. The minimum gap distance increases with decreasing gas temperature, whereas the maximum does not vary appreciably. To interpret the experimental results, an analytical model of the corona-to-glow (C-G) and glow-to-spark (G-S) transitions is developed. The C-G transition is analyzed in terms of the avalanche-to-streamer transition and the breakdown field during the conduction phase following the establishment of a conducting channel across the discharge gap. The G-S transition is determined by the thermal ionization instability, and we show analytically that this transition occurs at a certain reduced electric field for the NRP discharges studied here. This model shows that the electrode geometry plays an important role in the existence of the NRP glow regime at a given gas temperature. We derive a criterion for the existence of the NRP glow regime as a function of the ambient gas temperature, pulse repetition frequency, electrode radius of curvature, and interelectrode gap distance.
Multigrid methods for a semilinear PDE in the theory of pseudoplastic fluids
NASA Technical Reports Server (NTRS)
Henson, Van Emden; Shaker, A. W.
1993-01-01
We show that by certain transformations the boundary layer equations for the class of non-Newtonian fluids known as pseudoplastic can be generalized in the form Δu + p(x)u^(-λ) = 0, x ∈ Ω ⊂ R^n, n ≥ 1, under the classical conditions for steady flow over a semi-infinite flat plate. We provide a survey of the existence, uniqueness, and analyticity of the solutions for this problem. We also establish numerical solutions in one- and two-dimensional regions using multigrid methods.
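To make the equation above concrete, here is a hedged one-dimensional sketch: a damped Newton iteration on a standard finite-difference discretization of u'' + p(x)u^(-λ) = 0 with u(0) = u(1) = 1. The paper's actual solver is multigrid; p(x), λ, and the boundary data below are assumptions chosen only so the demo runs.

```python
import numpy as np

n, lam = 199, 0.5
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
p = 1.0 + 0.5 * np.sin(np.pi * x)              # assumed smooth positive p(x)

# discrete Laplacian, with Dirichlet data u(0) = u(1) = 1 folded into a constant
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
b = np.zeros(n); b[0] = b[-1] = 1.0 / h**2

u = np.ones(n)                                  # positive initial guess
for it in range(50):
    F = A @ u + b + p * u**(-lam)               # residual of u'' + p*u^(-lam)
    if np.linalg.norm(F) < 1e-8:
        break
    J = A - np.diag(lam * p * u**(-lam - 1.0))  # Jacobian of the residual
    du = np.linalg.solve(J, -F)
    t = 1.0                                     # damped step keeps u > 0
    while np.any(u + t * du <= 0.0):
        t *= 0.5
    u += t * du

print(f"Newton steps: {it}, max u = {u.max():.4f}")
```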
NASA Astrophysics Data System (ADS)
Kyrylova, O. I.; Popov, V. G.
2018-04-01
An effective analytical-numerical method for determining the dynamic stresses in a hollow cylindrical body of arbitrary cross-section with a tunnel crack under antiplane strain conditions is proposed. The method allows the integral equations on the crack faces to be solved separately from the satisfaction of the boundary conditions on the body boundaries, and it leads to a convenient numerical scheme. Approximate formulas for calculating the dynamic stress intensity factors in a neighborhood of the crack are obtained, and the influence of the crack geometry and wave number on these quantities is investigated, especially with regard to the existence of resonances.
Modelling migration in multilayer systems by a finite difference method: the spherical symmetry case
NASA Astrophysics Data System (ADS)
Hojbotǎ, C. I.; Toşa, V.; Mercea, P. V.
2013-08-01
We present a numerical model based on finite differences to solve the problem of chemical impurity migration within a multilayer spherical system. Migration here means diffusion of chemical species under conditions of concentration partitioning at layer interfaces, due to the different solubilities of the migrant in different layers. We detail the numerical model and discuss the results of its implementation. To validate the method, we compare it with cases for which an analytic solution exists. We also present an application of our model to a practical problem, in which we compute the migration of caprolactam from a multilayer packaging foil into food.
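As a flavor of the validation step mentioned above (comparison against cases with an analytic solution), the sketch below treats the simplest such case: diffusion into a single homogeneous sphere with a fixed surface concentration, solved with explicit finite differences via the substitution u = rC and checked against Crank's series solution. The multilayer structure and interface partitioning of the actual model are not reproduced, and all parameter values are assumed.

```python
import numpy as np

D, R, Cs = 1e-10, 1e-3, 1.0          # diffusivity (m^2/s), radius (m), surface conc.
nr = 100
r = np.linspace(0.0, R, nr + 1)
h = r[1] - r[0]
dt = 0.4 * h * h / D                  # below the explicit stability limit 0.5*h^2/D
t_end = 0.05 * R * R / D              # run to dimensionless time D*t/R^2 = 0.05

u = np.zeros(nr + 1)                  # u = r*C, so the r = 0 condition is u = 0
u[-1] = R * Cs                        # fixed surface concentration
t = 0.0
while t < t_end:
    u[1:-1] += D * dt / h**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    t += dt

C_fd = u[1:] / r[1:]                  # back to concentration (skip r = 0)

n = np.arange(1, 200)[:, None]        # Crank's series solution, 199 terms
series = ((-1.0) ** n / n) * np.sin(n * np.pi * r[1:] / R) \
         * np.exp(-D * n**2 * np.pi**2 * t_end / R**2)
C_exact = Cs * (1.0 + (2.0 * R / (np.pi * r[1:])) * series.sum(axis=0))

print(f"max abs error vs. analytic series: {np.max(np.abs(C_fd - C_exact)):.2e}")
```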
Measuring food intake in studies of obesity.
Lissner, Lauren
2002-12-01
The problem of how to measure habitual food intake in studies of obesity remains an enigma in nutritional research. The existence of obesity-specific underreporting was rather controversial until the advent of the doubly labelled water technique gave credence to previously anecdotal evidence that such a bias does in fact exist. This paper reviews a number of issues relevant to interpreting dietary data in studies involving obesity. Topics covered include participation biases, normative biases, the importance of matching the method to the study, selective underreporting, and a brief discussion of the potential implications of generalised and selective underreporting in analytical epidemiology. It is concluded that selective underreporting of certain food types by obese individuals would produce consequences in analytical epidemiological studies that are both unpredictable and complex. Since it is becoming increasingly acknowledged that selective reporting error does occur, it is important to emphasise that correction for energy intake is not sufficient to eliminate the biases from this type of error. This is true both for obesity-related selective reporting errors and for more universal types of selective underreporting, e.g. foods of low social desirability. Additional research is urgently required to examine the consequences of this type of error.
MemAxes: Visualization and Analytics for Characterizing Complex Memory Performance Behaviors.
Gimenez, Alfredo; Gamblin, Todd; Jusufi, Ilir; Bhatele, Abhinav; Schulz, Martin; Bremer, Peer-Timo; Hamann, Bernd
2018-07-01
Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.
Polynomial solutions of the Monge-Ampère equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aminov, Yu A
2014-11-30
The question of the existence of polynomial solutions to the Monge-Ampère equation z_xx z_yy − z_xy^2 = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive quadratic part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x,y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found, and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.
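For readers who want to experiment with the operator, the following sympy sketch applies the Monge-Ampère operator to a candidate polynomial; the candidate and the degree bookkeeping are illustrative only and are not taken from the paper.

```python
import sympy as sp

x, y = sp.symbols('x y')

def monge_ampere(z):
    """Left-hand side z_xx*z_yy - z_xy**2 of the equation."""
    return sp.expand(sp.diff(z, x, 2) * sp.diff(z, y, 2) - sp.diff(z, x, y) ** 2)

z = x**4 + x**2 * y**2 + y**4     # an assumed degree-4 candidate
f = monge_ampere(z)
print(f)                          # 24*x**4 + 132*x**2*y**2 + 24*y**4
print(sp.total_degree(f))         # 4, i.e. 2*(deg z - 2)
# For deg(z) = m the operator yields deg(f) <= 2*(m - 2); this kind of degree
# bookkeeping underlies conditions for polynomial solvability.
```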
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
McAdoo, Mitchell A.; Kozar, Mark D.
2017-11-14
This report describes a compilation of existing water-quality data associated with groundwater resources originating from abandoned underground coal mines in West Virginia. Data were compiled from multiple sources for the purpose of understanding the suitability of groundwater from abandoned underground coal mines for public supply, industrial, agricultural, and other uses. This compilation includes data collected for multiple individual studies conducted from July 13, 1973 through September 7, 2016. Analytical methods varied by the time period of data collection and the requirements of the independent studies. This project identified 770 water-quality samples from 294 sites that could be attributed to abandoned underground coal mine aquifers originating from multiple coal seams in West Virginia.
Stability of phases of a square-well fluid within superposition approximation
NASA Astrophysics Data System (ADS)
Piasecki, Jarosław; Szymczak, Piotr; Kozak, John J.
2013-04-01
The analytic and numerical methods introduced previously to study the phase behavior of hard sphere fluids starting from the Yvon-Born-Green (YBG) equation under the Kirkwood superposition approximation (KSA) are adapted to the square-well fluid. We are able to show conclusively that the YBG equation under the KSA closure when applied to the square-well fluid: (i) predicts the existence of an absolute stability limit corresponding to freezing where undamped oscillations appear in the long-distance behavior of correlations, (ii) in accordance with earlier studies reveals the existence of a liquid-vapor transition by the appearance of a "near-critical region" where monotonically decaying correlations acquire very long range, although the system never loses stability.
Free iterative-complement-interaction calculations of the hydrogen molecule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurokawa, Yusaku; Nakashima, Hiroyuki; Nakatsuji, Hiroshi
2005-12-15
The free iterative-complement-interaction (ICI) method based on the scaled Schroedinger equation proposed previously has been applied to the calculation of very accurate wave functions of the hydrogen molecule in an analytical expansion form. All the variables were determined with the variational principle by calculating the necessary integrals analytically. The initial wave function and the scaling function were changed to see the effects on the convergence speed of the ICI calculations. The free ICI wave functions that were generated automatically were different from the existing wave functions, and this difference was shown to be physically important. The best wave function reported in this paper seems to be the best in the literature worldwide from the variational point of view. The quality of the wave function was examined by calculating the nuclear and electron cusps.
25 years of HBM in the Czech Republic.
Černá, Milena; Puklová, Vladimíra; Hanzlíková, Lenka; Sochorová, Lenka; Kubínová, Růžena
2017-03-01
Since 1991 a human biomonitoring network has been established in the Czech Republic as part of the Environmental Health Monitoring System, which was set out by a Government Resolution. During the last quarter-century, important data were obtained to characterize the exposure of both child and adult populations to significant toxic substances from the environment, to follow trends over time, and to establish reference values and compare them with existing health-related values. Moreover, the saturation of the population with several essential substances, such as selenium, zinc, copper, and iodine, has also been monitored. Development of analytical and statistical methods has led to increased capacity building, improved QA/QC in analytical laboratories, and better interpretation of results. The obtained results are translated into policy actions and are used in health risk assessment processes at local and national levels.
Preanalytical requirements of urinalysis
Delanghe, Joris; Speeckaert, Marijn
2014-01-01
Urine may be a waste product, but it contains an enormous amount of information. Well-standardized procedures for collection, transport, sample preparation and analysis should become the basis of an effective diagnostic strategy for urinalysis. As reproducibility of urinalysis has been greatly improved due to recent technological progress, preanalytical requirements of urinalysis have gained importance and have become stricter. Since the patients themselves often sample urine specimens, urinalysis is very susceptible to preanalytical issues. Various sampling methods and inappropriate specimen transport can cause important preanalytical errors. The use of preservatives may be helpful for particular analytes. Unfortunately, a universal preservative that allows a complete urinalysis does not (yet) exist. The preanalytical aspects are also of major importance for newer applications (e.g. metabolomics). The present review deals with the current preanalytical problems and requirements for the most common urinary analytes. PMID:24627718
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), and matrix effects and recovery have to be approached differently, and the highest attention should be paid to selectivity experiments.
On Connectivity of Wireless Sensor Networks with Directional Antennas
Wang, Qiu; Dai, Hong-Ning; Zheng, Zibin; Imran, Muhammad; Vasilakos, Athanasios V.
2017-01-01
In this paper, we investigate the network connectivity of wireless sensor networks with directional antennas. In particular, we establish a general framework to analyze the network connectivity while considering various antenna models and the channel randomness. Since existing directional antenna models trade off accuracy in reflecting realistic antennas against computational complexity, we propose a new analytical directional antenna model, called the iris model, to balance accuracy against complexity. We conduct extensive simulations to evaluate the analytical framework. Our results show that our proposed analytical model of the network connectivity is accurate, and that our iris antenna model provides a better approximation to realistic directional antennas than other existing antenna models. PMID:28085081
How Qualitative Methods Can be Used to Inform Model Development.
Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna
2017-06-01
Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.
An Artificial Neural Networks Method for Solving Partial Differential Equations
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2010-09-01
While many analytical and numerical techniques already exist for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining a standard numerical method, finite differences, with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, the energy function, the updating equations, and the algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the Wave, Heat, Poisson, and Diffusion equations, and of a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of Hopfield nets makes them easy to implement on fast parallel computers, whereas some numerical methods need extra effort for parallelization.
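The following toy sketch conveys the energy-minimization idea behind Hopfield-style PDE solving (it is not the paper's HFD scheme): the finite-difference Poisson system is the stationarity condition of a quadratic energy, and plain gradient descent on that energy, loosely playing the role of the network dynamics, converges to the finite-difference solution. Grid size, source term, and step size are assumptions.

```python
import numpy as np

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * x)                       # assumed source term for u'' = f
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # -u'' operator
b = -f                                       # so A u = b encodes u'' = f, u(0)=u(1)=0

u = np.zeros(n)
lr = 0.4 * h * h                             # stable step for gradient descent
for _ in range(200000):
    grad = A @ u - b                         # gradient of E(u) = 0.5 u^T A u - b^T u
    u -= lr * grad                           # "neuron update" toward lower energy
    if np.linalg.norm(grad) < 1e-8:
        break

u_exact = -np.sin(np.pi * x) / np.pi**2      # analytic solution of u'' = f
print(f"max error: {np.max(np.abs(u - u_exact)):.2e}")
```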
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical update equations, which are calculated as in the previous method. Generally, a smaller number of arithmetic operations, and hence a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and that the same accuracy was achieved by both methods.
Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei
2013-02-01
Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
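A hedged sketch of the read-depth bias-correction idea described above: regress windowed read counts on a confounder (here GC content) with a negative binomial GLM, then use the fitted means to normalize depth before copy-number calling. This illustrates the concept on simulated data; it is not the GENSENG model, which couples the regression to a hidden Markov model over copy-number states.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_win = 2000
gc = rng.uniform(0.3, 0.6, n_win)                  # GC fraction per window
mu = np.exp(3.0 + 2.0 * (gc - 0.45))               # GC-biased expected depth
copy = np.ones(n_win); copy[900:950] = 1.5         # hidden duplication (CN = 3)
counts = rng.negative_binomial(n=10, p=10.0 / (10.0 + mu * copy))

X = sm.add_constant(gc)                            # NB regression on the confounder
fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.1)).fit()
ratio = counts / fit.mu                            # bias-corrected relative depth

print(f"median ratio inside duplication : {np.median(ratio[900:950]):.2f}")
print(f"median ratio elsewhere          : {np.median(ratio[:900]):.2f}")
```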
NASA Astrophysics Data System (ADS)
Geng, Weihua; Zhao, Shan
2017-12-01
We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement because it circumvents the work of solving a boundary value Poisson equation inside the molecular interface and of computing the related interface jump conditions numerically. Moreover, the new MIB algorithm is computationally less expensive, while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, for which analytical solutions are available, and on a series of proteins of various sizes.
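A small sketch of the decomposition idea for the simplest Kirkwood-type configuration, a point charge at the center of a spherical cavity with no salt (kappa = 0): the singular Coulomb component is the Green's function, and the smooth remainder reduces to the constant Born reaction-field potential, giving the analytic solvation energy used as a reference in Kirkwood-sphere tests. The charge, radius, and dielectric values below are assumptions.

```python
import numpy as np

EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
E_CHG = 1.602176634e-19        # elementary charge, C

def born_decomposition(q, a, eps_in=1.0, eps_out=80.0):
    """q: charge (C), a: cavity radius (m). Returns (phi_singular, phi_reg, dG)."""
    def phi_singular(r):
        # Green's-function (Coulomb) component; singular as r -> 0
        return q / (4.0 * np.pi * EPS0 * eps_in * r)
    # smooth component: constant reaction-field potential inside the cavity
    phi_reg = q / (4.0 * np.pi * EPS0 * a) * (1.0 / eps_out - 1.0 / eps_in)
    dG = 0.5 * q * phi_reg      # solvation (charging) energy
    return phi_singular, phi_reg, dG

phi_s, phi_r, dG = born_decomposition(E_CHG, 2.0e-10)
print(f"singular part at r = a : {phi_s(2.0e-10):.3f} V")
print(f"reaction-field potential: {phi_r:.3f} V, solvation energy: {dG:.3e} J")
```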
Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Luginbühl, Werner; Kremp, Anke; Suikkanen, Sanna; Kankaanpää, Harri; Burrell, Stephen; Söderström, Martin; Vanninen, Paula
2015-01-01
A saxitoxin (STX) proficiency test (PT) was organized as part of the Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk (EQuATox) project. The aim of this PT was to provide an evaluation of existing methods and the European laboratories’ capabilities for the analysis of STX and some of its analogues in real samples. Homogenized mussel material and algal cell materials containing paralytic shellfish poisoning (PSP) toxins were produced as reference sample matrices. The reference material was characterized using various analytical methods. Acidified algal extract samples at two concentration levels were prepared from a bulk culture of PSP toxins producing dinoflagellate Alexandrium ostenfeldii. The homogeneity and stability of the prepared PT samples were studied and found to be fit-for-purpose. Thereafter, eight STX PT samples were sent to ten participating laboratories from eight countries. The PT offered the participating laboratories the possibility to assess their performance regarding the qualitative and quantitative detection of PSP toxins. Various techniques such as official Association of Official Analytical Chemists (AOAC) methods, immunoassays, and liquid chromatography-mass spectrometry were used for sample analyses. PMID:26602927
NASA Astrophysics Data System (ADS)
Sævik, P. N.; Nixon, C. W.
2017-11-01
We demonstrate how topology-based measures of connectivity can be used to improve analytical estimates of effective permeability in 2-D fracture networks, which is one of the key parameters necessary for fluid flow simulations at the reservoir scale. Existing methods in this field usually compute fracture connectivity using the average fracture length. This approach is valid for ideally shaped, randomly distributed fractures, but is not immediately applicable to natural fracture networks. In particular, natural networks tend to be more connected than randomly positioned fractures of comparable lengths, since natural fractures often terminate in each other. The proposed topological connectivity measure is based on the number of intersections and fracture terminations per sampling area, which for statistically stationary networks can be obtained directly from limited outcrop exposures. To evaluate the method, numerical permeability upscaling was performed on a large number of synthetic and natural fracture networks, with varying topology and geometry. The proposed method was seen to provide much more reliable permeability estimates than the length-based approach, across a wide range of fracture patterns. We summarize our results in a single, explicit formula for the effective permeability.
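As an illustration of what "intersections and fracture terminations per sampling area" can feed into, the sketch below computes standard topological measures from node counts of type I (isolated tips), Y (abutments), and X (crossings), following the node-counting identities popularized by Sanderson and Nixon; the counts and sampling area are invented, and this is not the paper's permeability formula.

```python
def topology_metrics(n_I, n_Y, n_X, area):
    """Standard node-counting measures for a mapped 2-D fracture network."""
    n_lines = 0.5 * (n_I + n_Y)                      # each line has two I/Y tips
    n_branches = 0.5 * (n_I + 3 * n_Y + 4 * n_X)     # branch endpoints per node type
    c_per_branch = (3 * n_Y + 4 * n_X) / n_branches  # connections per branch (0..2)
    return {
        "lines": n_lines,
        "branches": n_branches,
        "connections_per_branch": c_per_branch,
        "intersections_per_area": (n_Y + n_X) / area,
    }

print(topology_metrics(n_I=120, n_Y=260, n_X=75, area=400.0))  # e.g. 400 m^2 outcrop
```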
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
Not All Biofluids Are Created Equal: Chewing Over Salivary Diagnostics and the Epigenome
Wren, M.E.; Shirtcliff, E.A.; Drury, Stacy S.
2015-01-01
Purpose This article describes progress to date in the characterization of the salivary epigenome and considers the importance of previous work in the salivary microbiome, proteome, endocrine analytes, genome, and transcriptome. Methods PubMed and Web of Science were used to extensively search the existing literature (original research and reviews) related to salivary diagnostics and bio-marker development, of which 125 studies were examined. This article was derived from the most relevant 73 sources highlighting the recent state of the evolving field of salivary epigenomics and contributing significantly to the foundational work in saliva-based research. Findings Validation of any new saliva-based diagnostic or analyte will require comparison to previously accepted standards established in blood. Careful attention to the collection, processing, and analysis of salivary analytes is critical for the development and implementation of newer applications that include genomic, transcriptomic, and epigenomic markers. All these factors must be integrated into initial study design. Implications This commentary highlights the appeal of the salivary epigenome for translational applications and its utility in future studies of development and the interface among environment, disease, and health. PMID:25778408
An overview of selected NASP aeroelastic studies at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Spain, Charles V.; Soistmann, David L.; Parker, Ellen C.; Gibbons, Michael D.; Gilbert, Michael G.
1990-01-01
Following an initial discussion of the NASP flight environment, the results of recent aeroelastic testing of NASP-type highly swept delta-wing models in Langley's Transonic Dynamics Tunnel (TDT) are summarized. Subsonic and transonic flutter characteristics of a variety of these models are described, and several analytical codes used to predict flutter of these models are evaluated. These codes generally provide good, but conservative, predictions of subsonic and transonic flutter. Also, test results are presented on a nonlinear transonic phenomenon known as aileron buzz, which occurred in the wind tunnel on highly swept delta wings with full-span ailerons. An analytical procedure which assesses the effects of hypersonic heating on aeroelastic instabilities (aerothermoelasticity) is also described. This procedure accurately predicted flutter of a heated aluminum wing for which experimental data exist. Results are presented on the application of this method to calculate the flutter characteristics of a finite-element model of a generic NASP configuration. Finally, it is demonstrated analytically that active controls can be employed to improve the aeroelastic stability and ride quality of a generic NASP vehicle flying at hypersonic speeds.
Analytically optimal parameters of dynamic vibration absorber with negative stiffness
NASA Astrophysics Data System (ADS)
Shen, Yongjun; Peng, Haibo; Li, Xianghong; Yang, Shaopu
2017-02-01
In this paper the optimal parameters of a dynamic vibration absorber (DVA) with negative stiffness are analytically studied. The analytical solution is obtained by the Laplace transform method when the primary system is subjected to harmonic excitation. The research shows that there are still two fixed points independent of the absorber damping in the amplitude-frequency curve of the primary system when the system contains negative stiffness. The optimum frequency ratio and optimum damping ratio are then obtained based on the fixed-point theory. A new strategy is proposed to obtain the optimum negative stiffness ratio while keeping the system stable. Finally, the control performance of the presented DVA is compared with those of three existing typical DVAs, presented by Den Hartog, Ren, and Sims, respectively. The comparison results under harmonic and random excitation show that the DVA presented in this paper not only reduces the peak value of the amplitude-frequency curve of the primary system significantly, but also broadens the efficient frequency range of vibration mitigation.
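For orientation, the sketch below evaluates the classical fixed-point optimum of Den Hartog for a conventional DVA on an undamped primary system, which is the baseline the negative-stiffness design is compared against; the negative-stiffness optimum derived in the paper modifies these expressions. The mass ratios are example values.

```python
import numpy as np

def den_hartog_optimum(mu):
    """Classical fixed-point tuning for a DVA with mass ratio mu."""
    nu_opt = 1.0 / (1.0 + mu)                               # optimum frequency ratio
    zeta_opt = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimum damping ratio
    peak = np.sqrt(1.0 + 2.0 / mu)        # magnification at the two fixed points
    return nu_opt, zeta_opt, peak

for mu in (0.05, 0.10, 0.20):
    nu, zeta, peak = den_hartog_optimum(mu)
    print(f"mu={mu:.2f}: nu_opt={nu:.3f}, zeta_opt={zeta:.3f}, peak={peak:.2f}")
```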
Parra-Robles, J; Ajraoui, S; Deppe, M H; Parnell, S R; Wild, J M
2010-06-01
Models of lung acinar geometry have been proposed to analytically describe the diffusion of (3)He in the lung (as measured with pulsed gradient spin echo (PGSE) methods) as a possible means of characterizing lung microstructure from measurement of the (3)He ADC. In this work, major limitations of these analytical models are highlighted in simple diffusion-weighted experiments with (3)He in cylindrical models of known geometry. The findings are substantiated with numerical simulations based on the same geometry using a finite difference representation of the Bloch-Torrey equation. The validity of the existing "cylinder model" is discussed in terms of the physical diffusion regimes experienced, and the fundamental reliance of the cylinder model and other ADC-based approaches on Gaussian diffusion behaviour is highlighted. The results presented here demonstrate that the physical assumptions of the cylinder model are not valid for large diffusion gradient strengths (above approximately 15 mT/m), which are commonly used for (3)He ADC measurements in human lungs.
Large-scale retrieval for medical image analytics: A comprehensive review.
Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting
2018-01-01
Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning, and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, and searching. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis.
Torque-based optimal acceleration control for electric vehicle
NASA Astrophysics Data System (ADS)
Lu, Dongbin; Ouyang, Minggao
2014-03-01
Existing research on acceleration control mainly focuses on optimizing the velocity trajectory with respect to a criterion that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem for conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively. Acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field-oriented control (FOC) method, and the electric drive system of the EV (including the PMSM, the inverter, and the battery) is modeled in preference to relying on a detailed consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP). This method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control solved by the analytical algorithm and that solved by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
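A toy dynamic-programming sketch in the spirit of the DP approach mentioned above: choose a wheel torque at each time step to reach a target speed within a fixed time while minimizing a crude battery-energy model (traction power plus resistive loss). The vehicle and motor parameters, grids, and loss model are all assumptions and are much simpler than the paper's drive-system model.

```python
import numpy as np

m, r_w, kt = 1500.0, 0.3, 1.2           # mass (kg), wheel radius (m), Nm per A
R_loss = 0.05                           # lumped resistive loss (ohm), assumed
dt, n_steps = 0.5, 40                   # 20 s horizon
v_grid = np.linspace(0.0, 25.0, 251)    # speed states (m/s); target is 25 m/s
T_grid = np.linspace(0.0, 800.0, 41)    # wheel-torque choices (Nm)

cost = np.full(v_grid.size, np.inf)     # terminal cost-to-go
cost[-1] = 0.0                          # must end at the target speed

for k in range(n_steps):                # backward sweep over time steps
    new_cost = np.full(v_grid.size, np.inf)
    for i, v in enumerate(v_grid):
        a = (T_grid / r_w - 0.4 * v * v) / m        # accel with a simple drag term
        v_next = v + a * dt
        j = np.clip(np.rint((v_next - v_grid[0]) / (v_grid[1] - v_grid[0])).astype(int),
                    0, v_grid.size - 1)
        # step energy: traction power + I^2*R loss with I = T/kt
        e_step = (T_grid / r_w * v + R_loss * (T_grid / kt) ** 2) * dt
        new_cost[i] = np.min(e_step + cost[j])
    cost = new_cost

print(f"minimum battery energy for 0 -> 25 m/s in 20 s: {cost[0] / 3600.0:.1f} Wh")
```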
Modelling of non-equilibrium flow in the branched pipeline systems
NASA Astrophysics Data System (ADS)
Sumskoi, S. I.; Sverchkov, A. M.; Lisanov, M. V.; Egorov, A. F.
2016-09-01
This article presents a mathematical model and a numerical method for solving the problem of water hammer in a branched pipeline system. The problem is considered in a one-dimensional non-stationary formulation, taking into account such realities as changes in the diameter of the pipeline and its branches. By comparison with an existing analytic solution, it has been shown that the proposed method possesses good accuracy. With the help of the developed model and numerical method, the problem of the transmission of a complex of compression waves through a branching pipeline system when several shut-off valves operate has been solved. It should be noted that the proposed model and method may easily be extended to a number of other problems, for example, to describe the flow of blood in vessels.
Dawson, V.K.; Meinertz, J.R.; Schmidt, L.J.; Gingerich, W.H.
2003-01-01
Concentrations of chloramine-T must be monitored during experimental treatments of fish when studying the effectiveness of the drug for controlling bacterial gill disease. A surrogate analytical method for the analysis of chloramine-T to replace the existing high-performance liquid chromatography (HPLC) method is described. A surrogate method was needed because the existing HPLC method is expensive, requires a specialist to use, and is not generally available at fish hatcheries. Criteria for selection of a replacement method included ease of use, analysis time, cost, safety, sensitivity, accuracy, and precision. The most promising approach was to use the determination of chlorine concentrations as an indicator of chloramine-T. Of the currently available methods for the analysis of chlorine, the DPD (N,N-diethyl-p-phenylenediamine) colorimetric method best fit the established criteria. The surrogate method was evaluated under a variety of water quality conditions. Regression analysis of all DPD colorimetric analyses against the HPLC values produced a linear model (Y = 0.9602X + 0.1259) with an r² value of 0.9960. The average accuracy (percent recovery) of the DPD method relative to the HPLC method for the combined set of water quality data was 101.5%. The surrogate method was also evaluated with chloramine-T solutions that contained various concentrations of fish feed or selected densities of rainbow trout. When samples were analyzed within 2 h, the results of the surrogate method were consistent with those of the HPLC method. When samples with high concentrations of organic material were allowed to age more than 2 h before being analyzed, the DPD method appeared to be susceptible to interference, possibly from the development of other chloramine compounds. However, even after aging samples 6 h, the accuracy of the surrogate DPD method relative to the HPLC method was within the range of 80-120%. Based on the data comparing the two methods, the U.S. Food and Drug Administration has concluded that the DPD colorimetric method is appropriate for measuring chloramine-T in water during pivotal efficacy trials designed to support the approval of chloramine-T for use in fish culture.
Strain gage measurement errors in the transient heating of structural components
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1993-01-01
Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique compared much better to analysis than the same data corrected with the conventional technique.
Brownian dynamics without Green's functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delong, Steven; Donev, Aleksandar, E-mail: donev@courant.nyu.edu; Usabiaga, Florencio Balboa
2014-04-07
We develop a Fluctuating Immersed Boundary (FIB) method for performing Brownian dynamics simulations of confined particle suspensions. Unlike traditional methods, which employ analytical Green's functions for Stokes flow in the confined geometry, the FIB method uses a fluctuating finite-volume Stokes solver to generate the action of the response functions "on the fly." Importantly, we demonstrate that both the deterministic terms necessary to capture the hydrodynamic interactions among the suspended particles, and the stochastic terms necessary to generate the hydrodynamically correlated Brownian motion, can be generated by solving the steady Stokes equations numerically only once per time step. This is accomplished by including a stochastic contribution to the stress tensor in the fluid equations, consistent with fluctuating hydrodynamics. We develop novel temporal integrators that account for the multiplicative nature of the noise in the equations of Brownian dynamics and the strong dependence of the mobility on the configuration for confined systems. Notably, we propose a random finite difference approach to approximating the stochastic drift proportional to the divergence of the configuration-dependent mobility matrix. Through comparisons with analytical and existing computational results, we numerically demonstrate the ability of the FIB method to accurately capture both the static (equilibrium) and dynamic properties of interacting particles in flow.
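The random finite difference (RFD) idea mentioned above can be demonstrated in one dimension, where the divergence of a scalar mobility is just its derivative: correlating random displacements with mobility differences yields an unbiased derivative estimate without analytic differentiation. The mobility function here is an arbitrary assumption, not the FIB mobility.

```python
import numpy as np

rng = np.random.default_rng(0)

def M(q):                      # assumed configuration-dependent scalar mobility
    return 1.0 + 0.5 * np.sin(q)

def rfd_drift(q, delta=1e-4, n_samples=200000):
    W = rng.standard_normal(n_samples)
    # E[(M(q + delta*W/2) - M(q - delta*W/2)) * W] / delta -> M'(q) E[W^2] = M'(q)
    return np.mean((M(q + 0.5 * delta * W) - M(q - 0.5 * delta * W)) * W) / delta

q0 = 0.7
print(f"RFD estimate : {rfd_drift(q0):.5f}")
print(f"exact M'(q)  : {0.5 * np.cos(q0):.5f}")
```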
Kumar, B Vinodh; Mohan, Thuthi
2018-01-01
Six Sigma is one of the most popular quality management tools employed for process improvement. Six Sigma methods are usually applied when the outcome of a process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale, by calculating the sigma metrics for individual parameters, and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and the data required for the study were extracted between July 2015 and June 2016 from a secondary care government hospital, Chennai. The data obtained for the study are the IQC coefficient of variation percentage and the External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein cholesterol) showed an ideal performance of ≥6 sigma, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma; for the level 2 IQC, the same four analytes as for level 1 showed a performance of ≥6 sigma, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma. For all analytes below 6 sigma, the quality goal index (QGI) was <0.8, indicating that the area requiring improvement was imprecision, except for cholesterol, whose QGI of >1.2 indicated inaccuracy. This study shows that sigma metrics are a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Sigma metric analysis thus provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
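A sketch of the calculations behind such studies: the laboratory sigma metric sigma = (TEa − |bias|) / CV and the quality goal index QGI = |bias| / (1.5 × CV), with the conventional reading that QGI < 0.8 points to imprecision and QGI > 1.2 to inaccuracy. The TEa, bias, and CV values below are made-up examples, not the study's data.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (allowable total error - |bias|) / imprecision, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct, cv_pct):
    """QGI < 0.8 -> improve imprecision; QGI > 1.2 -> improve inaccuracy."""
    return abs(bias_pct) / (1.5 * cv_pct)

for name, tea, bias, cv in [("cholesterol", 9.0, 4.5, 2.4),
                            ("ALP", 30.0, 3.0, 4.0)]:
    s = sigma_metric(tea, bias, cv)
    qgi = quality_goal_index(bias, cv)
    verdict = "imprecision" if qgi < 0.8 else "inaccuracy" if qgi > 1.2 else "both"
    print(f"{name}: sigma={s:.1f}, QGI={qgi:.2f} -> improve {verdict}")
```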
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2011-04-01 2011-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2014 CFR
2014-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2012 CFR
2012-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2012-04-01 2012-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2013 CFR
2013-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
Spline based least squares integration for two-dimensional shape or wavefront reconstruction
Huang, Lei; Xue, Junpeng; Gao, Bo; ...
2016-12-21
In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison; at the boundaries in particular, the proposed method performs better. The influence of noise is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
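A hedged one-dimensional illustration of the spline-then-integrate idea (the paper works on full two-dimensional slope grids): fit the measured slopes with a cubic spline, integrate the piecewise polynomials analytically, and fix the free constant by least squares against the reference mean. The test profile and noise level are assumed.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 1.0, 41)
height_true = 0.1 * np.sin(2 * np.pi * x)            # "shape" to reconstruct
slope = 0.2 * np.pi * np.cos(2 * np.pi * x)          # its exact slope
slope += np.random.default_rng(1).normal(0, 1e-3, x.size)  # measurement noise

spline = CubicSpline(x, slope)          # piecewise-polynomial fit to the slopes
height = spline.antiderivative()(x)     # analytic integration of the fit

# least-squares fix of the free integration constant: match the reference mean
height -= height.mean() - height_true.mean()
print(f"rms reconstruction error: {np.sqrt(np.mean((height - height_true)**2)):.2e}")
```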
Mixed Initiative Visual Analytics Using Task-Driven Recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Cramer, Nicholas O.; Israel, David
2015-12-07
Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for the development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.
Harries, Bruce; Filiatrault, Lyne; Abu-Laban, Riyad B
2018-05-30
Quality improvement (QI) analytic methodology is rarely encountered in the emergency medicine literature. We sought to comparatively apply QI design and analysis techniques to an existing data set, and discuss these techniques as an alternative to standard research methodology for evaluating a change in a process of care. We used data from a previously published randomized controlled trial on triage-nurse initiated radiography using the Ottawa ankle rules (OAR). QI analytic tools were applied to the data set from this study and evaluated comparatively against the original standard research methodology. The original study concluded that triage nurse-initiated radiographs led to a statistically significant decrease in mean emergency department length of stay. Using QI analytic methodology, we applied control charts and interpreted the results using established methods that preserved the time sequence of the data. This analysis found a compelling signal of a positive treatment effect that would have been identified after the enrolment of 58% of the original study sample, and in the 6th month of this 11-month study. Our comparative analysis demonstrates some of the potential benefits of QI analytic methodology. We found that had this approach been used in the original study, insights regarding the benefits of nurse-initiated radiography using the OAR would have been achieved earlier, and thus potentially at a lower cost. In situations where the overarching aim is to accelerate implementation of practice improvement to benefit future patients, we believe that increased consideration should be given to the use of QI analytic methodology.
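For readers unfamiliar with the tooling, the sketch below computes an individuals (XmR) control chart of the kind used in such QI analyses: the center line and 3-sigma limits come from the average moving range via the standard 2.66 constant, and a run rule flags a sustained shift even when few points breach the limits. The monthly length-of-stay values are invented for illustration and are not the study's data.

```python
import numpy as np

los = np.array([112, 118, 109, 115, 120, 117,              # minutes, before change
                96, 94, 98, 91, 95, 93, 97, 92, 90, 94])   # after change
center = los.mean()                        # in practice often frozen on baseline data
mr_bar = np.abs(np.diff(los)).mean()       # average moving range
ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar

print(f"CL={center:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}")
print("points outside limits:", np.where((los > ucl) | (los < lcl))[0])

run = max_run = 0                          # run rule: 8+ consecutive points on one
for below in los < center:                 # side of the CL signals a sustained shift
    run = run + 1 if below else 0
    max_run = max(max_run, run)
print(f"longest run below CL: {max_run} (>= 8 suggests a sustained shift)")
```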
Laboratories measuring target chemical, radiochemical, pathogen, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery.
Marzi Khosrowshahi, Elnaz; Razmi, Habib
2018-02-08
A green biocomposite of sunflower stalks and graphitic carbon nitride nanosheets has been applied as a solid-phase extraction adsorbent in the sample preparation of five polycyclic aromatic hydrocarbons in different solutions, with determination by high-performance liquid chromatography with ultraviolet detection. Before the modification, sunflower stalks exhibited relatively low adsorption of the polycyclic aromatic hydrocarbons. The modified sunflower stalks showed increased adsorption of the analytes, due to the increased surface area and the existence of π-π interactions between the analytes and the graphitic carbon nitride nanosheets on the surface. Under the optimal conditions, the limits of detection and quantification for the five polycyclic aromatic hydrocarbon compounds reached 0.4-32 and 1.2-95 ng/L, respectively. The method accuracy was evaluated using recovery measurements in spiked real samples, and good recoveries from 71 to 115% with relative standard deviations of <10% were achieved. The developed method was successfully applied to the determination of polycyclic aromatic hydrocarbons in various samples - well water, tap water, soil, vegetable, and barbequed meat (kebab) - with analyte contents ranging from 0.065 to 13.3 μg/L. The prepared green composite as a new sorbent has advantages including ease of preparation, low cost, and good reusability.
VALIDATION OF STANDARD ANALYTICAL PROTOCOL FOR ...
There is a growing concern with the potential for terrorist use of chemical weapons to cause civilian harm. In the event of an actual or suspected outdoor release of chemically hazardous material in a large area, the extent of contamination must be determined. This requires a system with the ability to prepare and quickly analyze a large number of contaminated samples for the traditional chemical agents, as well as numerous toxic industrial chemicals. Liquid samples (both aqueous and organic), solid samples (e.g., soil), vapor samples (e.g., air) and mixed state samples, all ranging from household items to deceased animals, may require some level of analyses. To meet this challenge, the U.S. Environmental Protection Agency (U.S. EPA) National Homeland Security Research Center, in collaboration with experts from across U.S. EPA and other Federal Agencies, initiated an effort to identify analytical methods for the chemical and biological agents that could be used to respond to a terrorist attack or a homeland security incident. U.S. EPA began development of standard analytical protocols (SAPs) for laboratory identification and measurement of target agents in case of a contamination threat. These methods will be used to help assist in the identification of existing contamination, the effectiveness of decontamination, as well as clearance for the affected population to reoccupy previously contaminated areas. One of the first SAPs developed was for the determin
Potyrailo, Radislav A
2017-08-29
For detection of gases and vapors in complex backgrounds, "classic" analytical instruments have been the unavoidable alternative to existing sensors. Recently a new generation of sensors, known as multivariable sensors, emerged with a fundamentally different perspective for sensing that eliminates limitations of existing sensors. In multivariable sensors, a sensing material is designed to have diverse responses to different gases and vapors and is coupled to a multivariable transducer that provides independent outputs to recognize these diverse responses. Data analytics tools provide rejection of interferences and multi-analyte quantitation. This review critically analyses advances in multivariable sensors based on ligand-functionalized metal nanoparticles, also known as monolayer-protected nanoparticles (MPNs). These MPN sensing materials distinctively stand out from other sensing materials for multivariable sensors due to their diversity of gas- and vapor-response mechanisms as provided by organic and biological ligands, the applicability of these sensing materials to broad classes of gas-phase compounds such as condensable vapors and non-condensable gases, and their compatibility with several principles of signal transduction in multivariable sensors, resulting in non-resonant and resonant electrical sensors as well as material- and structure-based photonic sensors. Such features should allow MPN multivariable sensors to be an attractive, high-value addition to existing analytical instrumentation.
Laboratories measuring target pathogen analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select pathogens.
Yanzhen Wu; Hu, A P; Budgett, D; Malpas, S C; Dissanayake, T
2011-06-01
Transcutaneous energy transfer (TET) enables the transfer of power across the skin without direct electrical connection. It is a mechanism for powering implantable devices for the lifetime of a patient. For maximum power transfer, it is essential that TET systems be resonant on both the primary and secondary sides, which requires considerable design effort. Consequently, a strong need exists for an efficient method to aid the design process. This paper presents an analytical technique appropriate for analyzing complex TET systems. The system's steady-state solution in closed form with sufficient accuracy is obtained by employing the proposed equivalent small parameter method. It is shown that power-transfer capability can be correctly predicted without tedious iterative simulations or practical measurements. Furthermore, for TET systems utilizing a current-fed push-pull soft-switching resonant converter, it is found that the maximum energy transfer does not occur when the primary and secondary resonant tanks are "tuned" to the nominal resonant frequency. An optimal tuning point exists, corresponding to the system's maximum power-transfer capability when optimal tuning capacitors are applied.
Analytic representation of FK/Fπ in two loop chiral perturbation theory
NASA Astrophysics Data System (ADS)
Ananthanarayan, B.; Bijnens, Johan; Friot, Samuel; Ghosh, Shayan
2018-05-01
We present an analytic representation of FK/Fπ as calculated in three-flavor two-loop chiral perturbation theory, which involves expressing three mass scale sunsets in terms of Kampé de Fériet series. We demonstrate how approximations may be made to obtain relatively compact analytic representations. An illustrative set of fits using lattice data is also presented, which shows good agreement with existing fits.
An integrative framework for sensor-based measurement of teamwork in healthcare.
Rosen, Michael A; Dietz, Aaron S; Yang, Ting; Priebe, Carey E; Pronovost, Peter J
2015-01-01
There is a strong link between teamwork and patient safety. Emerging evidence supports the efficacy of teamwork improvement interventions. However, the availability of reliable, valid, and practical measurement tools and strategies is commonly cited as a barrier to long-term sustainment and spread of these teamwork interventions. This article describes the potential value of sensor-based technology as a methodology to measure and evaluate teamwork in healthcare. The article summarizes the teamwork literature within healthcare, including team improvement interventions and measurement. Current applications of sensor-based measurement of teamwork are reviewed to assess the feasibility of employing this approach in healthcare. The article concludes with a discussion highlighting current application needs and gaps and relevant analytical techniques to overcome the challenges to implementation. Compelling studies exist documenting the feasibility of capturing a broad array of team input, process, and output variables with sensor-based methods. Implications of this research are summarized in a framework for development of multi-method team performance measurement systems. Sensor-based measurement within healthcare can unobtrusively capture information related to social networks, conversational patterns, physical activity, and an array of other meaningful information without having to directly observe or periodically survey clinicians. However, trust and privacy concerns present challenges that need to be overcome through engagement of end users in healthcare. Initial evidence exists to support the feasibility of sensor-based measurement to drive feedback and learning across individual, team, unit, and organizational levels. Future research is needed to refine methods, technologies, theory, and analytical strategies. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...
The Coordinate Orthogonality Check (corthog)
NASA Astrophysics Data System (ADS)
Avitabile, P.; Pechinsky, F.
1998-05-01
A new technique referred to as the coordinate orthogonality check (CORTHOG) helps to identify how each physical degree of freedom contributes to the overall orthogonality relationship between analytical and experimental modal vectors on a mass-weighted basis. Using the CORTHOG technique together with the pseudo-orthogonality check (POC) clarifies where potential discrepancies exist between the analytical and experimental modal vectors. CORTHOG improves the understanding of the correlation (or lack of correlation) that exists between modal vectors. The CORTHOG theory is presented along with the evaluation of several cases to show the use of the technique.
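For orientation, here is a hedged numerical sketch of the mass-weighted pseudo-orthogonality check (POC) that CORTHOG decomposes, together with the per-degree-of-freedom contributions to a single POC entry; the matrices below are randomly generated stand-ins, not data from the paper.

```python
# POC = Phi_a^T M Phi_e should approximate identity when analytical and
# experimental mode shapes correlate; CORTHOG inspects the per-DOF terms.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 6
M = np.diag(rng.uniform(1.0, 3.0, n))        # invented mass matrix
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)                  # a symmetric positive-definite "stiffness"

_, phi_a = eigh(K, M)                        # mass-orthonormal analytical modes
phi_e = phi_a[:, :3] + 0.05 * rng.normal(size=(n, 3))  # noisy "experimental" modes

poc = phi_a[:, :3].T @ M @ phi_e             # pseudo-orthogonality check
print(np.round(poc, 2))                      # near-identity flags good correlation

# CORTHOG-style view: per-DOF contributions to one POC entry before summing,
# exposing which physical DOFs drive any discrepancy.
contrib = phi_a[:, 0] * (M @ phi_e[:, 0])
print(np.round(contrib, 3), round(contrib.sum(), 3))
```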
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajpathak, Bhooshan, E-mail: bhooshan@ee.iitb.ac.in; Pillai, Harish K., E-mail: hp@ee.iitb.ac.in; Bandyopadhyay, Santanu, E-mail: santanu@me.iitb.ac.in
2015-10-15
In this paper, we analytically examine the unstable periodic orbits and chaotic orbits of the 1-D linear piecewise-smooth discontinuous map. We explore the existence of unstable orbits and the effect of variation in parameters on the coexistence of unstable orbits. Further, we show that this structure is different from the well-known period-adding cascade structure associated with the stable periodic orbits of the same map. Finally, we analytically prove the existence of chaotic orbits for this map.
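To make the object of study concrete, the snippet below iterates a 1-D piecewise-linear discontinuous map in one commonly used normal form; the parameter names and values are illustrative assumptions chosen so that both slopes exceed one (making every periodic orbit unstable) while orbits stay bounded, not the paper's specific parametrization.

```python
# Normal-form assumption: x_{n+1} = a*x + mu      if x < 0
#                         x_{n+1} = b*x + mu + l  if x >= 0
def step(x, a, b, mu, l):
    return a * x + mu if x < 0 else b * x + mu + l

def orbit(x0, n, **params):
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1], **params))
    return xs

# Slopes 1.5 > 1: all periodic orbits unstable; the discontinuity reinjects
# orbits so a typical trajectory wanders chaotically over a bounded set.
print(orbit(0.2, 20, a=1.5, b=1.5, mu=1.0, l=-2.0))
```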
Laboratories measuring target biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select biotoxins.
Laboratories measuring target chemical, radiochemical, pathogen, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery.
MASTER ANALYTICAL SCHEME FOR ORGANIC COMPOUNDS IN WATER: PART 1. PROTOCOLS
A Master Analytical Scheme (MAS) has been developed for the analysis of volatile (gas chromatographable) organic compounds in water. In developing the MAS, it was necessary to evaluate and modify existing analysis procedures and develop new techniques to produce protocols that pr...
Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.
2014-01-01
Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
Geometric model of pseudo-distance measurement in satellite location systems
NASA Astrophysics Data System (ADS)
Panchuk, K. L.; Lyashkov, A. A.; Lyubchinov, E. V.
2018-04-01
The existing mathematical model of pseudo-distance measurement in satellite location systems does not provide a precise solution of the problem, but rather an approximate one. This inaccuracy, together with bias in the measurement of the distance from satellite to receiver, results in errors on the level of several meters. Thereupon, the relevance of refining the current mathematical model becomes obvious. The solution of the system of quadratic equations used in the current mathematical model is based on linearization. The objective of the paper is refinement of the current mathematical model and derivation of an analytical solution of the system of equations on its basis. In order to attain the objective, a geometric analysis is performed and a geometric interpretation of the equations is given. As a result, an equivalent system of equations, which allows an analytical solution, is derived. An example of the analytical solution's implementation is presented. Application of the analytical solution algorithm to the problem of pseudo-distance measurement in satellite location systems makes it possible to improve the accuracy of such measurements.
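For contrast with the proposed analytical solution, here is a hedged sketch of the standard linearized (Gauss-Newton) treatment of the pseudo-distance equations that the paper criticizes; the satellite coordinates, receiver position, and clock bias below are synthetic.

```python
# Pseudorange model: rho_i = ||s_i - p|| + b, solved for position p and
# receiver clock bias b (in meters) by iterative linearization.
import numpy as np

sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
true_pos = np.array([1111e3, 2222e3, 3333e3])
true_bias = 300.0
rho = np.linalg.norm(sats - true_pos, axis=1) + true_bias  # synthetic pseudoranges

x = np.zeros(4)                     # [x, y, z, bias], guess at Earth's center
for _ in range(10):                 # Gauss-Newton iterations
    d = np.linalg.norm(sats - x[:3], axis=1)
    residual = rho - (d + x[3])
    J = np.hstack([-(sats - x[:3]) / d[:, None], np.ones((len(sats), 1))])
    delta, *_ = np.linalg.lstsq(J, residual, rcond=None)
    x += delta

print(x[:3] - true_pos, x[3] - true_bias)   # both converge to ~0
```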
Gebauer, Petr; Malá, Zdena; Boček, Petr
2014-03-01
This contribution is the third part of the project on strategies used in the selection and tuning of electrolyte systems for anionic ITP with ESI-MS detection. The strategy presented here is based on the creation of self-maintained ITP subsystems in moving-boundary systems and describes two new principal approaches offering physical separation of analyte zones from their common ITP stack and/or simultaneous selective stacking of two different analyte groups. Both strategic directions are based on extending the number of components forming the electrolyte system by adding a third suitable anion. The first method is the application of the spacer technique to moving-boundary anionic ITP systems, the second method is a technique utilizing a moving-boundary ITP system in which two ITP subsystems exist and move with mutually different velocities. It is essential for ESI detection that both methods can be based on electrolyte systems containing only several simple chemicals, such as simple volatile organic acids (formic and acetic) and their ammonium salts. The properties of both techniques are defined theoretically and discussed from the viewpoint of their applicability to trace analysis by ITP-ESI-MS. Examples of system design for selected model separations of preservatives and pharmaceuticals illustrate the validity of the theoretical model and application potential of the proposed techniques by both computer simulations and experiments. Both new methods enhance the application range of ITP-MS and may be beneficial particularly for complex multicomponent samples or for analytes with identical molecular mass. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Groves, Ethan; Palenik, Skip; Palenik, Christopher S
2018-04-18
While color is arguably the most important optical property of evidential fibers, the actual dyestuffs responsible for its expression in them are, in forensic trace evidence examinations, rarely analyzed and still less often identified. This is due, primarily, to the exceedingly small quantities of dye present in a single fiber as well as to the fact that dye identification is a challenging analytical problem, even when large quantities are available for analysis. Among the practical reasons for this are the wide range of dyestuffs available (and the even larger number of trade names), the low total concentration of dyes in the finished product, the limited amount of sample typically available for analysis in forensic cases, and the complexity of the dye mixtures that may exist within a single fiber. Literature on the topic of dye analysis is often limited to a specific method, subset of dyestuffs, or an approach that is not applicable given the constraints of a forensic analysis. Here, we present a generalized approach to dye identification that (1) combines several robust analytical methods, (2) is broadly applicable to a wide range of dye chemistries, application classes, and fiber types, and (3) can be scaled down to forensic casework-sized samples. The approach is based on the development of a reference collection of 300 commercially relevant textile dyes that have been characterized by a variety of microanalytical methods (HPTLC, Raman microspectroscopy, infrared microspectroscopy, UV-Vis spectroscopy, and visible microspectrophotometry). Although there is no single approach that is applicable to all dyes on every type of fiber, a combination of these analytical methods has been applied using a reproducible approach that permits the use of reference libraries to constrain the identity of and, in many cases, identify the dye (or dyes) present in a textile fiber sample.
Ramanujan, Devarajan; Bernstein, William Z; Chandrasegaran, Senthil K; Ramani, Karthik
2017-01-01
The rapid rise in technologies for data collection has created an unmatched opportunity to advance the use of data-rich tools for lifecycle decision-making. However, the usefulness of these technologies is limited by the ability to translate lifecycle data into actionable insights for human decision-makers. This is especially true in the case of sustainable lifecycle design (SLD), as the assessment of environmental impacts, and the feasibility of making corresponding design changes, often relies on human expertise and intuition. Supporting human sense-making in SLD requires the use of both data-driven and user-driven methods while exploring lifecycle data. A promising approach for combining the two is through the use of visual analytics (VA) tools. Such tools can leverage the ability of computer-based tools to gather, process, and summarize data along with the ability of human-experts to guide analyses through domain knowledge or data-driven insight. In this paper, we review previous research that has created VA tools in SLD. We also highlight existing challenges and future opportunities for such tools in different lifecycle stages-design, manufacturing, distribution & supply chain, use-phase, end-of-life, as well as life cycle assessment. Our review shows that while the number of VA tools in SLD is relatively small, researchers are increasingly focusing on the subject matter. Our review also suggests that VA tools can address existing challenges in SLD and that significant future opportunities exist.
Berry, Christopher M; Zhao, Peng
2015-01-01
Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study's results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans. (c) 2015 APA, all rights reserved.
Greenberg, David A.
2011-01-01
Computer simulation methods are under-used tools in genetic analysis because simulation approaches have been portrayed as inferior to analytic methods. Even when simulation is used, its advantages are not fully exploited. Here, I present SHIMSHON, our package of genetic simulation programs that have been developed, tested, used for research, and used to generate data for Genetic Analysis Workshops (GAW). These simulation programs, now web-accessible, can be used by anyone to answer questions about designing and analyzing genetic disease studies for locus identification. This work has three foci: (1) the historical context of SHIMSHON's development, suggesting why simulation has not been more widely used so far. (2) Advantages of simulation: computer simulation helps us to understand how genetic analysis methods work. It has advantages for understanding disease inheritance and methods for gene searches. Furthermore, simulation methods can be used to answer fundamental questions that either cannot be answered by analytical approaches or cannot even be defined until the problems are identified and studied using simulation. (3) I argue that, because simulation was not accepted, there was a failure to grasp the meaning of some simulation-based studies of linkage. This may have contributed to perceived weaknesses in linkage analysis; weaknesses that did not, in fact, exist. PMID:22189467
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekmekcioglu, Mehmet, E-mail: meceng3584@yahoo.co; Kaya, Tolga; Kahraman, Cengiz
The use of fuzzy multiple criteria analysis (MCA) in solid waste management has the advantage of rendering subjective and implicit decision making more objective and analytical, with its ability to accommodate both quantitative and qualitative data. In this paper a modified fuzzy TOPSIS methodology is proposed for the selection of an appropriate disposal method and site for municipal solid waste (MSW). Our method is superior to existing methods since it has the capability of representing vague qualitative data and presenting all possible results with different degrees of membership. In the first stage of the proposed methodology, a set of criteria of cost, reliability, feasibility, pollution and emission levels, and waste and energy recovery is used to determine the best MSW disposal method. Landfilling, composting, conventional incineration, and refuse-derived fuel (RDF) combustion are the alternatives considered. The weights of the selection criteria are determined by fuzzy pairwise comparison matrices of the Analytic Hierarchy Process (AHP). It is found that RDF combustion is the best disposal method alternative for Istanbul. In the second stage, the same methodology is used to determine the optimum RDF combustion plant location using adjacent land use, climate, road access and cost as the criteria. The results of this study illustrate the importance of the weights on the various factors in deciding the optimized location, with the best site located in Catalca. A sensitivity analysis is also conducted to monitor how sensitive our model is to changes in the various criteria weights.
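As a minimal sketch of the ranking machinery underlying this approach, the snippet below runs crisp TOPSIS on an invented disposal-method matrix (the paper's fuzzy variant replaces the crisp scores and weights with fuzzy numbers and derives the weights from AHP comparisons); all scores, weights, and criterion directions here are placeholders.

```python
import numpy as np

# rows: landfilling, composting, incineration, RDF combustion
# cols: cost, reliability, feasibility, pollution, energy recovery
X = np.array([[3., 6., 8., 2., 1.],
              [5., 5., 6., 6., 2.],
              [8., 7., 5., 4., 6.],
              [7., 8., 6., 5., 8.]])
w = np.array([.25, .2, .15, .2, .2])                 # criterion weights
benefit = np.array([False, True, True, False, True]) # cost/pollution: lower is better

V = w * X / np.linalg.norm(X, axis=0)            # weighted normalized matrix
ideal = np.where(benefit, V.max(0), V.min(0))    # positive ideal solution
anti = np.where(benefit, V.min(0), V.max(0))     # negative ideal solution
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)              # rank alternatives: higher is better
print(closeness)
```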
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
NASA Astrophysics Data System (ADS)
Asadpour-Zeynali, Karim; Bastami, Mohammad
2010-02-01
In this work a new modification of the standard addition method called "net analyte signal standard addition method (NASSAM)" is presented for the simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of standard addition method with those of net analyte signal concept. The method can be applied for the determination of analyte in the presence of known interferents. The accuracy of the predictions against H-point standard addition method is not dependent on the shape of the analyte and interferent spectra. The method was successfully applied to simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
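For readers unfamiliar with the underlying technique, here is a hedged sketch of the classic standard-addition extrapolation that NASSAM generalizes; the net-analyte-signal step is omitted, and the calibration numbers are synthetic.

```python
# Standard addition: spike the sample with known analyte amounts, fit the
# response line, and read the unknown concentration from the x-intercept.
import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])              # added analyte, ug/mL
signal = np.array([0.208, 0.310, 0.415, 0.518, 0.621])   # instrument response

slope, intercept = np.polyfit(added, signal, 1)
c0 = intercept / slope          # unknown concentration (matrix-matched)
print(f"estimated analyte concentration: {c0:.3f} ug/mL")
```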
Preliminary Tests For Development Of A Non-Pertechnetate Analysis Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diprete, D.; McCabe, D.
2016-09-28
The objective of this task was to develop a non-pertechnetate analysis method that the 222-S lab could easily implement. The initial scope involved working with 222-S laboratory personnel to adapt the existing Tc analytical method to fractionate the non-pertechnetate and pertechnetate. SRNL then developed and tested a method using commercial sorbents containing Aliquat® 336 to extract the pertechnetate (thereby separating it from non-pertechnetate), followed by oxidation, extraction, and stripping steps, and finally analysis by beta counting and mass spectrometry. Several additional items were partially investigated, including impacts of a 137Cs removal step. The method was initially tested on SRS tank waste samples to determine its viability. Although SRS tank waste does not contain non-pertechnetate, testing with it was useful to investigate the compatibility, separation efficiency, interference removal efficacy, and method sensitivity.
Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.
Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J
2016-01-01
Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karakaya, Mahmut; Qi, Hairong
This paper addresses the communication and energy efficiency in collaborative visual sensor networks (VSNs) for people localization, a challenging computer vision problem in its own right. We focus on the design of a light-weight and energy-efficient solution where people are localized based on distributed camera nodes integrating the so-called certainty map generated at each node, which records the target non-existence information within the camera's field of view. We first present a dynamic itinerary for certainty map integration where not only does each sensor node transmit a very limited amount of data but also only a limited number of camera nodes is involved. Then, we perform a comprehensive analytical study to evaluate communication and energy efficiency between different integration schemes, i.e., centralized and distributed integration. Based on results obtained from the analytical study and real experiments, the distributed method shows effectiveness in detection accuracy as well as energy and bandwidth efficiency.
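A toy version of the certainty-map idea may help: each camera node contributes only the cells where it is certain no target exists, and the itinerary integrates that negative information by union; the grid size and per-node maps below are invented.

```python
import numpy as np

grid = (20, 20)

def local_certainty_map(rng):
    # Pretend each camera clears a random block of its field of view.
    m = np.zeros(grid, dtype=bool)
    r, c = rng.integers(0, 10, size=2)
    m[r:r + 10, c:c + 10] = True        # True = "certainly no target here"
    return m

rng = np.random.default_rng(0)
integrated = np.zeros(grid, dtype=bool)
for node in range(5):                   # dynamic itinerary over 5 camera nodes
    integrated |= local_certainty_map(rng)   # each node ships only its update

candidates = ~integrated                # cells where a target may still exist
print(candidates.sum(), "candidate cells remain")
```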
Need total sulfur content? Use chemiluminescence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubala, S.W.; Campbell, D.N.; DiSanzo, F.P.
Regulations issued by the United States Environmental Protection Agency require petroleum refineries to reduce or control the amount of total sulfur present in their refined products. These legislative requirements have led many refineries to search for online instrumentation that can produce accurate and repeatable total sulfur measurements within allowed levels. Several analytical methods currently exist to measure total sulfur content. They include X-ray fluorescence (XRF), microcoulometry, lead acetate tape, and pyrofluorescence techniques. Sulfur-specific chemiluminescence detection (SSCD) has recently received much attention due to its linearity, selectivity, sensitivity, and equimolar response. However, its use has been largely confined to the area of gas chromatography. This article focuses on the special design considerations and analytical utility of an SSCD system developed to determine total sulfur content in gasoline. The system exhibits excellent linearity and selectivity, the ability to detect low minimum levels, and an equimolar response to various sulfur compounds. 2 figs., 2 tabs.
Constraints on the [Formula: see text] form factor from analyticity and unitarity.
Ananthanarayan, B; Caprini, I; Kubis, B
Motivated by the discrepancies noted recently between the theoretical calculations of the electromagnetic [Formula: see text] form factor and certain experimental data, we investigate this form factor using analyticity and unitarity in a framework known as the method of unitarity bounds. We use a QCD correlator computed on the spacelike axis by operator product expansion and perturbative QCD as input, and exploit unitarity and the positivity of its spectral function, including the two-pion contribution that can be reliably calculated using high-precision data on the pion form factor. From this information, we derive upper and lower bounds on the modulus of the [Formula: see text] form factor in the elastic region. The results provide a significant check on those obtained with standard dispersion relations, confirming the existence of a disagreement with experimental data in the region around [Formula: see text].
Predicting Upscaled Behavior of Aqueous Reactants in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Wright, E. E.; Hansen, S. K.; Bolster, D.; Richter, D. H.; Vesselinov, V. V.
2017-12-01
When modeling reactive transport, reaction rates are often overestimated due to the improper assumption of perfect mixing at the support scale of the transport model. In reality, fronts tend to form between participants in thermodynamically favorable reactions, leading to segregation of reactants into islands or fingers. When such a configuration arises, reactions are limited to the interface between the reactive solutes. Closure methods for estimating control-volume-effective reaction rates in terms of quantities defined at the control volume scale do not presently exist, but their development is crucial for effective field-scale modeling. We attack this problem through a combination of analytical and numerical means. Specifically, we numerically study reactive transport through an ensemble of realizations of two-dimensional heterogeneous porous media. We then employ regression analysis to calibrate an analytically-derived relationship between reaction rate and various dimensionless quantities representing conductivity-field heterogeneity and the respective strengths of diffusion, reaction and advection.
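A minimal sketch of the calibration step described above, assuming a power-law closure in the dimensionless groups: synthetic upscaled rates are generated and the exponents are recovered by regression in log space. All names (Pe, Da, s2), exponents, and values are placeholders, not the study's ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
Pe = 10 ** rng.uniform(-1, 2, n)      # Peclet: advection vs diffusion
Da = 10 ** rng.uniform(-1, 2, n)      # Damkohler: reaction vs transport
s2 = rng.uniform(0.1, 2.0, n)         # log-conductivity variance (heterogeneity)

# synthetic "effective rate" with an assumed power-law form plus noise
k_eff = 0.8 * Pe**-0.3 * Da**0.5 * np.exp(-0.4 * s2) * rng.lognormal(0, .05, n)

# ordinary least squares in log space recovers the assumed exponents
A = np.column_stack([np.ones(n), np.log(Pe), np.log(Da), s2])
coef, *_ = np.linalg.lstsq(A, np.log(k_eff), rcond=None)
print(coef)   # approximately [log 0.8, -0.3, 0.5, -0.4]
```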
NASA Astrophysics Data System (ADS)
Hong, Y.; Curteza, A.; Zeng, X.; Bruniaux, P.; Chen, Y.
2016-06-01
Material selection is the most difficult step in the customized garment product design and development process. This study aims to create a hierarchical framework for material selection. The analytic hierarchy process and fuzzy set theories have been applied to reconcile the diverse requirements from the customer and the inherent interactions/interdependencies among these requirements. Sensory evaluation ensures a quick and effective selection without complex laboratory tests such as KES and FAST, using the professional knowledge of the designers. A real empirical application for physically disabled people is carried out to demonstrate the proposed method. Both the theoretical and practical background of this paper indicate that the fuzzy analytic network process can capture experts' knowledge existing in the form of incomplete, ambiguous, and vague information on the mutual influence among the attributes and criteria of the material selection.
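For context, the snippet below shows the core analytic-hierarchy-process computation assumed by such frameworks: criterion weights from the principal eigenvector of a pairwise-comparison matrix, plus Saaty's consistency ratio; the comparison matrix is an invented example, not data from the study.

```python
import numpy as np

A = np.array([[1.,   3.,   5.],
              [1/3., 1.,   2.],
              [1/5., 1/2., 1.]])   # pairwise comparisons of 3 material criteria

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                       # priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58                     # Saaty random index for n = 3
print(w, f"CR={cr:.3f}")           # CR < 0.1 is conventionally acceptable
```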
Zamani Nejad, Mohammad; Jabbari, Mehdi; Ghannad, Mehdi
2014-01-01
Using disk-form multilayers, a semi-analytical solution has been derived for determination of displacements and stresses in a rotating cylindrical shell with variable thickness under uniform pressure. The thick cylinder is divided into disk-form layers with thicknesses corresponding to the thickness of the cylinder. Due to the existence of shear stress in the thick cylindrical shell with variable thickness, the equations governing the disk layers are obtained based on first-order shear deformation theory (FSDT). These equations are in the form of a set of general differential equations. Given that the cylinder is divided into n disks, n sets of differential equations are obtained. The solution of this set of equations, applying the boundary conditions and continuity conditions between the layers, yields displacements and stresses. A numerical solution using the finite element method (FEM) is also presented, and good agreement was found. PMID:24719582
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, F.; Nehl, T.W.
1998-09-01
Because of its high efficiency and power density, the PM brushless dc motor is a strong candidate for electric and hybrid vehicle propulsion systems. An analytical approach is developed to predict the eddy-current losses caused by the inverter's high-frequency pulse width modulation (PWM) switching in a permanent magnet brushless dc motor. The model uses polar coordinates to take curvature effects into account, and is also capable of including the space harmonic effect of the stator magnetic field and the stator lamination effect on the losses. The model was applied to an existing motor design and was verified with the finite element method. Good agreement was achieved between the two approaches. Hence, the model is expected to be very helpful in predicting PWM switching losses in permanent magnet machine design.
Analytical Characterization of Erythritol Tetranitrate, an Improvised Explosive.
Matyáš, Robert; Lyčka, Antonín; Jirásko, Robert; Jakový, Zdeněk; Maixner, Jaroslav; Mišková, Linda; Künzel, Martin
2016-05-01
Erythritol tetranitrate (ETN), an ester of nitric acid and erythritol, is a solid crystalline explosive with high explosive performance. Although it has never been used in any industrial or military application, it has become one of the most prepared and misused improvised explosives. In this study, several analytical techniques were explored to facilitate analysis in forensic laboratories. FTIR and Raman spectrometry measurements expand existing data and bring a more detailed assignment of bands through the parallel study of erythritol [15N4] tetranitrate. In the case of powder diffraction, recently published data were verified, and 1H, 13C, and 15N NMR spectra are discussed in detail. The technique of electrospray ionization tandem mass spectrometry was successfully used for the analysis of ETN. The described methods allow fast, versatile, and reliable detection or analysis of samples containing erythritol tetranitrate in forensic laboratories. © 2016 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath
2018-07-01
One of the most important yet difficult aspects of the finite element model updating problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite), and banded (tridiagonal, diagonal, penta-diagonal, etc.). Though a large number of papers have been published in recent years on various aspects of solutions of this problem, papers dealing with structure preservation are almost nonexistent. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum solution of the associated optimization problem, along with the results of numerical experiments obtained both from the analytical expressions and from an appropriate numerical optimization algorithm, is presented. The results of the numerical experiments support the validity of the proposed method.
Analytic analysis of auxetic metamaterials through analogy with rigid link systems
NASA Astrophysics Data System (ADS)
Rayneau-Kirkhope, Daniel; Zhang, Chengzhao; Theran, Louis; Dias, Marcelo A.
2018-02-01
In recent years, many structural motifs have been designed with the aim of creating auxetic metamaterials. One area of particular interest in this subject is the creation of auxetic material properties through elastic instability. Such metamaterials switch from conventional behaviour to an auxetic response for loads greater than some threshold value. This paper develops a novel methodology in the analysis of auxetic metamaterials which exhibit elastic instability through analogy with rigid link lattice systems. The results of our analytic approach are confirmed by finite-element simulations for both the onset of elastic instability and post-buckling behaviour including Poisson's ratio. The method gives insight into the relationships between mechanisms within lattices and their mechanical behaviour; as such, it has the potential to allow existing knowledge of rigid link lattices with auxetic paths to be used in the design of future buckling-induced auxetic metamaterials.
Soylak, Mustafa; Unsal, Yunus Emre
2011-10-01
A preconcentration-separation procedure has been established based on solid-phase extraction of Fe(III) and Pb(II) on a bucky tubes (BTs) disc. Fe(III) and Pb(II) ions were quantitatively recovered at pH 6. The influences of analytical parameters such as sample volume and flow rate on the recoveries of the analytes on the BT disc were investigated. The effects of co-existing ions on the recoveries were also studied. The detection limits for iron and lead were found to be 1.6 and 4.9 μg L⁻¹, respectively. The validation of the presented method was checked by the analysis of TMDA-51.3 fortified water certified reference material. The presented procedure was successfully applied to the separation-preconcentration and determination of the iron and lead content of some natural water and herbal plant samples from Kayseri, Turkey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisenbach, Markus; Li, Ying Wai
We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods, such as multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.
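For contrast with the analytic-basis scheme, here is a hedged sketch of the histogram-based Wang-Landau sampling it improves upon, run on a 12-spin Ising ring where the energy levels are discrete and binning artifacts do not arise; the sweep length, flatness criterion, and final modification factor are conventional choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12
spins = rng.choice([-1, 1], size=N)

def energy(s):
    return -np.sum(s * np.roll(s, 1))            # periodic 1-D Ising chain

E_levels = np.arange(-N, N + 1, 4)               # allowed energies: -12, -8, ..., 12
idx = {int(E): i for i, E in enumerate(E_levels)}

log_g = np.zeros(len(E_levels))                  # running log density of states
f = 1.0                                          # modification factor (on log g)
E = int(energy(spins))
while f > 1e-4:
    hist = np.zeros(len(E_levels))
    for _ in range(100_000):
        i = rng.integers(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        E_new = E + dE
        # accept with probability min(1, g(E)/g(E_new))
        if np.log(rng.random()) < log_g[idx[E]] - log_g[idx[E_new]]:
            spins[i] *= -1
            E = E_new
        log_g[idx[E]] += f                       # update log g at the current energy
        hist[idx[E]] += 1
    if hist.min() > 0.8 * hist.mean():           # flatness check, then refine f
        f /= 2

print(log_g - log_g[0])                          # relative log density of states
```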
Patient decision making among older individuals with cancer.
Strohschein, Fay J; Bergman, Howard; Carnevale, Franco A; Loiselle, Carmen G
2011-07-01
Patient decision making is an area of increasing inquiry. For older individuals experiencing cancer, variations in health and functional status, physiologic aspects of aging, and tension between quality and quantity of life present unique challenges to treatment-related decision making. We used the pragmatic utility method to analyze the concept of patient decision making in the context of older individuals with cancer. We first evaluated its maturity in existing literature and then posed analytical questions to clarify aspects found to be only partially mature. In this context, we found patient decision making to be an ongoing process, changing with time, reflecting individual and relational components, as well as analytical and emotional ones. Assumptions frequently associated with patient decision making were not consistent with the empirical literature. Careful attention to the multifaceted components of patient decision making among older individuals with cancer provides guidance for research, supportive interventions, and targeted follow-up care.
Parker, Andrew M.; Stone, Eric R.
2013-01-01
One of the most common findings in behavioral decision research is that people have unrealistic beliefs about how much they know. However, demonstrating that misplaced confidence exists does not necessarily mean that there are costs to it. This paper contrasts two approaches toward answering whether misplaced confidence is good or bad, which we have labeled the overconfidence and unjustified confidence approach. We first consider conceptual and analytic issues distinguishing these approaches. Then, we provide findings from a set of simulations designed to determine when the approaches produce different conclusions across a range of possible confidence-knowledge-outcome relationships. Finally, we illustrate the main findings from the simulations with three empirical examples drawn from our own data. We conclude that the unjustified confidence approach is typically the preferred approach, both because it is appropriate for testing a larger set of psychological mechanisms as well as for methodological reasons. PMID:25309037
Analytical techniques for characterization of cyclodextrin complexes in the solid state: A review.
Mura, Paola
2015-09-10
Cyclodextrins are cyclic oligosaccharides able to form inclusion complexes with a variety of hydrophobic guest molecules, positively modifying their physicochemical properties. A thorough analytical characterization of cyclodextrin complexes is of fundamental importance to provide adequate support in the selection of the most suitable cyclodextrin for each guest molecule, and also in view of possible future patenting and marketing of drug-cyclodextrin formulations. The demonstration of the actual formation of a drug-cyclodextrin inclusion complex in solution does not guarantee its existence also in the solid state. Moreover, the technique used to prepare the solid complex can strongly influence the properties of the final product. Therefore, an appropriate characterization of the drug-cyclodextrin solid systems obtained also has a key role in guiding the choice of the most effective preparation method, able to maximize host-guest interactions. The analytical characterization of drug-cyclodextrin solid systems and the assessment of actual inclusion complex formation is not a simple task and involves the combined use of several analytical techniques, whose results have to be evaluated together. The objective of the present review is to present a general prospect of the principal analytical techniques which can be employed for a suitable characterization of drug-cyclodextrin systems in the solid state, evidencing their respective potential advantages and limits. The applications of each examined technique are described and discussed by pertinent examples from the literature. Copyright © 2015 Elsevier B.V. All rights reserved.
Stability analysis of the Euler discretization for SIR epidemic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanto, Agus
2014-06-19
In this paper we consider a discrete SIR epidemic model obtained by the Euler method. For that discrete model, the existence of a disease-free equilibrium and an endemic equilibrium is established. Sufficient conditions for the local asymptotic stability of both the disease-free and endemic equilibria are also derived. It is found that the local asymptotic stability of the existing equilibrium is achieved only for a small time step size h. If h is further increased and passes the critical value, then both equilibria will lose their stability. Our numerical simulations show that complex dynamical behavior such as bifurcation or chaos phenomena will appear for relatively large h. Both analytical and numerical results show that the discrete SIR model has a richer dynamical behavior than its continuous counterpart.
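A small numerical illustration of the phenomenon studied: an explicit-Euler discretization of the SIR model behaves like its continuous counterpart for small step size h but loses stability (and positivity) once h is too large. The parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def euler_sir(h, steps, beta=0.8, gamma=0.4, s0=0.9, i0=0.1):
    s, i, r = s0, i0, 0.0
    out = []
    for _ in range(steps):
        ds, di, dr = -beta * s * i, beta * s * i - gamma * i, gamma * i
        s, i, r = s + h * ds, i + h * di, r + h * dr   # explicit Euler step
        out.append(i)
    return np.array(out)

print(euler_sir(0.1, 5))   # small h: smooth, SIR-like evolution
print(euler_sir(6.0, 5))   # large h: the discrete system loses stability/positivity
```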
Magnetically induced rotor vibration in dual-stator permanent magnet motors
NASA Astrophysics Data System (ADS)
Xie, Bang; Wang, Shiyu; Wang, Yaoyao; Zhao, Zhifu; Xiu, Jie
2015-07-01
Magnetically induced vibration is a major concern in permanent magnet (PM) motors, which is especially true for dual-stator motors. This work develops a two-dimensional model of the rotor using an energy method and employs this model to examine the rigid- and elastic-body vibrations induced by the inner stator tooth passage force and by the outer. The analytical results imply that there exist three typical vibration modes. Their presence or absence depends on the magnet/slot combination, the force's frequency and amplitude, the relative position between the two stators, and other structural parameters. The combination and relative position affect these modes by altering the force phase. The predicted results are verified by magnetic force wave analysis with the finite element method (FEM) and by comparison with existing results. Potential directions are also given with the anticipation of bringing forth more interesting and useful findings. As an engineering application, the magnetically induced vibration can first be reduced via the combination and then via a suitable relative position.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mcmanus, H.L.; Chamis, C.C.
1996-01-01
This report describes analytical methods for calculating stresses and damage caused by degradation of the matrix constituent in polymer matrix composite materials. Laminate geometry, material properties, and matrix degradation states are specified as functions of position and time. Matrix shrinkage and property changes are modeled as functions of the degradation states. The model is incorporated into an existing composite mechanics computer code. Stresses, strains, and deformations at the laminate, ply, and micro levels are calculated, and from these calculations it is determined if there is failure of any kind. The rationale for the model (based on published experimental work) is presented, its integration into the laminate analysis code is outlined, and example results are given, with comparisons to existing material and structural data. The mechanisms behind the changes in properties and in surface cracking during long-term aging of polyimide matrix composites are clarified. High-temperature-material test methods are also evaluated.
Taylor, R. Andrew; Pare, Joseph R.; Venkatesh, Arjun K.; Mowafi, Hani; Melnick, Edward R.; Fleischman, William; Hall, M. Kennedy
2018-01-01
Objectives Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data–driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. Methods This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. Results There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. Of the 4,222 patients in the training group, 210 (5.0%) died during hospitalization, and of the 1,056 patients in the validation group, 50 (4.7%) died during hospitalization. The AUCs with 95% confidence intervals (CIs) for the different models were as follows: random forest model, 0.86 (95% CI = 0.82 to 0.90); CART model, 0.69 (95% CI = 0.62 to 0.77); logistic regression model, 0.76 (95% CI = 0.69 to 0.82); CURB-65, 0.73 (95% CI = 0.67 to 0.80); MEDS, 0.71 (95% CI = 0.63 to 0.77); and mREMS, 0.72 (95% CI = 0.65 to 0.79). The random forest model AUC was statistically different from all other models (p ≤ 0.003 for all comparisons). Conclusions In this proof-of-concept study, a local big data–driven, machine learning approach outperformed existing CDRs as well as traditional analytic techniques for predicting in-hospital mortality of ED patients with sepsis. Future research should prospectively evaluate the effectiveness of this approach and whether it translates into improved clinical outcomes for high-risk sepsis patients. The methods developed serve as an example of a new model for predictive analytics in emergency care that can be automated, applied to other clinical outcomes of interest, and deployed in EHRs to enable locally relevant clinical predictions. PMID:26679719
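A hedged, synthetic-data sketch of the modeling comparison described above, using scikit-learn: a random forest over many weakly informative "EHR-derived" variables versus a plain logistic regression, compared by validation AUC. No real patient data, variables, or hospital systems are involved.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n, p = 5278, 500                                 # visits x candidate EHR variables
X = rng.normal(size=(n, p))
logit = X[:, :10] @ rng.normal(size=10) - 3.0    # only 10 variables matter
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # ~5% "mortality" rate

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
for name, model in [("random forest", rf), ("logistic regression", lr)]:
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```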
ERIC Educational Resources Information Center
Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette
2017-01-01
Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…
A Statistical Approach for Testing Cross-Phenotype Effects of Rare Variants
Broadaway, K. Alaine; Cutler, David J.; Duncan, Richard; Moore, Jacob L.; Ware, Erin B.; Jhun, Min A.; Bielak, Lawrence F.; Zhao, Wei; Smith, Jennifer A.; Peyser, Patricia A.; Kardia, Sharon L.R.; Ghosh, Debashis; Epstein, Michael P.
2016-01-01
Increasing empirical evidence suggests that many genetic variants influence multiple distinct phenotypes. When cross-phenotype effects exist, multivariate association methods that consider pleiotropy are often more powerful than univariate methods that model each phenotype separately. Although several statistical approaches exist for testing cross-phenotype effects for common variants, there is a lack of similar tests for gene-based analysis of rare variants. In order to fill this important gap, we introduce a statistical method for cross-phenotype analysis of rare variants using a nonparametric distance-covariance approach that compares similarity in multivariate phenotypes to similarity in rare-variant genotypes across a gene. The approach can accommodate both binary and continuous phenotypes and further can adjust for covariates. Our approach yields a closed-form test whose significance can be evaluated analytically, thereby improving computational efficiency and permitting application on a genome-wide scale. We use simulated data to demonstrate that our method, which we refer to as the Gene Association with Multiple Traits (GAMuT) test, provides increased power over competing approaches. We also illustrate our approach using exome-chip data from the Genetic Epidemiology Network of Arteriopathy. PMID:26942286
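To make the core statistic concrete, the snippet below computes a sample (squared) distance covariance between simulated rare-variant genotypes and a bivariate phenotype, the kind of genotype-phenotype similarity comparison GAMuT builds on; the simulation and effect structure are invented, and the permutation/analytic p-value machinery is omitted.

```python
import numpy as np

def center(D):
    # double-center a pairwise distance matrix
    return D - D.mean(0) - D.mean(1)[:, None] + D.mean()

def dcov2(X, Y):
    DX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    DY = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    return (center(DX) * center(DY)).mean()   # squared sample distance covariance

rng = np.random.default_rng(0)
n = 300
G = rng.binomial(2, 0.01, size=(n, 30)).astype(float)       # rare-variant genotypes
P = np.column_stack([G[:, :5].sum(1) + rng.normal(size=n),  # trait driven by 5 variants
                     rng.normal(size=n)])                   # unrelated trait

print(dcov2(G, P), dcov2(G, rng.normal(size=(n, 2))))       # signal vs pure-null value
```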
Synchronization of chaotic and nonchaotic oscillators: Application to bipolar disorder
NASA Astrophysics Data System (ADS)
Nono Dueyou Buckjohn, C.; Siewe Siewe, M.; Tchawoua, C.; Kofane, T. C.
2010-08-01
In this Letter, we use a synchronization scheme on two bipolar disorder models consisting of a strongly nonlinear system with multiplicative excitation and a nonlinear oscillator without parametric harmonic forcing. The stability condition following from our control function is demonstrated analytically using Lyapunov theory and the Routh-Hurwitz criteria, yielding the condition for the existence of a feedback gain matrix. A convenient demonstration of the accuracy of the method is complemented by numerical simulations, from which we illustrate the synchronized dynamics between the two non-identical bipolar disorder patients.
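A toy analogue of such a scheme, under stated assumptions: a "slave" Duffing-type oscillator with different parameters is driven toward a "master" by linear state feedback, and the synchronization error shrinks to a small residual (the oscillators are non-identical, so exact synchronization is not expected). The models and gain below are illustrative, not the Letter's bipolar-disorder equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 4.0  # feedback gain, chosen large enough for stable error dynamics

def rhs(t, z):
    x, vx, y, vy = z
    # master: forced Duffing-type oscillator
    ax = -0.2 * vx - x - x**3 + 0.5 * np.cos(1.2 * t)
    # slave: different damping/stiffness, plus feedback on position and velocity
    ay = -0.3 * vy - 1.5 * y - y**3 + K * (x - y) + K * (vx - vy)
    return [vx, ax, vy, ay]

sol = solve_ivp(rhs, (0, 60), [1.0, 0.0, -0.5, 0.3], dense_output=True)
t = np.linspace(0, 60, 7)
err = np.abs(sol.sol(t)[0] - sol.sol(t)[2])
print(err)  # |x - y| decays from 1.5 to a small forced residual
```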
National geodetic satellite program, part 2
NASA Technical Reports Server (NTRS)
Schmid, H.
1977-01-01
Satellite geodesy and the creation of worldwide geodetic reference systems is discussed. The geometric description of the surface and the analytical description of the gravity field of the earth by means of worldwide reference systems, with the aid of satellite geodesy, are presented. A triangulation method based on photogrammetric principles is described in detail. Results are derived in the form of three dimensional models. These mathematical models represent the frame of reference into which one can fit the existing geodetic results from the various local datums, as well as future measurements.
Solubility Limits of Dibutyl Phosphoric Acid in Uranium Solutions at SRS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, M.C.; Pierce, R.A.; Ray, R.J.
1998-06-01
The Savannah River Site has enriched uranium (EU) solution which has been stored for almost 10 years since being purified in the second uranium cycle of the H area solvent extraction process. The concentrations in solution are ~6 g/L U and about 0.1 M nitric acid. Residual tributylphosphate in the solutions has slowly hydrolyzed to form dibutyl phosphoric acid (HDBP) at concentrations averaging 50 mg/L. Uranium is known to form compounds with DBP which have limited solubility. The potential to form uranium-DBP solids raises a nuclear criticality safety issue. SRTC tests have shown that U-DBP solids will precipitate at concentrations potentially attainable during storage of enriched uranium solutions. Evaporation of the existing EUS solution without additional acidification could result in the precipitation of U-DBP solids if the DBP concentration in the resulting solution exceeds 110 ppm at ambient temperature. The same potential exists for evaporation of unwashed 1CU solutions. The most important variables of interest for present plant operations are the HNO3 and DBP concentrations. Temperature is also an important variable controlling precipitation. The data obtained in these tests can be used to set operating and safety limits for the plant. It is recommended that the data for 0 °C with 0.5 M HNO3 be used for setting the limits. The limit would be 80 mg/L, which is 3 standard deviations below the average of 86 observed in the tests. The data show that super-saturation can occur when the DBP concentration is as much as 50 percent above the solubility limit. However, super-saturation cannot be relied on for maintaining nuclear criticality safety. The analytical method for determining DBP concentration in U solutions was improved so that analyses for a solution are accurate to within 10 percent. However, the overall uncertainty of results for periodic samples of the existing EUS solutions was only reduced slightly. Thus, sampling appears to be the largest portion of the uncertainty for EUS sample results, although the number of samples analyzed here is low, which could contribute to higher uncertainty. The analytical method can be transferred to the plant analytical labs for more routine analysis of samples.
Numerical Polynomial Homotopy Continuation Method and String Vacua
Mehta, Dhagash
2011-01-01
Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications, and analyzing them according to their stability, is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the areas of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method suffers from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes way beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated compactified M-theory model, which has an SU(3) structure, can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities between the two methods.
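As a toy illustration of the homotopy continuation idea (not the production NPHC code, which tracks multivariate systems with adaptive predictor-corrector steps and endgames), the sketch below tracks all roots of a univariate cubic from a start system with known roots; f, g, and the step counts are our own choices.

```python
import numpy as np

# Track all roots of f(x) = x^3 - 2x + 1 from the start system
# g(x) = x^3 - 1, whose roots (cube roots of unity) are known, along
#   H(x, t) = (1 - t) * g(x) + t * f(x),  t: 0 -> 1.

f  = lambda x: x**3 - 2*x + 1
df = lambda x: 3*x**2 - 2
g  = lambda x: x**3 - 1
dg = lambda x: 3*x**2

def track(x, steps=200, newton_iters=5):
    for k in range(steps):
        t0, t1 = k / steps, (k + 1) / steps
        # Euler predictor: dx/dt = -(dH/dt)/(dH/dx), with dH/dt = f - g
        dHdx = (1 - t0) * dg(x) + t0 * df(x)
        x = x - (t1 - t0) * (f(x) - g(x)) / dHdx
        # Newton corrector at t1
        for _ in range(newton_iters):
            H  = (1 - t1) * g(x) + t1 * f(x)
            dH = (1 - t1) * dg(x) + t1 * df(x)
            x = x - H / dH
    return x

starts = [np.exp(2j * np.pi * k / 3) for k in range(3)]  # roots of g
roots = [track(x) for x in starts]
print(np.round(roots, 6))  # should recover all three roots of f
```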
Experimental and analytical research on the aerodynamics of wind driven turbines. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohrbach, C.; Wainauski, H.; Worobel, R.
1977-12-01
This aerodynamic research program was aimed at providing a reliable, comprehensive data base on a series of wind turbine models covering a broad range of the prime aerodynamic and geometric variables. Such data, obtained under controlled laboratory conditions on turbines designed by the same method, of the same size, and tested in the same wind tunnel, had not been available in the literature. Moreover, this research program was further aimed at providing a basis for evaluating the adequacy of existing wind turbine aerodynamic design and performance methodology, for assessing the potential of recent advanced theories, and for providing a basis for further method development and refinement.
Subsonic flutter analysis addition to NASTRAN. [for use with CDC 6000 series digital computers
NASA Technical Reports Server (NTRS)
Doggett, R. V., Jr.; Harder, R. L.
1973-01-01
A subsonic flutter analysis capability has been developed for NASTRAN, and a developmental version of the program has been installed on the CDC 6000 series digital computers at the Langley Research Center. The flutter analysis is of the modal type, uses doublet lattice unsteady aerodynamic forces, and solves the flutter equations by using the k-method. Surface and one-dimensional spline functions are used to transform from the aerodynamic degrees of freedom to the structural degrees of freedom. Some preliminary applications of the method to a beamlike wing, a platelike wing, and a platelike wing with a folded tip are compared with existing experimental and analytical results.
An unsteady rotor/fuselage interaction method
NASA Technical Reports Server (NTRS)
Egolf, T. Alan; Lorber, Peter F.
1987-01-01
An analytical method has been developed to treat unsteady helicopter rotor, wake, and fuselage interaction aerodynamics. An existing lifting line/prescribed wake rotor analysis and a source panel fuselage analysis were modified to predict vibratory fuselage airloads. The analyses were coupled through the induced flow velocities of the rotor and wake on the fuselage and the fuselage on the rotor. A prescribed displacement technique was used to distort the rotor wake about the fuselage. Sensitivity studies were performed to determine the influence of wake and body geometry on the computed airloads. Predicted and measured mean and unsteady pressures on a cylindrical body in the wake of a two-bladed rotor were compared. Initial results show good qualitative agreement.
Ensemble of single quadrupolar nuclei in rotating solids: sidebands in NMR spectrum.
Kundla, Enn
2006-07-01
A novel way is proposed to describe the evolution of nuclear magnetic polarization and the induced NMR spectrum. In this method, the effect of a high-intensity external static magnetic field and the effects of the remaining Hamiltonian interaction components that commute with it are taken into account simultaneously and on an equal footing. The method suits any concrete NMR problem. It brings out details that really exist in the registered spectra, evoked by the Hamiltonian secular terms, which may otherwise be smoothed away by an approximate treatment of those terms' effects. Complete analytical expressions are obtained describing the NMR spectra, including the rotational sideband sets, of single quadrupolar nuclei in rotating solids.
Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Dalton, Angela C.; Dale, Crystal
2014-06-01
Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.
Using Learning Analytics to Support Engagement in Collaborative Writing
ERIC Educational Resources Information Center
Liu, Ming; Pardo, Abelardo; Liu, Li
2017-01-01
Online collaborative writing tools provide an efficient way to complete a writing task. However, existing tools only focus on technological affordances and ignore the importance of social affordances in a collaborative learning environment. This article describes a learning analytic system that analyzes writing behaviors, and creates…
The position of the analyst as expert: yesterday and today.
Fresenius, W
2000-11-01
The interrelation between law and analytical chemistry 150 years ago is outlined, showing that similar problems to today already existed at that time. Examples of present-day cases of judicial investigations are given and consequences for the duty of the analytical chemist are discussed.
MASTER ANALYTICAL SCHEME FOR ORGANIC COMPOUNDS IN WATER. PART 2. APPENDICES TO PROTOCOLS
A Master Analytical Scheme (MAS) has been developed for the analysis of volatile (gas chromatographable) organic compounds in water. In developing the MAS, it was necessary to evaluate and modify existing analysis procedures and develop new techniques to produce protocols that pr...
NASA Astrophysics Data System (ADS)
Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili
2012-04-01
In this paper, an analytical analysis of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified using the time-stepping finite element method: the performance of the PMV machine computed analytically is quantitatively compared with the finite element results, and the two agree well. Finally, experimental results are given to further show the validity of the analysis.
A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.
Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J
2015-05-01
In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of analyte into the matrix crystals, the sample solutions were prepared without matrix and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers and lipids show that detection of analyte ions, which were completely suppressed using the conventional dried-droplet method, could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes in the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures.
Parametrization of local CR automorphisms by finite jets and applications
NASA Astrophysics Data System (ADS)
Lamel, Bernhard; Mir, Nordine
2007-04-01
For any real-analytic hypersurface M ⊂ ℂ^N that does not contain any complex-analytic subvariety of positive dimension, we show that for every point p ∈ M the local real-analytic CR automorphisms of M fixing p can be parametrized real-analytically by their ℓ_p-jets at p. As a direct application, we derive a Lie group structure for the topological group Aut(M, p). Furthermore, we also show that the order ℓ_p of the jet space in which the group Aut(M, p) embeds can be chosen to depend upper-semicontinuously on p. As a first consequence, it follows that given any compact real-analytic hypersurface M in ℂ^N, there exists an integer k depending only on M such that for every point p ∈ M, germs at p of CR diffeomorphisms mapping M into another real-analytic hypersurface in ℂ^N are uniquely determined by their k-jet at that point. Another consequence is the following boundary version of H. Cartan's uniqueness theorem: given any bounded domain Ω with smooth real-analytic boundary, there exists an integer k depending only on ∂Ω such that if H: Ω → Ω is a proper holomorphic mapping extending smoothly up to ∂Ω near some point p ∈ ∂Ω, with the same k-jet at p as that of the identity mapping, then necessarily H = Id. Our parametrization theorem also holds for the stability group of any essentially finite minimal real-analytic CR manifold of arbitrary codimension. One of the new main tools developed in the paper, which may be of independent interest, is a parametrization theorem for invertible solutions of a certain kind of singular analytic equations, which roughly speaking consists of inverting certain families of parametrized maps with singularities.
NASA Astrophysics Data System (ADS)
Hu, Xian-Quan; Luo, Guang; Cui, Li-Peng; Li, Fang-Yu; Niu, Lian-Bin
2009-03-01
The analytic solution of the radial Schrödinger equation is studied by using a tight coupling condition among several positive-power and inverse-power potential functions. Precise analytic solutions, and the conditions that decide the existence of an analytic solution, are found when the potential of the radial Schrödinger equation is V(r) = α₁r⁸ + α₂r³ + α₃r² + β₃r⁻¹ + β₂r⁻³ + β₁r⁻⁴. Generally speaking, the Schrödinger equation with a superposition of several potentials admits only approximate, not analytic, solutions. Here, however, the conditions for the existence of an analytic solution are found, and the analytic solution and its energy level structure are obtained for the potential above. Following the single-valued, finite and continuous requirements on the wave function of a quantum system, the authors first solve for the asymptotic behavior as r → ∞ and r → 0; second, they match the asymptotic solutions with series solutions in the neighborhood of the irregular singularities; then, by comparing power series coefficients, they deduce a series of analytic solutions of the stationary-state wave function and the corresponding energy level structure through the tight coupling among the coefficients of the potential functions; and lastly, they discuss the solutions and draw conclusions.
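For legibility, the potential treated above is shown in display form, together with the kind of asymptotic factorization the matching procedure suggests (our hedged reading of the abstract; the constants a, b, c and the power s are generic placeholders, not taken from the paper):

```latex
\[
V(r) = \alpha_1 r^{8} + \alpha_2 r^{3} + \alpha_3 r^{2}
     + \beta_3 r^{-1} + \beta_2 r^{-3} + \beta_1 r^{-4},
\]
\[
R(r) \sim \exp\!\left(-a r^{5} - b r^{2}\right)\ (r \to \infty),
\qquad
R(r) \sim r^{s}\exp\!\left(-c/r\right)\ (r \to 0),
\]
```

where the r⁵ and 1/r exponents follow from the dominant r⁸ and r⁻⁴ terms of the potential at large and small r, respectively.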
NASA Technical Reports Server (NTRS)
Kim, Sang-Wook
1987-01-01
Various experimental, analytical, and numerical analysis methods for the flow-solid interaction of a nest of cylinders subjected to cross flows are reviewed. A nest of cylinders subjected to cross flows can be found in numerous engineering applications, including the Space Shuttle Main Engine Main Injector Assembly (SSME-MIA) and nuclear reactor heat exchangers. Despite its extreme importance in engineering applications, understanding of the flow-solid interaction process is quite limited, and the design of tube banks is mostly dependent on experiments and/or experimental correlation equations. For future development of major numerical analysis methods for the flow-solid interaction of a nest of cylinders subjected to cross flow, various turbulence models, nonlinear structural dynamics, and existing laminar flow-solid interaction analysis methods are included.
NASA Astrophysics Data System (ADS)
Amerian, Z.; Salem, M. K.; Salar Elahi, A.; Ghoranneviss, M.
2017-03-01
Equilibrium reconstruction consists of identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. Numerous methods exist to solve the Grad-Shafranov equation, which describes the equilibrium of a plasma confined by an axisymmetric magnetic field. In this paper, we propose a new numerical solution to the Grad-Shafranov equation, for an axisymmetric magnetic field transformed into cylindrical coordinates and solved with the Chebyshev collocation method, when the source term (current density function) on the right-hand side is linear. The Chebyshev collocation method computes highly accurate numerical solutions of differential equations. We describe a circular cross-section tokamak, present numerical results for the magnetic surfaces of the IR-T1 tokamak, and then compare the results with an analytical solution.
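To illustrate the discretization ingredient, here is a minimal Chebyshev collocation solve of a 1D model problem u'' = f with homogeneous Dirichlet conditions, as a stand-in for the paper's cylindrical-coordinate Grad-Shafranov solve (which is not reproduced here); the differentiation matrix follows Trefethen's standard construction.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x
    (Trefethen, Spectral Methods in MATLAB, program cheb)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal via negative row sums
    return D, x

N = 24
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                 # strip boundary rows/cols: u(±1) = 0
f = np.exp(4 * x[1:-1])                  # sample right-hand side
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2, f)

# Exact solution for this f: (exp(4x) - x*sinh(4) - cosh(4)) / 16
exact = (np.exp(4 * x) - x * np.sinh(4) - np.cosh(4)) / 16
print("max error:", np.abs(u - exact).max())   # spectral accuracy, ~1e-10 or better
```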
Satagopam, Venkata; Gu, Wei; Eifes, Serge; Gawron, Piotr; Ostaszewski, Marek; Gebel, Stephan; Barbosa-Silva, Adriano; Balling, Rudi; Schneider, Reinhard
2016-01-01
Abstract Translational medicine is a domain turning results of basic life science research into new tools and methods in a clinical environment, for example, as new diagnostics or therapies. Nowadays, the process of translation is supported by large amounts of heterogeneous data ranging from medical data to a whole range of -omics data. It is not only a great opportunity but also a great challenge, as translational medicine big data is difficult to integrate and analyze, and requires the involvement of biomedical experts for the data processing. We show here that visualization and interoperable workflows, combining multiple complex steps, can address at least parts of the challenge. In this article, we present an integrated workflow for the exploration, analysis, and interpretation of translational medicine data in the context of human health. Three Web services—tranSMART, a Galaxy Server, and a MINERVA platform—are combined into one big data pipeline. Native visualization capabilities enable the biomedical experts to get a comprehensive overview and control over separate steps of the workflow. The capabilities of tranSMART enable a flexible filtering of multidimensional integrated data sets to create subsets suitable for downstream processing. A Galaxy Server offers visually aided construction of analytical pipelines, with the use of existing or custom components. A MINERVA platform supports the exploration of health and disease-related mechanisms in a contextualized analytical visualization system. We demonstrate the utility of our workflow by illustrating its subsequent steps using an existing data set, for which we propose a filtering scheme, an analytical pipeline, and a corresponding visualization of analytical results. The workflow is available as a sandbox environment, where readers can work with the described setup themselves. Overall, our work shows how visualization and interfacing of big data processing services facilitate the exploration, analysis, and interpretation of translational medicine data. PMID:27441714
Systematic Review of Model-Based Economic Evaluations of Treatments for Alzheimer's Disease.
Hernandez, Luis; Ozen, Asli; DosSantos, Rodrigo; Getsios, Denis
2016-07-01
Numerous economic evaluations using decision-analytic models have assessed the cost effectiveness of treatments for Alzheimer's disease (AD) in the last two decades. It is important to understand the methods used in the existing models of AD and how they could impact results, as they could inform new model-based economic evaluations of treatments for AD. The aim of this systematic review was to provide a detailed description of the relevant aspects and components of existing decision-analytic models of AD, identifying areas for improvement and future development, and to conduct a quality assessment of the included studies. We performed a systematic and comprehensive review of cost-effectiveness studies of pharmacological treatments for AD published in the last decade (January 2005 to February 2015) that used decision-analytic models, also including studies considering patients with mild cognitive impairment (MCI). The background information of the included studies and specific information on the decision-analytic models, including their approach and components, assumptions, data sources, analyses, and results, were obtained from each study. A description of how the modeling approaches and assumptions differ across studies, identifying areas for improvement and future development, is provided. At the end, we present our own view of the potential future directions of decision-analytic models of AD and the challenges they might face. The included studies present a variety of different approaches, assumptions, and scopes of decision-analytic models used in the economic evaluation of pharmacological treatments of AD. The major areas for improvement in future models of AD are to include domains of cognition, function, and behavior, rather than cognition alone; include a detailed description of how data used to model the natural course of disease progression were derived; state and justify the economic model selected and its structural assumptions and limitations; provide a detailed (rather than high-level) description of the cost components included in the model; and report on the face-, internal-, and cross-validity of the model to strengthen the credibility of and confidence in model results. The quality scores of most studies were rated as fair to good (average 87.5, range 69.5-100, on a scale of 0-100). Despite the advancements in decision-analytic models of AD, there remain several areas of improvement that are necessary to more appropriately and realistically capture the broad nature of AD and the potential benefits of treatments in future models of AD.
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method that uses a Bayesian analytical technique to classify time series data from the international emissions trading market, generated by agent-based simulation, and compares it with a discrete Fourier transform analytical method. The purpose is to demonstrate analytical methods for mapping time series data such as market prices. These analytical methods have revealed the following results: (1) the classification methods express the time series data as distances between mappings, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data produced via agent-based simulation, including both stationary and non-stationary processes; and (3) the Bayesian analytical method can resolve differences as small as 1% in the agents' emission reduction targets.
Mudge, Elizabeth; Paley, Lori; Schieber, Andreas; Brown, Paula N
2015-10-01
Seeds of milk thistle, Silybum marianum (L.) Gaertn., are used for the treatment and prevention of liver disorders and were identified as a high priority ingredient requiring a validated analytical method. An AOAC International expert panel reviewed existing methods and made recommendations concerning method optimization prior to validation. A series of extraction and separation studies were undertaken on the selected method for determining flavonolignans from milk thistle seeds and finished products to address the review panel recommendations. Once optimized, a single-laboratory validation study was conducted. The method was assessed for repeatability, accuracy, selectivity, LOD, LOQ, analyte stability, and linearity. Flavonolignan content ranged from 1.40 to 52.86% in raw materials and dry finished products and from 36.16 to 1570.7 μg/mL in liquid tinctures. Repeatability for the individual flavonolignans in raw materials and finished products ranged from 1.03 to 9.88% RSDr, with HorRat values between 0.21 and 1.55. Calibration curves for all flavonolignan concentrations had correlation coefficients of >99.8%. The LODs for the flavonolignans ranged from 0.20 to 0.48 μg/mL at 288 nm. Based on the results of this single-laboratory validation, this method is suitable for the quantitation of the six major flavonolignans in milk thistle raw materials and finished products, as well as multicomponent products containing dandelion, schizandra berry, and artichoke extracts. It is recommended that this method be adopted as a First Action Official Method by AOAC International.
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
77 FR 41336 - Analytical Methods Used in Periodic Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-13
... Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission. ACTION: Notice of filing. SUMMARY... proceeding to consider changes in analytical methods used in periodic reporting. This notice addresses... informal rulemaking proceeding to consider changes in the analytical methods approved for use in periodic...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
DOT National Transportation Integrated Search
1974-10-01
The author has brought the review of published analytical methods for determining alcohol in body materials up-to-date. The review deals with analytical methods for alcohol in blood and other body fluids and tissues; breath alcohol methods; factors ...
Stillhart, Cordula; Kuentz, Martin
2012-02-05
Self-emulsifying drug delivery systems (SEDDS) are complex mixtures in which drug quantification can become a challenging task. Thus, a general need exists for novel analytical methods, and a particular interest lies in techniques with the potential for process monitoring. This article compares Raman spectroscopy with high-resolution ultrasonic resonator technology (URT) for drug quantification in SEDDS. The model drugs fenofibrate, indomethacin, and probucol were quantitatively assayed in different self-emulsifying formulations. We measured ultrasound velocity and attenuation in the bulk formulation containing drug at different concentrations. The formulations were also studied by Raman spectroscopy. We used both an in-line immersion probe for the bulk formulation and a multi-fiber sensor for measuring through hard-gelatin capsules that were filled with SEDDS. Each method was assessed by calculating the relative standard error of prediction (RSEP) as well as the limit of quantification (LOQ) and the mean recovery. Raman spectroscopy led to excellent calibration models for the bulk formulation as well as the capsules. The RSEP depended on the SEDDS type, with values of 1.5-3.8%, while LOQ was between 0.04 and 0.35% (w/w) for drug quantification in the bulk. Similarly, the analysis of the capsules led to RSEP of 1.9-6.5% and LOQ of 0.01-0.41% (w/w). On the other hand, ultrasound attenuation resulted in RSEP of 2.3-4.4% and LOQ of 0.1-0.6% (w/w). Moreover, ultrasound velocity provided an interesting analytical response in cases where the drug strongly affected the density or compressibility of the SEDDS. We conclude that ultrasonic resonator technology and Raman spectroscopy constitute suitable methods for drug quantification in SEDDS, which is promising for their use as process analytical technologies.
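The abstract does not name the calibration algorithm; as a hedged sketch of the kind of multivariate calibration and RSEP reporting described, the following fits a partial least squares (PLS) model on synthetic spectra. The data, the choice of PLS, and all parameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Fit a PLS model from spectra to drug concentration, then report the
# relative standard error of prediction (RSEP) on held-out samples:
#   RSEP = 100 * sqrt( sum((y_pred - y)^2) / sum(y^2) )

rng = np.random.default_rng(0)
n_samples, n_channels = 40, 500
conc = rng.uniform(0.5, 10.0, n_samples)                 # % w/w, synthetic
peak = np.exp(-0.5 * ((np.arange(n_channels) - 250) / 15) ** 2)
spectra = np.outer(conc, peak) + 0.01 * rng.standard_normal((n_samples, n_channels))

train, test = np.arange(30), np.arange(30, 40)
pls = PLSRegression(n_components=3).fit(spectra[train], conc[train])
pred = pls.predict(spectra[test]).ravel()

rsep = 100 * np.sqrt(np.sum((pred - conc[test]) ** 2) / np.sum(conc[test] ** 2))
print(f"RSEP = {rsep:.2f}%")
```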
New analytical solutions to the two-phase water faucet problem
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-06-17
Here, the one-dimensional water faucet problem is one of the classical benchmark problems originally proposed by Ransom to study the two-fluid two-phase flow model. With certain simplifications, such as a massless gas phase and no wall or interfacial friction, analytical solutions had previously been obtained for the transient liquid velocity and void fraction distribution. The water faucet problem and its analytical solutions have been widely used for code assessment, benchmarking and numerical verification. In our previous study, Ransom's solutions were used for the mesh convergence study of a high-resolution spatial discretization scheme. It was found that, at the steady state, the anticipated second-order spatial accuracy could not be achieved when compared to the existing Ransom's analytical solutions. A further investigation showed that the existing analytical solutions do not actually satisfy the commonly used two-fluid single-pressure two-phase flow equations. In this work, we present a new set of analytical solutions of the water faucet problem at the steady state, considering the effect of the gas phase density on the pressure distribution. This new set of analytical solutions is used for mesh convergence studies, from which the anticipated second order of accuracy is achieved for the second-order spatial discretization scheme. In addition, extended Ransom's transient solutions for the gas phase velocity and pressure are derived, with the assumption of decoupled liquid and gas pressures. Numerical verifications of the extended Ransom's solutions are also presented.
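For reference, the widely quoted transient Ransom solution for the liquid phase, as commonly reported in the literature (v₀ and α_{g,0} are the inlet liquid velocity and void fraction, g is gravity, and x is measured downward from the faucet), reads:

```latex
\[
v_\ell(x,t) =
\begin{cases}
\sqrt{v_0^2 + 2 g x}, & x \le v_0 t + \tfrac{1}{2} g t^2,\\[4pt]
v_0 + g t, & \text{otherwise},
\end{cases}
\qquad
\alpha_g(x,t) =
\begin{cases}
1 - \dfrac{(1-\alpha_{g,0})\, v_0}{\sqrt{v_0^2 + 2 g x}}, & x \le v_0 t + \tfrac{1}{2} g t^2,\\[8pt]
\alpha_{g,0}, & \text{otherwise}.
\end{cases}
\]
```

It is this solution that the paper shows fails to satisfy the single-pressure two-fluid equations exactly at steady state once the gas phase density is accounted for.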
Existence Regions of Shock Wave Triple Configurations
ERIC Educational Resources Information Center
Bulat, Pavel V.; Chernyshev, Mikhail V.
2016-01-01
The aim of the research is to create the classification for shock wave triple configurations and their existence regions of various types: type 1, type 2, type 3. Analytical solutions for limit Mach numbers and passing shock intensity that define existence region of every type of triple configuration have been acquired. The ratios that conjugate…
Mass-flow-rate-controlled fluid flow in nanochannels by particle insertion and deletion.
Barclay, Paul L; Lukes, Jennifer R
2016-12-01
A nonequilibrium molecular dynamics method to induce fluid flow in nanochannels, the insertion-deletion method (IDM), is introduced. IDM inserts and deletes particles within distinct regions in the domain, creating locally high and low pressures. The benefits of IDM are that it directly controls a physically meaningful quantity, the mass flow rate, allows for pressure and density gradients to develop in the direction of flow, and permits treatment of complex aperiodic geometries. Validation of IDM is performed, yielding good agreement with the analytical solution of Poiseuille flow in a planar channel. Comparison of IDM to existing methods indicates that it is best suited for gases, both because it intrinsically accounts for compressibility effects on the flow and because the computational cost of particle insertion is lowest for low-density fluids.
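A schematic of the insertion-deletion bookkeeping as we read it (the geometry, gas parameters, and sampling details below are invented for illustration; the essential point is that inserting and deleting n particles of mass m per timestep dt imposes a mass flow rate of n·m/dt):

```python
import numpy as np

rng = np.random.default_rng(1)
kB, T, m, dt = 1.380649e-23, 300.0, 6.63e-26, 1e-15   # argon-like gas, SI units
Lx, n_per_step = 200e-9, 5
sigma = np.sqrt(kB * T / m)            # Maxwell-Boltzmann velocity std deviation

pos = rng.uniform(0, Lx, (1000, 3)) * [1, 0.05, 0.05]  # particles in a channel
vel = rng.normal(0.0, sigma, (1000, 3))

def idm_step(pos, vel):
    # Insert particles in the high-pressure source slab, x < 0.1 Lx
    new_pos = rng.uniform([0, 0, 0], [0.1 * Lx, 0.05 * Lx, 0.05 * Lx],
                          (n_per_step, 3))
    new_vel = rng.normal(0.0, sigma, (n_per_step, 3))
    pos, vel = np.vstack([pos, new_pos]), np.vstack([vel, new_vel])
    # Delete particles from the low-pressure sink slab, x > 0.9 Lx
    sink = np.flatnonzero(pos[:, 0] > 0.9 * Lx)
    drop = rng.choice(sink, size=min(n_per_step, sink.size), replace=False)
    keep = np.setdiff1d(np.arange(len(pos)), drop)
    return pos[keep], vel[keep]

pos, vel = idm_step(pos, vel)           # normally interleaved with MD integration
print("imposed mass flow rate ~", n_per_step * m / dt, "kg/s")
```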
Salvador, Arnaud; Dubreuil, Didier; Denouel, Jannick; Millerioux, L
2005-06-25
A sensitive LC-MS-MS assay for the quantitative determination of bromocriptine has been developed and validated and is described in this work. The assay involved the extraction of the analyte from 1 ml of human plasma using solid phase extraction on Oasis MCX cartridges. Chromatography was performed on a Symmetry C18 (2.1 mm × 100 mm, 3.5 μm) column using a mobile phase consisting of 25:75:0.1 acetonitrile-water-formic acid at a flow rate of 250 μl/min. The linearity was within the concentration range of 2-500 pg/ml. The lower limit of quantification was 2 pg/ml. This method has been demonstrated to be an improvement over existing methods due to its greater sensitivity and specificity.
A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrie, Michael; Shadwick, B. A.
2016-01-04
Here, we present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two-dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions while keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for relativistic Landau damping, for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently of the shape of the distribution function, the relativistic treatment features collective behaviors that do not exist in the non-relativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.
Automatic high-throughput screening of colloidal crystals using machine learning
NASA Astrophysics Data System (ADS)
Spellings, Matthew; Glotzer, Sharon C.
Recent improvements in hardware and software have united to pose an interesting problem for computational scientists studying self-assembly of particles into crystal structures: while studies covering large swathes of parameter space can be dispatched at once using modern supercomputers and parallel architectures, identifying the different regions of a phase diagram is often a serial task completed by hand. While analytic methods exist to distinguish some simple structures, they can be difficult to apply, and automatic identification of more complex structures is still lacking. In this talk we describe one method to create numerical "fingerprints" of local order and use them to analyze a study of complex ordered structures. We can use these methods as first steps toward automatic exploration of parameter space and, more broadly, the strategic design of new materials.
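One concrete local-order fingerprint of the kind alluded to (our illustrative choice, not necessarily the descriptor used in the talk) is the Steinhardt bond-orientational order parameter q_l, computed from the directions of a particle's neighbor bonds:

```python
import numpy as np
from scipy.special import sph_harm

def steinhardt_ql(bonds, l=6):
    """q_l from an (N, 3) array of bond vectors to a particle's neighbors."""
    bonds = np.asarray(bonds, dtype=float)
    r = np.linalg.norm(bonds, axis=1)
    theta = np.arccos(np.clip(bonds[:, 2] / r, -1, 1))   # polar angle
    phi = np.arctan2(bonds[:, 1], bonds[:, 0])           # azimuthal angle
    # scipy's sph_harm takes (m, l, azimuthal, polar)
    qlm = np.array([sph_harm(m, l, phi, theta).mean() for m in range(-l, l + 1)])
    return np.sqrt(4 * np.pi / (2 * l + 1) * np.sum(np.abs(qlm) ** 2))

# An FCC-like 12-neighbor shell should give the textbook value q6 ~ 0.575
fcc = np.array([[1, 1, 0], [1, -1, 0], [-1, 1, 0], [-1, -1, 0],
                [1, 0, 1], [1, 0, -1], [-1, 0, 1], [-1, 0, -1],
                [0, 1, 1], [0, 1, -1], [0, -1, 1], [0, -1, -1]])
print(round(steinhardt_ql(fcc, l=6), 3))
```

Vectors of such q_l values (or histograms of them) can then feed a clustering or classification step to label phase-diagram regions automatically.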
Mono-isotope Prediction for Mass Spectra Using Bayes Network.
Li, Hui; Liu, Chunmei; Rwebangira, Mugizi Robert; Burge, Legand
2014-12-01
Mass spectrometry is one of the widely utilized methods to study protein functions and components. The challenge of mono-isotope pattern recognition from large scale protein mass spectral data requires computational algorithms and tools to speed up the analysis and improve the analytic results. We utilized a naïve Bayes network as the classifier, with the assumption that the selected features are independent, to predict mono-isotope patterns from mass spectrometry. Mono-isotopes detected from validated theoretical spectra were used as prior information in the Bayes method. Three main features extracted from the dataset were employed as independent variables in our model. The application of the proposed algorithm to a public dataset demonstrates that our naïve Bayes classifier is advantageous over existing methods in both accuracy and sensitivity.
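A schematic version of such a classifier (the three feature definitions, the priors, and the synthetic data below are placeholders; the paper's actual features are not reproduced):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Gaussian naive Bayes labeling candidate peak clusters as
# mono-isotopic (1) or not (0), assuming feature independence.

rng = np.random.default_rng(2)
n = 400
# Hypothetical features: peak intensity ratio, m/z spacing error, cluster score
X_pos = rng.normal([0.8, 0.01, 0.9], [0.10, 0.005, 0.05], (n, 3))
X_neg = rng.normal([0.4, 0.05, 0.5], [0.15, 0.020, 0.20], (n, 3))
X = np.vstack([X_neg, X_pos])
y = np.hstack([np.zeros(n), np.ones(n)])

# Class priors are one place where knowledge from validated theoretical
# spectra could be encoded (values here are invented).
clf = GaussianNB(priors=[0.7, 0.3]).fit(X, y)
print("training accuracy:", clf.score(X, y))
```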
Diving deeper into Zebrafish development of social behavior: analyzing high resolution data.
Buske, Christine; Gerlai, Robert
2014-08-30
Vertebrate model organisms have been utilized in high throughput screening, but only with substantial cost and human capital investment. The zebrafish is a vertebrate model species that is a promising and cost effective candidate for efficient high throughput screening. Larval zebrafish have already been successfully employed in this regard (Lessman, 2011), but adult zebrafish also show great promise. High throughput screening requires the use of a large number of subjects and the collection of a substantial amount of data. Collection of data is only one of the demanding aspects of screening; in most screening approaches that involve behavioral data, the main bottleneck that slows throughput is the time consuming analysis of the collected data. Some automated analytical tools do exist, but often they only work for one subject at a time, eliminating the possibility of fully utilizing zebrafish as a screening tool. This is a particularly important limitation for such complex phenotypes as social behavior. Testing multiple fish at a time can reveal complex social interactions, but it may also allow the identification of outliers from a group of mutagenized or pharmacologically treated fish. Here, we describe a novel method using a custom software tool developed within our laboratory, which enables tracking multiple fish, in combination with a sophisticated analytical approach for summarizing and analyzing high resolution behavioral data. This paper focuses on the latter, the analytic tool, which we have developed using the R programming language and environment for statistical computing. We argue that combining sophisticated data collection methods with appropriate analytical tools will propel zebrafish into the future of neurobehavioral genetic research.
Flakelar, Clare L; Prenzler, Paul D; Luckett, David J; Howitt, Julia A; Doran, Gregory
2017-01-01
A normal phase high performance liquid chromatography (HPLC) method was developed to simultaneously quantify several prominent bioactive compounds in canola oil, viz. α-tocopherol, γ-tocopherol, δ-tocopherol, β-carotene, lutein, β-sitosterol, campesterol and brassicasterol. The use of sequential diode array detection (DAD) and tandem mass spectrometry (MS/MS) allowed direct injection of oils, diluted in hexane without derivatisation or saponification, greatly reducing sample preparation time and permitting the quantification of both free sterols and intact sterol esters. Further advantages over existing methods included increased analytical selectivity and a chromatographic run time substantially shorter than other reported normal phase methods. The HPLC-DAD-MS/MS method was applied to freshly extracted canola oil samples as well as commercially available canola, palm fruit, sunflower and olive oils.
NASA Astrophysics Data System (ADS)
Peng, Chong; Wang, Lun; Liao, T. Warren
2015-10-01
Currently, chatter has become a critical factor hindering machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Combined with the dynamic cutting force simulation model, the stability lobe diagram (SLD) can then be estimated. Finally, the predicted results are compared with existing methods such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as actual cutting experimental results, to confirm the validity of this new method.
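A hedged sketch of the feature-plus-classifier pipeline described above, using scikit-learn's SVC in place of MATLAB LIBSVM (the wavelet family, decomposition depth, and synthetic signals are our assumptions, not the paper's settings):

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_entropy(signal, wavelet="db4", level=4):
    """Energy entropy across wavelet decomposition bands, plus band energies."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-12)), p

rng = np.random.default_rng(3)
def make_signal(chatter):
    t = np.linspace(0, 1, 1024)
    base = np.sin(2 * np.pi * 60 * t)              # tooth-passing component
    if chatter:                                    # add an incommensurate chatter tone
        base += 0.8 * np.sin(2 * np.pi * 230 * t + rng.uniform(0, 2 * np.pi))
    return base + 0.2 * rng.standard_normal(t.size)

X, y = [], []
for label in (0, 1):                               # 0 = stable, 1 = chatter
    for _ in range(50):
        H, p = wavelet_energy_entropy(make_signal(label))
        X.append(np.hstack([H, p]))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```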
Fuzzy multicriteria disposal method and site selection for municipal solid waste.
Ekmekçioğlu, Mehmet; Kaya, Tolga; Kahraman, Cengiz
2010-01-01
The use of fuzzy multiple criteria analysis (MCA) in solid waste management has the advantage of rendering subjective and implicit decision making more objective and analytical, with its ability to accommodate both quantitative and qualitative data. In this paper a modified fuzzy TOPSIS methodology is proposed for the selection of an appropriate disposal method and site for municipal solid waste (MSW). Our method is superior to existing methods since it has the capability of representing vague qualitative data and presenting all possible results with different degrees of membership. In the first stage of the proposed methodology, a set of criteria of cost, reliability, feasibility, pollution and emission levels, waste and energy recovery is optimized to determine the best MSW disposal method. Landfilling, composting, conventional incineration, and refuse-derived fuel (RDF) combustion are the alternatives considered. The weights of the selection criteria are determined by fuzzy pairwise comparison matrices of the Analytic Hierarchy Process (AHP). It is found that RDF combustion is the best disposal method alternative for Istanbul. In the second stage, the same methodology is used to determine the optimum RDF combustion plant location using adjacent land use, climate, road access and cost as the criteria. The results of this study illustrate the importance of the weights on the various factors in deciding the optimized location, with the best site located in Catalca. A sensitivity analysis is also conducted to monitor how sensitive our model is to changes in the various criteria weights.
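A bare-bones fuzzy TOPSIS sketch in the spirit of the methodology (the alternatives, triangular fuzzy ratings, weights, and the max/min choice of fuzzy ideal solutions follow one common variant; the paper's modified procedure and Istanbul data are not reproduced):

```python
import numpy as np

# ratings[i, j] = (l, m, u) triangular fuzzy rating of alternative i on criterion j
ratings = np.array([
    [[5, 7, 9], [3, 5, 7], [7, 9, 9]],   # e.g. RDF combustion
    [[3, 5, 7], [5, 7, 9], [3, 5, 7]],   # e.g. landfilling
    [[1, 3, 5], [7, 9, 9], [1, 3, 5]],   # e.g. composting
], dtype=float)
weights = np.array([[0.3, 0.4, 0.5],     # triangular fuzzy weight per criterion
                    [0.2, 0.3, 0.4],
                    [0.2, 0.3, 0.4]])

# Normalize benefit criteria by the column-wise maximum upper bound, then weight
norm = ratings / ratings[:, :, 2].max(axis=0)[None, :, None]
weighted = norm * weights[None, :, :]

fpis = weighted.max(axis=0)   # fuzzy positive ideal solution, per criterion
fnis = weighted.min(axis=0)   # fuzzy negative ideal solution

def fuzzy_dist(a, b):
    # vertex distance between triangular fuzzy numbers
    return np.sqrt(np.mean((a - b) ** 2, axis=-1))

d_plus = fuzzy_dist(weighted, fpis[None]).sum(axis=1)
d_minus = fuzzy_dist(weighted, fnis[None]).sum(axis=1)
closeness = d_minus / (d_plus + d_minus)
print("closeness coefficients:", np.round(closeness, 3))  # higher = better
```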
Accounting for differences in the bioactivity and bioavailability of vitamers
Gregory, Jesse F.
2012-01-01
Essentially all vitamins exist with multiple nutritionally active chemical species often called vitamers. Our quantitative understanding of the bioactivity and bioavailability of the various members of each vitamin family has increased markedly, but many issues remain to be resolved concerning the reporting and use of analytical data. Modern methods of vitamin analysis rely heavily on chromatographic techniques that generally allow the measurement of the individual chemical forms of vitamins. Typical applications of food analysis include the evaluation of shelf life and storage stability, monitoring of nutrient retention during food processing, developing food composition databases and data needed for food labeling, assessing dietary adequacy and evaluating epidemiological relationships between diet and disease. Although the usage of analytical data varies depending on the situation, important issues regarding how best to present and interpret the data in light of the presence of multiple vitamers are common to all aspects of food analysis. In this review, we will evaluate the existence of vitamers that exhibit differences in bioactivity or bioavailability, consider when there is a need to address differences in bioactivity or bioavailability of vitamers, and then consider alternative approaches and possible ways to improve the reporting of data. Major examples are taken from literature and experience with vitamin B6 and folate. PMID:22489223
Study of tethered satellite active attitude control
NASA Technical Reports Server (NTRS)
Colombo, G.
1982-01-01
Existing software was adapted for the study of tethered subsatellite rotational dynamics; an analytic solution for a stable configuration of a tethered subsatellite was developed; the analytic and numerical-integrator (computer) solutions for this test case were compared in a two-mass tether model program (DUMBEL); the existing multiple-mass tether model (SKYHOOK) was modified to include subsatellite rotational dynamics; the analytic test case was verified; and the use of the SKYHOOK rotational dynamics capability was demonstrated with a computer run showing the effect of a single off-axis thruster on the behavior of the subsatellite. Subroutines for specific attitude control systems are developed and applied to the study of the behavior of the tethered subsatellite under realistic on-orbit conditions. The effects of all tether inputs, including pendular oscillations, air drag, and electrodynamic interactions, on the dynamic behavior of the tether are included.
NASA Astrophysics Data System (ADS)
Werner, Adrian D.; Robinson, Neville I.
2018-06-01
Existing analytical solutions for the distribution of fresh groundwater in subsea aquifers presume that the overlying offshore aquitard, represented implicitly, contains seawater. Here, we consider the case where offshore fresh groundwater is the result of freshwater discharge from onshore aquifers, and neglect paleo-freshwater sources. A recent numerical modeling investigation, involving explicit simulation of the offshore aquitard, demonstrates that offshore aquitards more likely contain freshwater in areas of upward freshwater leakage to the sea. We integrate this finding into the existing analytical solutions by providing an alternative formulation for steady interface flow in subsea aquifers, whereby the salinity in the offshore aquitard can be chosen. The new solution, taking the aquitard salinity as that of freshwater, provides a closer match to numerical modeling results in which the aquitard is represented explicitly.
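Sharp-interface solutions of this kind rest on the hydrostatic Ghyben-Herzberg approximation; for context, the classical onshore form of that relation (standard background, not a result of the paper) is

```latex
\[
z_i = \frac{\rho_f}{\rho_s - \rho_f}\, h_f \;\approx\; 40\, h_f ,
\]
```

where z_i is the depth of the freshwater-seawater interface below sea level, h_f is the freshwater head above sea level, and ρ_f ≈ 1000 kg/m³ and ρ_s ≈ 1025 kg/m³ are the freshwater and seawater densities. The offshore solutions discussed above modify the boundary condition this relation implies at the seafloor, depending on whether the overlying aquitard holds seawater or freshwater.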
Traveling solitons in long-range oscillator chains
NASA Astrophysics Data System (ADS)
Miloshevich, George; Nguenang, Jean Pierre; Dauxois, Thierry; Khomeriki, Ramaz; Ruffo, Stefano
2017-03-01
We investigate the existence and propagation of solitons in a long-range extension of the quartic Fermi-Pasta-Ulam (FPU) chain of anharmonic oscillators. The coupling in the linear term decays as a power law with an exponent 1 < α ≤ 3. We obtain an analytic perturbative expression for traveling envelope solitons by introducing a nonlinear Schrödinger equation for the slowly varying amplitude of short-wavelength modes. Due to the non-analytic properties of the dispersion relation, it is crucial to develop the theory using discrete difference operators. Those properties are also the ultimate reason why kink-solitons may exist but are unstable, at variance with the short-range FPU model. We successfully compare these approximate analytic results with numerical simulations for the value α = 2, which was chosen as a case study.
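Generically, the envelope solitons referred to above take the standard NLS form (symbols here are generic; the amplitude A, width L, group velocity v_g, and carrier (k, ω) in the paper are fixed by the long-range dispersion relation, which we do not reproduce):

```latex
\[
\psi(x,t) = A\,\operatorname{sech}\!\left(\frac{x - v_g t}{L}\right)
            e^{\,i(kx - \omega t)} .
\]
```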
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...