Sample records for analytical method called

  1. Probabilistic assessment methodology for continuous-type petroleum accumulations

    USGS Publications Warehouse

    Crovelli, R.A.

    2003-01-01

    The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.

  2. Net analyte signal standard addition method (NASSAM) as a novel spectrofluorimetric and spectrophotometric technique for simultaneous determination, application to assay of melatonin and pyridoxine

    NASA Astrophysics Data System (ADS)

    Asadpour-Zeynali, Karim; Bastami, Mohammad

    2010-02-01

    In this work a new modification of the standard addition method, called the net analyte signal standard addition method (NASSAM), is presented for simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of the standard addition method with those of the net analyte signal concept. The method can be applied to the determination of an analyte in the presence of known interferents. In contrast to the H-point standard addition method, the accuracy of the predictions does not depend on the shapes of the analyte and interferent spectra. The method was successfully applied to the simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
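
    For orientation, the extrapolation step of the classical standard addition method, which NASSAM modifies by first projecting measured signals onto the net analyte signal, fits in a few lines; a minimal sketch with hypothetical numbers, assuming a linear response:

      import numpy as np

      # Standard addition: the signal is m*(c + x) for added standard x, so a
      # straight-line fit gives the analyte concentration as intercept/slope.
      # NASSAM's extra step (replacing raw signals with net analyte signals to
      # cancel known interferents) is omitted here.
      added  = np.array([0.0, 2.0, 4.0, 6.0, 8.0])        # added standard, ug/mL
      signal = np.array([0.21, 0.35, 0.50, 0.64, 0.79])   # hypothetical responses
      slope, intercept = np.polyfit(added, signal, 1)
      print(intercept / slope)    # estimated analyte concentration, ug/mL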

  3. Analysis of a virtual memory model for maintaining database views

    NASA Technical Reports Server (NTRS)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  4. [The subject matters concerned with use of simplified analytical systems from the perspective of the Japanese Association of Medical Technologists].

    PubMed

    Morishita, Y

    2001-05-01

    The subject matters concerned with the use of so-called simplified analytical systems, and how to use them well, are discussed here from the perspective of a laboratory technician. 1. Data from simplified analytical systems should agree with those of designated reference methods, so that discrepancies do not arise between laboratories. 2. The accuracy of results measured with simplified analytical systems is hard to scrutinize thoroughly and correctly with quality-control surveillance procedures based on stored pooled serum or partly processed blood. 3. Guidelines on the content of evaluations are needed to guarantee the quality of simplified analytical systems. 4. Maintenance and manual operation of simplified analytical systems should be standardized jointly by laboratory technicians and vendors' technicians. 5. Attention is further called to the fact that simplified analytical systems cost considerably more than routine methods with liquid reagents. 6. It is also hoped that various substances in human serum, such as cytokines, hormones, tumor markers, and vitamins, can be measured by simplified analytical systems.

  5. The Development of MST Test Information for the Prediction of Test Performances

    ERIC Educational Resources Information Center

    Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.

    2017-01-01

    The current study proposes a novel method to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…

  6. Finite-analytic numerical solution of heat transfer in two-dimensional cavity flow

    NASA Technical Reports Server (NTRS)

    Chen, C.-J.; Naseri-Neshat, H.; Ho, K.-S.

    1981-01-01

    Heat transfer in cavity flow is numerically analyzed by a new numerical method called the finite-analytic method. The basic idea of the finite-analytic method is the incorporation of local analytic solutions in the numerical solutions of linear or nonlinear partial differential equations. In the present investigation, the local analytic solutions for temperature, stream function, and vorticity distributions are derived. When the local analytic solution is evaluated at a given nodal point, it gives an algebraic relationship between a nodal value in a subregion and its neighboring nodal points. A system of algebraic equations is solved to provide the numerical solution of the problem. The finite-analytic method is used to solve heat transfer in the cavity flow at high Reynolds number (1000) for Prandtl numbers of 0.1, 1, and 10.
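
    Schematically (with generic notation, not the paper's), evaluating the local analytic solution at an interior node P of a subregion yields an algebraic relation between that nodal value and its neighbors,

      \phi_P = \sum_{nb} C_{nb}\,\phi_{nb},

    where the coefficients C_{nb} come from the local analytic solution rather than from a truncated Taylor expansion; one such relation per node assembles into the algebraic system mentioned above.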

  7. Systems and Methods for Composable Analytics

    DTIC Science & Technology

    2014-04-29

    simplistic module that performs a mathematical operation on two numbers. The most important method is the Execute() method. This will get called when it is...addition, an input control is also specified in the example below. In this example, the mathematical operator can only be chosen from a preconfigured...approaches. Some of the industries that could benefit from Composable Analytics include pharmaceuticals, health care, insurance, actuaries, and

  8. ANALYTIC ELEMENT MODELING FOR SOURCE WATER ASSESSMENTS OF PUBLIC WATER SUPPLY WELLS: CASE STUDIES IN GLACIAL OUTWASH AND BASIN-AND-RANGE

    EPA Science Inventory

    Over the last 10 years the EPA has invested in analytic elements as a computational method used in public domain software supporting capture zone delineation for source water assessments and wellhead protection. The current release is called WhAEM2000 (wellhead analytic element ...

  9. Regarding on the prototype solutions for the nonlinear fractional-order biological population model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baskonus, Haci Mehmet, E-mail: hmbaskonus@gmail.com; Bulut, Hasan

    2016-06-08

    In this study, we present a newly extended method, called the improved Bernoulli sub-equation function method, based on the Bernoulli sub-ODE method. The steps of the proposed analytical scheme are stated explicitly. Using this technique, we obtain some new analytical solutions to the nonlinear fractional-order biological population model. Two- and three-dimensional surfaces of the analytical solutions are drawn with Wolfram Mathematica 9. Finally, a conclusion summarizes the important findings of this study.

  10. Analytical N beam position monitor method

    NASA Astrophysics Data System (ADS)

    Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.

    2017-11-01

    Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
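
    For context, the three-BPM technique that the N-BPM method extends rests on a standard optics relation: the β-function at BPM 1 follows from the measured phase advances φ12 and φ13 to the next two BPMs and their model counterparts,

      \beta(s_1) = \beta^{\mathrm{mdl}}(s_1)\, \frac{\cot\varphi_{12} - \cot\varphi_{13}}{\cot\varphi^{\mathrm{mdl}}_{12} - \cot\varphi^{\mathrm{mdl}}_{13}}.

    The N-BPM family combines many such BPM combinations weighted by their error covariance; in the analytical variant that covariance is computed in closed form from the quoted error sources rather than from simulations.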

  11. The Influence of Judgment Calls on Meta-Analytic Findings.

    PubMed

    Tarrahi, Farid; Eisend, Martin

    2016-01-01

    Previous research has suggested that judgment calls (i.e., methodological choices made in the process of conducting a meta-analysis) have a strong influence on meta-analytic findings, calling their robustness into question. However, prior research applies case study comparison or reanalysis of a few meta-analyses with a focus on a few selected judgment calls. These studies neglect the fact that different judgment calls are related to each other and simultaneously influence the outcomes of a meta-analysis, and that meta-analytic findings can vary due to non-judgment call differences between meta-analyses (e.g., variations of effects over time). The current study analyzes the influence of 13 judgment calls in 176 meta-analyses in marketing research by applying a multivariate, multilevel meta-meta-analysis. The analysis considers simultaneous influences from different judgment calls on meta-analytic effect sizes and controls for alternative explanations based on non-judgment call differences between meta-analyses. The findings suggest that judgment calls have only a minor influence on meta-analytic findings, whereas non-judgment call differences between meta-analyses are more likely to explain differences in meta-analytic findings. The findings support the robustness of meta-analytic results and conclusions.

  12. DEVELOPMENT AND VALIDATION OF ANALYTICAL METHODS FOR ENUMERATION OF FECAL INDICATORS AND EMERGING CHEMICAL CONTAMINANTS IN BIOSOLIDS

    EPA Science Inventory

    In 2002 the National Research Council (NRC) issued a report which identified a number of issues regarding biosolids land application practices and pointed out the need for improved and validated analytical techniques for regulated indicator organisms and pathogens. They also call...

  13. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time, and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration-time curves (CTC). The second is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
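
    A minimal sketch of regularized Fourier-domain deconvolution in the spirit of FDD; the Tikhonov-style filter and the constant lam below are our assumptions, and the paper's AFF/ASSF filters may differ in detail:

      import numpy as np

      def fourier_deconvolve(ctc, aif, dt, lam=0.1):
          # Estimate the IRF from a tissue concentration-time curve (ctc) and
          # an arterial input function (aif) sampled every dt seconds; blood
          # flow is proportional to the peak of the returned IRF.
          n = 2 * len(ctc)                         # zero-pad against wrap-around
          C = np.fft.fft(ctc, n)
          H = np.fft.fft(aif, n) * dt              # convolution kernel spectrum
          R = C * np.conj(H) / (np.abs(H) ** 2 + lam ** 2)   # spectral filter
          return np.real(np.fft.ifft(R))[:len(ctc)]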

  14. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    NASA Astrophysics Data System (ADS)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main ideas for gear fault diagnosis. One is model-based gear dynamic analysis; the other is signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on the dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method which combines the LOD with the analytical-FE method is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD on gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for tooth crack stiffness calculation and gear tooth crack fault diagnosis.

  15. Sampling and analysis for radon-222 dissolved in ground water and surface water

    USGS Publications Warehouse

    DeWayne, Cecil L.; Gesell, T.F.

    1992-01-01

    Radon-222 is a naturally occurring radioactive gas in the uranium-238 decay series that has traditionally been called, simply, radon. The lung cancer risks associated with the inhalation of radon decay products have been well documented by epidemiological studies on populations of uranium miners. The realization that radon is a public health hazard has raised the need for sampling and analytical guidelines for field personnel. Several sampling and analytical methods are being used to document radon concentrations in ground water and surface water worldwide but no convenient, single set of guidelines is available. Three different sampling and analytical methods - bubbler, liquid scintillation, and field screening - are discussed in this paper. The bubbler and liquid scintillation methods have high accuracy and precision, and small analytical method detection limits of 0.2 and 10 pCi/l (picocuries per liter), respectively. The field screening method generally is used as a qualitative reconnaissance tool.

  16. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.

  17. A new tool for the evaluation of the analytical procedure: Green Analytical Procedure Index.

    PubMed

    Płotka-Wasylka, J

    2018-05-01

    A new means for assessing analytical protocols relating to green analytical chemistry attributes has been developed. The new tool, called GAPI (Green Analytical Procedure Index), evaluates the green character of an entire analytical methodology, from sample collection to final determination, and was created using such tools as the National Environmental Methods Index (NEMI) or Analytical Eco-Scale to provide not only general but also qualitative information. In GAPI, a specific symbol with five pentagrams can be used to evaluate and quantify the environmental impact involved in each step of an analytical methodology, coloured from green through yellow to red to depict low, medium, and high impact, respectively. The proposed tool was used to evaluate analytical procedures applied in the determination of biogenic amines in wine samples, and polycyclic aromatic hydrocarbon determination by EPA methods. The GAPI tool not only provides an immediately perceptible perspective to the user/reader but also offers exhaustive information on the evaluated procedures. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Application of the correlation constrained multivariate curve resolution alternating least-squares method for analyte quantitation in the presence of unexpected interferences using first-order instrumental data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà

    2010-03-01

    Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
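
    The alternating core of MCR-ALS is compact enough to sketch. This bare-bones version applies only non-negativity; the paper's correlation constraint, which at each iteration regresses the analyte's resolved concentrations against the known calibration values and substitutes the fitted ones, is indicated only as a comment:

      import numpy as np

      def mcr_als(D, S, n_iter=100):
          # Factor D (samples x channels) as D ~ C @ S.T, alternating
          # least-squares updates with non-negativity clipping.
          for _ in range(n_iter):
              C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)
              # correlation constraint would go here: replace the analyte
              # column of C with its regression onto the known concentrations
              S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)
          return C, S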

  19. On analytic design of loudspeaker arrays with uniform radiation characteristics

    PubMed

    Aarts; Janssen

    2000-01-01

    Some notes on analytically derived loudspeaker arrays with uniform radiation characteristics are presented. The array coefficients are derived via analytical means and compared with the so-called maximal flat sequences known from telecommunications and information theory. It appears that the newly derived array, i.e., the quadratic phase array, has a higher efficiency than the Bessel array and a flatter response than the Barker array. The method discussed admits generalization to the design of arrays with desired nonuniform radiation characteristics.

  20. System and Method for Providing a Climate Data Analytic Services Application Programming Interface Distribution Package

    NASA Technical Reports Server (NTRS)

    Tamkin, Glenn S. (Inventor); Duffy, Daniel Q. (Inventor); Schnase, John L. (Inventor)

    2016-01-01

    A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.

  1. Piezocone Penetration Testing Device

    DOT National Transportation Integrated Search

    2017-01-03

    Hydraulic characteristics of soils can be estimated from piezocone penetration test (called PCPT hereinafter) by performing dissipation test or on-the-fly using advanced analytical techniques. This research report presents a method for fast estimatio...

  2. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...

  3. Analytical resource assessment method for continuous (unconventional) oil and gas accumulations - The "ACCESS" Method

    USGS Publications Warehouse

    Crovelli, Robert A.; revised by Charpentier, Ronald R.

    2012-01-01

    The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.
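
    To illustrate the flavor of such analytic (rather than Monte Carlo) equations, a toy example under assumptions of ours, not the report's: if an assessment unit holds N independent cells, each productive with probability r and, when productive, yielding a volume with mean \mu and variance \sigma^2, then the total resource T satisfies

      E[T] = N r \mu, \qquad \mathrm{Var}(T) = N\left[ r\sigma^{2} + r(1-r)\mu^{2} \right].

    ACCESS propagates means and variances through the full assessment model in this closed-form spirit, which is what lets a spreadsheet replace simulation.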

  4. Teaching Theory Construction With Initial Grounded Theory Tools: A Reflection on Lessons and Learning.

    PubMed

    Charmaz, Kathy

    2015-12-01

    This article addresses criticisms of qualitative research for spawning studies that lack analytic development and theoretical import. It focuses on teaching initial grounded theory tools while interviewing, coding, and writing memos for the purpose of scaling up the analytic level of students' research and advancing theory construction. Adopting these tools can improve teaching qualitative methods at all levels although doctoral education is emphasized here. What teachers cover in qualitative methods courses matters. The pedagogy presented here requires a supportive environment and relies on demonstration, collective participation, measured tasks, progressive analytic complexity, and accountability. Lessons learned from using initial grounded theory tools are exemplified in a doctoral student's coding and memo-writing excerpts that demonstrate progressive analytic development. The conclusion calls for increasing the number and depth of qualitative methods courses and for creating a cadre of expert qualitative methodologists. © The Author(s) 2015.

  5. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    PubMed

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimization for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely-coupled sub-problems, each may be modularly formulated by differing departments and be solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.

  6. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.

  7. Authentic Oral Language Production and Interaction in CALL: An Evolving Conceptual Framework for the Use of Learning Analytics within the SpeakApps Project

    ERIC Educational Resources Information Center

    Nic Giolla Mhichíl, Mairéad; van Engen, Jeroen; Ó Ciardúbháin, Colm; Ó Cléircín, Gearóid; Appel, Christine

    2014-01-01

    This paper sets out to construct and present the evolving conceptual framework of the SpeakApps projects to consider the application of learning analytics to facilitate synchronous and asynchronous oral language skills within this CALL context. Drawing from both the CALL and wider theoretical and empirical literature of learner analytics, the…

  8. Study designs appropriate for the workplace.

    PubMed

    Hogue, C J

    1986-01-01

    Carlo and Hearn have called for "refinement of old [epidemiologic] methods and an ongoing evaluation of where methods fit in the overall scheme as we address the multiple complexities of reproductive hazard assessment." This review is an attempt to bring together the current state-of-the-art methods for problem definition and hypothesis testing available to the occupational epidemiologist. For problem definition, meta-analysis can be utilized to narrow the field of potential causal hypotheses. Passive and active surveillance may further refine issues for analytic research. Within analytic epidemiology, several methods may be appropriate for the workplace setting. Those discussed here may be used to estimate the risk ratio in either a fixed or dynamic population.

  9. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)

    EPA Science Inventory

    Abstract

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...

  10. Unlocking Proteomic Heterogeneity in Complex Diseases through Visual Analytics

    PubMed Central

    Bhavnani, Suresh K.; Dang, Bryant; Bellala, Gowtham; Divekar, Rohit; Visweswaran, Shyam; Brasier, Allan; Kurosky, Alex

    2015-01-01

    Despite years of preclinical development, biological interventions designed to treat complex diseases like asthma often fail in phase III clinical trials. These failures suggest that current methods to analyze biomedical data might be missing critical aspects of biological complexity such as the assumption that cases and controls come from homogeneous distributions. Here we discuss why and how methods from the rapidly evolving field of visual analytics can help translational teams (consisting of biologists, clinicians, and bioinformaticians) to address the challenge of modeling and inferring heterogeneity in the proteomic and phenotypic profiles of patients with complex diseases. Because a primary goal of visual analytics is to amplify the cognitive capacities of humans for detecting patterns in complex data, we begin with an overview of the cognitive foundations for the field of visual analytics. Next, we organize the primary ways in which a specific form of visual analytics called networks has been used to model and infer biological mechanisms, which helps to identify the properties of networks that are particularly useful for the discovery and analysis of proteomic heterogeneity in complex diseases. We describe one such approach called subject-protein networks, and demonstrate its application on two proteomic datasets. This demonstration provides insights to help translational teams overcome theoretical, practical, and pedagogical hurdles for the widespread use of subject-protein networks for analyzing molecular heterogeneities, with the translational goal of designing biomarker-based clinical trials, and accelerating the development of personalized approaches to medicine. PMID:25684269

  11. Problem-based learning on quantitative analytical chemistry course

    NASA Astrophysics Data System (ADS)

    Fitri, Noor

    2017-12-01

    This research applies the problem-based learning method to a quantitative analytical chemistry course, known as "Analytical Chemistry II", especially the part related to essential oil analysis. The learning outcomes of this course include understanding of the lectures, the skills to apply the course materials, and the ability to identify, formulate, and solve chemical analysis problems. Study groups play an important role in improving students' learning ability and in completing independent and group tasks. Thus, students not only become aware of the basic concepts of Analytical Chemistry II, but are also able to understand and apply the analytical concepts they have studied to solve given analytical chemistry problems, and develop the attitude and ability to work together to solve those problems. Based on the learning outcomes, it can be concluded that the problem-based learning method in the Analytical Chemistry II course improves students' knowledge, skills, abilities, and attitudes. Students are skilled not only at solving problems in analytical chemistry, especially essential oil analysis in accordance with the local genius of the Chemistry Department, Universitas Islam Indonesia, but also at working with computer programs and understanding material and problems in English.

  12. Synthesized airfoil data method for prediction of dynamic stall and unsteady airloads

    NASA Technical Reports Server (NTRS)

    Gangwani, S. T.

    1983-01-01

    A detailed analysis of dynamic stall experiments has led to a set of relatively compact analytical expressions, called synthesized unsteady airfoil data, which accurately describe in the time-domain the unsteady aerodynamic characteristics of stalled airfoils. An analytical research program was conducted to expand and improve this synthesized unsteady airfoil data method using additional available sets of unsteady airfoil data. The primary objectives were to reduce these data to synthesized form for use in rotor airload prediction analyses and to generalize the results. Unsteady drag data were synthesized which provided the basis for successful expansion of the formulation to include computation of the unsteady pressure drag of airfoils and rotor blades. Also, an improved prediction model for airfoil flow reattachment was incorporated in the method. Application of this improved unsteady aerodynamics model has resulted in an improved correlation between analytic predictions and measured full scale helicopter blade loads and stress data.

  13. Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide

    2017-04-01

    Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes to form the modified symplectic methods, and two modified time-advancing symplectic methods, all of whose symplectic coefficients are positive, are then constructed. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
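
    For reference, the simplest member of the symplectic PRK family used for temporal discretization is the second-order Störmer-Verlet scheme for a separable Hamiltonian H(p,q) = T(p) + V(q):

      p_{n+1/2} = p_n - \tfrac{\Delta t}{2}\,\nabla_q V(q_n), \qquad
      q_{n+1} = q_n + \Delta t\,\nabla_p T(p_{n+1/2}), \qquad
      p_{n+1} = p_{n+1/2} - \tfrac{\Delta t}{2}\,\nabla_q V(q_{n+1}).

    This is standard textbook material, not the paper's modified scheme; the MSNAD construction adds NAD-approximated spatial differential terms to updates of this type.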

  14. U.S. Geological Survey Standard Reference Sample Project: Performance Evaluation of Analytical Laboratories

    USGS Publications Warehouse

    Long, H. Keith; Daddow, Richard L.; Farrar, Jerry W.

    1998-01-01

    Since 1962, the U.S. Geological Survey (USGS) has operated the Standard Reference Sample Project to evaluate the performance of USGS, cooperator, and contractor analytical laboratories that analyze chemical constituents of environmental samples. The laboratories are evaluated by using performance evaluation samples, called Standard Reference Samples (SRSs). SRSs are submitted to laboratories semi-annually for round-robin laboratory performance comparison purposes. Currently, approximately 100 laboratories are evaluated for their analytical performance on six SRSs for inorganic and nutrient constituents. As part of the SRS Project, a surplus of homogeneous, stable SRSs is maintained for purchase by USGS offices and participating laboratories for use in continuing quality-assurance and quality-control activities. Statistical evaluation of the laboratories' results provides information to compare the analytical performance of the laboratories and to determine possible analytical deficiencies and problems. SRS results also provide information on the bias and variability of different analytical methods used in the SRS analyses.

  15. Which helper behaviors and intervention styles are related to better short-term outcomes in telephone crisis intervention? Results from a Silent Monitoring Study of Calls to the U.S. 1-800-SUICIDE Network.

    PubMed

    Mishara, Brian L; Chagnon, François; Daigle, Marc; Balan, Bogdan; Raymond, Sylvaine; Marcoux, Isabelle; Bardon, Cécile; Campbell, Julie K; Berman, Alan

    2007-06-01

    A total of 2,611 calls to 14 helplines were monitored to observe helper behaviors and caller characteristics and changes during the calls. The relationship between intervention characteristics and call outcomes is reported for 1,431 crisis calls. Empathy and respect, as well as factor-analytically derived scales of supportive approach, good contact, and collaborative problem solving, were significantly related to positive outcomes, but active listening was not. We recommend recruitment of helpers with these characteristics, development of standardized training in those methods that are empirically shown to be effective, and research relating short-term outcomes to long-term effects.

  16. Building analytical three-field cosmological models

    NASA Astrophysics Data System (ADS)

    Santos, J. R. L.; Moraes, P. H. R. S.; Ferreira, D. A.; Neta, D. C. Vilar

    2018-02-01

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters.

  17. Seeking Information with an Information Visualization System: A Study of Cognitive Styles

    ERIC Educational Resources Information Center

    Yuan, Xiaojun; Zhang, Xiangman; Chen, Chaomei; Avery, Joshua M.

    2011-01-01

    Introduction: This study investigated the effect of cognitive styles on users' information-seeking task performance using a knowledge domain information visualization system called CiteSpace. Method: Sixteen graduate students participated in a user experiment. Each completed an extended cognitive style analysis wholistic-analytic test (the…

  18. Against Simplicity, against Ethics: Analytics of Disruption as Quasi-Methodology

    ERIC Educational Resources Information Center

    Childers, Sara M.

    2012-01-01

    Simplified understandings of qualitative inquiry as mere method overlook the complexity and nuance of qualitative practice. As is the call of this special issue, the author intervenes in the simplification of qualitative inquiry through a discussion of methodology to illustrate how theory and inquiry are inextricably linked and ethically…

  19. Education Research as Analytic Claims: The Case of Mathematics

    ERIC Educational Resources Information Center

    Hyslop-Margison, Emery; Rogers, Matthew; Oladi, Soudeh

    2017-01-01

    Despite widespread calls for evidence-based research in education, this strategy has heretofore generated a surprisingly small return on the related financial investment. Some scholars have suggested that the situation follows from a mismatch between education as an assumed field of study and applied empirical research methods. This article's…

  20. Chemical imaging of secondary cell wall development in cotton fibers using a mid-infrared focal-plane array detector

    USDA-ARS?s Scientific Manuscript database

    Market demands for cotton varieties with improved fiber properties also call for the development of fast, reliable analytical methods for monitoring fiber development and measuring their properties. Currently, cotton breeders rely on instrumentation that can require significant amounts of sample, w...

  1. Generation of dark hollow beams by using a fractional radial Hilbert transform system

    NASA Astrophysics Data System (ADS)

    Xie, Qiansen; Zhao, Daomu

    2007-07-01

    The radial Hilbert transform has been extended to the fractional field, giving what may be called the fractional radial Hilbert transform (FRHT). Using the edge-enhancement characteristics of this transform, we convert a Gaussian light beam into a variety of dark hollow beams (DHBs). Based on the fact that a hard-edged aperture can be expanded approximately as a finite sum of complex Gaussian functions, the analytical expression for a Gaussian beam passing through an FRHT system has been derived. As a numerical example, the properties of the DHBs with different fractional orders are illustrated graphically. The calculation results obtained by use of the analytical method and the integral method are also compared.

  2. Load sharing in distributed real-time systems with state-change broadcasts

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Chang, Yi-Chieh

    1989-01-01

    A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.

  3. Visual analytics as a translational cognitive science.

    PubMed

    Fisher, Brian; Green, Tera Marie; Arias-Hernández, Richard

    2011-07-01

    Visual analytics is a new interdisciplinary field of study that calls for a more structured scientific approach to understanding the effects of interaction with complex graphical displays on human cognitive processes. Its primary goal is to support the design and evaluation of graphical information systems that better support cognitive processes in areas as diverse as scientific research and emergency management. The methodologies that make up this new field are as yet ill defined. This paper proposes a pathway for development of visual analytics as a translational cognitive science that bridges fundamental research in human/computer cognitive systems and design and evaluation of information systems in situ. Achieving this goal will require the development of enhanced field methods for conceptual decomposition of human/computer cognitive systems that maps onto laboratory studies, and improved methods for conducting laboratory investigations that might better map onto real-world cognitive processes in technology-rich environments. Copyright © 2011 Cognitive Science Society, Inc.

  4. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    ERIC Educational Resources Information Center

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  5. 40 CFR 141.40 - Monitoring requirements for unregulated contaminants.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Contaminant List, in paragraph (a)(3) of this section. EPA will provide sample containers, provide pre-paid... Testing to be Sampled After Notice of Analytical Methods Availability] 1—Contaminant 2—CAS registry number... Records Administration (NARA). For information on availability of this material at NARA, call 202-741-6030...

  6. 40 CFR 141.402 - Ground water source microbial monitoring and analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the medium is set forth in the article “Evaluation of Enterolert for Enumeration of Enterococci in...). For information on the availability of this material at NARA, call 202-741-6030, or go to: http://www... American Public Health Association, 1015 Fifteenth Street, NW., Washington, DC 20005-2605. 3 Medium is...

  7. Evaluation of analytical performance of a new high-sensitivity immunoassay for cardiac troponin I.

    PubMed

    Masotti, Silvia; Prontera, Concetta; Musetti, Veronica; Storti, Simona; Ndreu, Rudina; Zucchelli, Gian Carlo; Passino, Claudio; Clerico, Aldo

    2018-02-23

    The study aim was to evaluate and compare the analytical performance of the new chemiluminescent immunoassay for cardiac troponin I (cTnI), called Access hs-TnI, on the DxI platform with those of the Access AccuTnI+3 method and the high-sensitivity (hs) cTnI method for the ARCHITECT platform. The limits of blank (LoB), detection (LoD), and quantitation (LoQ) at 10% and 20% CV were evaluated according to international standardized protocols. For the evaluation of analytical performance and comparison of cTnI results, both heparinized plasma samples, collected from healthy subjects and patients with cardiac diseases, and quality control samples distributed in external quality assessment programs were used. The LoB, LoD, and LoQ at 20% and 10% CV of the Access hs-cTnI method were 0.6, 1.3, 2.1 and 5.3 ng/L, respectively. The Access hs-cTnI method showed analytical performance significantly better than that of the Access AccuTnI+3 method and similar to that of the hs ARCHITECT cTnI method. Moreover, the cTnI concentrations measured with the Access hs-cTnI method showed close linear regressions with both the Access AccuTnI+3 and ARCHITECT hs-cTnI methods, although there were systematic differences between these methods. There was no difference between cTnI values measured by Access hs-cTnI in heparinized plasma and serum samples, whereas there was a significant difference between cTnI values measured in EDTA and heparin plasma samples, respectively. In conclusion, Access hs-cTnI has analytical sensitivity parameters significantly improved over those of the Access AccuTnI+3 method and similar to those of the high-sensitivity method using the ARCHITECT platform.

  8. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2009-02-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
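
    Both approaches minimize the standard Bayesian least-squares cost function for a linear forward model y = Kx (notation is the usual one, e.g. Rodgers, not taken from this paper): with a priori emissions x_a and error covariances S_a and S_\epsilon,

      J(x) = (x - x_a)^{T} S_a^{-1} (x - x_a) + (y - Kx)^{T} S_\epsilon^{-1} (y - Kx),

    whose closed-form minimizer

      \hat{x} = x_a + \left( K^{T} S_\epsilon^{-1} K + S_a^{-1} \right)^{-1} K^{T} S_\epsilon^{-1} (y - K x_a)

    requires building the Jacobian K explicitly (hence the coarse regions of the analytical inversion), whereas the adjoint method evaluates the gradient of J efficiently and minimizes it iteratively at the native 2° × 2.5° resolution.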

  9. Net analyte signal-based simultaneous determination of ethanol and water by quartz crystal nanobalance sensor.

    PubMed

    Mirmohseni, A; Abdollahi, H; Rostamizadeh, K

    2007-02-28

    A net analyte signal (NAS)-based method called HLA/GO was applied to the selective determination of a binary mixture of ethanol and water with a quartz crystal nanobalance (QCN) sensor. A full factorial design was applied for the formation of calibration and prediction sets in the concentration ranges 5.5-22.2 microg mL(-1) for ethanol and 7.01-28.07 microg mL(-1) for water. An optimal time range was selected by a procedure based on the calculation of the net analyte signal regression plot in any considered time window for each test sample. A moving-window strategy was used to search for the region with maximum linearity of the NAS regression plot (minimum error indicator) and minimum PRESS value. On the basis of the results obtained, the differences in the adsorption profiles in the time range between 1 and 600 s were used to determine mixtures of both compounds by the HLA/GO method. The calculation of the net analyte signal using the HLA/GO method allows the determination of several figures of merit, such as selectivity, sensitivity, analytical sensitivity and limit of detection, for each component. To check the ability of the proposed method to select linear regions of the adsorption profile, a test for detecting non-linear regions of adsorption profile data in the presence of methanol was also described. The results showed that the method was successfully applied to the determination of ethanol and water.

  10. A numerical test of the topographic bias

    NASA Astrophysics Data System (ADS)

    Sjöberg, L. E.; Joud, M. S. S.

    2018-02-01

    In 1962 A. Bjerhammar introduced the method of analytical continuation in physical geodesy, implying that surface gravity anomalies are downward continued into the topographic masses down to an internal sphere (the Bjerhammar sphere). The method also includes analytical upward continuation of the potential to the surface of the Earth to obtain the quasigeoid. One can show that also the common remove-compute-restore technique for geoid determination includes an analytical continuation as long as the complete density distribution of the topography is not known. The analytical continuation implies that the downward continued gravity anomaly and/or potential are/is in error by the so-called topographic bias, which was postulated by a simple formula of L E Sjöberg in 2007. Here we will numerically test the postulated formula by comparing it with the bias obtained by analytical downward continuation of the external potential of a homogeneous ellipsoid to an inner sphere. The result shows that the postulated formula holds: At the equator of the ellipsoid, where the external potential is downward continued 21 km, the computed and postulated topographic biases agree to less than a millimetre (when the potential is scaled to the unit of metre).

  11. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  12. GraphPrints: Towards a Graph Analytic Method for Network Anomaly Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harshaw, Chris R; Bridges, Robert A; Iannacone, Michael D

    This paper introduces a novel graph-analytic approach for detecting anomalies in network flow data called GraphPrints. Building on foundational network-mining techniques, our method represents time slices of traffic as a graph, then counts graphlets, small induced subgraphs that describe local topology. By performing outlier detection on the sequence of graphlet counts, anomalous intervals of traffic are identified, and furthermore, individual IPs experiencing abnormal behavior are singled out. Initial testing of GraphPrints is performed on real network data with an implanted anomaly. Evaluation shows false positive rates bounded by 2.84% at the time-interval level and 0.05% at the IP level, with 100% true positive rates at both.
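
    A toy version of the pipeline, with triangles standing in for the full graphlet census and a robust z-score as the outlier detector (both simplifications of ours, not the paper's exact choices):

      import networkx as nx
      import numpy as np

      def anomalous_intervals(edge_lists, z_thresh=3.0):
          # edge_lists: one list of (src, dst) flow edges per time slice
          counts = []
          for edges in edge_lists:
              G = nx.Graph(edges)
              counts.append(sum(nx.triangles(G).values()) // 3)  # triangle count
          c = np.asarray(counts, dtype=float)
          med = np.median(c)
          mad = np.median(np.abs(c - med)) or 1.0
          z = 0.6745 * (c - med) / mad                 # robust z-score
          return np.flatnonzero(np.abs(z) > z_thresh)  # indices of odd slices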

  13. Prioritizing preferable locations for increasing urban tree canopy in New York City

    Treesearch

    Dexter Locke; J. Morgan Grove; Jacqueline W.T. Lu; Austin Troy; Jarlath P.M. O'Neil-Dunne; Brian Beck

    2010-01-01

    This paper presents a set of Geographic Information System (GIS) methods for identifying and prioritizing tree planting sites in urban environments. It uses an analytical approach created by a University of Vermont service-learning class called "GIS Analysis of New York City's Ecology" that was designed to provide research support to the MillionTreesNYC...

  14. Day School Israel Education in the Age of Birthright

    ERIC Educational Resources Information Center

    Pomson, Alex; Deitcher, Howard

    2010-01-01

    What are North American Jewish day schools doing when they engage in Israel education, what shapes their practices, and to what ends? In this article, we report on a multi-method study inspired by these questions. Our account is organized around an analytical model that helps distinguish between what we call the vehicles, intensifiers, and…

  15. Business Analytics in the Marketing Curriculum: A Call for Integration

    ERIC Educational Resources Information Center

    Mintu-Wimsatt, Alma; Lozada, Héctor R.

    2018-01-01

    Marketing education has responded, to some extent, to the academic challenges emanating from the Big Data revolution. To provide a forum to specifically discuss how business analytics has been integrated into the marketing curriculum, we developed a Special Issue for "Marketing Education Review." We start with a call to action that…

  16. Flight Test Experiment Design for Characterizing Stability and Control of Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2008-01-01

    A maneuver design method that is particularly well-suited for determining the stability and control characteristics of hypersonic vehicles is described in detail. Analytical properties of the maneuver design are explained. The importance of these analytical properties for maximizing information content in flight data is discussed, along with practical implementation issues. Results from flight tests of the X-43A hypersonic research vehicle (also called Hyper-X) are used to demonstrate the excellent modeling results obtained using this maneuver design approach. A detailed design procedure for generating the maneuvers is given to allow application to other flight test programs.

  17. Framework for event-based semidistributed modeling that unifies the SCS-CN method, VIC, PDM, and TOPMODEL

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.

    2016-09-01

    Hydrologists and engineers may choose from a range of semidistributed rainfall-runoff models such as VIC, PDM, and TOPMODEL, all of which predict runoff from a distribution of watershed properties. However, these models are not easily compared to event-based data and are missing ready-to-use analytical expressions that are analogous to the SCS-CN method. The SCS-CN method is an event-based model that describes the runoff response with a rainfall-runoff curve that is a function of the cumulative storm rainfall and antecedent wetness condition. Here we develop an event-based probabilistic storage framework and distill semidistributed models into analytical, event-based expressions for describing the rainfall-runoff response. The event-based versions called VICx, PDMx, and TOPMODELx also are extended with a spatial description of the runoff concept of "prethreshold" and "threshold-excess" runoff, which occur, respectively, before and after infiltration exceeds a storage capacity threshold. For total storm rainfall and antecedent wetness conditions, the resulting ready-to-use analytical expressions define the source areas (fraction of the watershed) that produce runoff by each mechanism. They also define the probability density function (PDF) representing the spatial variability of runoff depths that are cumulative values for the storm duration, and the average unit area runoff, which describes the so-called runoff curve. These new event-based semidistributed models and the traditional SCS-CN method are unified by the same general expression for the runoff curve. Since the general runoff curve may incorporate different model distributions, it may ease the way for relating such distributions to land use, climate, topography, ecology, geology, and other characteristics.
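
    For orientation, the classical SCS-CN runoff curve that the general expression reduces to is, for storm rainfall P, initial abstraction I_a, and potential retention S,

      Q = \frac{(P - I_a)^{2}}{P - I_a + S} \quad (P > I_a;\ Q = 0\ \text{otherwise}), \qquad I_a \approx 0.2\,S, \qquad S = \frac{1000}{\mathrm{CN}} - 10 \ \text{(inches)}.

    The event-based VICx, PDMx, and TOPMODELx expressions take the same general runoff-curve form, with parameters tied to their respective storage distributions.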

  18. HPV Genotyping of Modified General Primer-Amplicons Is More Analytically Sensitive and Specific by Sequencing than by Hybridization

    PubMed Central

    Meisal, Roger; Rounge, Trine Ballestad; Christiansen, Irene Kraus; Eieland, Alexander Kirkeby; Worren, Merete Molton; Molden, Tor Faksvaag; Kommedal, Øyvind; Hovig, Eivind; Leegaard, Truls Michael

    2017-01-01

    Sensitive and specific genotyping of human papillomaviruses (HPVs) is important for population-based surveillance of carcinogenic HPV types and for monitoring vaccine effectiveness. Here we compare HPV genotyping by Next Generation Sequencing (NGS) to an established DNA hybridization method. In DNA isolated from urine, the overall analytical sensitivity of NGS was found to be 22% higher than that of hybridization. NGS was also found to be the most specific method and expanded the detection repertoire beyond the 37 types of the DNA hybridization assay. Furthermore, NGS provided an increased resolution by identifying genetic variants of individual HPV types. The same Modified General Primers (MGP)-amplicon was used in both methods. The NGS method is described in detail to facilitate implementation in the clinical microbiology laboratory and includes suggestions for new standards for detection and calling of types and variants with improved resolution. PMID:28045981

  19. HPV Genotyping of Modified General Primer-Amplicons Is More Analytically Sensitive and Specific by Sequencing than by Hybridization.

    PubMed

    Meisal, Roger; Rounge, Trine Ballestad; Christiansen, Irene Kraus; Eieland, Alexander Kirkeby; Worren, Merete Molton; Molden, Tor Faksvaag; Kommedal, Øyvind; Hovig, Eivind; Leegaard, Truls Michael; Ambur, Ole Herman

    2017-01-01

    Sensitive and specific genotyping of human papillomaviruses (HPVs) is important for population-based surveillance of carcinogenic HPV types and for monitoring vaccine effectiveness. Here we compare HPV genotyping by Next Generation Sequencing (NGS) to an established DNA hybridization method. In DNA isolated from urine, the overall analytical sensitivity of NGS was found to be 22% higher than that of hybridization. NGS was also found to be the most specific method and expanded the detection repertoire beyond the 37 types of the DNA hybridization assay. Furthermore, NGS provided an increased resolution by identifying genetic variants of individual HPV types. The same Modified General Primers (MGP)-amplicon was used in both methods. The NGS method is described in detail to facilitate implementation in the clinical microbiology laboratory and includes suggestions for new standards for detection and calling of types and variants with improved resolution.

  20. An Improved Call Admission Control Mechanism with Prioritized Handoff Queuing Scheme for BWA Networks

    NASA Astrophysics Data System (ADS)

    Chowdhury, Prasun; Saha Misra, Iti

    2014-10-01

    Nowadays, owing to the increased demand on Broadband Wireless Access (BWA) networks, a guaranteed Quality of Service (QoS) is required to manage the seamless transmission of heterogeneous handoff calls. To this end, this paper proposes an improved Call Admission Control (CAC) mechanism with a prioritized handoff queuing scheme that aims to reduce the dropping probability of handoff calls. Handoff calls are queued when no bandwidth is available, even after the allowable bandwidth degradation of the ongoing calls, and are admitted into the network, with a higher priority than newly originated calls, when an ongoing call terminates. An analytical Markov model of the proposed CAC mechanism is developed to analyze various performance parameters. Analytical results show that the proposed CAC with handoff queuing prioritizes handoff calls effectively and reduces the dropping probability of the system by 78.57% for real-time traffic without increasing the number of failed new-call attempts. This results in increased bandwidth utilization of the network.
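
    The paper's Markov model includes bandwidth degradation and an explicit handoff queue; as a simplified sketch of how such birth-death models yield blocking and dropping probabilities, the classical guard-channel scheme below reserves g of C channels for handoff calls (the no-queue simplification and all parameter values are assumptions for illustration).

```python
import numpy as np

def guard_channel_probs(C, g, lam_new, lam_ho, mu):
    """Stationary probabilities for a guard-channel CAC scheme: C channels,
    new calls admitted while fewer than C - g are busy, handoffs up to C."""
    ratios = []
    for n in range(C):                       # birth-death ratio pi_{n+1}/pi_n
        arrival = lam_ho + (lam_new if n < C - g else 0.0)
        ratios.append(arrival / ((n + 1) * mu))
    pi = [1.0]
    for r in ratios:
        pi.append(pi[-1] * r)
    pi = np.array(pi) / np.sum(pi)
    return pi[C - g:].sum(), pi[C]           # new-call blocking, handoff dropping

p_block, p_drop = guard_channel_probs(C=20, g=2, lam_new=8.0, lam_ho=3.0, mu=1.0)
print(p_block, p_drop)                       # dropping stays far below blocking
```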

  1. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  2. Analytical Techniques and Pharmacokinetics of Gastrodia elata Blume and Its Constituents.

    PubMed

    Wu, Jinyi; Wu, Bingchu; Tang, Chunlan; Zhao, Jinshun

    2017-07-08

    Gastrodia elata Blume ( G. elata ), commonly called Tianma in Chinese, is an important and notable traditional Chinese medicine (TCM), which has been used in China as an anticonvulsant, analgesic, sedative, anti-asthma, anti-immune drug since ancient times. The aim of this review is to provide an overview of the abundant efforts of scientists in developing analytical techniques and performing pharmacokinetic studies of G. elata and its constituents, including sample pretreatment methods, analytical techniques, absorption, distribution, metabolism, excretion (ADME) and influence factors to its pharmacokinetics. Based on the reported pharmacokinetic property data of G. elata and its constituents, it is hoped that more studies will focus on the development of rapid and sensitive analytical techniques, discovering new therapeutic uses and understanding the specific in vivo mechanisms of action of G. elata and its constituents from the pharmacokinetic viewpoint in the near future. The present review discusses analytical techniques and pharmacokinetics of G. elata and its constituents reported from 1985 onwards.

  3. Advancements in nano-enabled therapeutics for neuroHIV management.

    PubMed

    Kaushik, Ajeet; Jayant, Rahul Dev; Nair, Madhavan

    This viewpoint is a global call to promote fundamental and applied research aiming toward designing smart nanocarriers with desired properties, novel noninvasive strategies to open the blood-brain barrier (BBB), delivery/release of single/multiple therapeutic agents across the BBB to eradicate neuro-human immunodeficiency virus (HIV), strategies for on-demand site-specific release of antiretroviral therapy, novel nanoformulations capable of recognizing and eradicating latently infected HIV reservoirs, and novel smart analytical diagnostic tools to detect and monitor HIV infection. Thus, investigation of novel nanoformulations, methodologies for site-specific delivery/release, analytical methods, and diagnostic tools would be of high significance for eradicating and monitoring neuroacquired immunodeficiency syndrome. Overall, these developments will certainly help to develop personalized nanomedicines to cure HIV and smart HIV-monitoring analytical systems for disease management.

  4. Analytical validation of a novel multiplex test for detection of advanced adenoma and colorectal cancer in symptomatic patients.

    PubMed

    Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce

    2018-05-30

    Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50 and 75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each of the assays' dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with original CRC and AA calls was 87% and 92%, respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT. The results provide the analytical evidence to support the implementation of the novel multi-marker test as a clinical test for evaluating CRC and AA risk in symptomatic individuals. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. GI-POP: a combinational annotation and genomic island prediction pipeline for ongoing microbial genome projects.

    PubMed

    Lee, Chi-Ching; Chen, Yi-Ping Phoebe; Yao, Tzu-Jung; Ma, Cheng-Yu; Lo, Wei-Cheng; Lyu, Ping-Chiang; Tang, Chuan Yi

    2013-04-10

    Sequencing of microbial genomes is important because microbes carry genes for antibiotic and pathogenetic activities. However, even with the help of new assembly software, finishing a whole genome is a time-consuming task. In most bacteria, pathogenetic or antibiotic genes are carried in genomic islands. Therefore, a quick genomic island (GI) prediction method is useful for ongoing genome sequencing projects. In this work, we built a Web server called GI-POP (http://gipop.life.nthu.edu.tw), which integrates a sequence assembling tool, a functional annotation pipeline, and a high-performance GI prediction module based on a support vector machine (SVM) method called genomic island genomic profile scanning (GI-GPS). Draft genomes from ongoing genome projects, in contigs or scaffolds, can be submitted to our Web server, which returns functional annotations and high-confidence GI predictions. GI-POP is a comprehensive annotation Web server designed for ongoing genome project analysis. Researchers can perform annotation and obtain pre-analytic information, including possible GIs, coding/non-coding sequences, and functional analyses, from their draft genomes. This pre-analytic system can provide useful information for finishing a genome sequencing project. Copyright © 2012 Elsevier B.V. All rights reserved.
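
    The abstract does not spell out the GI-GPS feature set; as a toy illustration of the general approach (an SVM classifying genomic windows by compositional features), the sketch below uses GC content and dinucleotide frequencies with scikit-learn. The window size, feature choice, and random training labels are placeholders, not the published method.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(seq, k=2):
    """Compositional features for one window: GC content plus k-mer frequencies."""
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    alphabet = [a + b for a in "ACGT" for b in "ACGT"]
    return [gc] + [kmers.count(m) / len(kmers) for m in alphabet]

rng = np.random.default_rng(0)
windows = ["".join(rng.choice(list("ACGT"), 500)) for _ in range(200)]
y = rng.integers(0, 2, 200)        # placeholder labels; real ones come from curated GIs
X = np.array([window_features(w) for w in windows])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
gi_prob = clf.predict_proba(X[:5])[:, 1]   # GI probability per window
```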

  6. Modal element method for potential flow in non-uniform ducts: Combining closed form analysis with CFD

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Baumeister, Joseph F.

    1994-01-01

    An analytical procedure called the modal element method is presented that combines numerical grid-based algorithms with eigenfunction expansions developed by separation of variables. The method is applied to potential flow in a channel with two-dimensional cylinder-like obstacles. The infinite computational region is divided into three subdomains: the bounded finite element domain containing the cylindrical obstacle, and the surrounding unbounded uniform channel entrance and exit domains. The velocity potential is represented approximately in the grid-based domain by a finite element solution and analytically by an eigenfunction expansion in the uniform semi-infinite entrance and exit domains. The calculated flow fields are in excellent agreement with exact analytical solutions. By eliminating the grid surrounding the obstacle, the modal element method reduces the numerical grid size, employs a more precise far-field boundary condition, and gives theoretical insight into the interaction of the obstacle with the mean flow. Although the analysis focuses on a specific geometry, the formulation is general and can be applied to a variety of problems, as seen by comparison with companion theories in aeroacoustics and electromagnetics.
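
    In the uniform entrance and exit ducts, the eigenfunction expansion mentioned above has a simple closed form. For a channel of height $H$ carrying a mean flow of speed $U$, separation of variables for Laplace's equation gives, generically,

    $$\phi(x, y) = U x + \sum_{n=1}^{\infty} A_n \cos\!\left(\frac{n \pi y}{H}\right) e^{\mp n \pi x / H},$$

    with the sign chosen so that the sum decays toward infinity and the coefficients $A_n$ fitted to the finite element solution on the interface between the subdomains.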

  7. Portal scatter to primary dose ratio of 4 to 18 MV photon spectra incident on heterogeneous phantoms

    NASA Astrophysics Data System (ADS)

    Ozard, Siobhan R.

    Electronic portal imagers designed and used to verify the positioning of a cancer patient undergoing radiation treatment can also be employed to measure the in vivo dose received by the patient. This thesis investigates the ratio of the dose from patient-scattered particles to the dose from primary (unscattered) photons at the imaging plane, called the scatter to primary dose ratio (SPR). The composition of the SPR according to the origin of scatter is analyzed more thoroughly than in previous studies. A new analytical method for calculating the SPR is developed and experimentally verified for heterogeneous phantoms. A novel technique that applies the analytical SPR method for in vivo dosimetry with a portal imager is evaluated. Monte Carlo simulation was used to determine the imager dose from patient-generated electrons and photons that scatter one or more times within the object. The database of SPRs reported from this investigation is new since the contribution from patient-generated electrons was neglected by previous Monte Carlo studies. The SPR from patient-generated electrons was found here to be as large as 0.03. The analytical SPR method relies on the established result that the scatter dose is uniform for an air gap between the patient and the imager that is greater than 50 cm. This method also applies the hypothesis that first-order Compton scatter alone is sufficient for scatter estimation. A comparison of analytical and measured SPRs for neck, thorax, and pelvis phantoms showed that the maximum difference was within +/-0.03, and the mean difference was less than +/-0.01 for most cases. This accuracy was comparable to similar analytical approaches that are limited to homogeneous phantoms. The analytical SPR method could replace lookup tables of measured scatter doses that can require significant time to measure. In vivo doses were calculated by combining our analytical SPR method and the convolution/superposition algorithm. Our calculated in vivo doses agreed within +/-3% with the doses measured in the phantom. The present in vivo method was faster than other techniques that use convolution/superposition. Our method is a feasible and satisfactory approach that contributes to on-line patient dose monitoring.
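
    The central quantity is easy to state: with $D_s$ the dose from patient-scattered particles and $D_p$ the dose from primary photons at the imaging plane,

    $$\mathrm{SPR} = \frac{D_s}{D_p}, \qquad D_p = \frac{D_{\mathrm{image}}}{1 + \mathrm{SPR}},$$

    so an analytical SPR allows a measured imager dose to be split into its primary and scatter components before the in vivo dose is reconstructed.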

  8. Modal ring method for the scattering of sound

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1993-01-01

    The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of having a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in the near and far fields.
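
    For two-dimensional scattering, the exterior eigenfunction expansion takes the familiar cylindrical-harmonic form; assuming an $e^{-i\omega t}$ time convention (a convention choice, not stated in the abstract), outgoing waves are carried by Hankel functions of the first kind:

    $$p(r, \theta) = \sum_{n=-N}^{N} A_n H_n^{(1)}(k r)\, e^{i n \theta}, \qquad r \ge a,$$

    with the coefficients $A_n$ fixed by enforcing continuity of pressure and velocity on the ring of finite elements at the body surface.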

  9. Spectral properties of thermal fluctuations on simple liquid surfaces below shot-noise levels.

    PubMed

    Aoki, Kenichiro; Mitsui, Takahisa

    2012-07-01

    We study the spectral properties of thermal fluctuations on simple liquid surfaces, sometimes called ripplons. Analytical properties of the spectral function are investigated; the function is shown to be composed of regions with simple analytic behavior with respect to frequency or wave number. The derived expressions are compared to spectral measurements performed orders of magnitude below shot-noise levels, which is achieved using a novel noise reduction method. The agreement between the theory of thermal surface fluctuations and the experiment is found to be excellent, elucidating the spectral properties of the surface fluctuations. The measurement method requires only a relatively small sample, both spatially (a few μm) and temporally (~20 s). It also requires only relatively weak light power (~0.5 mW), so it has a broad range of applicability, including local measurements, investigations of time-dependent phenomena, and noninvasive measurements.

  10. A systematic and feasible method for computing nuclear contributions to electrical properties of polyatomic molecules

    NASA Astrophysics Data System (ADS)

    Luis, Josep M.; Duran, Miquel; Andrés, José L.

    1997-08-01

    An analytic method to evaluate nuclear contributions to the electrical properties of polyatomic molecules is presented. Such contributions control the changes induced by an electric field in the equilibrium geometry (nuclear relaxation contribution) and vibrational motion (vibrational contribution) of a molecular system. Expressions to compute the nuclear contributions have been derived from a power series expansion of the potential energy. These contributions to the electrical properties are given in terms of energy derivatives with respect to normal coordinates, electric field intensity, or both. Only one calculation of such derivatives at the field-free equilibrium geometry is required. To demonstrate the efficiency of the analytical evaluation of electrical properties (the so-called AEEP method), results of calculations on water and pyridine at the SCF/TZ2P and MP2/TZ2P levels of theory are reported. The results obtained are compared with previous theoretical calculations and with experimental values.
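
    The power-series starting point can be written schematically as a double Taylor expansion in the normal coordinates $Q_i$ and field components $F_\alpha$ (generic notation, not the paper's):

    $$E(\mathbf{Q}, \mathbf{F}) = E_0 + \sum_i \frac{\partial E}{\partial Q_i} Q_i + \sum_\alpha \frac{\partial E}{\partial F_\alpha} F_\alpha + \frac{1}{2} \sum_{i,j} \frac{\partial^2 E}{\partial Q_i \partial Q_j} Q_i Q_j + \sum_{i,\alpha} \frac{\partial^2 E}{\partial Q_i \partial F_\alpha} Q_i F_\alpha + \cdots$$

    Minimizing over $\mathbf{Q}$ at fixed field, $\partial E / \partial Q_i = 0$, gives the field-induced shift of the equilibrium geometry, which is what generates the nuclear relaxation contribution to the electrical properties.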

  11. An Introduction to MAMA (Meta-Analysis of MicroArray data) System.

    PubMed

    Zhang, Zhe; Fenstermacher, David

    2005-01-01

    Analyzing microarray data across multiple experiments has proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server side for the storage of microarray datasets collected from various resources. The client side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. The MAMA implementation will integrate several analytical methods, including meta-analysis, within an open-source framework that offers other developers the flexibility to plug in additional statistical algorithms.

  12. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, which are related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. After substituting them into the PSWF differential equation, a much smaller matrix eigenvalue problem is obtained than in the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed by the functional values of the PSWF and the eigenvalues obtained in the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues in the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, with the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
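
    The central step (expanding the PSWF in a few Legendre polynomials so that the differential equation becomes a small matrix eigenproblem) can be sketched as follows. This is the textbook Legendre-basis construction for the prolate equation, written from the standard recurrences; it is not the authors' exact formulation.

```python
import numpy as np

def prolate_chi(c, nmax=30, parity=0):
    """Eigenvalues chi of d/dx[(1 - x^2) S'] + (chi - c^2 x^2) S = 0 via the
    expansion S = sum_k d_k P_k(x); x^2 P_k couples only degrees k-2, k, k+2,
    so even (parity=0) and odd (parity=1) problems decouple."""
    ks = np.arange(parity, parity + 2 * nmax, 2)
    A = np.zeros((len(ks), len(ks)))
    for i, k in enumerate(ks):
        # x^2 P_k = alpha_k P_{k+2} + beta_k P_k + gamma_k P_{k-2}
        beta = (k + 1) ** 2 / ((2 * k + 1) * (2 * k + 3)) \
             + k ** 2 / ((2 * k + 1) * (2 * k - 1))
        A[i, i] = k * (k + 1) + c ** 2 * beta
        if i + 1 < len(ks):
            alpha = (k + 1) * (k + 2) / ((2 * k + 1) * (2 * k + 3))
            A[i + 1, i] = c ** 2 * alpha
        if i > 0:
            gamma = k * (k - 1) / ((2 * k + 1) * (2 * k - 1))
            A[i - 1, i] = c ** 2 * gamma
    return np.sort(np.linalg.eigvals(A).real)

print(prolate_chi(c=1.0)[:3])   # cf. scipy.special.pro_cv_seq(0, 4, 1.0)[::2]
```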

  13. On the Social Validity of Behavior-Analytic Communication: A Call for Research and Description of One Method

    ERIC Educational Resources Information Center

    Critchfield, Thomas S.; Becirevic, Amel; Reed, Derek D.

    2017-01-01

    It has often been suggested that nonexperts find the communication of behavior analysts to be viscerally off-putting. We argue that this concern should be the focus of systematic research rather than mere discussion, and describe five studies that illustrate how publicly available lists of word-emotion ratings can be used to estimate the responses…

  14. A Method for Interpreting Continental and Analytic Epistemology

    DTIC Science & Technology

    1999-01-01

    solidarity. New York: Cambridge UP, 1995. Wittgenstein, Ludwig. Philosophical Investigations. Trans. G.E.M. Anscombe. Malden, MA: Blackwell Pub., 1998. … concept of language games described by Wittgenstein in the Philosophical Investigations. The definition of a language game is complicated, since "these … it is because of this relationship, or these relationships, that we call them all 'language'" (31). This leads Wittgenstein to compare the…

  15. Allele-specific copy-number discovery from whole-genome and whole-exome sequencing

    PubMed Central

    Wang, WeiBo; Wang, Wei; Sun, Wei; Crowley, James J.; Szatkiewicz, Jin P.

    2015-01-01

    Copy-number variants (CNVs) are a major form of genetic variation and a risk factor for various human diseases, so it is crucial to accurately detect and characterize them. It is conceivable that allele-specific reads from high-throughput sequencing data could be leveraged to both enhance CNV detection and produce allele-specific copy number (ASCN) calls. Although statistical methods have been developed to detect CNVs using whole-genome sequence (WGS) and/or whole-exome sequence (WES) data, information from allele-specific read counts has not yet been adequately exploited. In this paper, we develop an integrated method, called AS-GENSENG, which incorporates allele-specific read counts in CNV detection and estimates ASCN using either WGS or WES data. To evaluate the performance of AS-GENSENG, we conducted extensive simulations, generated empirical data using existing WGS and WES data sets, and validated predicted CNVs using an independent methodology. We conclude that AS-GENSENG not only predicts accurate ASCN calls but also improves the accuracy of total copy number calls, owing to its unique ability to exploit information from both total and allele-specific read counts while accounting for various experimental biases in sequence data. Our novel, user-friendly, and computationally efficient method and a complete analytic protocol are freely available at https://sourceforge.net/projects/asgenseng/. PMID:25883151

  16. Robust Measurements of Phase Response Curves Realized via Multicycle Weighted Spike-Triggered Averages

    NASA Astrophysics Data System (ADS)

    Imai, Takashi; Ota, Kaiichiro; Aoyagi, Toshio

    2017-02-01

    Phase reduction has been extensively used to study rhythmic phenomena. As a result of phase reduction, the rhythm dynamics of a given system can be described using the phase response curve. Measuring this characteristic curve is an important step toward understanding a system's behavior. Recently, a basic idea for a new measurement method (called the multicycle weighted spike-triggered average method) was proposed. This paper confirms the validity of this method by providing an analytical proof and demonstrates its effectiveness in actual experimental systems by applying the method to an oscillating electric circuit. Some practical tips to use the method are also presented.
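
    The full method of Imai et al. weights the spike-triggered average over several preceding cycles; the crude single-interval sketch below conveys only the underlying idea that, for weak noise forcing, the phase response curve is proportional to a phase-binned spike-triggered average of the input. Normalization to physical PRC units and the multicycle weighting itself are omitted.

```python
import numpy as np

def sta_prc(stim, spikes, dt, n_bins=50):
    """Phase-binned spike-triggered average of a weak noise input.
    stim:   zero-mean injected noise sampled at step dt
    spikes: spike times (s) used as cycle markers; phase within each
            inter-spike interval is assigned linearly from 0 to 1."""
    acc = np.zeros(n_bins)
    cnt = np.zeros(n_bins)
    for t0, t1 in zip(spikes[:-1], spikes[1:]):
        i0, i1 = int(t0 / dt), int(t1 / dt)
        if i1 <= i0 + 1:
            continue
        phase = np.linspace(0.0, 1.0, i1 - i0, endpoint=False)
        bins = (phase * n_bins).astype(int)
        np.add.at(acc, bins, stim[i0:i1])
        np.add.at(cnt, bins, 1)
    return acc / np.maximum(cnt, 1)    # unnormalized PRC shape versus phase
```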

  17. Temenos regained: reflections on the absence of the analyst.

    PubMed

    Abramovitch, Henry

    2002-10-01

    The importance of the temenos as a metaphor to conceptualize therapeutic containment is discussed. Jung drew an analogy between the consulting room and the temenos at the centre of the Greek temple, a sacred and inviolate place where the analysand might encounter the Self. Although Jung believed that, called or not, the gods would appear, under certain conditions patients may experience 'temenos lost', the loss of the holding function of the analytic space. Two cases are presented in which temenos issues played a central role. In one case, an unorthodox method was used to preserve the analytic container during the absence of the analyst; in the other, the impact of an extra-analytical encounter had a dramatic effect on the holding function of the temenos. A discussion is presented of the appropriate circumstances in which analysts may deviate from traditional analytic practice in order to preserve the temenos and transform a 'temenos lost' into a 'temenos regained'.

  18. Arsenic, Antimony, Chromium, and Thallium Speciation in Water and Sediment Samples with the LC-ICP-MS Technique

    PubMed Central

    Jabłońska-Czapla, Magdalena

    2015-01-01

    Chemical speciation is a very important subject in the environmental protection, toxicology, and chemical analytics due to the fact that toxicity, availability, and reactivity of trace elements depend on the chemical forms in which these elements occur. Research on low analyte levels, particularly in complex matrix samples, requires more and more advanced and sophisticated analytical methods and techniques. The latest trends in this field concern the so-called hyphenated techniques. Arsenic, antimony, chromium, and (underestimated) thallium attract the closest attention of toxicologists and analysts. The properties of those elements depend on the oxidation state in which they occur. The aim of the following paper is to answer the question why the speciation analytics is so important. The paper also provides numerous examples of the hyphenated technique usage (e.g., the LC-ICP-MS application in the speciation analysis of chromium, antimony, arsenic, or thallium in water and bottom sediment samples). An important issue addressed is the preparation of environmental samples for speciation analysis. PMID:25873962

  19. TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.

    PubMed

    Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas

    2017-01-01

    Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
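
    TopicLens's contribution is making this pipeline fast enough to recompute inside a movable lens; a plain, non-interactive baseline of the same pipeline (NMF topics from TF-IDF, then a 2-D embedding) looks roughly like the scikit-learn sketch below. The corpus and parameters are toy placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.manifold import TSNE

docs = ["neural networks for image recognition",
        "stock market volatility and risk",
        "deep learning improves vision models",
        "portfolio risk under market stress"]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
doc_topic = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X)

# 2-D embedding of the documents in topic space for plotting.
xy = TSNE(n_components=2, perplexity=2.0, random_state=0).fit_transform(doc_topic)
for d, (x, y) in zip(docs, xy):
    print(f"({x:7.1f}, {y:7.1f})  {d}")
```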

  20. A Chaotic Ordered Hierarchies Consistency Analysis Performance Evaluation Model

    NASA Astrophysics Data System (ADS)

    Yeh, Wei-Chang

    2013-02-01

    The Hierarchies Consistency Analysis (HCA) was proposed by Guh, together with a resort case study, to address a weakness of the Analytic Hierarchy Process (AHP). Although its results help the decision maker reach more reasonable and rational verdicts, the HCA itself is flawed. In this paper, our objective is to point out the problems of HCA and then propose a revised method, called chaotic ordered HCA (COH for short), which avoids them. Since COH is based upon Guh's method, the decision maker establishes decisions in a way similar to that of the original method.
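
    For reference, the AHP machinery that both HCA and the proposed COH build on reduces to a principal-eigenvector computation plus Saaty's consistency check; the sketch below implements plain AHP (not Guh's HCA and not COH).

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}   # Saaty's random index

def ahp(A):
    """Priority weights and consistency ratio for a reciprocal pairwise
    comparison matrix A on Saaty's 1-9 scale."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                          # principal eigenvector -> weights
    n = A.shape[0]
    CI = (vals.real[k] - n) / (n - 1)     # consistency index
    return w, CI / RI[n]                  # CR below ~0.1 is usually acceptable

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, CR = ahp(A)
print(weights, CR)
```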

  1. Analytical study of temperature distribution in a rectangular porous fin considering both insulated and convective tip

    NASA Astrophysics Data System (ADS)

    Deshamukhya, Tuhin; Bhanja, Dipankar; Nath, Sujit; Maji, Ambarish; Choubey, Gautam

    2017-07-01

    The present study concerns the determination of the temperature distribution in porous fins under convective and insulated tip conditions. The authors study the effect of various important parameters involved in the transfer of heat through porous fins, as well as the temperature distribution along the fin length, for both convective and insulated tips. The resulting non-linear equation is solved by the Adomian decomposition method and validated against a finite difference scheme using central differences and Gauss-Seidel iteration.
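
    The validation route described above is easy to reproduce in miniature: discretize a nonlinear fin equation with central differences and sweep the interior nodes with Gauss-Seidel. The governing equation used below, theta'' = m2*theta + p*theta^2 with a unit base temperature and an insulated tip, is a generic stand-in, not the paper's porous-fin model.

```python
import numpy as np

def fin_temperature(n=101, m2=4.0, p=1.0, tol=1e-10, max_iter=100000):
    """Central differences + Gauss-Seidel for theta'' = m2*theta + p*theta**2
    on 0 <= X <= 1 with theta(0) = 1 and theta'(1) = 0 (insulated tip)."""
    h = 1.0 / (n - 1)
    theta = np.ones(n)
    for _ in range(max_iter):
        change = 0.0
        for i in range(1, n - 1):          # nonlinearity lagged one sweep
            new = (theta[i - 1] + theta[i + 1]) / (2.0 + h * h * (m2 + p * theta[i]))
            change = max(change, abs(new - theta[i]))
            theta[i] = new
        tip = 2.0 * theta[-2] / (2.0 + h * h * (m2 + p * theta[-1]))  # ghost node
        change = max(change, abs(tip - theta[-1]))
        theta[-1] = tip
        if change < tol:
            break
    return theta

print(fin_temperature()[-1])    # dimensionless tip temperature
```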

  2. Acoustic Radiation From Rotating Blades: The Kirchhoff Method in Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    2000-01-01

    This paper reviews the current status of discrete frequency noise prediction for rotating blade machinery in the time domain. There are two major approaches both of which can be classified as the Kirchhoff method. These methods depend on the solution of two linear wave equations called the K and FW-H equations. The solutions of these equations for subsonic and supersonic surfaces are discussed and some important results of the research in the past years are presented. This paper is analytical in nature and emphasizes the work of the author and coworkers at NASA Langley Research Center.

  3. Analytical characterization of wine and its precursors by capillary electrophoresis.

    PubMed

    Gomez, Federico J V; Monasterio, Romina P; Vargas, Verónica Carolina Soto; Silva, María F

    2012-08-01

    The accurate determination of marker chemical species in grape, musts, and wines presents a unique analytical challenge with high impact on diverse areas of knowledge such as health, plant physiology, and economy. Capillary electromigration techniques have emerged as a powerful tool, allowing the separation and identification of highly polar compounds that cannot be easily separated by traditional HPLC methods, providing complementary information and permitting the simultaneous analysis of analytes with different nature in a single run. The main advantage of CE over traditional methods for wine analysis is that in most cases samples require no treatment other than filtration. The purpose of this article is to present a revision on capillary electromigration methods applied to the analysis of wine and its precursors over the last decade. The current state of the art of the topic is evaluated, with special emphasis on the natural compounds that have allowed wine to be considered as a functional food. The most representative revised compounds are phenolic compounds, amino acids, proteins, elemental species, mycotoxins, and organic acids. Finally, a discussion on future trends of the role of capillary electrophoresis in the field of analytical characterization of wines for routine analysis, wine classification, as well as multidisciplinary aspects of the so-called "from soil to glass" chain is presented. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Numerical studies of the Bethe-Salpeter equation for a two-fermion bound state

    NASA Astrophysics Data System (ADS)

    de Paula, W.; Frederico, T.; Salmè, G.; Viviani, M.

    2018-03-01

    Some recent advances on the solution of the Bethe-Salpeter equation (BSE) for a two-fermion bound system directly in Minkowski space are presented. The calculations are based on the expression of the Bethe-Salpeter amplitude in terms of the so-called Nakanishi integral representation and on the light-front projection (i.e. the integration over the light-front variable $k^- = k^0 - k^3$). The latter technique allows for the analytically exact treatment of the singularities plaguing the two-fermion BSE in Minkowski space. The good agreement observed between our results and those obtained using other existing numerical methods, based on both Minkowski and Euclidean space techniques, fully corroborates our analytical treatment.
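
    Schematically, the light-front projection referred to above integrates the Bethe-Salpeter amplitude $\Phi(k, p)$ over the light-front variable,

    $$\psi_{\mathrm{LF}} \propto \int_{-\infty}^{+\infty} \frac{dk^-}{2\pi}\, \Phi(k, p),$$

    which maps the four-dimensional Minkowski-space amplitude onto a valence light-front wave function; combined with the Nakanishi representation of $\Phi$ as an integral over a smooth weight function, this is what makes the singular structure analytically tractable. The proportionality hides kinematic factors that depend on conventions.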

  5. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
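
    A minimal PSO kernel is compact enough to sketch. The version below maximizes a log-likelihood over a box with standard constriction-style coefficients; the SAG model, the merger trees, the observational constraints, and any inertia scheduling or swarm topology used in the paper are all outside this sketch.

```python
import numpy as np

def pso(loglike, bounds, n_particles=30, n_steps=200, w=0.72, c1=1.49, c2=1.49):
    """Maximize loglike(x) over a box by particle swarm optimization."""
    rng = np.random.default_rng(1)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))      # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_f = x.copy(), np.array([loglike(p) for p in x])
    gbest = pbest[np.argmax(pbest_f)]
    for _ in range(n_steps):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loglike(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmax(pbest_f)]
    return gbest, pbest_f.max()

# Toy "calibration": recover parameters that best reproduce a mock observable.
target = np.array([0.3, -1.2])
best, _ = pso(lambda p: -np.sum((p - target) ** 2), bounds=[(-5, 5), (-5, 5)])
print(best)    # close to [0.3, -1.2]
```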

  6. Elastic critical moment for bisymmetric steel profiles and its sensitivity by the finite difference method

    NASA Astrophysics Data System (ADS)

    Kamiński, M.; Supeł, Ł.

    2016-02-01

    It is widely known that lateral-torsional buckling of a member under bending, and the warping restraints of its cross-sections, are crucial for estimating the safety and durability of steel structures. Although engineering codes for steel and aluminum structures support the designer with additional analytical expressions that depend on the boundary conditions and internal force diagrams, one may alternatively apply the traditional Finite Element or Finite Difference Methods (FEM, FDM) to determine the so-called critical moment representing this phenomenon. The principal purpose of this work is to compare three different ways of determining the critical moment, also in the context of structural sensitivity analysis with respect to the structural element length. Sensitivity gradients are determined here by both analytical and central finite difference schemes and are contrasted for the analytical, FEM, and FDM approaches. A computational study is provided for the entire family of steel I- and H-beams available to practitioners in this area, and it serves as a basis for further stochastic reliability analysis as well as durability prediction, including possible corrosion progress.
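
    The comparison described above can be miniaturized using the textbook critical-moment formula for the basic case (fork supports, uniform moment) together with a central finite difference for the length sensitivity. The section properties below are invented for illustration and stand in for the code expressions the paper examines.

```python
import numpy as np

def m_cr(L, E=210e9, G=81e9, Iz=1.0e-5, It=2.0e-7, Iw=1.0e-9):
    """Elastic critical moment [N*m] of a simply supported, bisymmetric I-beam
    under uniform bending (textbook case; SI units)."""
    return (np.pi / L) * np.sqrt(E * Iz * G * It
                                 * (1.0 + np.pi ** 2 * E * Iw / (G * It * L ** 2)))

L, h = 6.0, 1e-4
dMdL = (m_cr(L + h) - m_cr(L - h)) / (2.0 * h)   # central difference sensitivity
print(m_cr(L), dMdL)
```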

  7. [Methods of artificial intelligence: a new trend in pharmacy].

    PubMed

    Dohnal, V; Kuca, K; Jun, D

    2005-07-01

    Artificial neural networks (ANN) and genetic algorithms belong to a group of methods called artificial intelligence. The application of ANN to pharmaceutical data can lead to an understanding of the inner structure of the data and the possibility of building a model (adaptation). In addition, in certain cases it is possible to extract rules from the data. The adapted ANN is then prepared to predict properties of compounds that were not used in the adaptation phase. The applications of ANN have great potential in the pharmaceutical industry and in the interpretation of analytical, pharmacokinetic, or toxicological data.

  8. Analytical capabilities of high performance liquid chromatography - Atmospheric pressure photoionization - Orbitrap mass spectrometry (HPLC-APPI-Orbitrap-MS) for the trace determination of novel and emerging flame retardants in fish.

    PubMed

    Zacs, D; Bartkevics, V

    2015-10-22

    A new analytical method was established and validated for the analysis of 27 brominated flame retardants (BFRs), including so-called "emerging" and "novel" BFRs (EBFRs and NBFRs), in fish samples. High performance liquid chromatography (HPLC) coupled to Orbitrap mass spectrometry (Orbitrap-MS), employing an atmospheric pressure photoionization (APPI) interface operated in negative mode, was used for the identification/quantitation of contaminants. HPLC-Orbitrap-MS analysis provided a fast separation of the selected analytes within 14 min, thus demonstrating high-throughput sample processing. The developed methodology was tested by intralaboratory validation in terms of recovery, repeatability, linear calibration ranges, and instrumental and method limits of quantitation (i-LOQ and m-LOQ), and where possible, trueness was verified by analysis of certified reference materials (CRMs). Recoveries of analytes were between 80 and 119%, while the repeatability in terms of relative standard deviations (RSDs) was in the range from 1.2 to 15.5%. The measured values for both analyzed CRMs agreed with the provided consensus values, with recovery of the reference concentrations in the 72-119% range. The elaborated method met the sensitivity criterion of Commission Recommendation 2014/118/EU on monitoring of BFRs in food products for the majority of the compounds. The concentrations of polybrominated diphenyl ethers (PBDEs) in real samples determined by the HPLC-APPI-Orbitrap-MS method and by a validated gas chromatography-high-resolution mass spectrometry (GC-HRMS) method were found to be in good agreement. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. 40 CFR 141.23 - Inorganic chemical sampling and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... highest analytical result. (e) All public water systems (community; non-transient, non-community; and... each subsequent sample during the quarter(s) which previously resulted in the highest analytical result...). For information on the availability of this material at NARA, call 202-741-6030, or go to: http://www...

  10. Graphical Descriptives: A Way to Improve Data Transparency and Methodological Rigor in Psychology.

    PubMed

    Tay, Louis; Parrigon, Scott; Huang, Qiming; LeBreton, James M

    2016-09-01

    Several calls have recently been issued to the social sciences for enhanced transparency of research processes and enhanced rigor in the methodological treatment of data and data analytics. We propose the use of graphical descriptives (GDs) as one mechanism for responding to both of these calls. GDs provide a way to visually examine data. They serve as quick and efficient tools for checking data distributions, variable relations, and the potential appropriateness of different statistical analyses (e.g., do data meet the minimum assumptions for a particular analytic method). Consequently, we believe that GDs can promote increased transparency in the journal review process, encourage best practices for data analysis, and promote a more inductive approach to understanding psychological data. We illustrate the value of potentially including GDs as a step in the peer-review process and provide a user-friendly online resource (www.graphicaldescriptives.org) for researchers interested in including data visualizations in their research. We conclude with suggestions on how GDs can be expanded and developed to enhance transparency. © The Author(s) 2016.
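
    A panel of GDs in this spirit takes only a few lines; the sketch below combines a distribution check, a bivariate relation, and a spread comparison for an invented dataset (the variables and the choice of plots are illustrative, not a prescribed GD format).

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=200)})
df["y"] = 0.5 * df["x"] + rng.normal(scale=0.9, size=200)

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))
axes[0].hist(df["x"], bins=20)              # distribution check
axes[0].set_title("Distribution of x")
axes[1].scatter(df["x"], df["y"], s=10)     # relation and outlier check
axes[1].set_title("x vs. y")
axes[2].boxplot([df["x"], df["y"]])         # spread and skew check
axes[2].set_xticklabels(["x", "y"])
axes[2].set_title("Spread")
fig.tight_layout()
fig.savefig("graphical_descriptives.png")
```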

  11. NUTS and BOLTS: Applications of Fluorescence Detected Sedimentation

    PubMed Central

    Kroe, Rachel R.; Laue, Thomas M.

    2008-01-01

    Analytical ultracentrifugation is a widely used method for characterizing the solution behavior of macromolecules. However, the two commonly used detectors (absorbance and interference) impose some fundamental restrictions on the concentrations and complexity of the solutions that can be analyzed. The recent addition of a fluorescence detector for the XL-I analytical ultracentrifuge (AU-FDS) enables two different types of sedimentation experiments. First, the AU-FDS can detect picomolar concentrations of labeled solutes allowing the characterization of very dilute solutions of macromolecules, applications we call Normal Use Tracer Sedimentation (NUTS). The great sensitivity of NUTS analysis allows the characterization of small quantities of materials and high affinity interactions. Second, AU-FDS allows characterization of trace quantities of labeled molecules in solutions containing high concentrations and complex mixtures of unlabeled molecules, applications we call Biological On Line Tracer Sedimentation (BOLTS). The discrimination of BOLTS enables the size distribution of a labeled macromolecule to be determined in biological milieu such as cell lysates and serum. Examples are presented that embody features of both NUTS and BOLTS applications, along with our observations on these applications. PMID:19103145

  12. Can cloud point-based enrichment, preservation, and detection methods help to bridge gaps in aquatic nanometrology?

    PubMed

    Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan

    2016-11-01

    Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances. Size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, the feature article raises the following question: May CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analyses and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes in environmental conditions during sampling and sample preparation. This presents a so far unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these unstable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE extractable fraction" by connecting current knowledge on CPE mechanisms and on available applications, via the uncertainties visible and modeling approaches available, with potential future benefits from CPE protocols.

  13. Transformer modeling for low- and mid-frequency electromagnetic transients simulation

    NASA Astrophysics Data System (ADS)

    Lambert, Mathieu

    In this work, new models are developed for single-phase and three-phase shell-type transformers for the simulation of low-frequency transients, with the use of the coupled leakage model. This approach has the advantage that it avoids the use of fictitious windings to connect the leakage model to a topological core model, while giving the same response in short-circuit as the indefinite admittance matrix (BCTRAN) model. To further increase the model sophistication, it is proposed to divide windings into coils in the new models. However, short-circuit measurements between coils are never available. Therefore, a novel analytical method is elaborated for this purpose, which allows the 2-D calculation of short-circuit inductances between coils of rectangular cross-section. The results of this new method are in agreement with the results obtained from the finite element method in 2-D. Furthermore, the assumption that the leakage field is approximately 2-D in shell-type transformers is validated with a 3-D simulation. The outcome of this method is used to calculate the self and mutual inductances between the coils of the coupled leakage model, and the results show good correspondence with terminal short-circuit measurements. Typically, leakage inductances in transformers are calculated from short-circuit measurements and the magnetizing branch is calculated from no-load measurements, assuming that leakages are unimportant for the unloaded transformer and that magnetizing current is negligible during a short-circuit. While the core is assumed to have infinite permeability to calculate short-circuit inductances (a reasonable assumption, since the core's magnetomotive force is negligible during a short-circuit), the same reasoning does not necessarily hold true for leakage fluxes in no-load conditions. This is because the core starts to saturate when the transformer is unloaded. To take this into account, a new analytical method is developed in this dissertation, which removes the contributions of leakage fluxes to properly calculate the magnetizing branches of the new models. However, in the new analytical method for calculating short-circuit inductances (as with other analytical methods), eddy-current losses are neglected. Similarly, winding losses are omitted in the coupled leakage model and in the new analytical method to remove leakage fluxes to calculate core parameters from no-load tests. These losses will be taken into account in future work. Both transformer models presented in this dissertation are based on the classical hypothesis that flux can be discretized into flux tubes, which is also the assumption used in a category of models called topological models. Even though these models are physically-based, there exist many topological models for a given transformer geometry. It is shown in this work that these differences can be explained in part through the concepts of divided and integral fluxes, and it is explained that the divided approach is the result of mathematical manipulations, while the integral approach is more "physically-accurate". Furthermore, it is demonstrated, for the special case of a two-winding single-phase transformer, that the divided leakage inductances have to be nonlinear for both approaches to be equivalent. Even among models of the divided or integral approach, there are differences, which arise from the particular choice of so-called "flux paths" (tubes).
This arbitrariness comes from the fact that, under the classical hypothesis that magnetic flux can be confined into predefined flux tubes (leading to classical magnetic circuit theory), it is assumed that flux cannot leak from the sides of the tubes. Therefore, depending on the transformer's operating conditions (degree of saturation, short-circuit, etc.), this can lead to different choices of flux tubes and different models. In this work, a new theoretical framework is developed to allow flux to leak from the sides of the tube, and it is generalized to include resistances and capacitances in what is called electromagnetic circuit theory. Also, it is explained that this theory is actually equivalent to what are called finite formulations (such as the finite element method), which bridges the gap between circuit theory and discrete electromagnetism. Therefore, this enables the development not only of topologically-correct transformer models, where electric and magnetic circuits are defined on dual meshes, but also of rotating machine and transmission line models (in which wave propagation can be taken into account).

  14. Allele-specific copy-number discovery from whole-genome and whole-exome sequencing.

    PubMed

    Wang, WeiBo; Wang, Wei; Sun, Wei; Crowley, James J; Szatkiewicz, Jin P

    2015-08-18

    Copy-number variants (CNVs) are a major form of genetic variation and a risk factor for various human diseases, so it is crucial to accurately detect and characterize them. It is conceivable that allele-specific reads from high-throughput sequencing data could be leveraged to both enhance CNV detection and produce allele-specific copy number (ASCN) calls. Although statistical methods have been developed to detect CNVs using whole-genome sequence (WGS) and/or whole-exome sequence (WES) data, information from allele-specific read counts has not yet been adequately exploited. In this paper, we develop an integrated method, called AS-GENSENG, which incorporates allele-specific read counts in CNV detection and estimates ASCN using either WGS or WES data. To evaluate the performance of AS-GENSENG, we conducted extensive simulations, generated empirical data using existing WGS and WES data sets, and validated predicted CNVs using an independent methodology. We conclude that AS-GENSENG not only predicts accurate ASCN calls but also improves the accuracy of total copy number calls, owing to its unique ability to exploit information from both total and allele-specific read counts while accounting for various experimental biases in sequence data. Our novel, user-friendly, and computationally efficient method and a complete analytic protocol are freely available at https://sourceforge.net/projects/asgenseng/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Utility of NIST Whole-Genome Reference Materials for the Technical Validation of a Multigene Next-Generation Sequencing Test.

    PubMed

    Shum, Bennett O V; Henner, Ilya; Belluoccio, Daniele; Hinchcliffe, Marcus J

    2017-07-01

    The sensitivity and specificity of next-generation sequencing laboratory developed tests (LDTs) are typically determined by an analyte-specific approach. Analyte-specific validations use disease-specific controls to assess an LDT's ability to detect known pathogenic variants. Alternatively, a methods-based approach can be used for LDT technical validations. Methods-focused validations do not use disease-specific controls but use benchmark reference DNA that contains known variants (benign, variants of unknown significance, and pathogenic) to assess variant calling accuracy of a next-generation sequencing workflow. Recently, four whole-genome reference materials (RMs) from the National Institute of Standards and Technology (NIST) were released to standardize methods-based validations of next-generation sequencing panels across laboratories. We provide a practical method for using NIST RMs to validate multigene panels. We analyzed the utility of RMs in validating a novel newborn screening test that targets 70 genes, called NEO1. Despite the NIST RM variant truth set originating from multiple sequencing platforms, replicates, and library types, we discovered a 5.2% false-negative variant detection rate in the RM truth set genes that were assessed in our validation. We developed a strategy using complementary non-RM controls to demonstrate 99.6% sensitivity of the NEO1 test in detecting variants. Our findings have implications for laboratories or proficiency testing organizations using whole-genome NIST RMs for testing. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  16. An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application

    ERIC Educational Resources Information Center

    Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth

    2016-01-01

    Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…

  17. A Method for Analyzing Commonalities in Clinical Trial Target Populations

    PubMed Central

    He, Zhe; Carini, Simona; Hao, Tianyong; Sim, Ida; Weng, Chunhua

    2014-01-01

    ClinicalTrials.gov presents great opportunities for analyzing commonalities in clinical trial target populations to facilitate knowledge reuse when designing eligibility criteria of future trials or to reveal potential systematic biases in selecting population subgroups for clinical research. Towards this goal, this paper presents a novel data resource for enabling such analyses. Our method includes two parts: (1) parsing and indexing eligibility criteria text; and (2) mining common eligibility features and attributes of common numeric features (e.g., A1c). We designed and built a database called “Commonalities in Target Populations of Clinical Trials” (COMPACT), which stores structured eligibility criteria and trial metadata in a readily computable format. We illustrate its use in an example analytic module called CONECT using COMPACT as the backend. Type 2 diabetes is used as an example to analyze commonalities in the target populations of 4,493 clinical trials on this disease. PMID:25954450

  18. An Analytical Assessment of NASA's N+1 Subsonic Fixed Wing Project Noise Goal

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Envia, Edmane; Burley, Casey L.

    2009-01-01

    The Subsonic Fixed Wing Project of NASA's Fundamental Aeronautics Program has adopted a noise reduction goal for new, subsonic, single-aisle, civil aircraft expected to replace current 737 and A320 airplanes. These so-called 'N+1' aircraft - designated in NASA vernacular as such since they will follow the current, in-service, 'N' airplanes - are hoped to achieve certification noise goal levels of 32 cumulative EPNdB under current Stage 4 noise regulations. A notional, N+1, single-aisle, twinjet transport with ultrahigh bypass ratio turbofan engines is analyzed in this study using NASA software and methods. Several advanced noise-reduction technologies are analytically applied to the propulsion system and airframe. Certification noise levels are predicted and compared with the NASA goal.

  19. Physics-based and human-derived information fusion for analysts

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Nagy, James; Scott, Steve; Okoth, Joshua; Hinman, Michael

    2017-05-01

    Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities of analysts; however, with the big data opportunities there is a need for open architecture designs, methods of distributed team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions, update models, and store results for distributed decision making.

  20. NASA Hydrogen Peroxide Propellant Hazards Technical Manual

    NASA Technical Reports Server (NTRS)

    Baker, David L.; Greene, Ben; Frazier, Wayne

    2005-01-01

    The Fire, Explosion, Compatibility and Safety Hazards of Hydrogen Peroxide NASA technical manual was developed at the NASA Johnson Space Center White Sands Test Facility. NASA Technical Memorandum TM-2004-213151 covers topics concerning high concentration hydrogen peroxide including fire and explosion hazards, material and fluid reactivity, materials selection information, personnel and environmental hazards, physical and chemical properties, analytical spectroscopy, specifications, analytical methods, and material compatibility data. A summary of hydrogen peroxide-related accidents, incidents, close calls, mishaps and lessons learned is included. The manual draws from an extensive literature base and includes recent applicable regulatory compliance documentation. The manual may be obtained by United States government agencies from NASA Johnson Space Center and used as a reference source for hazards and safe handling of hydrogen peroxide.

  1. An analysis of the influence of production conditions on the development of the microporous structure of the activated carbon fibres using the LBET method

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Mirosław

    2017-12-01

    The paper presents the results of research on the application of new analytical models of multilayer adsorption on heterogeneous surfaces with a unique fast multivariant identification procedure, together called the LBET method, as a tool for analysing the microporous structure of activated carbon fibres obtained from polyacrylonitrile by chemical activation using potassium and sodium hydroxides. The novel LBET method was employed particularly to evaluate the impact of the activator used and the hydroxide-to-polyacrylonitrile ratio on the obtained microporous structure of the activated carbon fibres.

  2. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

    This paper presents a bicubic uniform B-spline wavefront fitting technology to figure out the analytical expression for the object wavefront used in Computer-Generated Holograms (CGHs). In many cases, to decrease the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, we have to fit out the analytical expression for the object wavefront. Zernike polynomials are competent for fitting wavefronts of centrosymmetric optical systems, but not for axisymmetrical optical systems. Although a high-degree polynomial fitting method would achieve higher fitting precision at all fitting nodes, its greatest shortcoming is that any departure from the fitting nodes results in great fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time in coding the computer-generated hologram and solving the basic equation. Based on the basis function of the cubic uniform B-spline and the character mesh of the bicubic uniform B-spline wavefront, bicubic uniform B-spline wavefronts are described as the product of a series of matrices. Employing standard MATLAB routines, four kinds of different analytical expressions for object wavefronts are fitted out by bicubic uniform B-splines as well as high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is a more competitive method to fit out the analytical expression for the object wavefront used in an off-axis CGH, for its higher fitting precision and C2 continuity.
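
    For reference, the matrix form the abstract alludes to is the standard bicubic uniform B-spline patch, with the 4x4 geometry matrix G drawn from the character mesh (standard construction, not specific to this paper's code):

$$ S(u,v) = \begin{bmatrix} u^{3} & u^{2} & u & 1 \end{bmatrix} M\, G\, M^{\mathsf T} \begin{bmatrix} v^{3} & v^{2} & v & 1 \end{bmatrix}^{\mathsf T}, \qquad M = \frac{1}{6}\begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 0 & 3 & 0 \\ 1 & 4 & 1 & 0 \end{bmatrix}. $$

    The C2 continuity noted in the conclusion follows from this basis: adjacent patches share value, first derivative, and second derivative across their joins.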

  3. Development, Validation, and Interlaboratory Evaluation of a Quantitative Multiplexing Method To Assess Levels of Ten Endogenous Allergens in Soybean Seed and Its Application to Field Trials Spanning Three Growing Seasons.

    PubMed

    Hill, Ryan C; Oman, Trent J; Wang, Xiujuan; Shan, Guomin; Schafer, Barry; Herman, Rod A; Tobias, Rowel; Shippar, Jeff; Malayappan, Bhaskar; Sheng, Li; Xu, Austin; Bradshaw, Jason

    2017-07-12

    As part of the regulatory approval process in Europe, comparison of endogenous soybean allergen levels between genetically engineered (GE) and non-GE plants has been requested. A quantitative multiplex analytical method using tandem mass spectrometry was developed and validated to measure 10 potential soybean allergens from soybean seed. The analytical method was implemented at six laboratories to demonstrate the robustness of the method and further applied to three soybean field studies across multiple growing seasons (including 21 non-GE soybean varieties) to assess the natural variation of allergen levels. The results show environmental factors contribute more than genetic factors to the large variation in allergen abundance (2- to 50-fold between environmental replicates) as well as a large contribution of Gly m 5 and Gly m 6 to the total allergen profile, calling into question the scientific rationale for measurement of endogenous allergen levels between GE and non-GE varieties in the safety assessment.

  4. Enantioresolution of (RS)-baclofen by liquid chromatography: A review.

    PubMed

    Batra, Sonika; Bhushan, Ravi

    2017-01-01

    Baclofen is a commonly used racemic drug and has a simple chemical structure in terms of the presence of only one stereogenic center. Since the desirable pharmacological effect is in only one enantiomer, several possibilities exist for the other enantiomer for evaluation of the disposition of the racemic mixture of the drug. This calls for the development of enantioselective analytical methodology. This review summarizes and evaluates different methods of enantioseparation of (RS)-baclofen using both direct and indirect approaches, application of certain chiral reagents and chiral stationary phases (though very expensive). Methods of separation of diastereomers of (RS)-baclofen prepared with different chiral derivatizing reagents (under microwave irradiation at ease and in less time) on reversed-phase achiral columns or via a ligand exchange approach providing high-sensitivity detection by the relatively less expensive methods of TLC and HPLC are discussed. The methods may be helpful for determination of enantiomers in biological samples and in pharmaceutical formulations for control of enantiomeric purity and can be practiced both in analytical laboratories and industry for routine analysis and R&D activities. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Determination of the transmission coefficients for quantum structures using FDTD method.

    PubMed

    Peng, Yangyang; Wang, Xiaoying; Sui, Wenquan

    2011-12-01

    The purpose of this work is to develop a simple method to incorporate quantum effects in traditional finite-difference time-domain (FDTD) simulators, which could make it possible to co-simulate systems that include both quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure model has been simulated using the proposed method. The good agreement between the numerical and analytical results proves its accuracy. The effectiveness and accuracy of this approach make it a potential method for analysis and design of hybrid systems that include quantum structures and traditional components.
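
    To illustrate the kind of computation involved, here is a minimal 1-D time-domain Schrödinger solver in the FDTD spirit. Units are hbar = m = 1; the grid, barrier, and wave-packet parameters are hypothetical, and this is a sketch of the general technique, not the authors' FDTD-S co-simulation scheme:

```python
import numpy as np

# 1-D time-domain Schrödinger equation, real/imaginary leapfrog update.
nx, nt = 1200, 7000
dx, dt = 0.1, 0.004             # explicit scheme: dt must stay below ~dx**2/2
x = np.arange(nx) * dx
V = np.where((60.0 < x) & (x < 62.0), 0.3, 0.0)   # rectangular barrier

# Gaussian wave packet moving to the right (energy k0**2/2 > barrier height)
k0, x0, w = 1.2, 35.0, 5.0
psi = np.exp(-((x - x0) / w) ** 2) * np.exp(1j * k0 * x)
pr, pi = psi.real.copy(), psi.imag.copy()

def lap(f):
    """Second difference; endpoints held at zero (hard-wall boundaries)."""
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return out

for _ in range(nt):
    # dR/dt = -0.5 * d2I/dx2 + V*I ;  dI/dt = +0.5 * d2R/dx2 - V*R
    pr += dt * (-0.5 * lap(pi) + V * pi)
    pi += dt * (+0.5 * lap(pr) - V * pr)

prob = pr**2 + pi**2
transmission = prob[x > 62.0].sum() / prob.sum()
print(f"transmitted fraction ~ {transmission:.3f}")
```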

  6. Non-Gaussian Distributions Affect Identification of Expression Patterns, Functional Annotation, and Prospective Classification in Human Cancer Genomes

    PubMed Central

    Marko, Nicholas F.; Weil, Robert J.

    2012-01-01

    Introduction Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
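
    The central-moments check itself is compact; a sketch with synthetic heavy-tailed data standing in for a cancer-genome expression matrix:

```python
import numpy as np
from scipy import stats

# Skewness, kurtosis, and an omnibus normality test on expression values.
rng = np.random.default_rng(0)
expr = rng.lognormal(mean=2.0, sigma=0.6, size=5000)   # heavy-tailed stand-in

print("skewness:", stats.skew(expr))
print("excess kurtosis:", stats.kurtosis(expr))
stat, p = stats.normaltest(expr)       # D'Agostino-Pearson test
print(f"normality test p-value = {p:.2e}")   # tiny p => non-Gaussian
```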

  7. Analysis of variation matrix array by bilinear least squares-residual bilinearization (BLLS-RBL) for resolving and quantifying of foodstuff dyes in a candy sample.

    PubMed

    Asadpour-Zeynali, Karim; Maryam Sajjadi, S; Taherzadeh, Fatemeh; Rahmanian, Reza

    2014-04-05

    The bilinear least squares (BLLS) method is one of the most suitable algorithms for second-order calibration. The original BLLS method is not applicable to second-order pH-spectral data when an analyte has more than one spectroscopically active species. Bilinear least squares-residual bilinearization (BLLS-RBL) was developed to achieve the second-order advantage for analysis of complex mixtures. Although the modified method is useful, the pure profiles cannot be obtained, only linear combinations of them. Moreover, for prediction of the analyte in an unknown sample, the original RBL algorithm may diverge instead of converging to the desired analyte concentrations; therefore, the Gauss-Newton RBL algorithm should be used, which is not as simple as the original protocol. Also, the analyte concentration can be predicted on the basis of each of the equilibrating species of the component of interest, and these predictions are not exactly the same. The aim of the present work is to tackle the non-uniqueness problem in the second-order calibration of monoprotic acid mixtures and the divergence of RBL. Each pH-absorbance matrix was pretreated by subtraction of the first spectrum from the other spectra in the data set to produce a full-rank array that is called the variation matrix. The variation matrices were then analyzed uniquely by original BLLS-RBL, which is more parsimonious than its modified counterpart. The proposed method was applied to simulated data as well as to the analysis of real data. Sunset yellow and Carmosine as monoprotic acids were determined in a candy sample in the presence of unknown interference by this method. Copyright © 2013 Elsevier B.V. All rights reserved.
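
    The pretreatment step is a one-line matrix operation; a minimal sketch with a random stand-in for the pH-absorbance data:

```python
import numpy as np

# Subtract the first spectrum from every other spectrum in a pH-absorbance
# matrix (rows = pH levels, columns = wavelengths) to form the variation matrix.
D = np.random.rand(20, 150)            # hypothetical pH-spectral data matrix
variation = D[1:] - D[0]               # row 0 is the first (reference) spectrum
print(variation.shape)                 # (19, 150)
```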

  8. Irregular analytical errors in diagnostic testing - a novel concept.

    PubMed

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, no individual sample is under individual quality control; the quality control measurements act only at the batch level. A wide range of effects and interferences associated with an individual diagnostic sample can compromise the measurement of any analyte. It is obvious that a quality-control-sample-based approach to quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by the measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. an incorrect pipetting volume caused by air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
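
    One way to formalize the definition (our notation, not the authors'): a result $x$ for an individual sample is flagged as an irregular analytical error when

$$ \left| x - x_{\mathrm{ref}} \right| > k \sqrt{ u_{\mathrm{process}}^{2} + b_{\mathrm{method}}^{2} }, $$

    where $x_{\mathrm{ref}}$ is the reference measurement procedure result, $u_{\mathrm{process}}$ the process measurement uncertainty, $b_{\mathrm{method}}$ the method bias relative to the reference system, and $k$ a coverage factor.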

  9. Electromagnetic Pulse Excitation of Finite-Long Dissipative Conductors over a Conducting Ground Plane in the Time Domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Schiek, Richard

    2017-09-01

    This report details the modeling results for the response of a finite-length dissipative conductor interacting with a conducting ground to a hypothetical nuclear device with the same output energy spectrum as the Fat Man device. We use a time-domain method based on transmission line theory that allows accounting for time-varying air conductivities. We implemented this method in a code we call ATLOG - Analytic Transmission Line Over Ground. Results are compared to the previously developed frequency-domain version of ATLOG and, in some instances, to the circuit simulator Xyce.
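
    A generic single-conductor model of this kind solves the telegrapher's equations with a time-varying shunt conductance tracking the air conductivity (shown as a sketch of the general approach; the exact ATLOG formulation may differ):

$$ \frac{\partial V}{\partial z} = -L \frac{\partial I}{\partial t} - R\,I, \qquad \frac{\partial I}{\partial z} = -C \frac{\partial V}{\partial t} - G(t)\,V. $$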

  10. Using telephony data to facilitate discovery of clinical workflows.

    PubMed

    Rucker, Donald W

    2017-04-19

    Discovery of clinical workflows to target for redesign using methods such as Lean and Six Sigma is difficult. VoIP telephone call pattern analysis may complement direct observation and EMR-based tools in understanding clinical workflows at the enterprise level by allowing visualization of institutional telecommunications activity. Our objective was to build an analytic framework mapping repetitive and high-volume telephone call patterns in a large medical center to their associated clinical units using an enterprise unified communications server log file, and to support visualization of specific call patterns using graphical networks. Consecutive call detail records from the medical center's unified communications server were parsed to cross-correlate telephone call patterns and map associated phone numbers to a cost center dictionary. Hashed data structures were built to allow construction of edge and node files representing high-volume call patterns for display with an open source graph network tool. Summary statistics for an analysis of exactly one week's call detail records at a large academic medical center showed that 912,386 calls were placed with a total duration of 23,186 hours. Approximately half of all calling/called number pairs had an average call duration under 60 seconds, and of these the average call duration was 27 seconds. Cross-correlation of phone calls identified by clinical cost center can be used to generate graphical displays of clinical enterprise communications. Many calls are short. The compact data transfers within short calls may serve as automation or re-design targets. The large absolute amount of time medical center employees were engaged in VoIP telecommunications suggests that analysis of telephone call patterns may offer additional insights into core clinical workflows.
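
    The cross-correlation step can be pictured as simple aggregation over call detail records; a sketch with a hypothetical record layout and cost-center mapping:

```python
from collections import defaultdict

# Aggregate call detail records (CDRs) into calling/called pair volumes and
# durations, then emit an edge list suitable for a graph tool.
cdrs = [
    ("5551000", "5552000", 25),        # (calling, called, seconds)
    ("5551000", "5552000", 31),
    ("5553000", "5551000", 640),
]
cost_center = {"5551000": "ED", "5552000": "Pharmacy", "5553000": "ICU"}

edges = defaultdict(lambda: [0, 0])    # (src, dst) -> [call count, total sec]
for calling, called, seconds in cdrs:
    key = (cost_center[calling], cost_center[called])
    edges[key][0] += 1
    edges[key][1] += seconds

for (src, dst), (n, total) in edges.items():
    print(f"{src} -> {dst}: {n} calls, mean {total / n:.0f} s")
```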

  11. Which Helper Behaviors and Intervention Styles Are Related to Better Short-Term Outcomes in Telephone Crisis Intervention? Results from a Silent Monitoring Study of Calls to the U.S. 1-800-SUICIDE Network

    ERIC Educational Resources Information Center

    Mishara, Brian L.; Chagnon, Francois; Daigle, Marc; Balan, Bogdan; Raymond, Sylvaine; Marcoux, Isabelle; Bardon, Cecile; Campbell, Julie K.; Berman, Alan

    2007-01-01

    A total of 2,611 calls to 14 helplines were monitored to observe helper behaviors and caller characteristics and changes during the calls. The relationship between intervention characteristics and call outcomes are reported for 1,431 crisis calls. Empathy and respect, as well as factor-analytically derived scales of supportive approach and good…

  12. The WOMBAT Attack Attribution Method: Some Results

    NASA Astrophysics Data System (ADS)

    Dacier, Marc; Pham, Van-Hau; Thonnard, Olivier

    In this paper, we present a new attack attribution method that has been developed within the WOMBAT project. We illustrate the method with some real-world results obtained when applying it to almost two years of attack traces collected by low interaction honeypots. This analytical method aims at identifying large scale attack phenomena composed of IP sources that are linked to the same root cause. All malicious sources involved in a same phenomenon constitute what we call a Misbehaving Cloud (MC). The paper offers an overview of the various steps the method goes through to identify these clouds, providing pointers to external references for more detailed information. Four instances of misbehaving clouds are then described in some more depth to demonstrate the meaningfulness of the concept.

  13. Cavity radiation model for solar central receivers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipps, F.W.

    1981-01-01

    The Energy Laboratory of the University of Houston has developed a computer simulation program called CREAM (Cavity Radiation Exchange Analysis Model) for application to the solar central receiver system. The zone generating capability of CREAM has been used in several solar re-powering studies. CREAM contains a geometric configuration factor generator based on Nusselt's method. A formulation of Nusselt's method provides support for the FORTRAN subroutine NUSSELT. Numerical results from NUSSELT are compared to analytic values and values from Sparrow's method. Sparrow's method is based on a double contour integral and its reduction to a single integral, which is approximated by Gaussian methods. Nusselt's method is adequate for the intended engineering applications, but Sparrow's method is found to be an order of magnitude more efficient in many situations.

  14. An Evaluation of Fractal Surface Measurement Methods for Characterizing Landscape Complexity from Remote-Sensing Imagery

    NASA Technical Reports Server (NTRS)

    Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The rapid increase in digital data volumes from new and existing sensors necessitates efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods: isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, all of which have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces of higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
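
    As an illustration of the variogram approach (here on a 1-D profile rather than an image, so the profile form D = 2 - slope/2 applies instead of the surface form D = 3 - slope/2):

```python
import numpy as np

# Estimate fractal dimension from the log-log slope of the semivariogram.
rng = np.random.default_rng(1)
z = np.cumsum(rng.standard_normal(4096))      # Brownian profile, D ~ 1.5

lags = np.arange(1, 50)
gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
print("estimated D:", 2 - slope / 2)          # expect ~1.5 for Brownian motion
```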

  15. Determination of the optimal number of components in independent components analysis.

    PubMed

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
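
    A rough sketch of the Random_ICA idea, assuming a random split into two blocks and scikit-learn's FastICA as the extraction step (the published procedure, with its repeated splits and block-to-block comparisons across increasing numbers of ICs, differs in detail):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Split samples randomly, extract k ICs per block, check cross-block matching.
rng = np.random.default_rng(0)
S = rng.laplace(size=(400, 3))                 # 3 independent sources
X = S @ rng.normal(size=(3, 50))               # 400 samples x 50 variables

idx = rng.permutation(400)
block_a, block_b = X[idx[:200]], X[idx[200:]]
k = 3
ica_a = FastICA(n_components=k, random_state=0).fit(block_a)
ica_b = FastICA(n_components=k, random_state=0).fit(block_b)

# Correlate the unmixing directions found in the two blocks; reproducible ICs
# should have a near-perfect absolute correlation with some counterpart.
corr = np.corrcoef(np.vstack([ica_a.components_, ica_b.components_]))[:k, k:]
print("best match per IC:", np.abs(corr).max(axis=1).round(2))
```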

  16. Assays for endogenous components of human milk: comparison of fresh and frozen samples and corresponding analytes in serum.

    PubMed

    Hines, Erin P; Rayner, Jennifer L; Barbee, Randy; Moreland, Rae Ann; Valcour, Andre; Schmid, Judith E; Fenton, Suzanne E

    2007-05-01

    Breast milk is a primary source of nutrition that contains many endogenous compounds that may affect infant development. The goals of this study were to develop reliable assays for selected endogenous breast milk components and to compare levels of those in milk and serum collected from the same mother twice during lactation (2-7 weeks and 3-4 months). Reliable assays were developed for glucose, secretory IgA, interleukin-6, tumor necrosis factor-α, triglycerides, prolactin, and estradiol from participants in a US EPA study called Methods Advancement in Milk Analysis (MAMA). Fresh and frozen (-20 degrees C) milk samples were assayed to determine effects of storage on endogenous analytes. The source effect (serum vs milk) seen in all 7 analytes indicates that serum should not be used as a surrogate for milk in children's health studies. The authors propose to use these assays in studies to examine relationships between the levels of milk components and children's health.

  17. Why does the sign problem occur in evaluating the overlap of HFB wave functions?

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Oi, Makito; Shimizu, Noritaka

    2018-04-01

    For the overlap matrix element between Hartree-Fock-Bogoliubov states, there are two analytically different formulae: one with the square root of the determinant (the Onishi formula) and the other with the Pfaffian (Robledo's Pfaffian formula). The former formula is two-valued as a complex function; hence it leaves the sign of the norm overlap undetermined (the so-called sign problem of the Onishi formula). The latter formula, on the other hand, does not suffer from the sign problem. The derivations of these two formulae are so different that it remains obscure why the resultant formulae possess different analytical properties. In this paper, we discuss the reason why the difference occurs by means of a consistent framework based on the linked cluster theorem and the product-sum identity for the Pfaffian. Through this discussion, we elucidate the source of the sign problem in the Onishi formula. We also point out that different summation methods of series expansions may result in analytically different formulae.

  18. Comparison of ATLOG and Xyce for Bell Labs Electromagnetic Pulse Excitation of Finite-Long Dissipative Conductors over a Ground Plane.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Schiek, Richard

    This report details the modeling results for the response of a finite-length dissipative conductor interacting with a conducting ground to the Bell Labs electromagnetic pulse excitation. We use both a frequency-domain and a time-domain method based on transmission line theory through a code we call ATLOG - Analytic Transmission Line Over Ground. Results are compared to the circuit simulator Xyce for selected cases.

  19. Electromagnetic Pulse Excitation of Finite-Long Dissipative Conductors over a Conducting Ground Plane in the Frequency Domain.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Schiek, Richard

    2017-09-01

    This report details the modeling results for the response of a finite-length dissipative conductor interacting with a conducting ground to a hypothetical nuclear device with the same output energy spectrum as the Fat Man device. We use a frequency-domain method based on transmission line theory and implemented it in a code we call ATLOG - Analytic Transmission Line Over Ground. Select results are compared to ones computed using the circuit simulator Xyce.

  20. New trends in the analytical determination of emerging contaminants and their transformation products in environmental waters.

    PubMed

    Agüera, Ana; Martínez Bueno, María Jesús; Fernández-Alba, Amadeo R

    2013-06-01

    Since the so-called emerging contaminants were established as a new group of pollutants of environmental concern, a great effort has been devoted to the knowledge of their distribution, fate and effects in the environment. After more than 20 years of work, a significant improvement in knowledge about these contaminants has been achieved, but there is still a large information gap on the growing number of new potential contaminants that are appearing, and especially on their unpredictable transformation products. Although the environmental problem arising from emerging contaminants must be addressed from an interdisciplinary point of view, it is obvious that analytical chemistry plays an important role as the first step of the study, as it allows establishing the presence of chemicals in the environment, estimating their concentration levels, identifying sources and determining their degradation pathways. These tasks involve serious difficulties requiring different analytical solutions adjusted to purpose. Thus, the complexity of the matrices requires highly selective analytical methods; the large number and variety of compounds potentially present in the samples demands the application of wide-scope methods; the low concentrations at which these contaminants are present in the samples require high detection sensitivity; and the characterisation of unknowns places high demands on confirmation and structural information. New developments in analytical instrumentation have been applied to solve these difficulties. Furthermore, and not less important, has been the development of new specific software packages intended for data acquisition and, in particular, for post-run analysis. Thus, the use of sophisticated software tools has allowed successful screening analysis, determining several hundreds of analytes, and has assisted in the structural elucidation of unknown compounds in a timely manner.

  1. Characterization of Cyclodextrin/Volatile Inclusion Complexes: A Review.

    PubMed

    Kfoury, Miriana; Landy, David; Fourmentin, Sophie

    2018-05-17

    Cyclodextrins (CDs) are a family of cyclic oligosaccharides that constitute one of the most widely used molecular hosts in supramolecular chemistry. Encapsulation in the hydrophobic cavity of CDs positively affects the physical and chemical characteristics of the guests upon the formation of inclusion complexes. Such a property is interestingly employed to retain volatile guests and reduce their volatility. Within this scope, the starting crucial point for a suitable and careful characterization of an inclusion complex is to assess the value of the formation constant (Kf), also called the stability or binding constant. This task requires the application of the appropriate analytical method and technique. Thus, the aim of the present paper is to give a general overview of the main analytical tools used for the determination of Kf values for CD/volatile inclusion complexes. This review emphasizes the advantages, drawbacks and limits of each applied method. Special attention is also dedicated to the improvement of the current methods and to the development of new techniques. Further, the applicability of each technique is illustrated by a summary of data obtained from the literature.
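
    For a 1:1 complex, the constant being assessed is the standard equilibrium ratio, so the surveyed techniques differ mainly in how they quantify the complexed versus free guest fractions:

$$ K_f = \frac{[\mathrm{CD \cdot guest}]}{[\mathrm{CD}]\,[\mathrm{guest}]}. $$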

  2. Advanced Research and Data Methods in Women's Health: Big Data Analytics, Adaptive Studies, and the Road Ahead.

    PubMed

    Macedonia, Christian R; Johnson, Clark T; Rajapakse, Indika

    2017-02-01

    Technical advances in science have had broad implications in reproductive and women's health care. Recent innovations in population-level data collection and storage have made available an unprecedented amount of data for analysis while computational technology has evolved to permit processing of data previously thought too dense to study. "Big data" is a term used to describe data that are a combination of dramatically greater volume, complexity, and scale. The number of variables in typical big data research can readily be in the thousands, challenging the limits of traditional research methodologies. Regardless of what it is called, advanced data methods, predictive analytics, or big data, this unprecedented revolution in scientific exploration has the potential to dramatically assist research in obstetrics and gynecology broadly across subject matter. Before implementation of big data research methodologies, however, potential researchers and reviewers should be aware of strengths, strategies, study design methods, and potential pitfalls. Examination of big data research examples contained in this article provides insight into the potential and the limitations of this data science revolution and practical pathways for its useful implementation.

  3. 3-D discrete analytical ridgelet transform.

    PubMed

    Helbert, David; Carré, Philippe; Andres, Eric

    2006-12-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform with the discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.

  4. Interobserver Agreement on First-Stage Conversation Analytic Transcription

    ERIC Educational Resources Information Center

    Roberts, Felicia; Robinson, Jeffrey D.

    2004-01-01

    This investigation assesses interobserver agreement on conversation analytic (CA) transcription. Four professional CA transcribers spent a maximum of 3 hours transcribing 2.5 minutes of a previously unknown, naturally occurring, mundane telephone call. Researchers unitized transcripts into words, sounds, silences, inbreaths, outbreaths, and laugh…

  5. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
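
    The EPE-minimization principle can be sketched in a few lines: estimate out-of-sample error by cross-validation and pick the complexity that minimizes it. Here ridge regression on synthetic data stands in for the regularization family discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 1.0             # only 5 informative "items"
y = X @ beta + rng.normal(scale=2.0, size=n)

def cv_mse(lam, folds=5):
    """K-fold cross-validated MSE of ridge regression: an estimate of EPE."""
    idx = rng.permutation(n)
    errs = []
    for fold in np.array_split(idx, folds):
        train = np.setdiff1d(idx, fold)
        Xt, yt = X[train], y[train]
        b = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)
        errs.append(np.mean((y[fold] - X[fold] @ b) ** 2))
    return np.mean(errs)

lams = [0.01, 0.1, 1, 10, 100]
print("lambda minimizing estimated EPE:", min(lams, key=cv_mse))
```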

  6. Advances in high-resolution mass spectrometry based on metabolomics studies for food--a review.

    PubMed

    Rubert, Josep; Zachariasova, Milena; Hajslova, Jana

    2015-01-01

    Food authenticity has become a necessity for global food policies, since food placed on the market must be authentic. Verifying it has always been a challenge, since in the past minor components, also called markers, were mainly monitored by chromatographic methods in order to authenticate the food. Nowadays, however, advanced analytical methods have allowed food fingerprints to be achieved. At the same time, they have also been combined with chemometrics, which uses statistical methods in order to verify food and to provide maximum information by analysing chemical data. These sophisticated methods, based on different separation techniques or used stand-alone, have recently been coupled to high-resolution mass spectrometry (HRMS) in order to verify the authenticity of food. The new generation of HRMS detectors has experienced significant advances in resolving power, sensitivity, robustness, extended dynamic range, easier mass calibration and tandem mass capabilities, making HRMS more attractive and useful to the food metabolomics community and therefore a reliable tool for food authenticity. The purpose of this review is to summarise and describe the most recent metabolomics approaches in the area of food metabolomics, and to discuss the strengths and drawbacks of the HRMS analytical platforms combined with chemometrics.

  7. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling ("SBM") was tested in an In Silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or "QCP"), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an In Silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree with the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide for early identification of a clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
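
    An illustrative kernel-style similarity estimate in the spirit of SBM (a minimal sketch of the general idea, not the proprietary algorithm): the current observation is reconstructed as a similarity-weighted combination of stored exemplar states, and a growing residual flags departure from learned normal behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(100, 4))          # memory matrix: 100 exemplars x 4 vitals
x = rng.normal(size=4)                 # current multivariate observation

def similarity(a, b, h=1.0):
    """Gaussian kernel similarity between two state vectors."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * h**2))

w = np.array([similarity(x, d) for d in D])
w /= w.sum()                           # normalize weights
x_hat = w @ D                          # model's estimate of the current state
residual = np.linalg.norm(x - x_hat)
print(f"residual = {residual:.3f}")    # a trend in residuals drives alerting
```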

  8. Development of a high-throughput method based on thin-film microextraction using a 96-well plate system with a cork coating for the extraction of emerging contaminants in river water samples.

    PubMed

    Morés, Lucas; Dias, Adriana Neves; Carasek, Eduardo

    2018-02-01

    In this study, a new method was developed in which a biosorbent material is used as the extractor phase in conjunction with a recently described sample preparation technique called thin-film microextraction and a 96-well plate system. The method was applied for the determination of emerging contaminants, such as 3-(4-methylbenzylidene) camphor, ethylparaben, triclocarban, and bisphenol A in water samples. The separation and detection of the analytes were performed by high-performance liquid chromatography with diode array detection. These contaminants are considered hazardous to human health and other living beings. Thus, the development of an analytical method to determine these compounds is of great interest. The extraction parameters were evaluated using multivariate and univariate optimization techniques. The optimum conditions for the method were 3 h of extraction time, 20 min of desorption with 300 μL of acetonitrile and methanol (50:50, v/v), and the addition of 5% w/v sodium chloride to the sample. The analytical figures of merit showed good results with linear correlation coefficients higher than 0.99, relative recoveries of 72-125%, interday precision (n = 3) of 4-18%, and intraday precision (n = 9) of 1-21%. The limit of detection was 0.3-5.5 μg/L, and the limit of quantification was 0.8-15 μg/L. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. On the Coplanar Integrable Case of the Twice-Averaged Hill Problem with Central Body Oblateness

    NASA Astrophysics Data System (ADS)

    Vashkov'yak, M. A.

    2018-01-01

    The twice-averaged Hill problem with the oblateness of the central planet is considered in the case where its equatorial plane coincides with the plane of its orbital motion relative to the perturbing body. A qualitative study of this so-called coplanar integrable case was begun by Y. Kozai in 1963 and continued by M.L. Lidov and M.V. Yarskaya in 1974. However, no rigorous analytical solution of the problem can be obtained due to the complexity of the integrals. In this paper we obtain some quantitative evolution characteristics and propose an approximate constructive-analytical solution of the evolution system in the form of explicit time dependences of satellite orbit elements. The methodical accuracy has been estimated for several orbits of artificial lunar satellites by comparison with the numerical solution of the evolution system.

  10. Learning Analytics as Assemblage: Criticality and Contingency in Online Education

    ERIC Educational Resources Information Center

    Scott, John; Nichols, T. Philip

    2017-01-01

    Recently, the possibilities for leveraging "big data" in research and pedagogy have given rise to the growing field of "learning analytics" in online education. While much of this work has focused on quantitative metrics, some have called for critical perspectives that interrogate such data as an interplay between technical…

  11. An accurate boundary element method for the exterior elastic scattering problem in two dimensions

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Xu, Liwei; Yin, Tao

    2017-11-01

    This paper is concerned with a Galerkin boundary element method solving the two dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In numerical implementations, a newly-derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of hyper-singular boundary integral operator. A new computational approach is employed based on the series expansions of Hankel functions for the computation of weakly-singular boundary integral operators during the reduction of corresponding Galerkin equations into a discrete linear system. The effectiveness of proposed numerical methods is demonstrated using several numerical examples.

  12. New method of extrapolation of the resistance of a model planing boat to full size

    NASA Technical Reports Server (NTRS)

    Sottorf, W

    1942-01-01

    The previously employed method of extrapolating the total resistance to full size with λ³ (λ = model scale), and thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever-increasing size of aircraft, a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is analytically obtained. The friction coefficients are read from Prandtl's curve for turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.

  13. Resonance-state properties from a phase shift analysis with the S -matrix pole method and the effective-range method

    NASA Astrophysics Data System (ADS)

    Irgaziev, B. F.; Orlov, Yu. V.

    2015-02-01

    Asymptotic normalization coefficients (ANCs) are fundamental nuclear constants playing an important role in nuclear physics and astrophysics. We derive a new useful relationship between ANCs of the Gamow radial wave function and the renormalized (due to the Coulomb interaction) Coulomb-nuclear partial scattering amplitude. We use an analytical approximation in the form of a series for the nonresonant part of the phase shift, which can be analytically continued to the point of an isolated resonance pole in the complex plane of the momentum. Earlier, this method, which we call the S-matrix pole method, was used by us to find the resonance pole energy. We find the corresponding fitting parameters for concrete resonance states of 5He, 5Li, and 16O. Additionally, based on the theory of the effective range, we calculate the parameters of the p3/2 and p1/2 resonance states of the nuclei 5He and 5Li and compare them with the results obtained by the S-matrix pole method. ANC values are found which can be used to calculate the reaction rate through the 16O resonances which lie slightly above the threshold for the α+12C channel.
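
    For context, the effective-range expansion used for the p-wave parameters has the standard form (shown here for the neutral case; the charged case uses the Coulomb-renormalized amplitude):

$$ k^{2\ell+1} \cot\delta_{\ell}(k) = -\frac{1}{a_{\ell}} + \frac{r_{\ell}}{2} k^{2} + O(k^{4}), $$

    which, continued to complex momentum, locates the isolated resonance pole of the S-matrix.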

  14. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    PubMed

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.

  15. SDF technology in location and navigation procedures: a survey of applications

    NASA Astrophysics Data System (ADS)

    Kelner, Jan M.; Ziółkowski, Cezary

    2017-04-01

    The basis for the development of the Doppler location method, also called the signal Doppler frequency (SDF) method or technology, is the analytical solution of the wave equation for a mobile source. This paper presents an overview of the simulations, numerical analyses and empirical studies of the possibilities and the range of SDF method applications. In the paper, the various applications from numerous publications are collected and described. They mainly focus on the use of the SDF method in emitter positioning, electronic warfare, crisis management, search and rescue, and navigation. The developed method is characterized by an innovative, unique property among other location methods, because it allows the simultaneous location of many radio emitters. Moreover, this is the first method based on the Doppler effect that allows positioning of transmitters using a single mobile platform. Results of the use of the SDF method by other teams are also presented.
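
    Underlying any such scheme is the relation between the received Doppler shift and the emitter-observer geometry (generic form; the SDF method builds its analytical wave-equation solution on top of this):

$$ f_D(t) = -\frac{f_0}{c} \frac{\mathrm{d}}{\mathrm{d}t} \left| \mathbf{r}_e - \mathbf{r}_o(t) \right|, $$

    where $f_0$ is the carrier frequency, $c$ the propagation speed, $\mathbf{r}_e$ the emitter position and $\mathbf{r}_o(t)$ the trajectory of the moving receiver.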

  16. IBiSA_Tools: A Computational Toolkit for Ion-Binding State Analysis in Molecular Dynamics Trajectories of Ion Channels.

    PubMed

    Kasahara, Kota; Kinoshita, Kengo

    2016-01-01

    Ion conduction mechanisms of ion channels are a long-standing conundrum. Although the molecular dynamics (MD) method has been extensively used to simulate ion conduction dynamics at the atomic level, analysis and interpretation of MD results are not straightforward due to the complexity of the dynamics. In our previous reports, we proposed an analytical method called ion-binding state analysis to scrutinize and summarize ion conduction mechanisms by taking advantage of a variety of analytical protocols, e.g., complex network analysis, sequence alignment, and hierarchical clustering. This approach effectively revealed the ion conduction mechanisms and their dependence on the conditions, i.e., ion concentration and membrane voltage. Here, we present an easy-to-use computational toolkit for ion-binding state analysis, called IBiSA_tools. This toolkit consists of a C++ program and a series of Python and R scripts. From the trajectory file of an MD simulation and a structure file, users can generate several images and statistics of ion conduction processes. A complex network named an ion-binding state graph is generated in a standard graph format (graph modeling language; GML), which can be visualized by standard network analyzers such as Cytoscape. As a tutorial, a trajectory of a 50 ns MD simulation of the Kv1.2 channel is also distributed with the toolkit. The novel method for analysis of ion conduction mechanisms of ion channels can be easily used by means of IBiSA_tools. This software is distributed under an open source license at the following URL: http://www.ritsumei.ac.jp/~ktkshr/ibisa_tools/.
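
    Since the ion-binding state graph is plain GML, emitting one is straightforward; a sketch with hypothetical binding states, written via networkx rather than the toolkit's own C++/Python code:

```python
import networkx as nx

# Two hypothetical ion-binding states and one observed transition between them.
G = nx.DiGraph()
G.add_node("S1", desc="K+ at site S2")
G.add_node("S2", desc="K+ at sites S2,S4")
G.add_edge("S1", "S2", weight=42)      # 42 transitions counted in the MD run

nx.write_gml(G, "binding_states.gml")  # loadable by Cytoscape and similar tools
```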

  17. Cooperative epidemics on multiplex networks.

    PubMed

    Azimi-Tafreshi, N

    2016-04-01

    The spread of one disease, in some cases, can stimulate the spreading of another infectious disease. Here, we treat analytically a symmetric coinfection model for spreading of two diseases on a two-layer multiplex network. We allow layer overlapping, but we assume that each layer is random and locally loopless. Infection with one of the diseases increases the probability of getting infected with the other. Using the generating function method, we calculate exactly the fraction of individuals infected with both diseases (so-called coinfected clusters) in the stationary state, as well as the epidemic spreading thresholds and the phase diagram of the model. With increasing cooperation, we observe a tricritical point and the type of transition changes from continuous to hybrid. Finally, we compare the coinfected clusters in the case of cooperating diseases with the so-called "viable" clusters in networks with dependencies.

  19. Interpretation and classification of microvolt T wave alternans tests

    NASA Technical Reports Server (NTRS)

    Bloomfield, Daniel M.; Hohnloser, Stefan H.; Cohen, Richard J.

    2002-01-01

    Measurement of microvolt-level T wave alternans (TWA) during routine exercise stress testing is now possible as a result of sophisticated noise reduction techniques and analytic methods that have become commercially available. Although this technology is new, the available data suggest that microvolt TWA is a potent predictor of arrhythmia risk in diverse disease states. As this technology becomes more widely available, physicians will be called upon to interpret microvolt TWA tracings. This review seeks to establish uniform standards for the clinical interpretation of microvolt TWA tracings.

  20. Structural dynamic analysis of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Scott, L. P.; Jamison, G. T.; Mccutcheon, W. A.; Price, J. M.

    1981-01-01

    This structural dynamic analysis supports development of the SSME by evaluating components subjected to critical dynamic loads, identifying significant parameters, and evaluating solution methods. Engine operating parameters at both rated and full power levels are considered. Detailed structural dynamic analyses of operationally critical and life-limited components support the assessment of engine design modifications and environmental changes. Engine system test results are utilized to verify analytic model simulations. The SSME main chamber injector assembly comprises 600 injector elements called LOX posts. The overall LOX post analysis procedure is shown.

  1. Quantitative structure-retention relationships applied to development of liquid chromatography gradient-elution method for the separation of sartans.

    PubMed

    Golubović, Jelena; Protić, Ana; Otašević, Biljana; Zečević, Mira

    2016-04-01

    QSRR are mathematically derived relationships between the chromatographic parameters determined for a representative series of analytes in given separation systems and the molecular descriptors accounting for the structural differences among the investigated analytes. An artificial neural network (ANN) is a data-analysis technique that sets out to emulate the way the human brain works. The aim of the present work was to optimize separation of six angiotensin receptor antagonists, the so-called sartans: losartan, valsartan, irbesartan, telmisartan, candesartan cilexetil and eprosartan, in a gradient-elution HPLC method. For this purpose, an ANN was used as a mathematical tool for establishing a QSRR model based on molecular descriptors of the sartans and varied instrumental conditions. The optimized model can be further used for prediction of an external congener of the sartans and for analysis of the influence of the analyte structure, represented through molecular descriptors, on retention behaviour. Molecular descriptors included in the modelling were electrostatic, geometrical and quantum-chemical descriptors: Connolly solvent-excluded volume, non-1,4 van der Waals energy, octanol/water distribution coefficient, polarizability, number of proton-donor sites and number of proton-acceptor sites. Varied instrumental conditions were gradient time, buffer pH and buffer molarity. The high prediction ability of the optimized network enabled complete separation of the analytes within a run time of 15.5 min under the following conditions: gradient time of 12.5 min, buffer pH of 3.95 and buffer molarity of 25 mM. The applied methodology showed the potential to predict the retention behaviour of an external analyte with properties within the training space. Connolly solvent-excluded volume, polarizability and number of proton-acceptor sites appeared to be the most influential parameters for the retention behaviour of the sartans. Copyright © 2015 Elsevier B.V. All rights reserved.
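
    As an illustration of the QSRR idea (not the authors' actual network), a small feed-forward ANN can be trained to map molecular descriptors plus instrumental conditions to retention times. All numeric values, column meanings, and the network size below are hypothetical.

    ```python
    # Illustrative QSRR sketch: descriptors + gradient conditions -> retention time.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Columns: Connolly volume, polarizability, H-acceptor count,
    # gradient time (min), buffer pH, buffer molarity (mM) -- all hypothetical.
    X = np.array([
        [290.1, 39.8, 5, 10.0, 3.00, 20.0],
        [310.5, 42.3, 6, 12.5, 3.95, 25.0],
        [275.9, 37.1, 4, 15.0, 5.00, 30.0],
    ])
    y = np.array([8.2, 10.4, 12.9])  # retention times (min), hypothetical

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    print(model.predict(X[:1]))  # predicted retention for the first mixture
    ```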

  2. Analytical modeling of gravity changes and crustal deformation at volcanoes: The Long Valley caldera, California, case study

    USGS Publications Warehouse

    Battaglia, Maurizio; Hill, D.P.

    2009-01-01

    Joint measurements of ground deformation and micro-gravity changes are an indispensable component of any volcano monitoring strategy. A number of analytical mathematical models are available in the literature that can be used to fit geodetic data and infer source location, depth and density. Bootstrap statistical methods allow estimation of the range of the inferred parameters. Although analytical models often assume that the crust is elastic, homogeneous and isotropic, they can take into account different source geometries, the influence of topography, and gravity background noise. The careful use of analytical models, together with high quality data sets, can produce valuable insights into the nature of the deformation/gravity source. Here we present a review of various modeling methods, and use the historical unrest at Long Valley caldera (California) from 1982 to 1999 to illustrate the practical application of analytical modeling and bootstrap to constrain the source of unrest. A key question is whether the unrest at Long Valley since the late 1970s can be explained without calling upon an intrusion of magma. The answer, apparently, is no. Our modeling indicates that the inflation source is a slightly tilted prolate ellipsoid (dip angle between 91° and 105°) at a depth of 6.5 to 7.9 km beneath the caldera resurgent dome with an aspect ratio between 0.44 and 0.60, a volume change from 0.161 to 0.173 km³ and a density of 1241 to 2093 kg/m³. The larger uncertainty of the density estimate reflects the higher noise of gravity measurements. These results are consistent with the intrusion of silicic magma with a significant amount of volatiles beneath the caldera resurgent dome. © 2008 Elsevier B.V.
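
    The bootstrap step mentioned above is conceptually simple; the sketch below illustrates it with the classic Mogi point source rather than the paper's prolate ellipsoid, using synthetic uplift data and an assumed Poisson's ratio of 0.25.

    ```python
    # Bootstrap sketch for deformation-source parameters (Mogi point source,
    # u_z = (1 - nu) * dV / pi * d / (r^2 + d^2)^1.5; synthetic data).
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    nu = 0.25  # Poisson's ratio (assumption)

    def mogi_uz(r, dV, d):
        return (1 - nu) * dV / np.pi * d / (r**2 + d**2) ** 1.5

    r = np.linspace(1e3, 2e4, 30)                                  # distances (m)
    uz = mogi_uz(r, 0.17e9, 7.0e3) + rng.normal(0, 2e-3, r.size)   # uplift (m)

    estimates = []
    for _ in range(500):                         # resample data with replacement
        idx = rng.integers(0, r.size, r.size)
        popt, _ = curve_fit(mogi_uz, r[idx], uz[idx], p0=[1e8, 5e3])
        estimates.append(popt)
    dV_s, d_s = np.array(estimates).T
    print("dV 95% CI (m^3):", np.percentile(dV_s, [2.5, 97.5]))
    print("depth 95% CI (m):", np.percentile(d_s, [2.5, 97.5]))
    ```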

  3. Quantum cluster theory for the polarizable continuum model. I. The CCSD level with analytical first and second derivatives.

    PubMed

    Cammi, R

    2009-10-28

    We present a general formulation of the coupled-cluster (CC) theory for a molecular solute described within the framework of the polarizable continuum model (PCM). The PCM-CC theory is derived in its complete form, called the PTDE scheme, in which the correlated electronic density is used to obtain a self-consistent reaction field, and in an approximate form, called the PTE scheme, in which the PCM-CC equations are solved assuming a fixed Hartree-Fock solvent reaction field. Explicit forms of the PCM-CC-PTDE equations are derived at the single and double (CCSD) excitation level of the cluster operator. At the same level, explicit equations for the analytical first derivatives of the PCM basic energy functional are presented, and analytical second derivatives are also discussed. The corresponding PCM-CCSD-PTE equations are given as a special case of the full theory.

  4. A new numerical method for calculating extrema of received power for polarimetric SAR

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Lu, Z.; Gong, W.

    2009-01-01

    A numerical method called cross-step iteration is proposed to calculate the maximal/minimal received power for polarized imagery based on a target's Kennaugh matrix. This method is much more efficient than the systematic method, which searches for the extrema of received power by varying the polarization ellipse angles of receiving and transmitting polarizations. It is also more advantageous than the Schuler method, which has been adopted by the PolSARPro package, because the cross-step iteration method requires less computation time and can derive both the maximal and minimal received powers, whereas the Schuler method is designed to work out only the maximal received power. The analytical model of received-power optimization indicates that the first eigenvalue of the Kennaugh matrix is the supremum of the maximal received power. The difference between these two parameters reflects the depolarization effect of the target's backscattering, which might be useful for target discrimination. ?? 2009 IEEE.
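
    A minimal sketch of the alternating idea described in the abstract, assuming received power P = 0.5 g_rᵀ K g_t over fully polarized Stokes vectors g = [1, s] with |s| = 1; the Kennaugh matrix below is a hypothetical example, and the iteration converges to a local optimum.

    ```python
    # Cross-step-style alternating maximization of received power (illustrative).
    import numpy as np

    def best_stokes(v):
        """Fully polarized Stokes vector maximizing the linear form v . g."""
        s = v[1:] / np.linalg.norm(v[1:])
        return np.concatenate(([1.0], s))

    K = np.diag([1.0, 0.6, 0.3, 0.2])        # hypothetical Kennaugh matrix
    g_t = np.array([1.0, 1.0, 0.0, 0.0])     # initial transmit polarization

    for _ in range(50):                      # alternate receive/transmit steps
        g_r = best_stokes(K @ g_t)           # optimal receive for fixed transmit
        g_t = best_stokes(K.T @ g_r)         # optimal transmit for fixed receive

    print("max received power ~", 0.5 * g_r @ K @ g_t)
    ```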

  5. Detecting very low allele fraction variants using targeted DNA sequencing and a novel molecular barcode-aware variant caller.

    PubMed

    Xu, Chang; Nezami Ranjbar, Mohammad R; Wu, Zhong; DiCarlo, John; Wang, Yexun

    2017-01-03

    Detection of DNA mutations at very low allele fractions with high accuracy will significantly improve the effectiveness of precision medicine for cancer patients. To achieve this goal through next generation sequencing, researchers need a detection method that 1) captures rare mutation-containing DNA fragments efficiently in the mix of abundant wild-type DNA; 2) sequences the DNA library extensively to deep coverage; and 3) distinguishes low level true variants from amplification and sequencing errors with high accuracy. Targeted enrichment using PCR primers provides researchers with a convenient way to achieve deep sequencing for a small, yet most relevant region using benchtop sequencers. Molecular barcoding (or indexing) provides a unique solution for reducing sequencing artifacts analytically. Although different molecular barcoding schemes have been reported in recent literature, most variant calling has been done on limited targets, using simple custom scripts. The analytical performance of barcode-aware variant calling can be significantly improved by incorporating advanced statistical models. We present here a highly efficient, simple and scalable enrichment protocol that integrates molecular barcodes in multiplex PCR amplification. In addition, we developed smCounter, an open source, generic, barcode-aware variant caller based on a Bayesian probabilistic model. smCounter was optimized and benchmarked on two independent read sets with SNVs and indels at 5% and 1% allele fractions. Variants were called with very good sensitivity and specificity within coding regions. We demonstrated that we can accurately detect somatic mutations with allele fractions as low as 1% in coding regions using our enrichment protocol and variant caller.
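
    The barcode principle can be illustrated with a toy consensus step (this is not smCounter's Bayesian model): reads that share a molecular barcode derive from one original molecule, so a base call is only believed if it dominates within its barcode family. Barcodes, bases, and the 0.8 cutoff below are arbitrary.

    ```python
    # Toy barcode-family consensus: majority base per family, else "N".
    from collections import Counter, defaultdict

    reads = [("AAGT01", "A"), ("AAGT01", "A"), ("AAGT01", "G"),   # one family
             ("CCTA02", "T"), ("CCTA02", "T"), ("CCTA02", "T")]   # another

    families = defaultdict(list)
    for barcode, base in reads:
        families[barcode].append(base)

    for barcode, bases in families.items():
        base, count = Counter(bases).most_common(1)[0]
        consensus = base if count / len(bases) >= 0.8 else "N"  # arbitrary cutoff
        print(barcode, consensus)
    ```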

  6. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays

    PubMed Central

    Goodman, Corey W.; Major, Heather J.; Walls, William D.; Sheffield, Val C.; Casavant, Thomas L.; Darbro, Benjamin W.

    2016-01-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of the various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high-throughput, low-cost analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis, and receiver operating characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as the log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs, namely the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. PMID:25595567
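
    The threshold-calibration step can be sketched in a few lines: per-probe calls from the higher-resolution array serve as truth labels, and the |log2 ratio| cutoff is chosen where the ROC curve maximizes Youden's J. The data below are synthetic.

    ```python
    # ROC calibration of a per-probe |log2 ratio| threshold (synthetic data).
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(1)
    truth = rng.integers(0, 2, 1000)                        # per-probe CNV truth
    log2r = np.abs(rng.normal(0, 0.1, 1000)) + 0.4 * truth  # |log2 ratio| scores

    fpr, tpr, thr = roc_curve(truth, log2r)
    best = np.argmax(tpr - fpr)                             # Youden's J statistic
    print("calibrated |log2 ratio| threshold:", thr[best])
    ```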

  7. MS-based analytical methodologies to characterize genetically modified crops.

    PubMed

    García-Cañas, Virginia; Simó, Carolina; León, Carlos; Ibáñez, Elena; Cifuentes, Alejandro

    2011-01-01

    The development of genetically modified crops has had a great impact on the agriculture and food industries. However, the development of any genetically modified organism (GMO) requires the application of analytical procedures to confirm the equivalence of the GMO compared to its isogenic non-transgenic counterpart. Moreover, the use of GMOs in foods and agriculture faces numerous criticisms from consumers and ecological organizations that have led some countries to regulate their production, growth, and commercialization. These regulations have brought about the need for new and more powerful analytical methods to face the complexity of this topic. In this regard, MS-based technologies are increasingly used for GMO analysis to provide very useful information on GMO composition (e.g., metabolites, proteins). This review focuses on the MS-based analytical methodologies used to characterize genetically modified crops (also called transgenic crops). First, an overview of genetically modified crop development is provided, together with the main difficulties of their analysis. Next, the different MS-based analytical approaches applied to characterize GM crops are critically discussed; these include "-omics" approaches and target-based approaches. These methodologies allow the study of intended and unintended effects that result from the genetic transformation. This information is considered to be essential to corroborate (or not) the equivalence of the GM crop with its isogenic non-transgenic counterpart. Copyright © 2010 Wiley Periodicals, Inc.

  8. Almost analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2017-11-01

    We present a new, almost analytical approach to solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of solving this matrix eigenvalue problem purely numerically, which may suffer from computational inaccuracy for big data, we first consider a pair of integral and differential equations related to the so-called prolate spheroidal wave functions (PSWF). For the PSWF differential equation, the pairs of eigenvectors (PSWF) and eigenvalues can be obtained from a relatively small number of analytical Legendre functions. Then, the eigenvalues in the PSWF integral equation are expressed in terms of functional values of the PSWF and the eigenvalues of the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues in the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data: ordinary irregular waves and rogue waves. We found that the present almost analytical method is better than the conventional data-independent Fourier representation and also the conventional direct numerical K-L representation in terms of both accuracy and computational cost. This work was supported by the National Research Foundation of Korea (NRF-2017R1D1A1B03028299).
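
    For reference, the K-L machinery the paper accelerates is the standard eigenproblem of the autocovariance kernel R(t, s): eigenfunctions of the integral equation furnish an orthogonal expansion of the random wave field with uncorrelated, unit-variance coefficients ξ_n:

    ```latex
    \int R(t,s)\,\phi_n(s)\,ds = \lambda_n\,\phi_n(t), \qquad
    X(t) = \sum_{n} \sqrt{\lambda_n}\,\xi_n\,\phi_n(t).
    ```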

  9. Elucidation of several neglected reactions in the GC-MS identification of sialic acids as heptafluorobutyrates calls for an urgent reassessment of previous claims.

    PubMed

    Rota, Paola; Anastasia, Luigi; Allevi, Pietro

    2015-05-07

    The current analytical protocol used for the GC-MS determination of free or 1,7-lactonized natural sialic acids (Sias), as heptafluorobutyrates, overlooks several transformations. Using authentic reference standards and by combining GC-MS and NMR analyses, flaws in the analytical protocol were pinpointed and elucidated, thus establishing the scope and limitations of the method. It was demonstrated that (a) Sias 1,7-lactones, even if present in biological samples, decompose under the acidic hydrolysis conditions used for their release; (b) Sias 1,7-lactones are unpredicted artifacts, accidentally generated from their parent acids; (c) the N-acetyl group is quantitatively exchanged with that of the derivatizing perfluorinated anhydride; (d) the partial or complete failure of the Sias esterification-step with diazomethane leads to the incorrect quantification and structure attribution of all free Sias. While these findings prompt an urgent correction and improvement of the current analytical protocol, they could be instrumental for a critical revision of many incorrect claims reported in the literature.

  10. Digital Analytics in Professional Work and Learning

    ERIC Educational Resources Information Center

    Edwards, Richard; Fenwick, Tara

    2016-01-01

    In a wide range of fields, professional practice is being transformed by the increasing influence of digital analytics: the massive volumes of big data, and software algorithms that are collecting, comparing and calculating that data to make predictions and even decisions. Researchers in a number of social sciences have been calling attention to…

  11. Analytics for Knowledge Creation: Towards Epistemic Agency and Design-Mode Thinking

    ERIC Educational Resources Information Center

    Chen, Bodong; Zhang, Jianwei

    2016-01-01

    Innovation and knowledge creation call for high-level epistemic agency and design-mode thinking, two competencies beyond the traditional scopes of schooling. In this paper, we discuss the need for learning analytics to support these two competencies, and more broadly, the demand for education for innovation. We ground these arguments on a…

  12. How do gut feelings feature in tutorial dialogues on diagnostic reasoning in GP traineeship?

    PubMed

    Stolper, C F; Van de Wiel, M W J; Hendriks, R H M; Van Royen, P; Van Bokhoven, M A; Van der Weijden, T; Dinant, G J

    2015-05-01

    Diagnostic reasoning is considered to be based on the interaction between analytical and non-analytical cognitive processes. Gut feelings, a specific form of non-analytical reasoning, play a substantial role in diagnostic reasoning by general practitioners (GPs) and may activate analytical reasoning. In GP traineeships in the Netherlands, trainees mostly see patients alone but regularly consult with their supervisors to discuss patients and problems, receive feedback, and improve their competencies. In the present study, we examined the discussions of supervisors and their trainees about diagnostic reasoning in these so-called tutorial dialogues and how gut feelings feature in these discussions. Seventeen tutorial dialogues focussing on diagnostic reasoning were video-recorded and transcribed, and the protocols were analysed using a detailed bottom-up and iterative content analysis and coding procedure. The dialogues were segmented into quotes. Each quote received a content code and a participant code. The number of words per code was used as a unit of analysis to quantitatively compare the contributions to the dialogues made by supervisors and trainees, and the attention given to different topics. The dialogues were usually analytical reflections on a trainee's diagnostic reasoning. A hypothetico-deductive strategy was often used, by listing differential diagnoses and discussing what information guided the reasoning process and might confirm or exclude provisional hypotheses. Gut feelings were discussed in seven dialogues. They were used as a tool in diagnostic reasoning, inducing analytical reflection, sometimes on the entire diagnostic reasoning process. The emphasis in these tutorial dialogues was on the analytical components of diagnostic reasoning. Discussing gut feelings in tutorial dialogues seems to be a good educational method to familiarize trainees with non-analytical reasoning. Supervisors need specialised knowledge about these aspects of diagnostic reasoning and how to deal with them in medical education.

  13. Synthesis of active controls for flutter suppression on a flight research wing

    NASA Technical Reports Server (NTRS)

    Abel, I.; Perry, B., III; Murrow, H. N.

    1977-01-01

    This paper describes some activities associated with the preliminary design of an active control system for flutter suppression capable of demonstrating a 20% increase in flutter velocity. Results from two control system synthesis techniques are given. One technique uses classical control theory, and the other uses an 'aerodynamic energy method' where control surface rates or displacements are minimized. Analytical methods used to synthesize the control systems and evaluate their performance are described. Some aspects of a program for flight testing the active control system are also given. This program, called DAST (Drones for Aerodynamics and Structural Testing), employs modified drone-type vehicles for flight assessments and validation testing.

  14. Fast Quantitative Analysis Of Museum Objects Using Laser-Induced Breakdown Spectroscopy And Multiple Regression Algorithms

    NASA Astrophysics Data System (ADS)

    Lorenzetti, G.; Foresta, A.; Palleschi, V.; Legnaioli, S.

    2009-09-01

    The recent development of mobile instrumentation, specifically devoted to in situ analysis and study of museum objects, allows the acquisition of many LIBS spectra in a very short time. However, such large amounts of data call for new analytical approaches which can guarantee a prompt analysis of the results obtained. In this communication, we will present and discuss the advantages of statistical analytical methods, such as Partial Least Squares (PLS) multiple regression algorithms, vs. the classical calibration curve approach. PLS algorithms allow the composition of the objects under study to be obtained in real time; this feature of the method, compared to the traditional off-line analysis of the data, is extremely useful for the optimization of the measurement times and of the number of points associated with the analysis. In fact, the real-time availability of the compositional information makes it possible to concentrate attention on the most 'interesting' parts of the object, without over-sampling the zones which would not provide useful information for the scholars or the conservators. Some examples of applications of this method will be presented, including studies recently performed by researchers of the Applied Laser Spectroscopy Laboratory on museum bronze objects.
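
    The contrast with calibration curves can be made concrete: PLS regresses entire spectra on the full concentration vector at once, so a single fitted model returns all elemental abundances per laser shot. A minimal sketch with synthetic data follows; the element names and array sizes are hypothetical.

    ```python
    # PLS calibration sketch: LIBS spectra -> elemental concentrations.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    spectra = rng.random((40, 500))     # 40 training spectra, 500 channels
    conc = rng.random((40, 3))          # Cu, Sn, Pb fractions (hypothetical)

    pls = PLSRegression(n_components=5)
    pls.fit(spectra, conc)
    print(pls.predict(spectra[:1]))     # real-time prediction for a new shot
    ```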

  15. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (supervised principal components, regularization, and boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
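
    A compact sketch of the regularization route to criterion-keyed scale construction: an L1-penalized logistic regression whose penalty strength is chosen by cross-validation, i.e., by minimizing estimated EPE rather than within-sample fit. The item responses and outcome below are synthetic.

    ```python
    # Cross-validated L1 item selection for a criterion-keyed scale (toy data).
    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV

    rng = np.random.default_rng(3)
    items = rng.integers(1, 6, (500, 100)).astype(float)  # 500 people x 100 items
    died = rng.integers(0, 2, 500)                        # all-cause mortality (toy)

    # L1 penalty; C chosen by 5-fold CV to minimize out-of-sample error.
    model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="liblinear")
    model.fit(items, died)
    keep = np.flatnonzero(model.coef_)                    # items retained in the scale
    print(len(keep), "items survive regularization")
    ```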

  16. A Model of Risk Analysis in Analytical Methodology for Biopharmaceutical Quality Control.

    PubMed

    Andrade, Cleyton Lage; Herrera, Miguel Angel De La O; Lemes, Elezer Monte Blanco

    2018-01-01

    One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA. To determine small amounts of DNA (around 100 pg) that may be in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable, and accurate. In principle, three techniques have the ability to measure residual cellular DNA: radioactive dot-blot, a type of hybridization; threshold analysis; and quantitative polymerase chain reaction. Quality risk management is a systematic process for evaluating, controlling, and reporting risks that may affect method capabilities, and it supports a scientific and practical approach to decision making. This paper evaluates, by quality risk management, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the tool hazard analysis and critical control points. This tool makes it possible to identify the steps in an analytical procedure with the highest impact on method performance. By applying these principles to DNA analysis methods, we conclude that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. From the analysis of hazards (i.e., points of method failure) and the associated method procedure critical control points, we conclude that the analytical methodology with the lowest risk for performance failure for residual cellular DNA testing is quantitative polymerase chain reaction. LAY ABSTRACT: In order to mitigate the risk of adverse events by residual cellular DNA that is not completely cleared from downstream production processes, regulatory agencies have required the industry to guarantee a very low level of DNA in biologically derived pharmaceutical products. The technique historically used was radioactive blot hybridization. However, the technique is a challenging method to implement in a quality control laboratory: It is laborious, time consuming, semi-quantitative, and requires a radioisotope. Along with dot-blot hybridization, two alternative techniques were evaluated: threshold analysis and quantitative polymerase chain reaction. Quality risk management tools were applied to compare the techniques, taking into account the uncertainties, the possibility of circumstances or future events, and their effects upon method performance. By illustrating the application of these tools with DNA methods, we provide an example of how they can be used to support a scientific and practical approach to decision making and can assess and manage method performance risk. This paper discusses, considering the principles of quality risk management, an additional approach to the development and selection of analytical quality control methods using the risk analysis tool hazard analysis and critical control points. This tool makes it possible to identify the method procedural steps with the highest impact on method reliability (called critical control points). Our model concluded that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. Quantitative polymerase chain reaction is shown to be the better alternative analytical methodology in residual cellular DNA analysis. © PDA, Inc. 2018.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Qi, Hairong

    This paper addresses communication and energy efficiency in collaborative visual sensor networks (VSNs) for people localization, a challenging computer vision problem in its own right. We focus on the design of a light-weight and energy-efficient solution in which people are localized by distributed camera nodes integrating the so-called certainty map generated at each node, which records target non-existence information within the camera's field of view. We first present a dynamic itinerary for certainty map integration in which not only does each sensor node transmit a very limited amount of data, but also only a limited number of camera nodes is involved. Then, we perform a comprehensive analytical study to evaluate communication and energy efficiency between different integration schemes, i.e., centralized and distributed integration. Based on results obtained from the analytical study and real experiments, the distributed method shows effectiveness in detection accuracy as well as energy and bandwidth efficiency.

  18. Analytical study on the thermal performance of a partially wet constructal T-shaped fin

    NASA Astrophysics Data System (ADS)

    Hazarika, Saheera Azmi; Zeeshan, Mohd; Bhanja, Dipankar; Nath, Sujit

    2017-07-01

    The present paper addresses the thermal analysis of a T-shaped fin under partially wet conditions by adopting a cubic variation of the humidity ratio of saturated air with the corresponding fin surface temperature. The point separating the dry and wet parts may lie either in the flange or the stem part of the fin, and so two different cases having different governing equations and boundary conditions are analyzed in this paper. Since the governing equations are highly non-linear, they are solved by using an analytical technique called the Differential Transform Method and subsequently, the dry fin length, temperature distribution and fin performances are evaluated and analyzed for a wide range of the various psychrometric, geometric and thermo-physical parameters. Finally, it can be highlighted that relative humidity has a pronounced effect on the performance parameters when the fin surface is partially wet, whereas this effect is marginally small for a fully wet surface.
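
    For readers unfamiliar with it, the Differential Transform Method expands the unknown around a point x₀ and converts the governing ODE into an algebraic recurrence for the series coefficients; the transform pair and the rule for a second derivative are:

    ```latex
    F(k) = \frac{1}{k!}\left[\frac{d^{k} f}{dx^{k}}\right]_{x=x_0}, \qquad
    f(x) = \sum_{k=0}^{\infty} F(k)\,(x-x_0)^k, \qquad
    f''(x) \;\longleftrightarrow\; (k+1)(k+2)\,F(k+2).
    ```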

  19. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N₄

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N₄. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
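
    The one-dimensional least-squares review the authors mention can be illustrated with a bare-bones moving least squares evaluator (this is not their six-dimensional method; the Gaussian weight width and toy data are arbitrary):

    ```python
    # 1D moving least squares: local weighted polynomial fit at a query point.
    import numpy as np

    def mls(x_query, x_data, y_data, h=0.5, degree=2):
        """Locally weighted polynomial fit evaluated at x_query."""
        w = np.exp(-((x_data - x_query) / h) ** 2)      # Gaussian weights
        V = np.vander(x_data - x_query, degree + 1)     # shifted polynomial basis
        sw = np.sqrt(w)                                 # weighted least squares
        coef, *_ = np.linalg.lstsq(V * sw[:, None], y_data * sw, rcond=None)
        return coef[-1]                                 # constant term = value at query

    x = np.linspace(0, 4, 25)
    y = np.exp(-x) + 0.5 * x                            # toy "potential" curve
    print(mls(1.3, x, y))                               # compare: np.exp(-1.3) + 0.65
    ```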

  20. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency across multiple runs and empirical convergence. Furthermore, due to the complexity of its formulation and algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
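
    The NMF backbone that UTOPIAN builds on fits in a few lines of plain sklearn (the semi-supervised, interactive formulation is not shown; the documents below are toy examples):

    ```python
    # Minimal NMF topic model: documents -> document-topic and topic-term factors.
    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["user interaction in visual analytics",
            "topic models for text document collections",
            "interactive visualization steers topic models"]

    X = TfidfVectorizer().fit_transform(docs)
    nmf = NMF(n_components=2, init="nndsvd", random_state=0)
    W = nmf.fit_transform(X)            # document-topic weights
    H = nmf.components_                 # topic-term weights
    print(W.round(2))
    ```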

  1. Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments

    PubMed Central

    Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria

    2015-01-01

    Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous medium, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162
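
    An example of the analytical functions available in the simple case: for a single species diffusing freely in three dimensions through a Gaussian observation volume, the FCS autocorrelation is the standard expression

    ```latex
    G(\tau) = \frac{1}{N}\left(1+\frac{\tau}{\tau_D}\right)^{-1}
    \left(1+\frac{\tau}{\omega^{2}\tau_D}\right)^{-1/2},
    ```

    where N is the mean number of molecules in the focal volume, τ_D the diffusion time, and ω the axial-to-lateral aspect ratio of the volume. It is precisely when no such closed form exists that simulation tools such as FERNET become necessary.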

  2. On Conducting Construct Validity Meta-Analyses for the Rorschach: A Reply to Tibon Czopp and Zeligman (2016).

    PubMed

    Mihura, Joni L; Meyer, Gregory J; Dumitrascu, Nicolae; Bombel, George

    2016-01-01

    We respond to Tibon Czopp and Zeligman's (2016) critique of our systematic reviews and meta-analyses of 65 Rorschach Comprehensive System (CS) variables published in Psychological Bulletin (2013). The authors endorsed our supportive findings but critiqued the same methodology when used for the 13 unsupported variables. Unfortunately, their commentary was based on significant misunderstandings of our meta-analytic method and results, such as thinking we used introspectively assessed criteria in classifying levels of support and reporting only a subset of our externally assessed criteria. We systematically address their arguments that our construct label and criterion variable choices were inaccurate and, therefore, meta-analytic validity for these 13 CS variables was artificially low. For example, the authors created new construct labels for these variables that they called "the customary CS interpretation," but did not describe their methodology nor provide evidence that their labels would result in better validity than ours. They cite studies they believe we should have included; we explain how these studies did not fit our inclusion criteria and that including them would have actually reduced the relevant CS variables' meta-analytic validity. Ultimately, criticisms alone cannot change meta-analytic support from negative to positive; Tibon Czopp and Zeligman would need to conduct their own construct validity meta-analyses.

  3. Bessel Fourier Orientation Reconstruction (BFOR): An Analytical Diffusion Propagator Reconstruction for Hybrid Diffusion Imaging and Computation of q-Space Indices

    PubMed Central

    Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Alexander, Andrew L.

    2012-01-01

    The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents. The EAP can thus provide richer information about complex tissue microstructure properties than the orientation distribution function (ODF), an angular feature of the EAP. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed, such as diffusion propagator imaging (DPI) and spherical polar Fourier imaging (SPFI). In this study, a new analytical EAP reconstruction method is proposed, called Bessel Fourier orientation reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition, and is validated on both synthetic and real datasets. A significant portion of the paper is dedicated to comparing BFOR, SPFI, and DPI using hybrid, non-Cartesian sampling for multiple b-value acquisitions. Ways to mitigate the effects of Gibbs ringing on EAP reconstruction are also explored. In addition to analytical EAP reconstruction, the aforementioned modeling bases can be used to obtain rotationally invariant q-space indices of potential clinical value, an avenue which has not yet been thoroughly explored. Three such measures are computed: zero-displacement probability (Po), mean squared displacement (MSD), and generalized fractional anisotropy (GFA). PMID:22963853
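
    The underlying q-space relation is the standard Fourier link between the normalized diffusion signal and the EAP, which all three bases (BFOR, SPFI, DPI) exploit:

    ```latex
    P(\mathbf{R}) = \int E(\mathbf{q})\, e^{-2\pi i\,\mathbf{q}\cdot\mathbf{R}}\, d\mathbf{q},
    \qquad E(\mathbf{q}) = S(\mathbf{q})/S(0).
    ```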

  4. Evolutionary neural networks for anomaly detection based on the behavior of a program.

    PubMed

    Han, Sang-Jun; Cho, Sung-Bae

    2006-06-01

    The process of learning the behavior of a given program by using machine-learning techniques (based on system-call audit data) is effective for detecting intrusions. Rule learning, neural networks, statistics, and hidden Markov models (HMMs) are some of the representative methods for intrusion detection. Among them, neural networks are known for good performance in learning system-call sequences. In order to apply this knowledge to real-world problems successfully, it is important to determine the structures and weights of the neural networks. However, finding the appropriate structures requires a very long time because there are no suitable analytical solutions. In this paper, a novel intrusion-detection technique based on evolutionary neural networks (ENNs) is proposed. One advantage of using ENNs is that it takes less time to obtain superior neural networks than when using conventional approaches. This is because they discover the structures and weights of the neural networks simultaneously. Experimental results with the 1999 Defense Advanced Research Projects Agency (DARPA) Intrusion Detection Evaluation (IDEVAL) data confirm that ENNs are promising tools for intrusion detection.

  5. Taking the Edusemiotic Turn: A Body~Mind Approach to Education

    ERIC Educational Resources Information Center

    Semetsky, Inna

    2014-01-01

    Educational philosophy in English-speaking countries tends to be informed mainly by analytic philosophy common to Western thinking. A welcome alternative is provided by pragmatism in the tradition of Peirce, James and Dewey. Still, the habit of the so-called linguistic turn has a firm grip in terms of analytic philosophy based on the logic of…

  6. A new multi-domain method based on an analytical control surface for linear and second-order mean drift wave loads on floating bodies

    NASA Astrophysics Data System (ADS)

    Liang, Hui; Chen, Xiaobo

    2017-10-01

    A novel multi-domain method based on an analytical control surface is proposed by combining the use of free-surface Green function and Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized, on which the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre function in vertical coordinate and Fourier series in the circumference. Free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation via integrating test functions orthogonal to base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which are only dependent on the radius of the control surface, are present in the external solution, and they are removed by extending the boundary integral equation to the interior free surface (circular disc) on which the null normal derivative of potential is imposed, and the dipole distribution is expressed as Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. The point collocation is imposed over the body surface and free surface, while the collocation of the Galerkin type is applied on the control surface. The present method is valid in the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.

  7. Psychometrics Matter in Health Behavior: A Long-term Reliability Generalization Study.

    PubMed

    Pickett, Andrew C; Valdez, Danny; Barry, Adam E

    2017-09-01

    Despite numerous calls for increased understanding and reporting of reliability estimates, social science research, including the field of health behavior, has been slow to respond and adopt such practices. Therefore, we offer a brief overview of reliability and common reporting errors; we then perform analyses to examine and demonstrate the variability of reliability estimates by sample and over time. Using meta-analytic reliability generalization, we examined the variability of coefficient alpha scores for a well-designed, consistent, nationwide health study, covering a span of nearly 40 years. For each year and sample, reliability varied. Furthermore, reliability was predicted by a sample characteristic that differed among age groups within each administration. We demonstrated that reliability is influenced by the methods and individuals from which a given sample is drawn. Our work echoes previous calls that psychometric properties, particularly reliability of scores, are important and must be considered and reported before drawing statistical conclusions.
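
    The reliability estimate in question is coefficient (Cronbach's) alpha; for a k-item scale with item variances σ²ᵢ and total-score variance σ²_X it is

    ```latex
    \alpha = \frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right),
    ```

    which makes plain why alpha is a property of the scores obtained from a particular sample, not of the instrument itself.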

  8. Modeling of unit operating considerations in generating-capacity reliability evaluation. Volume 1. Mathematical models, computing methods, and results. Final report. [GENESIS, OPCON and OPPLAN]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Singh, C.

    1982-07-01

    Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects in system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.

  9. Electrostatics of proteins in dielectric solvent continua. II. Hamiltonian reaction field dynamics

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Tavan, Paul; Mathias, Gerald

    2014-03-01

    In Paper I of this work [S. Bauer, G. Mathias, and P. Tavan, J. Chem. Phys. 140, 104102 (2014)] we have presented a reaction field (RF) method, which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of polarizable molecular mechanics (MM) force fields. Building upon these results, here we suggest a method for linearly scaling Hamiltonian RF/MM molecular dynamics (MD) simulations, which we call "Hamiltonian dielectric solvent" (HADES). First, we derive analytical expressions for the RF forces acting on the solute atoms. These forces properly account for all those conditions, which have to be self-consistently fulfilled by RF quantities introduced in Paper I. Next we provide details on the implementation, i.e., we show how our RF approach is combined with a fast multipole method and how the self-consistency iterations are accelerated by the use of the so-called direct inversion in the iterative subspace. Finally we demonstrate that the method and its implementation enable Hamiltonian, i.e., energy and momentum conserving HADES-MD, and compare in a sample application on Ac-Ala-NHMe the HADES-MD free energy landscape at 300 K with that obtained in Paper I by scanning of configurations and with one obtained from an explicit solvent simulation.

  10. Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, H.; Liu, J.; Pan, Y.

    2012-07-01

    The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis, however, is a supervised method and cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSI of natural scenes, which uses a combination of semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function with unlabeled samples has been designed. SSMFA preserves the manifold structure of labeled and unlabeled samples in addition to separating labeled samples in different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, and it can be computed based on eigendecomposition. Classification experiments with a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI-classification methods.

  11. HIV cure research community engagement in North Carolina: a mixed-methods evaluation of a crowdsourcing contest.

    PubMed

    Mathews, Allison; Farley, Samantha; Blumberg, Meredith; Knight, Kimberley; Hightow-Weidman, Lisa; Muessig, Kate; Rennie, Stuart; Tucker, Joseph

    2017-10-01

    The purpose of this study was to evaluate the feasibility of using a crowdsourcing contest to promote HIV cure research community engagement. Crowdsourcing contests are open calls for community participation to achieve a task, in this case to engage local communities about HIV cure research. Our contest solicited images and videos of what HIV cure meant to people. Contestants submitted entries to IdeaScale, an encrypted online contest platform. We used a mixed-methods study design to evaluate the contest. Engagement was assessed through attendance at promotional events and social media user analytics. Google Analytics measured contest website user-engagement statistics. Text from contest video entries was transcribed, coded and analysed using MAXQDA. There were 144 attendees at three promotional events and 32 entries from 39 contestants. Most individuals who submitted entries were black (n = 31), had some college education (n = 18) and were aged 18-23 years (n = 23). Social media analytics showed 684 unique page followers, 2233 unique page visits, 585 unique video views and an overall reach of 80,624 unique users. Contest submissions covered themes related to the community's role in shaping the future of HIV cure through education, social justice, creativity and stigma reduction. Crowdsourcing contests are feasible for engaging community members in HIV cure research. Community contributions to crowdsourcing contests provide useful content for culturally relevant and locally responsive research engagement.

  12. Validation of surrogate endpoints in cancer clinical trials via principal stratification with an application to a prostate cancer trial.

    PubMed

    Tanaka, Shiro; Matsuyama, Yutaka; Ohashi, Yasuo

    2017-08-30

    Increasing attention has been focused on the use and validation of surrogate endpoints in cancer clinical trials. Previous literature on validation of surrogate endpoints is classified into four approaches: the proportion explained approach; the indirect effects approach; the meta-analytic approach; and the principal stratification approach. The mainstream in cancer research has seen the application of a meta-analytic approach. However, VanderWeele (2013) showed that all four of these approaches potentially suffer from the surrogate paradox. It was also shown that, if a principal surrogate satisfies additional criteria called one-sided average causal sufficiency, the surrogate cannot exhibit a surrogate paradox. Here, we propose a method for estimating principal effects under a monotonicity assumption. Specifically, we consider cancer clinical trials which compare a binary surrogate endpoint and a time-to-event clinical endpoint under two naturally ordered treatments (e.g. combined therapy vs. monotherapy). Estimation based on a mean score estimating equation will be implemented by the expectation-maximization algorithm. We will also apply the proposed method as well as other surrogacy criteria to evaluate the surrogacy of prostate-specific antigen using data from a phase III advanced prostate cancer trial, clarifying the complementary roles of both the principal stratification and meta-analytic approaches in the evaluation of surrogate endpoints in cancer. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Prioritizing the causes and correctors of smoking towards the solution of tobacco free future using enhanced analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Halim, Tisya Farida Abdul; Sapiri, Hasimah; Abidin, Norhaslinda Zainal

    2017-11-01

    This paper presents a method for prioritizing the causes and correctors of smoking habits in Malaysia. In order to identify the driving forces that cause smoking habits (initiation factors) and their correctors (anti-smoking strategies), a method called the Enhanced Analytic Hierarchy Process (EAHP) is employed. The EAHP has advantages over the normal Analytic Hierarchy Process (AHP) based on its capability to eliminate inconsistent judgments (consistency ratio > 0.1) when evaluating experts' judgments. Based on the Theory of Triadic Influence, the identified initiation factors were personal beliefs and values, personal psychological factors, family influence, psychosocial influence, culture and legislation. Five anti-smoking strategies have been implemented in Malaysia, namely packaging and labelling, pricing and taxation, advertising, smoke-free legislation, and education and support. Findings from the study show that psychosocial influence was the leading initiation factor of smoking among Malaysian adults, and mass media campaigns were the most effective anti-smoking strategy to reduce smoking prevalence. The implementation of effective anti-smoking strategies should be considered towards the endgame of tobacco by the year 2040 as outlined by the government. The findings in turn can provide insights and guidelines for researchers as well as policy makers to assess the effectiveness of anti-smoking strategies towards better policy planning decisions in the future.
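
    The consistency screen at the heart of (E)AHP is easy to reproduce: the principal eigenvalue λ_max of a pairwise comparison matrix yields a consistency index, which is divided by Saaty's random index for that matrix size. The 3x3 judgment matrix below is hypothetical.

    ```python
    # AHP consistency ratio: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    import numpy as np

    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random indices

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])                    # hypothetical judgments
    n = A.shape[0]

    eigvals, _ = np.linalg.eig(A)
    lam_max = eigvals.real.max()                       # principal eigenvalue
    CI = (lam_max - n) / (n - 1)
    CR = CI / RI[n]
    print("CR =", round(CR, 3), "->",
          "acceptable" if CR <= 0.1 else "revise judgments")
    ```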

  14. Identification of missing variants by combining multiple analytic pipelines.

    PubMed

    Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W

    2018-04-16

    After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research of complex diseases has shifted to sequencing-based discovery of rare variants. This requires large sample sizes for statistical power and has raised questions about whether current variant calling practices are adequate for large cohorts. It is well known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants with one pipeline due to computational cost and to assume that false negative calls are a small percentage of the total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by using two aligners in 50, 100, 200, 500, 1000, and 1952 samples, and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000, and 10,000 samples. We found that using a single pipeline missed an increasing number of high-quality variants as sample size grew. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at a sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very type of variants of interest. In 660 Alzheimer's disease cases with earlier onset ages of ≤65, 4 out of 13 (31%) previously published rare pathogenic and protective mutations in the APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach. Identification of the complete variant set from sequencing data is the prerequisite of genetic association analyses. The current analytic practice of calling genetic variants from sequencing data using a single bioinformatics pipeline is no longer adequate for increasingly large projects. The number and percentage of variants that pass quality filters but are missed by the one-pipeline approach increases rapidly with sample size.

  15. A non-grey analytical model for irradiated atmospheres. II. Analytical vs. numerical solutions

    NASA Astrophysics Data System (ADS)

    Parmentier, Vivien; Guillot, Tristan; Fortney, Jonathan J.; Marley, Mark S.

    2015-02-01

    Context. The recent discovery and characterization of the diversity of the atmospheres of exoplanets and brown dwarfs calls for the development of fast and accurate analytical models. Aims: We wish to assess the goodness of the different approximations used to solve the radiative transfer problem in irradiated atmospheres analytically, and we aim to provide a useful tool for a fast computation of analytical temperature profiles that remains correct over a wide range of atmospheric characteristics. Methods: We quantify the accuracy of the analytical solution derived in paper I for an irradiated, non-grey atmosphere by comparing it to a state-of-the-art radiative transfer model. Then, using a grid of numerical models, we calibrate the different coefficients of our analytical model for irradiated solar-composition atmospheres of giant exoplanets and brown dwarfs. Results: We show that the so-called Eddington approximation used to solve the angular dependency of the radiation field leads to relative errors of up to ~5% on the temperature profile. For grey or semi-grey atmospheres (i.e., when the visible and thermal opacities, respectively, can be considered independent of wavelength), we show that the presence of a convective zone has a limited effect on the radiative atmosphere above it and leads to modifications of the radiative temperature profile of approximately 2%. However, for realistic non-grey planetary atmospheres, the presence of a convective zone that extends to optical depths smaller than unity can lead to changes in the radiative temperature profile on the order of 20% or more. When the convective zone is located at deeper levels (such as for strongly irradiated hot Jupiters), its effect on the radiative atmosphere is again on the same order (~2%) as in the semi-grey case. We show that the temperature inversion induced by a strong absorber in the optical, such as TiO or VO, is mainly due to non-grey thermal effects reducing the ability of the upper atmosphere to cool down, rather than to an enhanced absorption of the stellar light as previously thought. Finally, we provide a functional form for the coefficients of our analytical model for solar-composition giant exoplanets and brown dwarfs. This leads to fully analytical pressure-temperature profiles for irradiated atmospheres with a relative accuracy better than 10% for gravities between 2.5 m s^-2 and 250 m s^-2 and effective temperatures between 100 K and 3000 K. This is a great improvement over the commonly used Eddington boundary condition. A FORTRAN implementation of the analytical model is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A35 or at http://www.oca.eu/parmentier/nongrey. Appendix A is available in electronic form at http://www.aanda.org
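
    For context, a Python sketch of the classic grey Eddington temperature profile, the baseline that the paper's non-grey model improves upon; the effective temperature and optical-depth grid below are arbitrary illustrative values, and the paper's full irradiated non-grey solution is not reproduced.

        import numpy as np

        def grey_eddington_T(tau, T_eff):
            """Grey Eddington profile: T^4 = (3/4) * T_eff^4 * (tau + 2/3)."""
            return (0.75 * T_eff**4 * (tau + 2.0 / 3.0)) ** 0.25

        tau = np.logspace(-4, 2, 7)                 # optical-depth grid
        print(grey_eddington_T(tau, T_eff=1500.0))  # temperatures in K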

  16. How Radiologists Think: Understanding Fast and Slow Thought Processing and How It Can Improve Our Teaching.

    PubMed

    van der Gijp, Anouk; Webb, Emily M; Naeger, David M

    2017-06-01

    Scholars have identified two distinct ways of thinking. This "Dual Process Theory" distinguishes a fast, nonanalytical way of thinking, called "System 1," and a slow, analytical way of thinking, referred to as "System 2." In radiology, we use both methods when interpreting and reporting images, and both should ideally be emphasized when educating our trainees. This review provides practical tips for improving radiology education, by enhancing System 1 and System 2 thinking among our trainees. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  17. Nothing but the truth: self-disclosure, self-revelation, and the persona of the analyst.

    PubMed

    Levine, Susan S

    2007-01-01

    The question of the analyst's self-disclosure and self-revelation inhabits every moment of every psychoanalytic treatment. All self-disclosures and revelations, however, are not equivalent, and differentiating among them allows us to define a construct that can be called the analytic persona. Analysts already rely on an unarticulated concept of an analytic persona that guides them, for instance, as they decide what constitutes appropriate boundaries. Clinical examples illustrate how self-disclosures and revelations from within and without the analytic persona feel different, for both patient and analyst. The analyst plays a specific role for each patient and is both purposefully and unconsciously different in this context than in other settings. To a great degree, the self is a relational phenomenon. Our ethics call for us to tell nothing but the truth and simultaneously for us not to tell the whole truth. The unarticulated working concept of an analytic persona that many analysts have refers to the self we step out of at the close of each session and the self we step into as the patient enters the room. Attitudes toward self-disclosure and self-revelation can be considered reflections of how we conceptualize this persona.

  18. Novel two-step laser ablation and ionization mass spectrometry (2S-LAIMS) of actor-spectator ice layers: Probing chemical composition of D2O ice beneath a H2O ice layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui, E-mail: ryang73@ustc.edu; Gudipati, Murthy S., E-mail: gudipati@jpl.nasa.gov

    2014-03-14

    In this work, we report for the first time the successful analysis of organic aromatic analytes embedded in D2O ices by novel infrared (IR) laser ablation of a layered non-absorbing D2O ice (spectator) containing the analytes and an ablation-active IR-absorbing H2O ice layer (actor) without the analyte. With these studies we have opened up a new method for the in situ analysis of solids containing analytes when covered with an IR laser-absorbing layer that can be resonantly ablated. This soft ejection method takes advantage of the tunability of two-step infrared laser ablation and ultraviolet laser ionization mass spectrometry, previously demonstrated in this lab to study chemical reactions of polycyclic aromatic hydrocarbons (PAHs) in cryogenic ices. The IR laser pulse, tuned to resonantly excite only the upper H2O ice layer (actor), generates a shockwave upon impact. This shockwave penetrates the lower analyte-containing D2O ice layer (spectator, a non-absorbing ice that cannot be ablated directly with the wavelength of the IR laser employed) and is reflected back, ejecting the contents of the D2O layer into the vacuum, where they are intersected by a UV laser for ionization and detection by a time-of-flight mass spectrometer. Thus, energy is transmitted from the laser-absorbing actor layer into the non-absorbing spectator layer, resulting in its ablation. We found that isotope cross-contamination between layers was negligible. We also did not see any evidence for thermal or collisional chemistry of PAH molecules with H2O molecules in the shockwave. We call this "shockwave-mediated surface-resonance-enhanced subsurface ablation" technique "two-step laser ablation and ionization mass spectrometry of actor-spectator ice layers." This method has its roots in the well-established MALDI (matrix-assisted laser desorption and ionization) method. Our method offers more flexibility to optimize both processes, ablation and ionization. This new technique can thus potentially be employed for the in situ analysis of materials embedded in diverse media, such as cryogenic ices, biological samples, tissues, minerals, etc., by covering them with an IR-absorbing laser ablation medium, and to study the chemical composition and reaction pathways of the analyte in its natural surroundings.

  19. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays.

    PubMed

    Goodman, Corey W; Major, Heather J; Walls, William D; Sheffield, Val C; Casavant, Thomas L; Darbro, Benjamin W

    2015-04-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high-throughput, low-cost analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher-resolution microarray to confirm calls from a lower-resolution microarray and provides a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis, and receiver operating characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as the log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs, which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. Copyright © 2015 Elsevier Inc. All rights reserved.
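
    For illustration, a hedged Python sketch of the per-probe ROC idea underlying the tool (not the published software): truth labels confirmed by the higher-resolution array are assumed given, and an absolute log2-ratio threshold is swept to trace out operating points.

        import numpy as np

        def roc_points(truth, log2_ratio, thresholds):
            """Per-probe ROC points; truth is 1 where the high-resolution array confirms a CNV."""
            truth = np.asarray(truth, bool)
            score = np.abs(np.asarray(log2_ratio, float))
            out = []
            for t in thresholds:
                called = score >= t
                tpr = (called & truth).sum() / max(truth.sum(), 1)
                fpr = (called & ~truth).sum() / max((~truth).sum(), 1)
                out.append((t, fpr, tpr))
            return out

        truth = [1, 1, 0, 0, 1, 0, 0, 0]                    # made-up per-probe labels
        ratio = [0.8, 0.4, 0.1, 0.3, 0.6, 0.05, 0.2, 0.15]  # made-up log2 ratios
        for t, fpr, tpr in roc_points(truth, ratio, [0.2, 0.3, 0.5]):
            print(f"threshold {t:.2f}: FPR = {fpr:.2f}, TPR = {tpr:.2f}")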

  20. Recovering Paleo-Records from Antarctic Ice-Cores by Coupling a Continuous Melting Device and Fast Ion Chromatography.

    PubMed

    Severi, Mirko; Becagli, Silvia; Traversi, Rita; Udisti, Roberto

    2015-11-17

    Recently, the increasing interest in the understanding of global climatic changes and of the natural processes related to climate has driven the development and improvement of new analytical methods for the analysis of environmental samples. The determination of trace chemical species is a useful tool in paleoclimatology, and the techniques for the analysis of ice cores have evolved during the past few years from laborious measurements on discrete samples to continuous techniques allowing higher temporal resolution, higher sensitivity and, above all, higher throughput. Two fast ion chromatographic (FIC) methods are presented. The first method was able to measure Cl(-), NO3(-) and SO4(2-) in a melter-based continuous flow system, separating the three analytes in just 1 min. The second method (called Ultra-FIC) was able to perform a single chromatographic analysis in just 30 s, and the resulting sampling resolution was 1.0 cm with a typical melting rate of 4.0 cm min(-1). Both methods combine the accuracy, precision, and low detection limits of ion chromatography with the enhanced speed and high depth resolution of continuous melting systems. Both methods have been tested and validated with the analysis of several hundred meters of different ice cores. In particular, the Ultra-FIC method was used to reconstruct the high-resolution SO4(2-) profile of the last 10,000 years for the EDML ice core, allowing the counting of the annual layers, which represents a key point in dating this kind of natural archive.

  1. Isosteric heat of hydrogen adsorption on MOFs: comparison between adsorption calorimetry, sorption isosteric method, and analytical models

    NASA Astrophysics Data System (ADS)

    Kloutse, A. F.; Zacharia, R.; Cossement, D.; Chahine, R.; Balderas-Xicohténcatl, R.; Oh, H.; Streppel, B.; Schlichtenmayer, M.; Hirscher, M.

    2015-12-01

    Isosteric heat of adsorption is an important parameter required to describe the thermal performance of adsorptive storage systems. It is most frequently calculated from adsorption isotherms measured over wide ranges of pressure and temperature, using the so-called adsorption isosteric method. Direct quantitative estimation of isosteric heats, on the other hand, is possible using the coupled calorimetric-volumetric method, which involves simultaneous measurement of heat and adsorption. In this work, we compare the isosteric heats of hydrogen adsorption on microporous materials measured by both methods. Furthermore, the experimental data are compared with the isosteric heats obtained using the modified Dubinin-Astakhov, Tóth, and Unilan adsorption analytical models to establish the reliability and limitations of simpler methods and assumptions. To this end, we measure the hydrogen isosteric heats on five prototypical metal-organic frameworks: MOF-5, Cu-BTC, Fe-BTC, MIL-53, and MOF-177 using both experimental methods. For all MOFs, we find a very good agreement between the isosteric heats measured using the calorimetric and isosteric methods throughout the range of loading studied. The models' predictions, on the other hand, deviate from both experiments depending on the MOF studied and the range of loading. At low loadings of less than 5 mol kg^-1, the isosteric heat of hydrogen adsorption decreases in the order Cu-BTC > MIL-53 > MOF-5 > Fe-BTC > MOF-177. The order of isosteric heats is coherent with the strength of hydrogen interaction revealed by previous thermal desorption spectroscopy measurements.
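
    For illustration, a minimal Python sketch of the adsorption isosteric method named above: interpolate ln P at a fixed loading across isotherms and apply the Clausius-Clapeyron relation q_st = -R d(ln P)/d(1/T). The two Langmuir-like isotherms below are synthetic, not measured data.

        import numpy as np

        R = 8.314  # J mol^-1 K^-1

        def isosteric_heat(isotherms, n_fixed):
            """q_st at fixed loading from isotherms given as {T: (n, P)} arrays."""
            inv_T, ln_P = [], []
            for T, (n, P) in isotherms.items():
                ln_P.append(np.interp(n_fixed, n, np.log(P)))  # ln P at fixed loading
                inv_T.append(1.0 / T)
            slope = np.polyfit(inv_T, ln_P, 1)[0]              # d(ln P) / d(1/T)
            return -R * slope                                  # J/mol

        n = np.linspace(0.1, 5.0, 50)                          # loading, mol/kg
        isotherms = {77.0: (n, 0.1 * n / (5.5 - n)),           # made-up isotherm data
                     87.0: (n, 0.3 * n / (5.5 - n))}
        print(isosteric_heat(isotherms, n_fixed=2.0) / 1e3, "kJ/mol")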

  2. Exact analytical solution of a classical Josephson tunnel junction problem

    NASA Astrophysics Data System (ADS)

    Kuplevakhsky, S. V.; Glukhov, A. M.

    2010-10-01

    We give an exact and complete analytical solution of the classical problem of a Josephson tunnel junction of arbitrary length W ∈ (0, ∞) in the presence of external magnetic fields and transport currents. Contrary to a widespread belief, the exact analytical solution unambiguously proves that there is no qualitative difference between so-called "small" (W ≪ 1) and "large" (W ≫ 1) junctions. Another unexpected physical implication of the exact analytical solution is the existence (in the current-carrying state) of unquantized Josephson vortices carrying fractional flux and located near one of the edges of the junction. We also refine the mathematical definition of the critical transport current.

  3. Simultaneous screening and quantification of 29 drugs of abuse in oral fluid by solid-phase extraction and ultraperformance LC-MS/MS.

    PubMed

    Badawi, Nora; Simonsen, Kirsten Wiese; Steentoft, Anni; Bernhoft, Inger Marie; Linnet, Kristian

    2009-11-01

    The European DRUID (Driving under the Influence of Drugs, Alcohol And Medicines) project calls for analysis of oral fluid (OF) samples, collected randomly and anonymously at the roadside from drivers in Denmark throughout 2008-2009. To analyze these samples we developed an ultra performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method for detection of 29 drugs and illicit compounds in OF. The drugs detected were opioids, amphetamines, cocaine, benzodiazepines, and Delta-9-tetrahydrocannabinol. Solid-phase extraction was performed with a Gilson ASPEC XL4 system equipped with Bond Elut Certify sample cartridges. OF samples (200 mg) diluted with 5 mL of ammonium acetate/methanol (vol/vol 90:10) buffer were applied to the columns and eluted with 3 mL of acetonitrile with aqueous ammonium hydroxide. Target drugs were quantified by use of a Waters ACQUITY UPLC system coupled to a Waters Quattro Premier XE triple quadrupole (positive electrospray ionization mode, multiple reaction monitoring mode). Extraction recoveries were 36%-114% for all analytes, including Delta-9-tetrahydrocannabinol and benzoylecgonine. The lower limit of quantification was 0.5 μg/kg for all analytes. Total imprecision (CV) was 5.9%-19.4%. With the use of deuterated internal standards for most compounds, the performance of the method was not influenced by matrix effects. A preliminary account of OF samples collected at the roadside showed the presence of amphetamine, cocaine, codeine, Delta-9-tetrahydrocannabinol, tramadol, and zopiclone. The UPLC-MS/MS method makes it possible to detect all 29 analytes in 1 chromatographic run (15 min), including Delta-9-tetrahydrocannabinol and benzoylecgonine, which previously have been difficult to incorporate into multicomponent methods.

  4. Analytic approximations of Von Kármán plate under arbitrary uniform pressure—equations in integral form

    NASA Astrophysics Data System (ADS)

    Zhong, XiaoXu; Liao, ShiJun

    2018-01-01

    Analytic approximations of the Von Kármán's plate equations in integral form for a circular plate under external uniform pressure to arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed for either a given external uniform pressure Q or a given central deflection, respectively. Both of them are valid for uniform pressure to arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure to arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the Von Kármán's plate equations in differential form is just a special case of the HAM for the Von Kármán's plate equations in integral form mentioned in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.

  5. Punctuated evolution and robustness in morphogenesis

    PubMed Central

    Grigoriev, D.; Reinitz, J.; Vakulenko, S.; Weber, A.

    2014-01-01

    This paper presents an analytic approach to the pattern stability and evolution problem in morphogenesis. The approach used here is based on ideas from gene and neural network theory. We assume that gene networks contain a number of small groups of genes (called hubs) controlling the morphogenesis process. Hub genes represent an important element of gene network architecture and their existence is empirically confirmed. We show that hubs can stabilize the morphogenetic pattern and accelerate morphogenesis. The hub activity exhibits an abrupt change depending on the mutation frequency. When the mutation frequency is small, these hubs suppress all mutations and gene product concentrations do not change; thus, the pattern is stable. When the environmental pressure increases and the population needs new genotypes, genetic drift and other effects increase the mutation frequency. For frequencies larger than a critical value, the hubs turn off and, as a result, many mutations can affect the phenotype. This effect can serve as an engine for evolution. We show that this engine is very effective: the evolution acceleration is an exponential function of gene redundancy. Finally, we show that the Eldredge-Gould concept of punctuated evolution results from the network architecture, which provides fast evolution, control of evolvability, and pattern robustness. To describe the effect of exponential acceleration analytically, we use mathematical methods developed recently for hard combinatorial problems, in particular for the so-called k-SAT problem, as well as numerical simulations. PMID:24996115

  6. Optical properties of electrohydrodynamic convection patterns: rigorous and approximate methods.

    PubMed

    Bohley, Christian; Heuer, Jana; Stannarius, Ralf

    2005-12-01

    We analyze the optical behavior of two-dimensionally periodic structures that occur in electrohydrodynamic convection (EHC) patterns in nematic sandwich cells. These structures are anisotropic, locally uniaxial, and periodic on the scale of micrometers. For the first time, the optics of these structures is investigated with a rigorous method. The method used for the description of the electromagnetic waves interacting with EHC director patterns is a numerical approach that discretizes directly the Maxwell equations. It works as a space-grid-time-domain method and computes electric and magnetic fields in time steps. This so-called finite-difference-time-domain (FDTD) method is able to generate the fields with arbitrary accuracy. We compare this rigorous method with earlier attempts based on ray-tracing and analytical approximations. Results of optical studies of EHC structures made earlier based on ray-tracing methods are confirmed for thin cells, when the spatial periods of the pattern are sufficiently large. For the treatment of small-scale convection structures, the FDTD method is without alternatives.
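
    For illustration, a bare-bones 1D FDTD update loop in Python (normalized units, Courant number 1), showing the leapfrog space-grid-time-domain structure described above; the authors' 2D implementation for anisotropic nematic cells is substantially more involved.

        import numpy as np

        SIZE, STEPS = 200, 400
        ez = np.zeros(SIZE)   # electric field on the grid
        hy = np.zeros(SIZE)   # magnetic field, staggered half a cell

        for t in range(STEPS):
            hy[:-1] += ez[1:] - ez[:-1]                   # update H from the curl of E
            ez[1:]  += hy[1:] - hy[:-1]                   # update E from the curl of H
            ez[50]  += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source

        print("peak field after propagation:", ez.max())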

  7. A Conserving Discretization for the Free Boundary in a Two-Dimensional Stefan Problem

    NASA Astrophysics Data System (ADS)

    Segal, Guus; Vuik, Kees; Vermolen, Fred

    1998-03-01

    The dissolution of a disk-like Al2Cu particle is considered. A characteristic property is that initially the particle has a nonsmooth boundary. The mathematical model of this dissolution process contains a description of the particle interface, of which the position varies in time. Such a model is called a Stefan problem. It is impossible to obtain an analytical solution for a general two-dimensional Stefan problem, so we use the finite element method to solve this problem numerically. First, we apply a classical moving mesh method. Computations show that after some time steps the predicted particle interface becomes very unrealistic. Therefore, we derive a new method for the displacement of the free boundary based on the balance of atoms. This method leads to good results, also for nonsmooth boundaries. Some numerical experiments are given for the dissolution of an Al2Cu particle in an Al-Cu alloy.

  8. Double Diffusive Magnetohydrodynamic (MHD) Mixed Convective Slip Flow along a Radiating Moving Vertical Flat Plate with Convective Boundary Condition

    PubMed Central

    Rashidi, Mohammad M.; Kavyani, Neda; Abelman, Shirley; Uddin, Mohammed J.; Freidoonimehr, Navid

    2014-01-01

    In this study, combined heat and mass transfer by mixed convective flow along a moving vertical flat plate with hydrodynamic slip and a thermal convective boundary condition is investigated. Using similarity variables, the governing nonlinear partial differential equations are converted into a system of coupled nonlinear ordinary differential equations. The transformed equations are then solved using a semi-numerical/analytical method called the differential transform method, and the results are compared with numerical results. Close agreement is found between the present method and the numerical method. Effects of the controlling parameters, including convective heat transfer, magnetic field, buoyancy ratio, hydrodynamic slip, mixed convection, Prandtl number and Schmidt number, are investigated on the dimensionless velocity, temperature and concentration profiles. In addition, effects of different parameters on the skin friction factor, local Nusselt number, and local Sherwood number are shown and explained through tables. PMID:25343360
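
    For illustration, a toy Python sketch of the differential transform method on y' = y, y(0) = 1, where the transformed equation yields the recurrence (k+1) Y(k+1) = Y(k) and the inverse transform rebuilds e^x; the paper's coupled boundary-layer equations lead to longer recurrences of the same kind.

        import math

        def dtm_coefficients(terms=15):
            """DTM for y' = y, y(0) = 1: the recurrence (k+1) Y(k+1) = Y(k)."""
            Y = [1.0]                      # Y(0) = y(0)
            for k in range(terms - 1):
                Y.append(Y[k] / (k + 1))   # transformed ODE solved term by term
            return Y

        def dtm_eval(Y, x):
            """Inverse transform: y(x) ~ sum_k Y(k) x^k."""
            return sum(c * x**k for k, c in enumerate(Y))

        print(dtm_eval(dtm_coefficients(), 1.0), "vs exact", math.e)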

  9. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
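
    For illustration, a Python sketch of the plain Fisher LDA step on which LDA-ITER iterates (the iterative procedure itself is not reproduced): the projection w proportional to Sw^-1 (mu1 - mu2) that best separates two sets of frames. The trajectories below are synthetic stand-ins, not simulation data.

        import numpy as np

        def fisher_lda_direction(X1, X2):
            """Fisher discriminant direction for two trajectories (frames x coordinates)."""
            m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
            # Within-class scatter: pooled scatter matrices of both trajectories.
            Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
                  + np.cov(X2, rowvar=False) * (len(X2) - 1))
            w = np.linalg.solve(Sw, m1 - m2)
            return w / np.linalg.norm(w)

        rng = np.random.default_rng(0)
        traj_a = rng.normal(0.0, 1.0, (500, 6))   # e.g., wild-type frames
        traj_b = rng.normal(0.5, 1.0, (500, 6))   # e.g., mutant frames
        w = fisher_lda_direction(traj_a, traj_b)
        print("mean separation along w:", (traj_b @ w).mean() - (traj_a @ w).mean())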

  10. Asynchronous multilevel adaptive methods for solving partial differential equations on multiprocessors - Performance results

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.

  11. Evaluation of Analytical Modeling Functions for the Phonation Onset Process.

    PubMed

    Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael

    2016-01-01

    The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
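
    For illustration, a hedged Python sketch of the recommended computation: fit a fourth-order polynomial to the onset portion of the (filtered) GAW envelope and take the VOT as the rise time between 32.2% and 67.8% of the saturation amplitude. The envelope below is synthetic, and the authors' envelope extraction and fitting interval may differ in detail.

        import numpy as np

        def vot_from_envelope(t, env, lo=0.322, hi=0.678):
            """VOT: rise time of a 4th-order polynomial fit between lo and hi of saturation."""
            fit = np.polyval(np.polyfit(t, env, 4), t)
            sat = fit.max()                          # saturation amplitude
            t_lo = t[np.argmax(fit >= lo * sat)]     # first crossing of 32.2%
            t_hi = t[np.argmax(fit >= hi * sat)]     # first crossing of 67.8%
            return t_hi - t_lo

        fs = 8000.0                                  # camera frame rate (fps)
        t = np.arange(0.0, 0.25, 1.0 / fs)
        env = 1.0 / (1.0 + np.exp(-(t - 0.1) * 80))  # synthetic onset envelope
        print(f"VOT ~ {vot_from_envelope(t, env) * 1e3:.1f} ms")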

  12. A Generalized Michaelis-Menten Equation in Protein Synthesis: Effects of Mis-Charged Cognate tRNA and Mis-Reading of Codon.

    PubMed

    Dutta, Annwesha; Chowdhury, Debashish

    2017-05-01

    The sequence of amino acid monomers in the primary structure of a protein is decided by the corresponding sequence of codons (triplets of nucleic acid monomers) on the template messenger RNA (mRNA). The polymerization of a protein, by incorporation of the successive amino acid monomers, is carried out by a molecular machine called ribosome. We develop a stochastic kinetic model that captures the possibilities of mis-reading of mRNA codon and prior mis-charging of a tRNA. By a combination of analytical and numerical methods, we obtain the distribution of the times taken for incorporation of the successive amino acids in the growing protein in this mathematical model. The corresponding exact analytical expression for the average rate of elongation of a nascent protein is a 'biologically motivated' generalization of the Michaelis-Menten formula for the average rate of enzymatic reactions. This generalized Michaelis-Menten-like formula (and the exact analytical expressions for a few other quantities) that we report here display the interplay of four different branched pathways corresponding to selection of four different types of tRNA.
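
    For reference, a short Python sketch of the classic Michaelis-Menten rate v = v_max [S] / (K_M + [S]) that the paper's elongation formula generalizes; the parameter values are arbitrary, and the four-pathway generalization itself is not reproduced.

        def michaelis_menten(S, v_max, K_M):
            """Classic MM rate: the baseline the generalized elongation formula extends."""
            return v_max * S / (K_M + S)

        # The rate saturates toward v_max as the substrate concentration grows.
        for S in (0.1, 1.0, 10.0, 100.0):
            print(S, michaelis_menten(S, v_max=20.0, K_M=5.0))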

  13. Nanoscaled aptasensors for multi-analyte sensing

    PubMed Central

    Saberian-Borujeni, Mehdi; Johari-Ahar, Mohammad; Hamzeiy, Hossein; Barar, Jaleh; Omidi, Yadollah

    2014-01-01

    Introduction: Nanoscaled aptamers (Aps), short single-stranded DNA or RNA oligonucleotides, are able to bind their specific targets with high affinity, and are therefore considered powerful diagnostic and analytical sensing tools (so-called "aptasensors"). Aptamers are selected from a random pool of oligonucleotides through a procedure known as "systematic evolution of ligands by exponential enrichment". Methods: In this work, the most recent studies in the field of aptasensors are reviewed and discussed, with a main focus on the potential of aptasensors for multi-analyte detection. Results: Due to the specific folding capability of aptamers in the presence of an analyte, aptasensors have been successfully exploited for the detection of a wide range of small and large molecules (e.g., drugs and their metabolites, toxins, and associated biomarkers of various diseases) at very low concentrations in biological fluids/samples, even in the presence of interfering species. Conclusion: Biological samples are generally complex matrices. Hence, the development of aptasensors with the capability to determine various targets simultaneously within a biological matrix remains the main challenge. To this end, the integration of key scientific domains such as bioengineering and systems biology with biomedical research is inevitable. PMID:25671177

  14. A three-dimensional analytical model to simulate groundwater flow during operation of recirculating wells

    NASA Astrophysics Data System (ADS)

    Huang, Junqi; Goltz, Mark N.

    2005-11-01

    The potential for using pairs of so-called horizontal flow treatment wells (HFTWs) to effect in situ capture and treatment of contaminated groundwater has recently been demonstrated. To apply this new technology, design engineers need to be able to simulate the relatively complex groundwater flow patterns that result from HFTW operation. In this work, a three-dimensional analytical solution for steady flow in a homogeneous, anisotropic, contaminated aquifer is developed to efficiently calculate the interflow of water circulating between a pair of HFTWs and map the spatial extent of contaminated groundwater flowing from upgradient that is captured. The solution is constructed by superposing the solutions for the flow fields resulting from operation of partially penetrating wells. The solution is used to investigate the flow resulting from operation of an HFTW well pair and to quantify how aquifer anisotropy, well placement, and pumping rate impact capture zone width and interflow. The analytical modeling method presented here provides a fast and accurate technique for representing the flow field resulting from operation of HFTW systems, and represents a tool that can be useful in designing in situ groundwater contamination treatment systems.
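
    For illustration, a simplified Python sketch of the superposition idea using steady 2D Thiem drawdowns summed over an extraction-injection pair; the paper's solution is three-dimensional and for partially penetrating wells, so this is only a planar analogue, and the reference radius R is an assumed parameter.

        import numpy as np

        def head_change(x, y, wells, T, R=1000.0):
            """Superposed steady Thiem drawdown; wells = [(xw, yw, Q)], Q > 0 extracts."""
            s = np.zeros_like(np.asarray(x, float))
            for xw, yw, Q in wells:
                r = np.maximum(np.hypot(x - xw, y - yw), 1e-6)  # avoid log at the well
                s += Q / (2.0 * np.pi * T) * np.log(R / r)
            return s

        # Extraction-injection pair mimicking recirculation between two wells.
        wells = [(-10.0, 0.0, +50.0), (10.0, 0.0, -50.0)]       # m, m, m^3/day
        x, y = np.meshgrid(np.linspace(-50, 50, 5), np.linspace(-50, 50, 5))
        print(head_change(x, y, wells, T=100.0).round(3))       # T in m^2/day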

  15. Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method

    NASA Astrophysics Data System (ADS)

    De Waal, Sybrand A.

    1996-07-01

    A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959-summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.

  16. Efficient visualization of urban spaces

    NASA Astrophysics Data System (ADS)

    Stamps, A. E.

    2012-10-01

    This chapter presents a new method for calculating efficiency and applies that method to the issues of selecting simulation media and evaluating the contextual fit of new buildings in urban spaces. The new method is called "meta-analysis". A meta-analytic review of 967 environments indicated that static color simulations are the most efficient media for visualizing urban spaces. For contextual fit, four original experiments are reported on how strongly five factors influence visual appeal of a street: architectural style, trees, height of a new building relative to the heights of existing buildings, setting back a third story, and distance. A meta-analysis of these four experiments and previous findings, covering 461 environments, indicated that architectural style, trees, and height had effects strong enough to warrant implementation, but the effects of setting back third stories and distance were too small to warrant implementation.
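
    For illustration, a minimal Python sketch of the fixed-effect inverse-variance pooling at the core of a meta-analysis; the per-study effect sizes and variances below are made up, and the chapter's specific efficiency calculation is not reproduced.

        import math

        def fixed_effect_pool(effects, variances):
            """Inverse-variance weighted pooled effect and its standard error."""
            weights = [1.0 / v for v in variances]
            pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
            return pooled, math.sqrt(1.0 / sum(weights))

        effects = [0.42, 0.55, 0.38, 0.60]        # hypothetical per-study effects
        variances = [0.010, 0.020, 0.015, 0.025]  # hypothetical sampling variances
        pooled, se = fixed_effect_pool(effects, variances)
        print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")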

  17. A surface plasmon resonance based biochip for the detection of patulin toxin

    NASA Astrophysics Data System (ADS)

    Pennacchio, Anna; Ruggiero, Giuseppe; Staiano, Maria; Piccialli, Gennaro; Oliviero, Giorgia; Lewkowicz, Aneta; Synak, Anna; Bojarski, Piotr; D'Auria, Sabato

    2014-08-01

    Patulin is a toxic secondary metabolite of a number of fungal species belonging to the genera Penicillium and Aspergillus. One important aspect of patulin toxicity in vivo is injury to the gastrointestinal tract, including ulceration and inflammation of the stomach and intestine. Recently, patulin has been shown to be genotoxic by causing oxidative damage to DNA, and oxidative DNA base modifications have been considered to play a role in mutagenesis and cancer initiation. Conventional analytical methods for patulin detection involve chromatographic analyses, such as HPLC, GC and, more recently, techniques such as LC/MS and GC/MS. All of these methods require extensive protocols and expensive analytical instrumentation. In this work, the conjugation of a new derivative of patulin to bovine serum albumin for the production of polyclonal antibodies is described, and an innovative competitive immunoassay for the detection of patulin is presented. Experimentally, an important part of the detection method is based on the optical technique called surface plasmon resonance (SPR). Laser-beam-induced interactions between probe and target molecules in the vicinity of the gold surface of the biochip lead to a shift in resonance conditions and consequently to a slight but easily detectable change in reflectivity.

  18. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
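
    For illustration, a hedged 1D Python sketch of the NP Windows idea with linear interpolation: the values of each linear segment between consecutive samples are uniformly distributed over the segment's range, so the PDF estimate averages one uniform density per segment. The signal below is a toy example.

        import numpy as np

        def np_windows_pdf_1d(samples, grid):
            """NP-Windows-style PDF estimate for a 1D signal under linear interpolation."""
            samples = np.asarray(samples, float)
            grid = np.asarray(grid, float)
            pdf = np.zeros_like(grid)
            for a, b in zip(samples[:-1], samples[1:]):
                lo, hi = min(a, b), max(a, b)
                if hi - lo < 1e-12:
                    continue                               # flat segment: near-delta, skipped here
                pdf += ((grid >= lo) & (grid <= hi)) / (hi - lo)
            return pdf / (len(samples) - 1)

        signal = [0.0, 1.0, 0.5, 2.0, 1.5]
        grid = np.linspace(0.0, 2.0, 9)
        print(np_windows_pdf_1d(signal, grid).round(3))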

  19. A DFFD simulation method combined with the spectral element method for solid-fluid-interaction problems

    NASA Astrophysics Data System (ADS)

    Chen, Li-Chieh; Huang, Mei-Jiau

    2017-02-01

    A 2D simulation method for a rigid body moving in an incompressible viscous fluid is proposed. It combines one of the immersed-boundary methods, the DFFD (direct forcing fictitious domain) method with the spectral element method; the former is employed for efficiently capturing the two-way FSI (fluid-structure interaction) and the geometric flexibility of the latter is utilized for any possibly co-existing stationary and complicated solid or flow boundary. A pseudo body force is imposed within the solid domain to enforce the rigid body motion and a Lagrangian mesh composed of triangular elements is employed for tracing the rigid body. In particular, a so called sub-cell scheme is proposed to smooth the discontinuity at the fluid-solid interface and to execute integrations involving Eulerian variables over the moving-solid domain. The accuracy of the proposed method is verified through an observed agreement of the simulation results of some typical flows with analytical solutions or existing literatures.

  20. Designing a more efficient, effective and safe Medical Emergency Team (MET) service using data analysis

    PubMed Central

    Bilgrami, Irma; Bain, Christopher; Webb, Geoffrey I.; Orosz, Judit; Pilcher, David

    2017-01-01

    Introduction: Hospitals have seen a rise in Medical Emergency Team (MET) reviews. We hypothesised that the commonest MET calls result in similar treatments. Our aim was to design a pre-emptive management algorithm that allowed direct institution of treatment to patients without having to wait for attendance of the MET team, and to model its potential impact on MET call incidence and patient outcomes. Methods: Data was extracted for all MET calls from the hospital database. Association rule data mining techniques were used to identify the most common combinations of MET call causes, outcomes and therapies. Results: There were 13,656 MET calls during the 34-month study period in 7936 patients. The most common MET call was for hypotension [31%, (2459/7936)]. These MET calls were strongly associated with the immediate administration of intravenous fluid (70% [1714/2459] v 13% [739/5477] p<0.001), unless the patient was located on a respiratory ward (adjusted OR 0.41 [95%CI 0.25–0.67] p<0.001), had a cardiac cause for admission (adjusted OR 0.61 [95%CI 0.50–0.75] p<0.001) or was under the care of the heart failure team (adjusted OR 0.29 [95%CI 0.19–0.42] p<0.001). Modelling the effect of a pre-emptive management algorithm for immediate fluid administration without MET activation, on data from a test period of 24 months following the study period, suggested it would lead to a 68.7% (2541/3697) reduction in MET calls for hypotension and a 19.6% (2541/12938) reduction in total METs without adverse effects on patients. Conclusion: Routinely collected data and analytic techniques can be used to develop a pre-emptive management algorithm to administer intravenous fluid therapy to a specific group of hypotensive patients without the need to initiate a MET call. This could lead both to earlier treatment for the patient and to fewer total MET calls. PMID:29281665
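
    For illustration, a minimal Python sketch of the support and confidence computations behind association rule mining, run on toy MET-call "transactions"; the study's actual mining pipeline and variables are not reproduced.

        def support(transactions, itemset):
            """Fraction of transactions containing every item in itemset."""
            itemset = set(itemset)
            return sum(itemset <= t for t in transactions) / len(transactions)

        def confidence(transactions, antecedent, consequent):
            """Estimated P(consequent | antecedent)."""
            joint = support(transactions, set(antecedent) | set(consequent))
            return joint / support(transactions, antecedent)

        # Toy records: trigger, ward type, therapy given.
        met_calls = [
            {"hypotension", "medical_ward", "iv_fluid"},
            {"hypotension", "iv_fluid"},
            {"hypotension", "respiratory_ward"},
            {"tachycardia", "medical_ward"},
        ]
        print("support:", support(met_calls, {"hypotension", "iv_fluid"}))
        print("confidence:", confidence(met_calls, {"hypotension"}, {"iv_fluid"}))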

  1. Happy software developers solve problems better: psychological measurements in empirical software engineering

    PubMed Central

    Wang, Xiaofeng; Abrahamsson, Pekka

    2014-01-01

    For more than thirty years, it has been claimed that a way to improve software developers’ productivity and software quality is to focus on people and to provide incentives to make developers satisfied and happy. This claim has rarely been verified in software engineering research, which faces an additional challenge in comparison to more traditional engineering fields: software development is an intellectual activity and is dominated by often-neglected human factors (called human aspects in software engineering research). Among the many skills required for software development, developers must possess high analytical problem-solving skills and creativity for the software construction process. According to psychology research, affective states—emotions and moods—deeply influence the cognitive processing abilities and performance of workers, including creativity and analytical problem solving. Nonetheless, little research has investigated the correlation between the affective states, creativity, and analytical problem-solving performance of programmers. This article echoes the call to employ psychological measurements in software engineering research. We report a study with 42 participants to investigate the relationship between the affective states, creativity, and analytical problem-solving skills of software developers. The results offer support for the claim that happy developers are indeed better problem solvers in terms of their analytical abilities. The following contributions are made by this study: (1) providing a better understanding of the impact of affective states on the creativity and analytical problem-solving capacities of developers, (2) introducing and validating psychological measurements, theories, and concepts of affective states, creativity, and analytical-problem-solving skills in empirical software engineering, and (3) raising the need for studying the human factors of software engineering by employing a multidisciplinary viewpoint. PMID:24688866

  2. Happy software developers solve problems better: psychological measurements in empirical software engineering.

    PubMed

    Graziotin, Daniel; Wang, Xiaofeng; Abrahamsson, Pekka

    2014-01-01

    For more than thirty years, it has been claimed that a way to improve software developers' productivity and software quality is to focus on people and to provide incentives to make developers satisfied and happy. This claim has rarely been verified in software engineering research, which faces an additional challenge in comparison to more traditional engineering fields: software development is an intellectual activity and is dominated by often-neglected human factors (called human aspects in software engineering research). Among the many skills required for software development, developers must possess high analytical problem-solving skills and creativity for the software construction process. According to psychology research, affective states-emotions and moods-deeply influence the cognitive processing abilities and performance of workers, including creativity and analytical problem solving. Nonetheless, little research has investigated the correlation between the affective states, creativity, and analytical problem-solving performance of programmers. This article echoes the call to employ psychological measurements in software engineering research. We report a study with 42 participants to investigate the relationship between the affective states, creativity, and analytical problem-solving skills of software developers. The results offer support for the claim that happy developers are indeed better problem solvers in terms of their analytical abilities. The following contributions are made by this study: (1) providing a better understanding of the impact of affective states on the creativity and analytical problem-solving capacities of developers, (2) introducing and validating psychological measurements, theories, and concepts of affective states, creativity, and analytical-problem-solving skills in empirical software engineering, and (3) raising the need for studying the human factors of software engineering by employing a multidisciplinary viewpoint.

  3. An explicit closed-form analytical solution for European options under the CGMY model

    NASA Astrophysics Data System (ADS)

    Chen, Wenting; Du, Meiyu; Xu, Xiang

    2017-01-01

    In this paper, we consider the analytical pricing of European path-independent options under the CGMY model, which is a particular type of pure jump Lévy process, and agrees well with many observed properties of real market data by allowing the diffusions and jumps to have both finite and infinite activity and variation. It is shown that, under this model, the option price is governed by a fractional partial differential equation (FPDE) with both left-side and right-side spatial-fractional derivatives. In comparison to derivatives of integer order, fractional derivatives at a point not only involve properties of the function at that particular point, but also the information of the function in a certain subset of the entire domain of definition. This "globalness" of the fractional derivatives adds an additional degree of difficulty when either analytical methods or numerical solutions are attempted. Albeit difficult, we still have managed to derive an explicit closed-form analytical solution for European options under the CGMY model. Based on our solution, the asymptotic behaviors of the option price and the put-call parity under the CGMY model are further discussed. Practically, a reliable numerical evaluation technique for the current formula is proposed. With the numerical results, some analyses of the impacts of the four key parameters of the CGMY model on European option prices are also provided.

  4. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish modal models satisfying such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  5. Multi-parameter flow cytometry as a process analytical technology (PAT) approach for the assessment of bacterial ghost production.

    PubMed

    Langemann, Timo; Mayr, Ulrike Beate; Meitz, Andrea; Lubitz, Werner; Herwig, Christoph

    2016-01-01

    Flow cytometry (FCM) is a tool for the analysis of single-cell properties in a cell suspension. In this contribution, we present an improved FCM method for the assessment of E-lysis in Enterobacteriaceae. The result of the E-lysis process is empty bacterial envelopes, called bacterial ghosts (BGs), which constitute potential products in the pharmaceutical field. BGs have reduced light scattering properties when compared with intact cells. In combination with viability information obtained from staining samples with the membrane-potential-sensitive fluorescent dye bis-(1,3-dibutylbarbituric acid) trimethine oxonol (DiBAC4(3)), the presented method allows differentiation between populations of viable cells, dead cells, and BGs. Using a second fluorescent dye, RH414, as a membrane marker, non-cellular background was excluded from the data, which greatly improved the quality of the results. Using true volumetric absolute counting, the FCM data correlated well with cell count data obtained from colony-forming units (CFU) for viable populations. Applicability of the method to several Enterobacteriaceae (different Escherichia coli strains, Salmonella typhimurium, Shigella flexneri 2a) could be shown. The method was validated as a resilient process analytical technology (PAT) tool for the assessment of E-lysis and for particle counting during 20-l batch processes for the production of Escherichia coli Nissle 1917 BGs.

  6. Seismic instantaneous frequency extraction based on the SST-MAW

    NASA Astrophysics Data System (ADS)

    Liu, Naihao; Gao, Jinghuai; Jiang, Xiudi; Zhang, Zhuosheng; Wang, Ping

    2018-06-01

    The instantaneous frequency (IF) extraction of seismic data has been widely applied in seismic exploration for decades, for example in detecting seismic absorption and characterizing depositional thicknesses. The Hilbert transform (HT), the traditional method based on complex-trace analysis, can extract the IF directly but is susceptible to noise. In this paper, a robust approach based on the synchrosqueezing transform (SST) is proposed to extract the IF from seismic data. In this process, a novel analytical wavelet, called the modified analytical wavelet (MAW) and derived from the three-parameter wavelet, is developed and chosen as the basic wavelet. After transforming the seismic signal into a sparse time-frequency domain via the SST taking the MAW (SST-MAW), an adaptive threshold is introduced to improve the noise immunity and accuracy of the IF extraction in a noisy environment. Note that the SST-MAW reconstructs a complex trace to extract the seismic IF. To demonstrate the effectiveness of the proposed method, we apply the SST-MAW to synthetic data and field seismic data. Numerical experiments suggest that the proposed procedure yields higher resolution and better anti-noise performance than conventional IF extraction methods based on the HT and the continuous wavelet transform. Moreover, geological features (such as channels) are well characterized, which is insightful for further oil/gas reservoir identification.
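
    For comparison, a short Python sketch of the traditional HT route the paper benchmarks against: form the analytic signal with scipy.signal.hilbert, unwrap its phase, and take the IF as the scaled phase derivative. The chirp below is synthetic, and the SST-MAW itself is not reproduced.

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_frequency(x, fs):
            """IF in Hz: (1 / 2 pi) * d(phase)/dt of the analytic signal x + i HT(x)."""
            phase = np.unwrap(np.angle(hilbert(x)))
            return np.diff(phase) * fs / (2.0 * np.pi)

        fs = 500.0                                        # sampling rate, Hz
        t = np.arange(0.0, 1.0, 1.0 / fs)
        x = np.cos(2 * np.pi * (10.0 * t + 15.0 * t**2))  # chirp: f(t) = 10 + 30 t Hz
        f_inst = instantaneous_frequency(x, fs)
        print(f_inst[50], f_inst[400])                    # ~13 Hz and ~34 Hz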

  7. Quantitative Analysis of Fullerene Nanomaterials in Environmental Systems: A Critical Review

    PubMed Central

    Isaacson, Carl W.; Kleber, Markus; Field, Jennifer A.

    2009-01-01

    The increasing production and use of fullerene nanomaterials has led to calls for more information regarding the potential impacts that releases of these materials may have on human and environmental health. Fullerene nanomaterials, which are comprised of both fullerenes and surface-functionalized fullerenes, are used in electronic, optic, medical and cosmetic applications. Measuring fullerene nanomaterial concentrations in natural environments is difficult because they exhibit a duality of physical and chemical characteristics as they transition from hydrophobic to polar forms upon exposure to water. In aqueous environments, this is expressed as their tendency to initially (i) self-assemble into aggregates of appreciable size and hydrophobicity, and subsequently (ii) interact with the surrounding water molecules and other chemical constituents in natural environments, thereby acquiring negative surface charge. Fullerene nanomaterials may therefore defeat any single analytical method that is applied with the assumption that fullerenes have but one defining characteristic (e.g., hydrophobicity). (1) We find that analytical procedures are needed to account for the potentially transitory nature of fullerenes in natural environments through the use of approaches that provide chemically explicit information, including molecular weight and the number and identity of surface functional groups. (2) We suggest that sensitive and mass-selective detection, such as that offered by mass spectrometry when combined with optimized extraction procedures, offers the greatest potential to achieve this goal. (3) With this review, we show that significant improvements in analytical rigor would result from an increased availability of well-characterized authentic standards, reference materials, and isotopically labeled internal standards. Finally, the benefits of quantitative and validated analytical methods for advancing the knowledge of fullerene occurrence, fate, and behavior are indicated. PMID:19764203

  8. The iFlow modelling framework v2.4: a modular idealized process-based model for flow and transport in estuaries

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Brouwer, Ronald L.; Schuttelaars, Henk M.; Schramkowski, George P.

    2017-07-01

    The iFlow modelling framework is a width-averaged model for the systematic analysis of the water motion and sediment transport processes in estuaries and tidal rivers. The distinctive solution method, a mathematical perturbation method, used in the model allows for identification of the effect of individual physical processes on the water motion and sediment transport and study of the sensitivity of these processes to model parameters. This distinction between processes provides a unique tool for interpreting and explaining hydrodynamic interactions and sediment trapping. iFlow also includes a large number of options to configure the model geometry and multiple choices of turbulence and salinity models. Additionally, the model contains auxiliary components, including one that facilitates easy and fast sensitivity studies. iFlow has a modular structure, which makes it easy to include, exclude or change individual model components, called modules. Depending on the required functionality for the application at hand, modules can be selected to construct anything from very simple quasi-linear models to rather complex models involving multiple non-linear interactions. This way, the model complexity can be adjusted to the application. Once the modules containing the required functionality are selected, the underlying model structure automatically ensures modules are called in the correct order. The model inserts iteration loops over groups of modules that are mutually dependent. iFlow also ensures a smooth coupling of modules using analytical and numerical solution methods. This way the model combines the speed and accuracy of analytical solutions with the versatility of numerical solution methods. In this paper we present the modular structure, solution method and two examples of the use of iFlow. In the examples we present two case studies, of the Yangtze and Scheldt rivers, demonstrating how iFlow facilitates the analysis of model results, the understanding of the underlying physics and the testing of parameter sensitivity. A comparison of the model results to measurements shows a good qualitative agreement. iFlow is written in Python and is available as open source code under the LGPL license.

  9. Use of the Threshold of Toxicological Concern (TTC) approach for deriving target values for drinking water contaminants.

    PubMed

    Mons, M N; Heringa, M B; van Genderen, J; Puijker, L M; Brand, W; van Leeuwen, C J; Stoks, P; van der Hoek, J P; van der Kooij, D

    2013-03-15

    Ongoing pollution and improving analytical techniques reveal more and more anthropogenic substances in drinking water sources, and incidentally in treated water as well. In fact, complete absence of any trace pollutant in treated drinking water is an illusion, as current analytical techniques are capable of detecting very low concentrations. Most of the substances detected lack toxicity data from which to derive safe levels and have not yet been regulated. Although the concentrations in treated water usually do not have adverse health effects, their presence is still undesired because of customer perception. This leads to the question of how sensitive analytical methods need to become for water quality screening, at what levels water suppliers need to take action, and how effective treatment methods need to be designed to remove contaminants sufficiently. Therefore, in the Netherlands a clear and consistent approach called 'Drinking Water Quality for the 21st century (Q21)' has been developed within the joint research program of the drinking water companies. Target values for anthropogenic drinking water contaminants were derived by using the recently introduced Threshold of Toxicological Concern (TTC) approach. The target values for individual genotoxic and steroid endocrine chemicals were set at 0.01 μg/L. For all other organic chemicals the target values were set at 0.1 μg/L. The target values for the total sum of genotoxic chemicals, the total sum of steroid hormones and the total sum of all other organic compounds were set at 0.01, 0.01 and 1.0 μg/L, respectively. The Dutch Q21 approach is further supplemented by the standstill principle and effect-directed testing. The approach is helpful in defining the goals and limits of future treatment process designs and of analytical methods to further improve and ensure the quality of drinking water, without going to unnecessary lengths. Copyright © 2013 Elsevier Ltd. All rights reserved.
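
    The screen implied by these target values is simple enough to state in code. A minimal sketch using the published Q21 numbers; the data layout and function name are illustrative, not part of the Q21 programme:

      # Q21 target values (ug/L) from the abstract above.
      TARGETS     = {"genotoxic": 0.01, "steroid": 0.01, "other": 0.1}   # per substance
      SUM_TARGETS = {"genotoxic": 0.01, "steroid": 0.01, "other": 1.0}   # per class sum

      def q21_screen(measured):
          """measured: list of (name, class, concentration in ug/L) tuples."""
          flags = []
          totals = {cls: 0.0 for cls in TARGETS}
          for name, cls, conc in measured:
              totals[cls] += conc
              if conc > TARGETS[cls]:
                  flags.append(f"{name}: {conc} ug/L exceeds {TARGETS[cls]} ug/L")
          for cls, total in totals.items():
              if total > SUM_TARGETS[cls]:
                  flags.append(f"sum({cls}) = {total:.3f} ug/L exceeds {SUM_TARGETS[cls]} ug/L")
          return flags

      print(q21_screen([("substance A", "other", 0.25),
                        ("substance B", "genotoxic", 0.004)]))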

  10. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.

  11. The Geek Perspective: Answering the Call for Advanced Technology in Research Inquiry Related to Pediatric Brain Injury and Motor Disability.

    PubMed

    Wininger, Michael; Pidcoe, Peter

    2017-10-01

    The Academy of Pediatric Physical Therapy Research Summit IV issued a Call to Action for community-wide intensification of a research enterprise in inquiries related to pediatric brain injury and motor disability by way of technological integration. But the barriers can seem high, and the pathways to integrative clinical research can seem poorly marked. Here, we answer the Call by providing a framework for 3 objectives: (1) instrumentation, (2) biometrics and study design, and (3) data analytics. We identify emergent cases where this Call has been answered and advocate for others to echo the Call both in highly visible physical therapy venues and in forums where the audience is diverse.

  12. Continuum description of solvent dielectrics in molecular-dynamics simulations of proteins

    NASA Astrophysics Data System (ADS)

    Egwolf, Bernhard; Tavan, Paul

    2003-02-01

    We present a continuum approach for efficient and accurate calculation of reaction field forces and energies in classical molecular-dynamics (MD) simulations of proteins in water. The derivation proceeds in two steps. First, we reformulate the electrostatics of an arbitrarily shaped molecular system, which contains partially charged atoms and is embedded in a dielectric continuum representing the water. A so-called fuzzy partition is used to exactly decompose the system into partial atomic volumes. The reaction field is expressed by means of dipole densities localized at the atoms. Since these densities cannot be calculated analytically for general systems, we introduce and carefully analyze a set of approximations in a second step. These approximations allow us to represent the dipole densities by simple dipoles localized at the atoms. We derive a system of linear equations for these dipoles, which can be solved numerically by iteration. After determining the two free parameters of our approximate method we check its quality by comparisons (i) with an analytical solution, which is available for a perfectly spherical system, (ii) with forces obtained from a MD simulation of a soluble protein in water, and (iii) with reaction field energies of small molecules calculated by a finite difference method.
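
    The last step described here, a linear system for the atomic dipoles solved numerically by iteration, has the generic fixed-point form p = b + Kp. A toy sketch of such an iterative solve (random coupling matrix and source vector, standing in for the paper's actual operators):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 12                                  # number of atomic dipole unknowns (toy)
      K = 0.1 * rng.standard_normal((n, n))   # dipole-dipole coupling (toy, contractive)
      b = rng.standard_normal(n)              # field of the partial charges (toy)

      p = np.zeros(n)
      for iteration in range(200):
          p_new = b + K @ p                   # update every dipole from all others
          if np.max(np.abs(p_new - p)) < 1e-12:
              break
          p = 0.5 * p + 0.5 * p_new           # damped update aids convergence

      # the fixed point solves (I - K) p = b
      print(iteration, np.allclose(p, np.linalg.solve(np.eye(n) - K, b), atol=1e-8))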

  13. Higher order alchemical derivatives from coupled perturbed self-consistent field theory.

    PubMed

    Lesiuk, Michał; Balawender, Robert; Zachara, Janusz

    2012-01-21

    We present an analytical approach to treat higher order derivatives of Hartree-Fock (HF) and Kohn-Sham (KS) density functional theory energy in the Born-Oppenheimer approximation with respect to the nuclear charge distribution (so-called alchemical derivatives). Modified coupled perturbed self-consistent field theory is used to calculate the response of the molecular system to the applied perturbation. Working equations for the second and the third derivatives of HF/KS energy are derived. Similarly, analytical forms of the first and second derivatives of orbital energies are reported. The second derivative of Kohn-Sham energy and up to the third derivative of Hartree-Fock energy with respect to the nuclear charge distribution were calculated. Some issues of practical calculations, in particular the dependence of the basis set and Becke weighting functions on the perturbation, are considered. For selected series of isoelectronic molecules, values of the available alchemical derivatives were computed and a Taylor series expansion was used to predict the energies of the "surrounding" molecules. The predicted energies are in unexpectedly good agreement with the ones computed using HF/KS methods. The presented method allows one to predict orbital energies with an error of less than 1%, and even smaller for valence orbitals. © 2012 American Institute of Physics.
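
    The prediction step is a plain Taylor expansion in the nuclear-charge perturbation, E(Z0 + dZ) ≈ E0 + E'dZ + E''dZ²/2 + E'''dZ³/6. A minimal sketch; all numbers are illustrative placeholders, not values from the paper:

      def predict_energy(E0, dE, d2E, d3E, dZ):
          """Third-order alchemical Taylor estimate of a neighboring molecule's energy."""
          return E0 + dE * dZ + d2E * dZ**2 / 2.0 + d3E * dZ**3 / 6.0

      # reference energy and derivatives w.r.t. a nuclear-charge perturbation (toy values)
      print(predict_energy(E0=-40.2, dE=-14.7, d2E=-1.9, d3E=0.05, dZ=1.0))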

  14. Analytical dual-energy microtomography: A new method for obtaining three-dimensional mineral phase images and its application to Hayabusa samples

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, A.; Nakano, T.; Uesugi, K.; Uesugi, M.; Takeuchi, A.; Suzuki, Y.; Noguchi, R.; Matsumoto, T.; Matsuno, J.; Nagano, T.; Imai, Y.; Nakamura, T.; Ogami, T.; Noguchi, T.; Abe, M.; Yada, T.; Fujimura, A.

    2013-09-01

    We developed a novel technique called "analytical dual-energy microtomography" that uses the linear attenuation coefficients (LACs) of minerals at two different X-ray energies to nondestructively obtain three-dimensional (3D) images of mineral distribution in materials such as rock specimens. The two energies are above and below the absorption edge energy of an abundant element, which we call the "index element". The chemical compositions of minerals forming solid solution series can also be measured. The optimal size of a sample is of the order of the inverse of the LAC values at the X-ray energies used. We used synchrotron-based microtomography with an effective spatial resolution of >200 nm to apply this method to small particles (30-180 μm) collected from the surface of asteroid 25143 Itokawa by the Hayabusa mission of the Japan Aerospace Exploration Agency (JAXA). A 3D distribution of the minerals was successively obtained by imaging the samples at X-ray energies of 7 and 8 keV, using Fe as the index element (the K-absorption edge of Fe is 7.11 keV). The optimal sample size in this case is of the order of 50 μm. The chemical compositions of the minerals, including the Fe/Mg ratios of ferromagnesian minerals and the Na/Ca ratios of plagioclase, were measured. This new method is potentially applicable to other small samples such as cosmic dust, lunar regolith, cometary dust (recovered by the Stardust mission of the National Aeronautics and Space Administration [NASA]), and samples from extraterrestrial bodies (those from future sample return missions such as the JAXA Hayabusa2 mission and the NASA OSIRIS-REx mission), although limitations exist for unequilibrated samples. Further, this technique is generally suited for studying materials in multicomponent systems with multiple phases across several research fields.
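
    The classification logic is compact: each voxel yields a pair of LACs at the two energies bracketing the index element's absorption edge, and minerals separate as clusters in that two-dimensional space. A toy sketch; the reference LAC pairs below are invented placeholders, not calibrated values:

      import numpy as np

      REF = {                      # mineral -> (LAC @ 7 keV, LAC @ 8 keV) in 1/cm (toy)
          "olivine":     (80.0, 190.0),
          "plagioclase": (40.0,  45.0),
          "troilite":    (150.0, 330.0),
      }

      def classify_voxel(lac7, lac8):
          """Assign the mineral whose reference LAC pair is nearest."""
          names = list(REF)
          pairs = np.array([REF[n] for n in names])
          distances = np.linalg.norm(pairs - np.array([lac7, lac8]), axis=1)
          return names[int(np.argmin(distances))]

      print(classify_voxel(78.0, 185.0))   # -> "olivine"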

  15. Sensory politics: The tug-of-war between potability and palatability in municipal water production.

    PubMed

    Spackman, Christy; Burlingame, Gary A

    2018-06-01

    Sensory information signaled the acceptability of water for consumption for lay and professional people into the early twentieth century. Yet as the twentieth century progressed, professional efforts to standardize water-testing methods increasingly excluded aesthetic information, preferring to rely on the objectivity of analytic information. Despite some highly publicized exceptions, consumer complaints remain peripheral to the making and regulating of drinking water. This exclusion is often attributed to the unreliability of the human senses in detecting danger. However, technical discussions among water professionals during the twentieth century suggest that this exclusion is actually due to sensory politics, the institutional and regulatory practices of inclusion or exclusion of sensory knowledge from systems of action. Water workers developed and turned to standardized analytical methods for detecting chemical and microbiological contaminants, and more recently sensory contaminants, a process that attempted to mitigate the unevenness of human sensing. In so doing, they created regimes of perception that categorized consumer sensory knowledge as aesthetic. By siloing consumers' sensory knowledge about water quality into the realm of the aesthetic instead of accommodating it in the analytic, the regimes of perception implemented during the twentieth century to preserve health have marginalized subjective experiences. Discounting the human experience with municipal water as irrelevant to its quality, control and regulation is out of touch with its intended use as an ingestible, and calls for new practices that engage consumers as valuable participants.

  16. Metal-amplified Density Assays, (MADAs), including a Density-Linked Immunosorbent Assay (DeLISA).

    PubMed

    Subramaniam, Anand Bala; Gonidec, Mathieu; Shapiro, Nathan D; Kresse, Kayleigh M; Whitesides, George M

    2015-02-21

    This paper reports the development of Metal-amplified Density Assays, or MADAs - a method of conducting quantitative or multiplexed assays, including immunoassays, by using Magnetic Levitation (MagLev) to measure metal-amplified changes in the density of beads labeled with biomolecules. The binding of target analytes (i.e., proteins, antibodies, antigens) to complementary ligands immobilized on the surface of the beads, followed by a chemical amplification of the binding in a form that results in a change in the density of the beads (achieved by using gold nanoparticle-labeled biomolecules, and electroless deposition of gold or silver), translates analyte binding events into changes in density measurable using MagLev. A minimal model based on diffusion-limited growth of hemispherical nuclei on a surface reproduces the dynamics of the assay. A MADA - when performed with antigens and antibodies - is called a Density-Linked Immunosorbent Assay, or DeLISA. Two immunoassays provided a proof of principle: a competitive quantification of the concentration of neomycin in whole milk, and a multiplexed detection of antibodies against Hepatitis C virus NS3 protein and syphilis T. pallidum p47 protein in serum. MADAs, including DeLISAs, require, besides the requisite biomolecules and amplification reagents, minimal specialized equipment (two permanent magnets, a ruler or a capillary with calibrated length markings) and no electrical power to obtain a quantitative readout of analyte concentration. With further development, the method may be useful in resource-limited or point-of-care settings.

  17. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    NASA Astrophysics Data System (ADS)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of the RS region for a nonconvex penalty is a significant advantage that indicates the region of smooth landscape of the optimization problem. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of the ℓ1-based method.
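
    The nonlinearity that distinguishes SCAD from the ℓ1 penalty is its elementwise thresholding rule (Fan and Li's estimator), which an AMP iteration applies at every step. A sketch of that rule alone, with the customary a = 3.7, not of the full SCAD-AMP algorithm:

      import numpy as np

      def scad_threshold(z, lam, a=3.7):
          """SCAD thresholding of z at level lam (requires a > 2)."""
          z = np.asarray(z, dtype=float)
          out = np.where(np.abs(z) <= 2 * lam,
                         np.sign(z) * np.maximum(np.abs(z) - lam, 0.0),  # soft threshold
                         z)
          mid = (np.abs(z) > 2 * lam) & (np.abs(z) <= a * lam)           # interpolating zone
          out[mid] = ((a - 1) * z[mid] - np.sign(z[mid]) * a * lam) / (a - 2)
          return out                      # inputs with |z| > a*lam pass through unchanged

      print(scad_threshold([-5.0, -1.5, 0.3, 2.5, 8.0], lam=1.0))

    Unlike soft thresholding, large coefficients pass through without shrinkage, which is the source of SCAD's reduced estimation bias and of the nonconvexity analyzed in the paper.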

  18. Non-idealities in the 3ω method for thermal characterization in the low- and high-frequency regimes

    NASA Astrophysics Data System (ADS)

    Jaber, Wassim; Chapuis, Pierre-Olivier

    2018-04-01

    This work is devoted to analytical and numerical studies of diffusive heat conduction in configurations considered in 3ω experiments, which aim at measuring the thermal conductivity of materials. The widespread 2D analytical model assumes infinite media and translational invariance, a situation which cannot be met in practice in numerous cases due to the constraints in low-dimensional materials and systems. We investigate how thermal boundary resistance between the heating wire and the sample, native oxide, and the heating wire shape affect the temperature fields. 3D finite element modelling is also performed to account for the effect of the bonding pads and the 3D heat spreading down to a typical package. Emphasis is placed on the low-frequency regime, which is less well known than the so-called slope regime. These results will serve as guides for the design of ideal experiments where the 2D model can be applied and for the analyses of non-ideal ones.

  19. Monitoring Cosmic Radiation Risk: Comparisons between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-01-01

    Acronyms defined in the report: PARMA = PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE = Predictive Code for Aircrew Radiation Exposure; PHITS = Particle and Heavy Ion Transport code System. Excerpt: "The radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the... same dose equivalent coefficients from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA)..."

  20. Monitoring Cosmic Radiation Risk: Comparisons Between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-07-05

    Acronyms defined in the report: PARMA = PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE = Predictive Code for Aircrew Radiation Exposure; PHITS = Particle and Heavy Ion Transport code System. Excerpt: "The radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input... same dose equivalent coefficients from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA)..."

  1. The effect on cadaver blood DNA identification by the use of targeted and whole body post-mortem computed tomography angiography.

    PubMed

    Rutty, Guy N; Barber, Jade; Amoroso, Jasmin; Morgan, Bruno; Graham, Eleanor A M

    2013-12-01

    Post-mortem computed tomography angiography (PMCTA) involves the injection of contrast agents. This could have both a dilution effect on biological fluid samples and could affect subsequent post-contrast analytical laboratory processes. We undertook a small sample study of 10 targeted and 10 whole body PMCTA cases to consider whether or not these two methods of PMCTA could affect post-PMCTA cadaver blood based DNA identification. We used standard methodology to examine DNA from blood samples obtained before and after the PMCTA procedure. We illustrate that neither of these PMCTA methods had an effect on the alleles called following short tandem repeat based DNA profiling, and therefore the ability to undertake post-PMCTA blood based DNA identification.

  2. Quality Tetrahedral Mesh Smoothing via Boundary-Optimized Delaunay Triangulation

    PubMed Central

    Gao, Zhanheng; Yu, Zeyun; Holst, Michael

    2012-01-01

    Despite its great success in improving the quality of a tetrahedral mesh, the original optimal Delaunay triangulation (ODT) is designed to move only inner vertices and thus cannot handle input meshes containing “bad” triangles on boundaries. In the current work, we present an integrated approach called boundary-optimized Delaunay triangulation (B-ODT) to smooth (improve) a tetrahedral mesh. In our method, both inner and boundary vertices are repositioned by analytically minimizing the error between a paraboloid function and its piecewise linear interpolation over the neighborhood of each vertex. In addition to the guaranteed volume-preserving property, the proposed algorithm can be readily adapted to preserve sharp features in the original mesh. A number of experiments are included to demonstrate the performance of our method. PMID:23144522

  3. Electronic properties of a molecular system with Platinum

    NASA Astrophysics Data System (ADS)

    Ojeda, J. H.; Medina, F. G.; Becerra-Alonso, David

    2017-10-01

    The electronic properties of a finite homogeneous molecule called Trans-platinum-linked oligo(tetraethenylethenes) are studied. This system is composed of individual units such as benzene rings, platinum, phosphorus and sulfur. Electron transport through this system is studied by placing the molecule between metal contacts to control the current through the molecular system. We study this molecule within the tight-binding approach, calculating the transport properties using the Landauer-Büttiker formalism and the Fisher-Lee relationship, based on a semi-analytic Green's function method within a real-space renormalization approach. Our results show close agreement with experimental measurements.

  4. Black-Scholes finite difference modeling in forecasting of call warrant prices in Bursa Malaysia

    NASA Astrophysics Data System (ADS)

    Mansor, Nur Jariah; Jaffar, Maheran Mohd

    2014-07-01

    A call warrant is a type of structured warrant in Bursa Malaysia. It gives the holder the right to buy the underlying share at a specified price within a limited period of time. The issuer of the structured warrants usually uses the European style, exercising the call warrant on the maturity date. A warrant is very similar to an option. Usually, practitioners in the financial field use the Black-Scholes model to value options. The Black-Scholes equation is hard to solve analytically, so a finite difference approach is applied to approximate the value of the call warrant prices. A scheme that is central in time and central in space is constructed for this purpose. It allows the warrant holder to forecast the value of the call warrant prices before the expiry date.
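
    A minimal finite-difference sketch of this pricing approach: an explicit scheme with central differences on the asset-price grid, marching backward from the payoff. The simple explicit time step below stands in for the paper's central-in-time scheme, and all parameter values are illustrative:

      import numpy as np

      def call_price_fd(S0, K, r, sigma, T, NS=300, NT=20000, Smax_mult=3):
          """European call by explicit finite differences on the Black-Scholes PDE."""
          S = np.linspace(0.0, Smax_mult * K, NS + 1)
          dS, dt = S[1] - S[0], T / NT
          V = np.maximum(S - K, 0.0)                     # payoff at maturity
          for n in range(NT):                            # march backward in time
              t = T - (n + 1) * dt
              Vi, Vp, Vm = V[1:-1], V[2:], V[:-2]
              dVdS   = (Vp - Vm) / (2 * dS)              # central in space
              d2VdS2 = (Vp - 2 * Vi + Vm) / dS**2
              V[1:-1] = Vi + dt * (0.5 * sigma**2 * S[1:-1]**2 * d2VdS2
                                   + r * S[1:-1] * dVdS - r * Vi)
              V[0]  = 0.0                                # call is worthless at S = 0
              V[-1] = S[-1] - K * np.exp(-r * t)         # deep in-the-money boundary
          return float(np.interp(S0, S, V))

      print(call_price_fd(S0=100, K=100, r=0.05, sigma=0.2, T=0.5))  # ~6.89 (Black-Scholes)

    The explicit step is only stable for dt no larger than roughly dS²/(σ²S²max); implicit or Crank-Nicolson variants remove that restriction at the cost of a linear solve per step.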

  5. Muver, a computational framework for accurately calling accumulated mutations.

    PubMed

    Burkholder, Adam B; Lujan, Scott A; Lavender, Christopher A; Grimm, Sara A; Kunkel, Thomas A; Fargo, David C

    2018-05-09

    Identification of mutations from next-generation sequencing data typically requires a balance between sensitivity and accuracy. This is particularly true of DNA insertions and deletions (indels), which can impart significant phenotypic consequences on cells but are harder to call than substitution mutations from whole genome mutation accumulation experiments. To overcome these difficulties, we present muver, a computational framework that integrates established bioinformatics tools with novel analytical methods to generate mutation calls with the extremely low false positive rates and high sensitivity required for accurate mutation rate determination and comparison. Muver uses statistical comparison of ancestral and descendant allelic frequencies to identify variant loci and assigns genotypes with models that include per-sample assessments of sequencing errors by mutation type and repeat context. Muver identifies maximally parsimonious mutation pathways that connect these genotypes, differentiating potential allelic conversion events and delineating ambiguities in mutation location, type, and size. Benchmarking with a human gold-standard father-son pair demonstrates muver's sensitivity and low false positive rates. In DNA mismatch repair (MMR)-deficient Saccharomyces cerevisiae, muver detects multi-base deletions in homopolymers longer than the replicative polymerase footprint at rates greater than predicted for sequential single-base deletions, implying a novel multi-repeat-unit slippage mechanism. Benchmarking results demonstrate the high accuracy and sensitivity achieved with muver, particularly for indels, relative to available tools. Applied to an MMR-deficient Saccharomyces cerevisiae system, muver mutation calls facilitate mechanistic insights into DNA replication fidelity.
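
    muver's genotype models are richer than any single test, but the core move, a statistical comparison of ancestral and descendant allele counts at a locus, can be illustrated with a 2x2 contingency test. A sketch with invented read counts; this is not muver's actual model:

      from scipy.stats import fisher_exact

      # reads supporting (reference, alternate) alleles at one locus
      ancestor   = (48, 2)    # ancestral sample: essentially all reference
      descendant = (20, 31)   # descendant sample: alternate allele has risen

      odds_ratio, p_value = fisher_exact([list(ancestor), list(descendant)])
      print(f"Fisher exact p = {p_value:.2e}",
            "-> candidate mutation" if p_value < 1e-6 else "")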

  6. Mapping the distribution of materials in hyperspectral data using the USGS Material Identification and Characterization Algorithm (MICA)

    USGS Publications Warehouse

    Kokaly, R.F.; King, T.V.V.; Hoefen, T.M.

    2011-01-01

    Identifying materials by measuring and analyzing their reflectance spectra has been an important method in analytical chemistry for decades. Airborne and space-based imaging spectrometers allow scientists to detect materials and map their distributions across the landscape. With new satellite-borne hyperspectral sensors planned for the future, for example, HyspIRI (Hyperspectral Infrared Imager), robust methods are needed to fully exploit the information content of hyperspectral remote sensing data. A method of identifying and mapping materials using spectral-feature-based analysis of reflectance data in an expert-system framework called MICA (Material Identification and Characterization Algorithm) is described in this paper. The core concepts and calculations of MICA are presented. A MICA command file has been developed and applied to map minerals in the full-country coverage of the 2007 Afghanistan HyMap hyperspectral data. © 2011 IEEE.

  7. Implementation and verification of nuclear interactions in a Monte-Carlo code for the Procom-ProGam proton therapy planning system

    NASA Astrophysics Data System (ADS)

    Kostyuchenko, V. I.; Makarova, A. S.; Ryazantsev, O. B.; Samarin, S. I.; Uglov, A. S.

    2014-06-01

    A great breakthrough in proton therapy has happened in the new century: several tens of dedicated centers are now operated throughout the world and their number increases every year. An important component of proton therapy is the treatment planning system. To make calculations faster, these systems usually use analytical methods whose reliability and accuracy do not allow the advantages of this treatment method to be exploited to the full extent. Predictions by the Monte Carlo (MC) method are the "gold" standard for the verification of calculations with these systems. At the Institute of Theoretical and Experimental Physics (ITEP), one of the oldest proton therapy centers in the world, an MC code is an integral part of the treatment planning system. This code, called IThMC, was developed by scientists from RFNC-VNIITF (Snezhinsk) under ISTC Project 3563.

  8. Application of Gauss's law space-charge limited emission model in iterative particle tracking method

    NASA Astrophysics Data System (ADS)

    Altsybeyev, V. V.; Ponomarev, V. A.

    2016-11-01

    The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law to calculate the space-charge-limited current density distribution within this method. Based on the presented emission model, we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and of a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
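
    The analytical solutions such simulations are validated against are the classical space-charge-limited diode laws. For reference, a sketch of the planar Child-Langmuir current density; the cylindrical and elliptical geometries treated in the paper follow the same V^(3/2) scaling logic:

      from math import sqrt

      EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
      E_CH = 1.602176634e-19      # elementary charge, C
      M_E  = 9.1093837015e-31     # electron mass, kg

      def child_langmuir_j(V, d):
          """Space-charge-limited current density (A/m^2) of a planar vacuum
          diode with gap voltage V (volts) and electrode spacing d (meters)."""
          return (4.0 * EPS0 / 9.0) * sqrt(2.0 * E_CH / M_E) * V**1.5 / d**2

      print(child_langmuir_j(V=1.0e4, d=1.0e-2))   # ~2.3e4 A/m^2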

  9. Comparing sequencing assays and human-machine analyses in actionable genomics for glioblastoma

    PubMed Central

    Wrzeszczynski, Kazimierz O.; Frank, Mayu O.; Koyama, Takahiko; Rhrissorrakrai, Kahn; Robine, Nicolas; Utro, Filippo; Emde, Anne-Katrin; Chen, Bo-Juen; Arora, Kanika; Shah, Minita; Vacic, Vladimir; Norel, Raquel; Bilal, Erhan; Bergmann, Ewa A.; Moore Vogel, Julia L.; Bruce, Jeffrey N.; Lassman, Andrew B.; Canoll, Peter; Grommes, Christian; Harvey, Steve; Parida, Laxmi; Michelini, Vanessa V.; Zody, Michael C.; Jobanputra, Vaidehi; Royyuru, Ajay K.

    2017-01-01

    Objective: To analyze a glioblastoma tumor specimen with 3 different platforms and compare potentially actionable calls from each. Methods: Tumor DNA was analyzed by a commercial targeted panel. In addition, tumor-normal DNA was analyzed by whole-genome sequencing (WGS) and tumor RNA was analyzed by RNA sequencing (RNA-seq). The WGS and RNA-seq data were analyzed by a team of bioinformaticians and cancer oncologists, and separately by IBM Watson Genomic Analytics (WGA), an automated system for prioritizing somatic variants and identifying drugs. Results: More variants were identified by WGS/RNA analysis than by targeted panels. WGA completed a comparable analysis in a fraction of the time required by the human analysts. Conclusions: The development of an effective human-machine interface in the analysis of deep cancer genomic datasets may provide potentially clinically actionable calls for individual patients in a more timely and efficient manner than currently possible. ClinicalTrials.gov identifier: NCT02725684. PMID:28740869

  10. Measurement equivalence and differential item functioning in family psychology.

    PubMed

    Bingenheimer, Jeffrey B; Raudenbush, Stephen W; Leventhal, Tama; Brooks-Gunn, Jeanne

    2005-09-01

    Several hypotheses in family psychology involve comparisons of sociocultural groups. Yet the potential for cross-cultural inequivalence in widely used psychological measurement instruments threatens the validity of inferences about group differences. Methods for dealing with these issues have been developed via the framework of item response theory. These methods deal with an important type of measurement inequivalence, called differential item functioning (DIF). The authors introduce DIF analytic methods, linking them to a well-established framework for conceptualizing cross-cultural measurement equivalence in psychology (C.H. Hui and H.C. Triandis, 1985). They illustrate the use of DIF methods using data from the Project on Human Development in Chicago Neighborhoods (PHDCN). Focusing on the Caregiver Warmth and Environmental Organization scales from the PHDCN's adaptation of the Home Observation for Measurement of the Environment Inventory, the authors obtain results that exemplify the range of outcomes that may result when these methods are applied to psychological measurement instruments. (c) 2005 APA, all rights reserved

  11. A Comparison of the Bounded Derivative and the Normal Mode Initialization Methods Using Real Data

    NASA Technical Reports Server (NTRS)

    Semazzi, F. H. M.; Navon, I. M.

    1985-01-01

    Browning et al. (1980) proposed an initialization method called the bounded derivative method (BDI). They used analytical data to test the new method. Kasahara (1982) theoretically demonstrated the equivalence between BDI and the well known nonlinear normal mode initialization method (NMI). The purposes of this study are to extend the application of BDI to real data and to compare it with NMI. The unbalanced initial state (UBD) is data for January 1979 at 00Z, which were interpolated from the adjacent sigma levels of the GLAS GCM to the 300 mb surface. The global barotropic model described by Takacs and Balgovind (1983) is used. Orographic forcing is explicitly included in the model. Many comparisons are performed between various quantities. However, we only present a comparison of the time evolution at two grid points, A (50 S, 90 E) and B (10 S, 20 E), which represent middle and low latitude locations. To facilitate a more complete comparison, an initialization experiment based on the classical balance equation (CBE) was also included.

  12. A Review of Interface Electronic Systems for AT-cut Quartz Crystal Microbalance Applications in Liquids

    PubMed Central

    Arnau, Antonio

    2008-01-01

    From the first applications of AT-cut quartz crystals as sensors in solutions more than 20 years ago, the so-called quartz crystal microbalance (QCM) sensor has become a good alternative analytical method in a great number of applications such as biosensors, analysis of biomolecular interactions, study of bacterial adhesion at specific interfaces, pathogen and microorganism detection, study of polymer film-biomolecule or cell-substrate interactions, and immunosensors, with extensive use in fluid and polymer characterization and electrochemical applications, among others. The appropriate evaluation of this analytical method requires recognizing the different steps involved and being aware of their importance and limitations. The first step involved in a QCM system is the accurate and appropriate characterization of the sensor in relation to the specific application. The use of the piezoelectric sensor in contact with solutions strongly affects its behavior, and appropriate electronic interfaces must be used for adequate sensor characterization. Systems based on different principles and techniques have been implemented during the last 25 years. The interface selection for the specific application is important, and its limitations must be known to judge its suitability and to avoid error propagation in the interpretation of results. This article presents a comprehensive overview of the different techniques used for AT-cut quartz crystal microbalance in in-solution applications, which are based on the following principles: network or impedance analyzers, decay methods, oscillators and lock-in techniques. The electronic interfaces based on oscillators and phase-locked techniques are treated in detail, with the description of different configurations, since these techniques are the most used in applications for detection of analytes in solutions, and in those where a fast sensor response is necessary. PMID:27879713

  13. Optimization of classification and regression analysis of four monoclonal antibodies from Raman spectra using collaborative machine learning approach.

    PubMed

    Le, Laetitia Minh Maï; Kégl, Balázs; Gramfort, Alexandre; Marini, Camille; Nguyen, David; Cherti, Mehdi; Tfaili, Sana; Tfayli, Ali; Baillet-Guffroy, Arlette; Prognon, Patrice; Chaminade, Pierre; Caudron, Eric

    2018-07-01

    The use of monoclonal antibodies (mAbs) constitutes one of the most important strategies to treat patients suffering from cancers such as hematological malignancies and solid tumors. These antibodies are prescribed by the physician and prepared by hospital pharmacists. Analytical control enables the quality of the preparations to be ensured. The aim of this study was to explore the development of a rapid analytical method for quality control. The method used four mAbs (Infliximab, Bevacizumab, Rituximab and Ramucirumab) at various concentrations and was based on recording Raman data and coupling them to a traditional chemometric and machine learning approach for data analysis. Compared to a conventional linear approach, prediction errors are reduced with a data-driven approach using statistical machine learning methods, in which preprocessing and predictive models are jointly optimized. An additional original aspect of the work involved submitting the problem to a collaborative data challenge platform called Rapid Analytics and Model Prototyping (RAMP), which made it possible to draw on solutions from about 300 data scientists working collaboratively. Using machine learning, the prediction of the four mAbs samples was considerably improved. The best predictive model showed a combined error of 2.4% versus 14.6% using the linear approach. The concentration and classification errors were 5.8% and 0.7%; only three spectra were misclassified out of the 429 spectra of the test set. This large improvement obtained with machine learning techniques was uniform for all molecules but maximal for Bevacizumab, with an 88.3% reduction in combined error (2.1% versus 17.9%). Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Biases and power for groups comparison on subjective health measurements.

    PubMed

    Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique

    2012-01-01

    Subjective health measurements are increasingly used in clinical research, particularly for comparisons of patient groups. Two main types of analytical strategies can be used for such data: so-called classical test theory (CTT), relying on observed scores, and models from item response theory (IRT), relying on a response model that relates the item responses to a latent parameter, often called the latent trait. Whether IRT or CTT would be the most appropriate method to compare two independent groups of patients on a patient-reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, groups comparison was performed using a t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and whether the group effect was included as a covariate or not. Individual latent trait values were estimated using either a deterministic method or stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-step method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the Wald test of the group covariate, performed on the random effects Rasch model. This model displayed the highest observed power, which was similar to the power using the score t-test. These results need to be extended to the case frequently encountered in practice where data are missing and possibly informative.
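
    In miniature, the comparison under study looks like this: simulate Rasch responses, with success probability exp(θ - b)/(1 + exp(θ - b)) for latent trait θ and item difficulty b, for two groups whose latent means differ, then test the group effect. In this sketch a score t-test stands in for the Wald test on the mixed Rasch model, and all values are synthetic:

      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(1)
      n_per_group, n_items = 200, 10
      b = np.linspace(-1.5, 1.5, n_items)              # item difficulties

      def rasch_scores(theta):
          """Total scores of simulated Rasch responses for latent traits theta."""
          p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
          return (rng.random((len(theta), n_items)) < p).sum(axis=1)

      theta_a = rng.normal(0.0, 1.0, n_per_group)      # group A
      theta_b = rng.normal(0.4, 1.0, n_per_group)      # group B, shifted latent mean

      t_stat, p_value = ttest_ind(rasch_scores(theta_a), rasch_scores(theta_b))
      print(f"t = {t_stat:.2f}, p = {p_value:.3g}")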

  15. Design of analytical systems based on functionality of doped ice.

    PubMed

    Okada, Tetsuo

    2014-01-01

    Ice plays an important role in the circulation of some compounds in the global environment. Both the ice surface and the liquid phase developed in a frozen solution are involved in reactions of molecules of environmental importance. This leads to the idea that ice can be used to design novel analytical reaction systems. We devised ice chromatography, in which ice particles are used as the liquid chromatographic stationary phase, and have subsequently developed various analytical systems utilizing the functionality of ice. This review focuses on the analytical facets of ice containing impurities such as salts; hereinafter, we call this "doped ice". The design of novel separation systems and the use of doped ice as microreactors are mainly discussed.

  16. Theoretical and experimental evidence of Fano-like resonances in simple monomode photonic circuits

    NASA Astrophysics Data System (ADS)

    Mouadili, A.; El Boudouti, E. H.; Soltani, A.; Talbi, A.; Akjouj, A.; Djafari-Rouhani, B.

    2013-04-01

    A simple photonic device consisting of two dangling side resonators grafted at two sites on a waveguide is designed in order to obtain sharp resonant states inside the transmission gaps without introducing any defects in the structure. This results from an internal resonance of the structure when such a resonance is situated in the vicinity of a zero of transmission or placed between two zeros of transmission, the so-called Fano resonances. A general analytical expression for the transmission coefficient is given for various systems of this kind. The amplitude of the transmission is obtained following the Fano form. The full width at half maximum of the resonances as well as the asymmetric Fano parameter are discussed explicitly as functions of the geometrical parameters of the system. In addition to the usual asymmetric Fano resonance, we show that this system may exhibit an electromagnetically induced transparency resonance, as well as a particular case where such resonances collapse in the transmission coefficient. Also, we give a comparison between the phase of the determinant of the scattering matrix, the so-called Friedel phase, and the phase of the transmission amplitude. The analytical results are obtained by means of the Green's function method, whereas the experiments are carried out using coaxial cables in the radio-frequency regime. These results should have important consequences for designing integrated devices such as narrow-frequency optical or microwave filters and high-speed switches. This system is proposed as a simpler alternative to coupled microresonators.
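
    The "Fano form" invoked above is the standard asymmetric lineshape. In the notation usual in this literature (our choice of symbols, not necessarily the paper's), the transmitted intensity near a resonance at ω0 of width Γ reads

      T(\varepsilon) \propto \frac{(\varepsilon + q)^{2}}{\varepsilon^{2} + 1},
      \qquad
      \varepsilon = \frac{\omega - \omega_{0}}{\Gamma / 2},

    where q is the asymmetry parameter: |q| → ∞ recovers a symmetric Lorentzian peak, q = 0 a symmetric antiresonance (transmission zero), and intermediate q the asymmetric Fano profile. The induced-transparency resonance mentioned above corresponds to a narrow transmission peak placed between two transmission zeros.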

  17. Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres

    NASA Astrophysics Data System (ADS)

    Liu, Quanhua; Weng, Fuzhong

    2006-12-01

    The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century in a problem dealing with reflection and transmission by glass plates. Since then the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to tremendous demand on computational resources from the model. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks also to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent linear and adjoint codes for radiance gradient calculations. The simplicity in forward and Jacobian computation codes is very useful for operational applications and for the consistency between the forward and adjoint calculations in satellite data assimilation.
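
    For reference, the adding step at the heart of DA (and hence ADA) can be written compactly. With R and T the reflection and transmission operators of two adjacent layers, and starred operators referring to illumination from below (standard adding/doubling notation, assumed here rather than quoted from the paper):

      R_{12} = R_{1} + T_{1}^{*}\, R_{2}\, (1 - R_{1}^{*} R_{2})^{-1}\, T_{1},
      \qquad
      T_{12} = T_{2}\, (1 - R_{1}^{*} R_{2})^{-1}\, T_{1}.

    The operator inverse (1 - R1*R2)^(-1) = 1 + R1*R2 + (R1*R2)² + ... sums all orders of multiple reflection between the layers; "doubling" takes layer 2 identical to layer 1, so n adding steps build a slab of 2^n times the initial optical thickness.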

  18. Modeling of classical swirl injector dynamics

    NASA Astrophysics Data System (ADS)

    Ismailov, Maksud M.

    The knowledge of the dynamics of a swirl injector is crucial in designing a stable liquid rocket engine. Since the swirl injector is a complex fluid flow device in itself, not much work has been conducted to describe its dynamics either analytically or by using computational fluid dynamics techniques. Even experimental observation has been limited to date. Thus far, there exists an analytical linear theory by Bazarov [1], which is based on long-wave disturbances traveling on the free surface of the injector core. This theory does not account for the variation of the nozzle reflection coefficient as a function of disturbance frequency, and yields a response function which is strongly dependent on the so-called artificial viscosity factor. This causes uncertainty in designing an injector for the given operational combustion instability frequencies in the rocket engine. In this work, the author has studied alternative techniques to describe the swirl injector response, both analytically and computationally. In the analytical part, by using linear small-perturbation analysis, the entire phenomenon of unsteady flow in swirl injectors is dissected into fundamental components: the phenomena of disturbance wave refraction and reflection, and vortex chamber resonance. This reveals the nature of the flow instability and the driving factors leading to maximum injector response. In the computational part, by employing the nonlinear boundary element method (BEM), the author sets the boundary conditions such that they closely simulate those in the analytical part. The simulation results then show distinct peak responses at frequencies coincident with the resonant frequencies predicted in the analytical part. Moreover, a cold flow test of the injector related to this study also shows a clear growth of instability with its maximum amplitude at the first fundamental frequency predicted both by the analytical methods and by BEM. It should be noted, however, that Bazarov's theory does not predict the resonant peaks. Overall, this methodology provides a clearer understanding of the injector dynamics than Bazarov's. Even though the exact value of the response cannot yet be obtained at this stage of theoretical, computational, and experimental investigation, this methodology sets the starting point from which the theoretical description of reflection/refraction, resonance, and their mutual interaction may be refined to higher order to obtain a more precise value.

  19. An analysis of hypercritical states in elastic and inelastic systems

    NASA Astrophysics Data System (ADS)

    Kowalczk, Maciej

    The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods and analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author provides a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations. The author supplements his theoretical solutions with numerical solutions of non-linear problems for rod systems and problems of the plastic disintegration of a notched rectangular plastic plate.

  20. The Case for Adopting Server-side Analytics

    NASA Astrophysics Data System (ADS)

    Tino, C.; Holmes, C. P.; Feigelson, E.; Hurlburt, N. E.

    2017-12-01

    The standard method for accessing Earth and space science data relies on a scheme developed decades ago: data residing in one or many data stores must be parsed out and shipped via internet lines or physical transport to the researcher, who in turn locally stores the data for analysis. The analysis tasks are varied and include visualization, parameterization, and comparison with or assimilation into physics models. In many cases this process is inefficient and unwieldy as the data sets become larger and the demands on the analysis tasks become more sophisticated and complex. For about a decade, several groups have explored a new paradigm to this model. The names applied to the paradigm include "data analytics", "climate analytics", and "server-side analytics". The general concept is that in close network proximity to the data store there will be a tailored processing capability appropriate to the type and use of the data served. The user of the server-side analytics will operate on the data with numerical procedures. The procedures can be accessed via canned code, a scripting processor, or an analysis package such as Matlab, IDL or R. Results of the analytics processes will then be relayed via the internet to the user. In practice, these results will be of much lower volume, easier for the user to transport and store locally, and easier to interoperate with data sets from other remote data stores. The user can also iterate on the processing call to tailor the results as needed. A major component of server-side analytics could be to provide sets of tailored results to end users in order to eliminate the repetitive preconditioning that is often required with these data sets and which drives much of the throughput challenge. NASA's Big Data Task Force studied this issue. This paper will present the results of this study, including examples of server-side analytics systems that are being developed and demonstrated, and suggestions for architectures that might be developed for future applications.

  1. Photooxidation of 3-substituted pyrroles:  a postcolumn reaction detection system for singlet molecular oxygen in HPLC.

    PubMed

    Denham, K; Milofsky, R E

    1998-10-01

    A postcolumn photochemical reaction detection scheme, based on the reaction of 3-substituted pyrroles with singlet molecular oxygen (¹O₂), has been developed. The method is selective and sensitive for the determination of a class of organic compounds called ¹O₂ sensitizers and is readily coupled to HPLC. Following separation by HPLC, analytes (¹O₂ sensitizers) are excited by a Hg pen-ray lamp. Analytes that are efficient ¹O₂ sensitizers promote ground-state O₂ (³Σg⁻) to an excited state (¹Σg⁺ or ¹Δg), which reacts rapidly with tert-butyl-3,4,5-trimethylpyrrolecarboxylate (BTMPC) or N-benzyl-3-methoxypyrrole-2-tert-carboxylate (BMPC), added to the mobile phase. Detection is based on the loss of the pyrrole (BTMPC or BMPC). The reaction is catalytic in nature, since one analyte molecule may absorb light many times, producing large amounts of ¹O₂. Detection limits for several ¹O₂ sensitizers were improved by 1-2 orders of magnitude over optimized UV-absorbance detection. This paper discusses the optimization of the reaction conditions for this photochemical reaction detection scheme and its application to the detection of PCBs, nitrogen heterocycles, nitro and chloro aromatics, and other substituted aromatic compounds.

  2. Systems-Level Annotation of a Metabolomics Data Set Reduces 25 000 Features to Fewer than 1000 Unique Metabolites.

    PubMed

    Mahieu, Nathaniel G; Patti, Gary J

    2017-10-03

    When using liquid chromatography/mass spectrometry (LC/MS) to perform untargeted metabolomics, it is now routine to detect tens of thousands of features from biological samples. Poor understanding of the data, however, has complicated interpretation and masked the number of unique metabolites actually being measured in an experiment. Here we place an upper bound on the number of unique metabolites detected in Escherichia coli samples analyzed with one untargeted metabolomics method. We first group multiple features arising from the same analyte, which we call "degenerate features", using a context-driven annotation approach. Surprisingly, this analysis revealed thousands of previously unreported degeneracies that reduced the number of unique analytes to ~2961. We then applied an orthogonal approach to remove nonbiological features from the data using the ¹³C-based credentialing technology. This further reduced the number of unique analytes to less than 1000. Our 90% reduction in data is 5-fold greater than previously published studies. On the basis of the results, we propose an alternative approach to untargeted metabolomics that relies on thoroughly annotated reference data sets. To this end, we introduce the creDBle database (http://creDBle.wustl.edu), which contains accurate mass, retention time, and MS/MS fragmentation data as well as annotations of all credentialed features.

  3. Predicting Student Success using Analytics in Course Learning Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Thakur, Gautam; McNair, Wade

    Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward conducting predictive analytics on academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes and exams, in addition to their activities in discussion forums and their total GPA in the same term they enrolled in the course. Then, logistic regression and neural network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.

  4. The evaluation and enhancement of quality, environmental protection and seaport safety by using FAHP

    NASA Astrophysics Data System (ADS)

    Tadic, Danijela; Aleksic, Aleksandar; Popovic, Pavle; Arsovski, Slavko; Castelli, Ana; Joksimovic, Danijela; Stefanovic, Miladin

    2017-02-01

    The evaluation and enhancement of business processes in any organization in an uncertain environment is one of the main requirements of ISO 9001:2008 and has a key effect on competitive advantage and long-term sustainability. The aim of this paper is the identification and discussion of some of the most important business processes of seaports, the performances of those business processes, and their key performance indicators (KPIs). The complexity and importance of the treated problem call for analytic methods rather than intuitive decisions. The decision variables of the considered problem are described by linguistic expressions, which are modelled by triangular fuzzy numbers (TFNs). In this paper, a modified fuzzy extended analytic hierarchy process (FAHP) is proposed. The assessment of the relative importance of each pair of performances and their key performance indicators is stated as a fuzzy group decision-making problem. By using the modified fuzzy extended analytic hierarchy process, a fuzzy ranking of the business processes of a seaport is obtained. The model is tested through an illustrative example with real-life data, where the obtained results suggest measures which should enhance business strategy and improve key performance indicators. Future improvement is based on benchmarking and knowledge sharing.
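
    The TFN machinery underneath FAHP is small enough to sketch: triangular fuzzy numbers with approximate fuzzy arithmetic for combining pairwise judgements, plus a defuzzification step. The class below and the linguistic labels are our own illustration; the full extent-analysis bookkeeping of FAHP is omitted:

      from dataclasses import dataclass

      @dataclass
      class TFN:
          """Triangular fuzzy number (lower, modal, upper)."""
          l: float
          m: float
          u: float
          def __add__(self, o):  return TFN(self.l + o.l, self.m + o.m, self.u + o.u)
          def __mul__(self, o):  return TFN(self.l * o.l, self.m * o.m, self.u * o.u)
          def inverse(self):     return TFN(1.0 / self.u, 1.0 / self.m, 1.0 / self.l)
          def centroid(self):    return (self.l + self.m + self.u) / 3.0

      about_3 = TFN(2, 3, 4)    # e.g. "moderately more important"
      about_5 = TFN(4, 5, 6)    # e.g. "strongly more important"
      # relative weight of one judgement against another, defuzzified
      print((about_3 * about_5.inverse()).centroid())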

  5. Predicting student success using analytics in course learning management systems

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Thakur, Gautam; McNair, Allen W.; Sukumar, Sreenivas R.

    2014-05-01

    Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward conducting predictive analytics on academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes and exams, in addition to their activities in discussion forums and their total GPA in the same term they enrolled in the course. Then, logistic regression and neural network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.
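
    A minimal sketch of the logistic-regression step described here, with synthetic stand-ins for the Moodle features; the feature set mirrors the one listed above, and all data and coefficients are invented:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 500
      X = np.column_stack([
          rng.uniform(0, 100, n),   # homework average
          rng.uniform(0, 100, n),   # quiz average
          rng.uniform(0, 100, n),   # exam average
          rng.poisson(5, n),        # discussion forum posts
          rng.uniform(0, 4, n),     # term GPA
      ])
      # synthetic outcome: failure risk tied mostly to exam and homework scores
      logit = -6.0 + 0.04 * (100 - X[:, 2]) + 0.02 * (100 - X[:, 0])
      y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)   # 1 = fail

      model = LogisticRegression(max_iter=1000).fit(X, y)
      print(model.predict_proba(X[:3])[:, 1])   # per-student probability of failing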

  6. Lab-on-a-bubble: direct and indirect assays with portable Raman instrumentation

    NASA Astrophysics Data System (ADS)

    Carron, Keith; Schmit, Virginia; Scott, Brandon; Martoglio, Richard

    2012-10-01

    Lab-on-a-Bubble (LoB) is a new method for SERS (Surface Enhanced Raman Scattering) assays that combines separation and concentration of the assay results. A direct LoB assay comprises gold nanoparticles coupled directly to a ~30 μm diameter buoyant silica bubble. The direct LoB method was evaluated with cyanide and 5,5'-dithiobis(2-nitrobenzoic acid) (DTNB). An indirect assay uses the same ~30 μm diameter buoyant silica bubble and a silica-coated SERS reporter. Both the bubble and the SERS reporter are coated with a coupling agent for the analyte. The assay measures the amount of SERS reporter coupled to the bubble through a sandwich created by the analyte. The coupling agent could consist of an immunological coupling agent (antibody) or a nucleic acid coupling agent (single-strand DNA). The indirect LoB method was examined with Cholera toxin (CT) and antibodies against the β subunit. An LOD of ~170 parts per trillion was measured for cyanide, and a limit of detection of 1100 ng was found for CT. Instrumentation for the assay and a novel technique of dynamic SERS (DSERS) will also be discussed. The instrument is a small hand-held Raman device called the CBEx (Chemical Biological Explosive) with a novel raster system to detect heterogeneous or light-sensitive materials. DSERS is a mathematical algorithm which eliminates background interference in SERS measurements with colloidal nanoparticles.

  7. Dislocation-induced stress in polycrystalline materials: mesoscopic simulations in the dislocation density formalism

    NASA Astrophysics Data System (ADS)

    Berkov, D. V.; Gorn, N. L.

    2018-06-01

    In this paper we present a simple and effective numerical method which allows a fast-Fourier-transform-based evaluation of the stress generated by dislocations with arbitrary directions and Burgers vectors when the (site-dependent) dislocation density is known. Our method allows the evaluation of the dislocation stress using a rectangular grid with shape-anisotropic discretization cells without employing higher multipole moments of the dislocation interaction coefficients. Using the proposed method, we first simulate the stress created by relatively simple non-homogeneous distributions of vertical edge and so-called 'mixed' dislocations in a disk-shaped sample, which is necessary to understand the dislocation behavior in more complicated systems. The main part of our research is devoted to the stress distribution in polycrystalline layers with a dislocation density that varies rapidly with the distance to the layer bottom. Considering GaN as a typical example of such systems, we investigate the dislocation-induced stress for edge and mixed dislocations having random orientations of Burgers vectors among crystal grains. We show that the rapid decay of the dislocation density leads to many highly non-trivial features of the stress distributions in such layers and study in detail the dependence of these features on the average grain size. Finally, we develop an analytical approach which allows us to predict the evolution of the stress variance with the grain size and compare analytical predictions with numerical results.
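
    The core numerical idea, evaluating a long-range field as a convolution of the dislocation density with an interaction kernel via FFT, can be sketched as follows. The Gaussian density and the 1/r kernel are toy stand-ins; the paper derives the actual interaction coefficients and handles anisotropic cells.

    ```python
    import numpy as np

    # Minimal sketch: stress = kernel * density (convolution), evaluated as a
    # pointwise product in Fourier space on a regular grid.
    N, L = 256, 1.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")

    rho = np.exp(-(X**2 + Y**2) / 0.01)     # toy dislocation density
    r = np.sqrt(X**2 + Y**2)
    kernel = np.where(r > 1e-6, 1.0 / np.maximum(r, 1e-6), 0.0)  # toy 1/r kernel

    # Circular convolution via FFT; the kernel is centered, so shift it first.
    # Zero-padding would suppress the periodic wrap-around if needed.
    stress = np.real(np.fft.ifft2(np.fft.fft2(rho)
                                  * np.fft.fft2(np.fft.ifftshift(kernel))))
    stress *= (L / N) ** 2                  # cell area weight for the integral
    print(stress.shape, float(stress.max()))
    ```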

  8. Global open data management in metabolomics.

    PubMed

    Haug, Kenneth; Salek, Reza M; Steinbeck, Christoph

    2017-02-01

    Chemical Biology employs chemical synthesis, analytical chemistry and other tools to study biological systems. Recent advances in molecular biology, such as next generation sequencing (NGS), have led to unprecedented insights into the evolution of organisms' biochemical repertoires. Because of the specific data-sharing culture in Genomics, genomes from all kingdoms of life become readily available for further analysis by other researchers. While the genome expresses the potential of an organism to adapt to external influences, the Metabolome presents a molecular phenotype that allows us to assess, in a dynamic way, the external influences under which an organism exists and develops. Steady advancements in instrumentation toward high-throughput and high-resolution methods have led to a revival of analytical chemistry methods for the measurement and analysis of the metabolome of organisms. This steady growth of metabolomics as a field is leading to an accumulation of big data across laboratories worldwide, as can be observed in all of the other omics areas. This calls for the development of methods and technologies for handling such large datasets, for efficiently distributing them and for enabling re-analysis. Here we describe the recently emerging ecosystem of global open-access databases and data exchange efforts between them, as well as the foundations and obstacles that enable or prevent the sharing and reanalysis of these data. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Estimating Information Processing in a Memory System: The Utility of Meta-analytic Methods for Genetics.

    PubMed

    Yildizoglu, Tugce; Weislogel, Jan-Marek; Mohammad, Farhan; Chan, Edwin S-Y; Assam, Pryseley N; Claridge-Chang, Adam

    2015-12-01

    Genetic studies in Drosophila reveal that olfactory memory relies on a brain structure called the mushroom body. The mainstream view is that each of the three lobes of the mushroom body plays a specialized role in short-term aversive olfactory memory, but a number of studies have drawn divergent conclusions from their varying experimental findings. Like many fields, neurogenetics uses null hypothesis significance testing for data analysis. Critics of significance testing claim that this method promotes discrepancies by using arbitrary thresholds (α) to apply reject/accept dichotomies to continuous data, which does not reflect the biological reality of quantitative phenotypes. We explored using estimation statistics, an alternative data analysis framework, to examine published fly short-term memory data. Systematic review was used to identify behavioral experiments examining the physiological basis of olfactory memory, and meta-analytic approaches were applied to assess the role of lobular specialization. Multivariate meta-regression models revealed that short-term memory lobular specialization is not supported by the data; they identified the cellular extent of a transgenic driver as the major predictor of its effect on short-term memory. These findings demonstrate that effect sizes, meta-analysis, meta-regression, hierarchical models and estimation methods in general can be successfully harnessed to identify knowledge gaps, synthesize divergent results, accommodate heterogeneous experimental design and quantify genetic mechanisms.
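
    For readers unfamiliar with the estimation machinery, the sketch below runs a textbook DerSimonian-Laird random-effects meta-analysis on invented effect sizes; the paper's multivariate meta-regression generalizes this by adding covariates such as the cellular extent of each driver.

    ```python
    import numpy as np

    # Minimal DerSimonian-Laird random-effects meta-analysis; the per-study
    # effect sizes and variances below are made up for illustration, not taken
    # from the fly-memory literature.
    d = np.array([0.8, 0.3, 0.6, 1.1, 0.2])       # effect sizes (Cohen's d)
    v = np.array([0.05, 0.08, 0.04, 0.10, 0.06])  # sampling variances

    w = 1 / v                                     # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)            # heterogeneity statistic
    k = len(d)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1 / (v + tau2)                         # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    print(f"pooled d = {d_re:.2f} +/- {1.96 * se:.2f} (95% CI), tau2 = {tau2:.3f}")
    ```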

  10. Analytical quality by design: a tool for regulatory flexibility and robust analytics.

    PubMed

    Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and process analytical technology (PAT).

  11. Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics

    PubMed Central

    Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and process analytical technology (PAT). PMID:25722723

  12. Comparison of veterinary drug residue results in animal tissues by ultrahigh-performance liquid chromatography coupled to triple quadrupole or quadrupole-time-of-flight tandem mass spectrometry after different sample preparation methods, including use of a commercial lipid removal product.

    PubMed

    Anumol, Tarun; Lehotay, Steven J; Stevens, Joan; Zweigenbaum, Jerry

    2017-04-01

    Veterinary drug residues in animal-derived foods must be monitored to ensure food safety, verify proper veterinary practices, enforce legal limits in domestic and imported foods, and for other purposes. A common goal in drug residue analysis in foods is to achieve acceptable monitoring results for as many analytes as possible, with higher priority given to the drugs of most concern, in an efficient and robust manner. The U.S. Department of Agriculture has implemented a multiclass, multi-residue method based on sample preparation using dispersive solid-phase extraction (d-SPE) for cleanup and ultrahigh-performance liquid chromatography-tandem quadrupole mass spectrometry (UHPLC-QQQ) for analysis of >120 drugs at regulatory levels of concern in animal tissues. Recently, a new cleanup product called "enhanced matrix removal for lipids" (EMR-L) was commercially introduced that uses a unique chemical mechanism to remove lipids from extracts. Furthermore, high-resolution quadrupole-time-of-flight (Q/TOF) detection for (U)HPLC often yields higher selectivity than targeted QQQ analyzers while allowing retroactive processing of samples for other contaminants. In this study, the d-SPE and EMR-L sample preparation methods and the UHPLC-QQQ and UHPLC-Q/TOF analysis methods were compared on shared spiked samples of bovine muscle, kidney, and liver. The results showed that the EMR-L method provided cleaner extracts overall and improved results for several anthelmintics and tranquilizers compared to the d-SPE method, but the EMR-L method gave lower recoveries for certain β-lactam antibiotics. QQQ vs. Q/TOF detection showed similarly mixed performance advantages depending on analytes and matrix interferences, with an advantage to Q/TOF for greater possible analytical scope and non-targeted data collection. Either combination of approaches may be used to meet monitoring purposes, with an edge in efficiency to d-SPE, but greater instrument robustness and fewer matrix effects when analyzing EMR-L extracts. Graphical abstract: Comparison of cleanup methods in the analysis of veterinary drug residues in bovine tissues.

  13. A conflict of analysis: analytical chemistry and milk adulteration in Victorian Britain.

    PubMed

    Steere-Williams, Jacob

    2014-08-01

    This article centres on a particularly intense debate within British analytical chemistry in the late nineteenth century, between local public analysts and the government chemists of the Inland Revenue Service. The two groups differed in both practical methodologies and in the interpretation of analytical findings. The most striking debates in this period related to milk analysis, highlighted especially in Victorian courtrooms. It was in protracted court cases, such as the well-known Manchester Milk Case in 1883, that analytical chemistry was performed between local public analysts and the government chemists, who were often both used as expert witnesses. Victorian courtrooms were thus important sites in the context of the uneven professionalisation of chemistry. I use this tension to highlight what Christopher Hamlin has called the defining feature of Victorian public health, namely conflicts of professional jurisdiction, which adds nuance to histories of the struggle for professionalisation and public credibility in analytical chemistry.

  14. Living in Two Worlds. How 'Jungian' am I?

    PubMed

    Morgan, Helen

    2018-06-01

    As a so-called 'Developmental Jungian' the author of this paper was raised bilingual - speaking both psychoanalytic and Jungian languages. Early on in her training an analysand brought a dream which seemed to capture an inherent tension regarding the analyst's role in the analytic relationship. The paper is a personal exploration of the potentially creative nature of this tension through focussing on the dream and the work with the dreamer. © 2018, The Society of Analytical Psychology.

  15. Computationally efficient approach for solving time dependent diffusion equation with discrete temporal convolution applied to granular particles of battery electrodes

    NASA Astrophysics Data System (ADS)

    Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž

    2015-03-01

    The paper presents a computationally efficient method for solving the time dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step-function boundary value problem. This approach enables modelling the concentration distribution in the granular particles for arbitrary time-dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computation times than finite volume/difference methods and the Padé approximation at the same accuracy. It is also demonstrated that all three addressed methods feature higher accuracy than the quasi-steady polynomial approaches when applied to simulate the current density variations typical of mobile/automotive applications. The proposed approach can thus be considered one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.
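
    The discrete-temporal-convolution idea can be sketched in a few lines: superpose the analytical step response with the increments of an arbitrary flux history (a Duhamel-type sum). The exponential step response below is a stand-in; the paper uses the analytical solution of the step-flux diffusion problem in a spherical granule.

    ```python
    import numpy as np

    # Minimal sketch of a discrete temporal convolution: the response to an
    # arbitrary flux j(t) is the convolution of its increments with the step
    # response G(t). G here is illustrative, not the paper's series solution.
    dt, n_steps, tau = 0.1, 200, 2.0
    t = np.arange(n_steps) * dt
    j = np.sin(0.5 * t) ** 2            # arbitrary flux, not known a priori
    G = 1.0 - np.exp(-t / tau)          # step response (toy)

    dj = np.diff(j, prepend=0.0)        # flux increments per time step
    # Duhamel-type superposition: c(t_n) = sum_k dj_k * G(t_n - t_k)
    c = np.convolve(dj, G)[:n_steps]
    print(c[-1])
    ```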

  16. 78 FR 23261 - Solicitation for Nominations for Members of the U.S. Preventive Services Task Force (USPSTF)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-18

    ... during conference calls and via email discussions. Member duties include prioritizing topics, designing... their expertise in methodological issues such as meta-analysis, analytic modeling or clinical...

  17. Double diffusive magnetohydrodynamic (MHD) mixed convective slip flow along a radiating moving vertical flat plate with convective boundary condition.

    PubMed

    Rashidi, Mohammad M; Kavyani, Neda; Abelman, Shirley; Uddin, Mohammed J; Freidoonimehr, Navid

    2014-01-01

    In this study combined heat and mass transfer by mixed convective flow along a moving vertical flat plate with hydrodynamic slip and thermal convective boundary condition is investigated. Using similarity variables, the governing nonlinear partial differential equations are converted into a system of coupled nonlinear ordinary differential equations. The transformed equations are then solved using a semi-numerical/analytical method called the differential transform method, and the results are compared with numerical results. Close agreement is found between the present method and the numerical method. Effects of the controlling parameters, including convective heat transfer, magnetic field, buoyancy ratio, hydrodynamic slip, mixed convection, Prandtl number and Schmidt number, are investigated on the dimensionless velocity, temperature and concentration profiles. In addition, effects of different parameters on the skin friction factor, local Nusselt number, and local Sherwood number are shown and explained through tables.
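
    The differential transform method turns a differential equation into a recurrence for Taylor-like coefficients. The sketch below applies it to the toy problem y' = y, y(0) = 1, not to the paper's coupled MHD boundary-layer system: the transform of y'(x) is (k+1)Y(k+1), so the ODE becomes Y(k+1) = Y(k)/(k+1).

    ```python
    import numpy as np

    # Minimal differential transform method (DTM) example for y' = y, y(0) = 1.
    K = 20                         # truncation order
    Y = np.zeros(K + 1)
    Y[0] = 1.0                     # transform of the initial condition
    for k in range(K):
        Y[k + 1] = Y[k] / (k + 1)  # recurrence from (k+1) Y(k+1) = Y(k)

    x = 1.0
    y = sum(Y[k] * x**k for k in range(K + 1))  # inverse transform: series sum
    print(y, np.exp(x))            # the two values agree closely
    ```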

  18. Advanced statistical methods for improved data analysis of NASA astrophysics missions

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.

    1992-01-01

    The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.

  19. Estimating the influence of parameter uncertainties in the planning and evaluation of tracer tests using ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Klotzsch, Stephan; Binder, Martin; Händel, Falk

    2017-06-01

    While planning tracer tests, uncertainties in geohydraulic parameters should be considered as an important factor. Neglecting these uncertainties can, for example, lead to missing the tracer breakthrough. One way to consider uncertainties during tracer test design is the so-called ensemble forecast. The applicability of this method to geohydrological problems is demonstrated by coupling it with two analytical solute transport models. The algorithm presented in this article is suitable for prediction as well as parameter estimation. The parameter estimation function can be used during a tracer test to reduce the uncertainties in the measured data, which can improve the initial prediction. The algorithm was implemented in a software tool which is freely downloadable from the website of the Institute for Groundwater Management at TU Dresden, Germany.
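
    The ensemble-forecast idea can be sketched as follows: sample the uncertain geohydraulic parameters, evaluate an analytical transport solution for each member, and read off prediction bands. The 1-D advection-dispersion solution for an instantaneous injection and all parameter ranges below are illustrative assumptions, not the tool's actual models.

    ```python
    import numpy as np

    # Minimal ensemble forecast for a tracer breakthrough curve.
    rng = np.random.default_rng(1)
    n_ens, x, M = 1000, 10.0, 1.0                # members, obs. distance [m], mass
    t = np.linspace(0.1, 50, 500)                # time [d]; start > 0 avoids /0

    v = rng.normal(0.5, 0.1, n_ens)              # seepage velocity [m/d]
    aL = rng.lognormal(np.log(0.5), 0.3, n_ens)  # longitudinal dispersivity [m]

    curves = np.empty((n_ens, t.size))
    for i in range(n_ens):
        D = aL[i] * abs(v[i])                    # dispersion coefficient [m2/d]
        curves[i] = (M / np.sqrt(4 * np.pi * D * t)
                     * np.exp(-(x - v[i] * t) ** 2 / (4 * D * t)))

    lo, hi = np.percentile(curves, [5, 95], axis=0)   # 90 % prediction band
    print("time of upper-band peak concentration:", t[np.argmax(hi)])
    ```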

  20. Phenotype definition and development--contributions from Group 7.

    PubMed

    Wilcox, Marsha A; Paterson, Andrew D

    2009-01-01

    The papers in Genetic Analysis Workshop 16 Group 7 covered a wide range of topics. The effects of confounder misclassification and selection bias on association results were examined by one group. Another focused on bias introduced by various methods of accounting for treatment effects. Two groups used related methods to derive phenotypic traits. They used different analytic strategies for genetic associations with non-overlapping results (but because they used different sets of single-nucleotide polymorphisms (SNPs) and significance criteria, this is not surprising). Another group relied on the well-characterized definition of type 2 diabetes to show benefits of a novel predictive test. Transmission-ratio distortion was the focus of another paper. The results were extended to show a potential secondary benefit of the test to identify potentially mis-called SNPs. (c) 2009 Wiley-Liss, Inc.

  1. Inverse Scattering and Local Observable Algebras in Integrable Quantum Field Theories

    NASA Astrophysics Data System (ADS)

    Alazzawi, Sabina; Lechner, Gandalf

    2017-09-01

    We present a solution method for the inverse scattering problem for integrable two-dimensional relativistic quantum field theories, specified in terms of a given massive single particle spectrum and a factorizing S-matrix. An arbitrary number of massive particles transforming under an arbitrary compact global gauge group is allowed, thereby generalizing previous constructions of scalar theories. The two-particle S-matrix S is assumed to be an analytic solution of the Yang-Baxter equation with standard properties, including unitarity, TCP invariance, and crossing symmetry. Using methods from operator algebras and complex analysis, we identify sufficient criteria on S that imply the solution of the inverse scattering problem. These conditions are shown to be satisfied in particular by so-called diagonal S-matrices, but presumably also in other cases such as the O(N)-invariant nonlinear σ-models.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altsybeyev, V.V., E-mail: v.altsybeev@spbu.ru; Ponomarev, V.A.

    A particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law for calculating the space-charge-limited current density distribution within this method. Based on the presented emission model, we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and a diode with an elliptical emitter using axisymmetric coordinates are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and comparisons with analytical solutions.
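
    A 1-D illustration of how Gauss's law connects to space-charge-limited emission: for the known SCL potential profile of a planar diode, Gauss's law gives the charge density, energy conservation gives the velocity, and their product reproduces the Child-Langmuir current density. This toy check is only a sketch of the underlying physics; the paper's model generalizes the idea to 2-D curved emitters.

    ```python
    import numpy as np

    # Planar-diode check: phi = V (x/d)^(4/3) is the space-charge-limited
    # solution; rho = eps0 * phi'' (Gauss), v = sqrt(2 (e/m) phi) (energy),
    # and J = rho * v should be constant and equal to Child-Langmuir.
    e_m = 1.758820e11                  # electron charge-to-mass ratio [C/kg]
    eps0 = 8.8541878e-12
    V, d = 1000.0, 0.01                # gap voltage [V] and spacing [m]

    x = np.linspace(1e-4 * d, d, 2000) # avoid x = 0, where phi'' diverges
    phi = V * (x / d) ** (4.0 / 3.0)
    rho = eps0 * np.gradient(np.gradient(phi, x), x)   # Gauss: rho = eps0 phi''
    v = np.sqrt(2 * e_m * phi)                         # energy conservation
    J = rho * v                                        # current density

    child = 4 * eps0 / 9 * np.sqrt(2 * e_m) * V ** 1.5 / d ** 2
    print(float(J[1000]), child)       # J is (nearly) constant and matches C-L
    ```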

  3. Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT.

    PubMed

    Liu, L J; Yu, K X; Zhang, M; Zhuang, G; Li, X; Yuan, T; Rao, B; Zhao, Q

    2016-01-01

    In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.

  4. An improved version of NCOREL: A computer program for 3-D nonlinear supersonic potential flow computations

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1988-01-01

    A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model, and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Other new features are improved internal grid generation using analytic conformal mappings, supported by a simple geometric input in the Harris wave-drag format originally developed for panel methods, and an internal geometry package.

  5. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  6. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  7. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  8. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  9. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  10. SAM Radiochemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.

  11. Advancing Clinical Proteomics via Analysis Based on Biological Complexes: A Tale of Five Paradigms.

    PubMed

    Goh, Wilson Wen Bin; Wong, Limsoon

    2016-09-02

    Despite advances in proteomic technologies, idiosyncratic data issues, for example, incomplete coverage and inconsistency, resulting in large data holes, persist. Moreover, because of naïve reliance on statistical testing and its accompanying p values, differential protein signatures identified from such proteomics data have little diagnostic power. Thus, deploying conventional analytics on proteomics data is insufficient for identifying novel drug targets or precise yet sensitive biomarkers. Complex-based analysis is a new analytical approach that has potential to resolve these issues but requires formalization. We categorize complex-based analysis into five method classes or paradigms and propose an even-handed yet comprehensive evaluation rubric based on both simulated and real data. The first four paradigms are well represented in the literature. The fifth and newest paradigm, the network-paired (NP) paradigm, represented by a method called Extremely Small SubNET (ESSNET), dominates in precision-recall and reproducibility, maintains strong performance in small sample sizes, and sensitively detects low-abundance complexes. In contrast, the commonly used over-representation analysis (ORA) and direct-group (DG) test paradigms maintain good overall precision but have severe reproducibility issues. The other two paradigms considered here are the hit-rate and rank-based network analysis paradigms; both of these have good precision-recall and reproducibility, but they do not consider low-abundance complexes. Therefore, given its strong performance, NP/ESSNET may prove to be a useful approach for improving the analytical resolution of proteomics data. Additionally, given its stability, it may also be a powerful new approach toward functional enrichment tests, much like its ORA and DG counterparts.

  12. CREATE-IP and CREATE-V: Data and Services Update

    NASA Astrophysics Data System (ADS)

    Carriere, L.; Potter, G. L.; Hertz, J.; Peters, J.; Maxwell, T. P.; Strong, S.; Shute, J.; Shen, Y.; Duffy, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center and the Earth System Grid Federation (ESGF) are working together to build a uniform environment for the comparative study and use of a group of reanalysis datasets of particular importance to the research community. This effort is called the Collaborative REAnalysis Technical Environment (CREATE) and it contains two components: the CREATE-Intercomparison Project (CREATE-IP) and CREATE-V. This year's efforts included generating and publishing an atmospheric reanalysis ensemble mean and spread and improving the analytics available through CREATE-V. Related activities included adding access to subsets of the reanalysis data through ArcGIS and expanding the visualization tool to GMAO forecast data. This poster will present the access mechanisms to this data and use cases including example Jupyter Notebook code. The reanalysis ensemble was generated using two methods, first using standard Python tools for regridding, extracting levels and creating the ensemble mean and spread on a virtual server in the NCCS environment. The second was using a new analytics software suite, the Earth Data Analytics Services (EDAS), coupled with a high-performance Data Analytics and Storage System (DASS) developed at the NCCS. Results were compared to validate the EDAS methodologies, and the results, including time to process, will be presented. The ensemble includes selected 6 hourly and monthly variables, regridded to 1.25 degrees, with 24 common levels used for the 3D variables. Use cases for the new data and services will be presented, including the use of EDAS for the backend analytics on CREATE-V, the use of the GMAO forecast aerosol and cloud data in CREATE-V, and the ability to connect CREATE-V data to NCCS ArcGIS services.
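
    The ensemble computation the poster describes can be sketched with xarray: stack several reanalysis fields along a new "reanalysis" dimension and take the mean and standard deviation. The synthetic fields below stand in for the regridded products; real CREATE-IP inputs would be opened with xr.open_dataset from the published files.

    ```python
    import numpy as np
    import xarray as xr

    # Minimal ensemble mean/spread sketch on a common 2-degree grid.
    rng = np.random.default_rng(0)
    lat = np.arange(-89, 90, 2.0)
    lon = np.arange(0, 360, 2.0)

    def fake_reanalysis(bias):
        """Synthetic near-surface temperature field standing in for one product."""
        data = 288 + bias + rng.normal(0, 1.5, (lat.size, lon.size))
        return xr.DataArray(data, coords={"lat": lat, "lon": lon},
                            dims=("lat", "lon"))

    members = [fake_reanalysis(b) for b in (-0.5, 0.0, 0.4)]
    ens = xr.concat(members, dim="reanalysis")   # new ensemble dimension

    ens_mean = ens.mean(dim="reanalysis")        # ensemble mean
    ens_spread = ens.std(dim="reanalysis")       # ensemble spread
    print(float(ens_mean.mean()), float(ens_spread.mean()))
    ```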

  13. Foreign Bodies in Dried Mushrooms Marketed in Italy.

    PubMed

    Schiavo, Maria Rita; Manno, Claudia; Zimmardi, Antonina; Vodret, Bruna; Tilocca, Maria Giovanna; Altissimi, Serena; Haouet, Naceur M

    2015-11-02

    The presence of foreign bodies in mushrooms affects their marketability and may result in health risks to consumers. The inspection of fresh or dried mushrooms is very important today in view of the increased consumption of this kind of food. Ten samples of dried mushrooms collected in supermarkets were examined for evidence of entomological contamination by macro- and microscopic analytical methods, the so-called filth test. A total of 49 determinations, each comprising 15 g of the vegetable matrix, were made. The microscopic filth test consistently detected an irregular distribution of physical contaminants following repeated determinations of the same sample. Visual examination, on the other hand, was not sufficient to ensure a product free of contaminants.

  14. Bohr Hamiltonian for γ = 30° with Davidson potential

    NASA Astrophysics Data System (ADS)

    Yigitoglu, Ibrahim; Gokbulut, Melek

    2018-03-01

    A γ-rigid solution of the Bohr Hamiltonian for γ = 30° is constructed with the Davidson potential in the β part. This solution is called Z(4)-D. The energy eigenvalues and wave functions are obtained by using the analytic method developed by Nikiforov and Uvarov. The calculated intraband and interband B(E2) transition rates are presented and compared with the Z(4) model predictions. The staggering behavior of the γ-bands is considered in the search for Z(4)-D candidate nuclei. A variational procedure is applied to demonstrate that the Z(4) model is a solution of the critical point at the shape phase transition from spherical to rigid triaxial rotor.

  15. Method and apparatus for analyzing error conditions in a massively parallel computer system by identifying anomalous nodes within a communicator set

    DOEpatents

    Gooding, Thomas Michael [Rochester, MN]

    2011-04-19

    An analytical mechanism for a massively parallel computer system automatically analyzes data retrieved from the system, and identifies nodes which exhibit anomalous behavior in comparison to their immediate neighbors. Preferably, anomalous behavior is determined by comparing call-return stack tracebacks for each node, grouping like nodes together, and identifying neighboring nodes which do not themselves belong to the group. A node, not itself in the group, having a large number of neighbors in the group, is a likely locality of error. The analyzer preferably presents this information to the user by sorting the neighbors according to number of adjoining members of the group.
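
    The grouping idea in the abstract can be sketched simply: group nodes by identical call-return stack tracebacks, take the largest group as the "normal" behavior, and flag non-members with many neighbors inside the group as likely localities of error. The 2-D torus neighborhood and the tracebacks below are toy assumptions, not the patented implementation.

    ```python
    import itertools
    from collections import Counter

    # Toy 8x8 torus of nodes, each reporting a stack traceback string.
    N = 8
    tracebacks = {(r, c): "main>compute>mpi_wait"
                  for r, c in itertools.product(range(N), range(N))}
    tracebacks[(3, 4)] = "main>compute>memcpy"   # the anomalous node

    # Group nodes by traceback; the largest group defines "normal" behavior.
    normal = Counter(tracebacks.values()).most_common(1)[0][0]
    members = {n for n, tb in tracebacks.items() if tb == normal}

    def neighbors(node):
        r, c = node
        return [((r - 1) % N, c), ((r + 1) % N, c),
                (r, (c - 1) % N), (r, (c + 1) % N)]

    # Rank non-members by how many of their neighbors belong to the group.
    suspects = sorted((sum(nb in members for nb in neighbors(n)), n)
                      for n in tracebacks if n not in members)
    print(suspects[::-1][:3])   # node (3, 4) tops the list with 4 group neighbors
    ```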

  16. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  17. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  18. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  19. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  20. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  1. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  2. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  3. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...

  4. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  5. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  6. COHERENT NETWORK ANALYSIS FOR CONTINUOUS GRAVITATIONAL WAVE SIGNALS IN A PULSAR TIMING ARRAY: PULSAR PHASES AS EXTRINSIC PARAMETERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Mohanty, Soumya D.; Jenet, Fredrick A., E-mail: ywang12@hust.edu.cn

    2015-12-20

    Supermassive black hole binaries are one of the primary targets of gravitational wave (GW) searches using pulsar timing arrays (PTAs). GW signals from such systems are well represented by parameterized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for PTAs: there are two possible ways in which one can split the set of signal parameters for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based GW searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper. In this paper, we report the first study of the performance of the second approach where the pulsar phases are maximized semi-analytically. This approach is scalable since the number of parameters left over for numerical optimization does not depend on the size of the PTA. Our results show that for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be in the ill-posedness that is intrinsic to any network analysis method. However, the scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.

  7. Time-dependent structural transformation analysis to high-level Petri net model with active state transition diagram.

    PubMed

    Li, Chen; Nagasaki, Masao; Saito, Ayumu; Miyano, Satoru

    2010-04-01

    With an accumulation of in silico data obtained by simulating large-scale biological networks, a new research interest is emerging: elucidating how a living organism functions over time in its cells. Investigating the dynamic features of current computational models promises a deeper understanding of complex cellular processes. This leads us to develop a method that utilizes structural properties of the model over all simulation time steps. Furthermore, user-friendly overviews of dynamic behaviors can greatly help in understanding the variations of system mechanisms. We propose a novel method for constructing and analyzing a so-called active state transition diagram (ASTD) by using time-course simulation data of a high-level Petri net. Our method includes two new algorithms. The first algorithm extracts a series of subnets (called temporal subnets) reflecting biological components contributing to the dynamics, while retaining positive mathematical qualities. The second one creates an ASTD composed of unique temporal subnets. The ASTD provides users with concise information allowing them to grasp and trace how a key regulatory subnet and/or a network changes with time. The applicability of our method is demonstrated by the analysis of the underlying model for circadian rhythms in Drosophila. Building an ASTD is a useful means to convert a hybrid model dealing with discrete, continuous and more complicated events into finite time-dependent states. Based on the ASTD, various analytical approaches can be applied to obtain new insights into not only systematic mechanisms but also dynamics.
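
    The ASTD construction can be sketched in a few lines: each simulation step yields a set of active transitions (a temporal subnet), duplicates are merged into unique states, and edges record how the simulation moves between them. The toy activity trace below stands in for real high-level Petri net simulation output.

    ```python
    # Minimal active-state-transition-diagram sketch over a toy activity trace.
    trace = [
        {"t1", "t2"}, {"t1", "t2"}, {"t2", "t3"},
        {"t1", "t2"}, {"t2", "t3"}, {"t3"},
    ]

    states = {}          # unique temporal subnet -> state id
    edges = set()        # transitions between consecutive unique states
    prev = None
    for active in map(frozenset, trace):
        sid = states.setdefault(active, len(states))
        if prev is not None and prev != sid:
            edges.add((prev, sid))
        prev = sid

    for subnet, sid in states.items():
        print(f"state {sid}: {sorted(subnet)}")
    print("edges:", sorted(edges))
    ```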

  8. Single Particle Analysis by Combined Chemical Imaging to Study Episodic Air Pollution Events in Vienna

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Eitenberger, Elisabeth; Friedbacher, Gernot; Brenner, Florian; Hutter, Herbert; Schauer, Gerhard; Kistler, Magdalena; Greilinger, Marion; Lohninger, Hans; Lendl, Bernhard; Kasper-Giebl, Anne

    2017-04-01

    The aerosol composition of a city like Vienna is characterized by a complex interaction of local emissions and atmospheric input on regional and continental scales. The identification of major aerosol constituents for basic source apportionment and air quality issues requires a high analytical effort. Exceptional episodic air pollution events strongly change the typical aerosol composition of a city like Vienna on a time scale of a few hours to several days. Analyzing the chemistry of particulate matter from these events is often hampered by the sampling time and the related sample amount necessary to apply the full range of bulk analytical methods needed for chemical characterization. Additionally, morphological and single-particle features are hardly accessible. Chemical imaging has evolved into a powerful tool for image-based chemical analysis of complex samples. As a technique complementary to bulk analytical methods, chemical imaging offers a new approach to studying air pollution events by obtaining major aerosol constituents together with single-particle features at high temporal resolution and with small sample volumes. The analysis of the chemical imaging datasets is assisted by multivariate statistics, with the benefit of image-based chemical structure determination for direct aerosol source apportionment. A novel approach in chemical imaging is combined chemical imaging, or so-called multisensor hyperspectral imaging, involving elemental imaging (electron microscopy-based energy-dispersive X-ray imaging), vibrational imaging (Raman micro-spectroscopy) and mass spectrometric imaging (Time-of-Flight Secondary Ion Mass Spectrometry), with subsequent combined multivariate analytics. Combined chemical imaging of precipitated aerosol particles will be demonstrated with the following examples of air pollution events in Vienna: exceptional episodic events like the transformation of Saharan dust by the impact of the city of Vienna will be discussed and compared to samples obtained at a high alpine background site (Sonnblick Observatory, Saharan dust event from April 2016). Further, chemical imaging of biological aerosol constituents of an autumnal pollen outbreak in Vienna, with background samples from nearby locations from November 2016, will demonstrate the advantages of the chemical imaging approach. Additionally, the chemical fingerprint of an exceptional air pollution event from a local emission source, caused by the demolition of a building in Vienna, will illustrate the need for multisensor imaging, especially the combined approach. The obtained chemical images will be correlated with bulk analytical results, and the benefits of combining bulk analytics and combined chemical imaging of exceptional episodic air pollution events will be discussed.

  9. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...

  10. Stable oxygen and hydrogen isotopes of brines - comparing isotope ratio mass spectrometry and isotope ratio infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Ahrens, Christian; Koeniger, Paul; van Geldern, Robert; Stadler, Susanne

    2013-04-01

    Today's standard analytical methods for high-precision stable isotope analysis of fluids are gas-water equilibration and high-temperature pyrolysis coupled to isotope ratio mass spectrometers (IRMS). In recent years, relatively new laser-based analytical instruments have entered the market that are said to allow high-precision isotope data on nearly any medium. This optical technique is referred to as isotope ratio infrared spectroscopy (IRIS). The objective of this study is to evaluate the capability of this new instrument type for highly saline solutions and to compare the analytical results with traditional IRMS analysis. It has been shown for the equilibration method that the presence of salts influences the measured isotope values depending on the salt concentration (see Lécuyer et al., 2009; Martineau et al., 2012). This so-called 'isotope salt effect' depends on the salt type and salt concentration. These factors change the activity in the fluid and therefore shift the isotope ratios measured by the equilibration method. Consequently, correction factors have to be applied to these analytical data. Direct conversion techniques like pyrolysis or the new laser instruments measure the water molecule of the sample directly and should therefore not suffer from the salt effect, i.e., no corrections of raw values are necessary. However, high salt concentrations might cause technical problems with the analytical hardware and may require labor-intensive sample preparation (e.g., vacuum distillation). This study evaluates the isotope salt effect for the IRMS equilibration technique (Thermo Gasbench II coupled to a Delta Plus XP) and for laser-based IRIS instruments with liquid injection (Picarro L2120-i). Synthetic salt solutions (NaCl, KCl, CaCl2, MgCl2, MgSO4, CaSO4) and natural brines collected from the Stassfurt Salt Anticline (Germany; Stadler et al., 2012) were analysed with both techniques. Salt concentrations ranged from seawater salinity up to full saturation. References: Lécuyer, C. et al. (2009). Chem. Geol., 264, 122-126. [doi:10.1016/j.chemgeo.2009.02.017]; Martineau, F. et al. (2012). Chem. Geol., 291, 236-240. [doi:10.1016/j.chemgeo.2011.10.017]; Stadler, S. et al. (2012). Chem. Geol., 294-295, 226-242. [doi:10.1016/j.chemgeo.2011.12.006]

  11. Panel methods: An introduction

    NASA Technical Reports Server (NTRS)

    Erickson, Larry L.

    1990-01-01

    Panel methods are numerical schemes for solving the Prandtl-Glauert equation for linear, inviscid, irrotational flow about aircraft flying at subsonic or supersonic speeds. The tools at the panel-method user's disposal are (1) surface panels of source-doublet-vorticity distributions that can represent nearly arbitrary geometry, and (2) extremely versatile boundary condition capabilities that can frequently be used for creative modeling. Panel-method capabilities and limitations, basic concepts common to all panel-method codes, different choices that were made in the implementation of these concepts into working computer programs, and various modeling techniques involving boundary conditions, jump properties, and trailing wakes are discussed. An approach for extending the method to nonlinear transonic flow is also presented. Three appendices supplement the main text. In appendix 1, additional detail is provided on how the basic concepts are implemented into a specific computer program (PANAIR). In appendix 2, it is shown how to evaluate analytically the fundamental surface integral that arises in the expressions for influence coefficients, and how to evaluate its jump property. In appendix 3, a simple example is used to illustrate the so-called finite part of the improper integrals.

  12. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    Our aim is to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  13. A high-fidelity satellite ephemeris program for Earth satellites in eccentric orbits

    NASA Technical Reports Server (NTRS)

    Simmons, David R.

    1990-01-01

    A program for mission planning called the Analytic Satellite Ephemeris Program (ASEP), produces projected data for orbits that remain fairly close to the Earth. ASEP does not take into account lunar and solar perturbations. These perturbations are accounted for in another program called GRAVE, which incorporates more flexible means of input for initial data, provides additional kinds of output information, and makes use of structural programming techniques to make the program more understandable and reliable. GRAVE was revised, and a new program called ORBIT was developed. It is divided into three major phases: initialization, integration, and output. Results of the program development are presented.

  14. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...

  15. Extremely Fast Numerical Integration of Ocean Surface Wave Dynamics

    DTIC Science & Technology

    2007-09-30

    sub-processor must be added as shown in the blue box of Fig. 1. We first consider the Kadomtsev-Petviashvili (KP) equation η_t + c_0η_x + αηη_x + βη…analytic integration of the so-called "soliton equations," I have discovered how the GFT can be used to solve higher-order equations for which study…analytical study and extremely fast numerical integration of the extended nonlinear Schroedinger equation for fully three-dimensional wave motion

  16. Elegant Ince-Gaussian breathers in strongly nonlocal nonlinear media

    NASA Astrophysics Data System (ADS)

    Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi

    2012-06-01

    A novel class of optical breathers, called elegant Ince-Gaussian breathers, is presented in this paper. They are exact analytical solutions of Snyder and Mitchell's model in an elliptic coordinate system, and their transverse structures are described by Ince polynomials with complex arguments and a Gaussian function. We provide convincing evidence for the correctness of the solutions and the existence of the breathers by comparing the analytical solutions with numerical simulations of the nonlocal nonlinear Schrödinger equation.

  17. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    NASA Astrophysics Data System (ADS)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for the characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. The question of how to baseline-correct hundreds to thousands of ambient aerosol spectra automatically, given the variability in both environmental mixture composition and PTFE baselines, therefore remains open. This study approaches the question by detailing a statistical protocol which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. Guided by qualitative properties of the PTFE background, the goal of smoothing-spline interpolation is to learn the baseline structure in the background region in order to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing-spline baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification, and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R² ≥ 0.94, bias ≤ 0.01 µg m-3, and error ≤ 0.04 µg m-3) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates to those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate that the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze large amounts of data and connect them to a variety of available statistical learning methods applicable to analyte absorbances isolated in atmospheric aerosol samples.
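
    The spline baseline idea can be sketched as follows: fit a smoothing spline only on background (non-analyte) wavenumber regions and predict the baseline under the analyte band. The synthetic spectrum, region limits, and smoothing factor below are illustrative; the paper selects the smoothing parameter from performance metrics on ambient and blank samples.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Synthetic FT-IR-like spectrum: smooth baseline + one analyte band + noise.
    wn = np.linspace(4000, 1500, 1200)                 # wavenumber axis [cm-1]
    baseline_true = 0.2 + 1e-4 * (wn - 1500) + 0.05 * np.sin(wn / 300)
    peak = 0.3 * np.exp(-((wn - 2900) ** 2) / (2 * 40 ** 2))   # C-H band (toy)
    spectrum = (baseline_true + peak
                + np.random.default_rng(2).normal(0, 0.002, wn.size))

    analyte = (wn > 2750) & (wn < 3050)                # region excluded from fit
    bg_x, bg_y = wn[~analyte], spectrum[~analyte]
    order = np.argsort(bg_x)                           # spline needs increasing x
    spline = UnivariateSpline(bg_x[order], bg_y[order], s=0.01)

    corrected = spectrum - spline(wn)                  # baseline-corrected spectrum
    print("recovered peak height ~", float(corrected[analyte].max()))
    ```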

  18. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  19. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  20. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  1. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  2. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  3. Biases and Power for Groups Comparison on Subjective Health Measurements

    PubMed Central

    Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique

    2012-01-01

    Subjective health measurements are increasingly used in clinical research, particularly for patient group comparisons. Two main types of analytical strategies can be used for such data: the so-called classical test theory (CTT), relying on observed scores, and models coming from item response theory (IRT), relying on a response model that relates the item responses to a latent parameter, often called the latent trait. Whether IRT or CTT would be the most appropriate method to compare two independent groups of patients on a patient-reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, group comparison was performed using a t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and whether the group effect was included as a covariate or not. Individual latent trait values were estimated using either a deterministic method or stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-step method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the group covariate Wald test, performed on the random-effects Rasch model. This model displayed the highest observed power, which was similar to the power using the score t-test. These results need to be extended to the case frequently encountered in practice where data are missing and possibly informative. PMID:23115620
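
    A minimal sketch, under assumed simulation parameters, of the score-based CTT comparison described above: responses are generated from a Rasch model for two groups whose latent traits differ, and the groups are compared with a t-test on sum scores. The unbiased IRT analysis (a Wald test on a group covariate in a random-effects Rasch model) requires mixed-model estimation beyond this sketch.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n, items, delta = 200, 10, 0.5
      difficulties = np.linspace(-1.5, 1.5, items)
      theta_a = rng.normal(0.0, 1.0, n)     # group A latent traits
      theta_b = rng.normal(delta, 1.0, n)   # group B latent traits, shifted by delta

      def rasch_responses(theta):
          # Rasch model: P(correct) = logistic(theta - difficulty)
          p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulties[None, :])))
          return (rng.random((theta.size, items)) < p).astype(int)

      scores_a = rasch_responses(theta_a).sum(axis=1)
      scores_b = rasch_responses(theta_b).sum(axis=1)
      t, p = stats.ttest_ind(scores_a, scores_b)
      print(f"score t-test: t = {t:.2f}, p = {p:.4f}")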

  4. Calculation of the critical overdensity in the spherical-collapse approximation

    NASA Astrophysics Data System (ADS)

    Herrera, D.; Waga, I.; Jorás, S. E.

    2017-03-01

    The critical overdensity δ_c is a key concept in estimating the number count of halos for different redshift and halo-mass bins, and it is therefore a powerful tool for comparing cosmological models to observations. There are currently two different prescriptions in the literature for its calculation, namely, the differential-radius and the constant-infinity methods. In this work we show that the latter yields precise results only if we are careful in the definition of the so-called numerical infinities. Although the subtleties we point out are crucial ingredients for an accurate determination of δ_c both in general relativity and in any other gravity theory, we focus on f(R)-modified gravity models in the metric approach; in particular, we use the so-called large-field (F = 1/3) and small-field (F = 0) limits. For both of them, we calculate the relative errors (between our method and the others) in the critical density δ_c, in the comoving number density of halos per logarithmic mass interval n_lnM, and in the number of clusters at a given redshift in a given mass bin N_bin, as functions of the redshift. We have also derived an analytical expression for the density contrast in the linear regime as a function of the collapse redshift z_c and Ω_m0 for any F.
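
    A minimal sketch of the constant-infinity idea in an Einstein-de Sitter background (the paper works in f(R) gravity, where the growth equation is modified): evolve the nonlinear spherical-collapse equation, declare collapse when the overdensity crosses a large "numerical infinity", and bisect the initial overdensity so collapse happens today; the linearly grown initial value is then δ_c. The threshold INF below is exactly the kind of numerical infinity whose definition the paper scrutinizes.

      # Nonlinear spherical collapse in EdS, with scale factor a as time variable;
      # the linear growing mode is delta ~ a, so delta_c = delta_i * (1 / a_i).
      import numpy as np
      from scipy.integrate import solve_ivp

      A_I, INF = 1e-5, 1e7  # initial scale factor and the "numerical infinity"

      def rhs(a, y):
          d, dd = y
          return [dd, -1.5 * dd / a + (4.0 / 3.0) * dd**2 / (1.0 + d)
                  + 1.5 * d * (1.0 + d) / a**2]

      def collapse_a(delta_i):
          hit = lambda a, y: y[0] - INF   # event: overdensity reaches "infinity"
          hit.terminal = True
          sol = solve_ivp(rhs, (A_I, 2.0), [delta_i, delta_i / A_I],
                          events=hit, rtol=1e-8, atol=1e-12)
          return sol.t_events[0][0] if sol.t_events[0].size else np.inf

      lo, hi = 1.0e-5, 3.0e-5             # bracket the initial overdensity
      for _ in range(40):                 # bisect so that collapse occurs at a = 1
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if collapse_a(mid) > 1.0 else (lo, mid)
      print(f"delta_c = {0.5 * (lo + hi) / A_I:.4f}  (EdS analytic value: 1.686)")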

  5. Brownian aggregation rate of colloid particles with several active sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nekrasov, Vyacheslav M.; Yurkin, Maxim A.; Chernyshev, Andrei V., E-mail: chern@ns.kinetics.nsc.ru

    2014-08-14

    We theoretically analyze the aggregation kinetics of colloid particles with several active sites. Such particles (so-called "patchy particles") are well known as chemically anisotropic reactants, but the corresponding rate constant of their aggregation has not yet been established in a convenient analytical form. Using the kinematic approximation for the diffusion problem, we derived an analytical formula for the diffusion-controlled reaction rate constant between two colloid particles (or clusters) with several small active sites under the following assumptions: the relative translational motion is Brownian diffusion, and the isotropic stochastic reorientation of each particle is Markovian and arbitrarily correlated. This formula was shown to produce accurate results in comparison with more sophisticated approaches. Also, to account for the case of a low number of active sites per particle, we used a Monte Carlo stochastic algorithm based on the Gillespie method. Simulations showed that such a discrete model is required when this number is less than 10. Finally, we applied the developed approach to the simulation of immunoagglutination, assuming that the formed clusters have a fractal structure.
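
    A minimal sketch of the discrete stochastic side of this approach, assuming a toy well-mixed system in which the propensity of a pair merging is proportional to the product of their free-site counts (the physical rate constant from the analytical formula would set k_rate): a Gillespie-style simulation of aggregation with explicit site bookkeeping.

      import numpy as np

      rng = np.random.default_rng(1)
      k_rate, sites, n0, t_end = 1e-3, 4, 200, 50.0

      free = [sites] * n0                    # free active sites per cluster
      t = 0.0
      while t < t_end:
          f = np.array(free, dtype=float)
          W = np.triu(np.outer(f, f), k=1)   # pair propensities ~ f_i * f_j, i < j
          a_tot = k_rate * W.sum()
          if a_tot <= 0.0:
              break                          # no reactive pairs left
          t += rng.exponential(1.0 / a_tot)  # Gillespie waiting time
          flat = W.ravel()
          pick = rng.choice(flat.size, p=flat / flat.sum())
          i, j = np.unravel_index(pick, W.shape)
          free.append(free[i] + free[j] - 2)  # a bond consumes one site on each side
          free = [s for idx, s in enumerate(free) if idx not in (i, j)]
      print(f"clusters remaining at t = {t:.1f}: {len(free)}")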

  6. Electronic tongue: An analytical gustatory tool

    PubMed Central

    Latha, Rewanthwar Swathi; Lakshmi, P. K.

    2012-01-01

    Taste is an important organoleptic property governing acceptance of products administered through the mouth. However, the majority of available drugs are bitter in taste. For patient acceptability and compliance, bitter-tasting drugs are masked by adding several flavoring agents. Taste assessment is thus an important quality control parameter for evaluating taste-masked formulations. The primary method for the taste measurement of drug substances and formulations is by human panelists. The use of sensory panelists is difficult and problematic in industry because of the potential toxicity of drugs and the subjectivity of taste panelists; recruiting panelists is also a problem, and motivation and panel maintenance are significantly more difficult when working with unpleasant products. Furthermore, Food and Drug Administration (FDA)-unapproved molecules cannot be tested. Therefore, an analytical taste-sensing multichannel sensory system called the electronic tongue (e-tongue or artificial tongue), which can assess taste, has been replacing sensory panelists. The e-tongue thus offers benefits such as reduced reliance on human panels. The present review focuses on the electrochemical concepts in instrumentation, performance qualification of the e-tongue, and applications in various fields. PMID:22470887

  7. 7 CFR 94.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...

  8. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  9. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  10. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  11. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  12. Who Qualifies for Financial Aid?

    ERIC Educational Resources Information Center

    Deitch, Kenneth M.

    1982-01-01

    The decisions an institution makes about tuition and student aid constitute its pricing policy and locate its market position. An analytical device called the "aid eligibility frontier" is used to analyze the current pricing system; potential problems are discussed. (Author/MSE)

  13. Interpretive Management: What General Managers Can Learn from Design.

    ERIC Educational Resources Information Center

    Lester, Richard K.; Piore, Michael J.; Malek, Kamal M.

    1998-01-01

    An analytical management approach reflects a traditional perspective and an interpretive approach involves a perspective suited to rapidly changing, unpredictable markets. Both approaches are valid, but each serves different purposes and calls for different strategies and skills. (JOW)

  14. Assessing Proposals for New Global Health Treaties: An Analytic Framework.

    PubMed

    Hoffman, Steven J; Røttingen, John-Arne; Frenk, Julio

    2015-08-01

    We have presented an analytic framework and 4 criteria for assessing when global health treaties have reasonable prospects of yielding net positive effects. First, there must be a significant transnational dimension to the problem being addressed. Second, the goals should justify the coercive nature of treaties. Third, proposed global health treaties should have a reasonable chance of achieving benefits. Fourth, treaties should be the best commitment mechanism among the many competing alternatives. Applying this analytic framework to 9 recent calls for new global health treaties revealed that none fully meet the 4 criteria. Efforts aiming to better use or revise existing international instruments may be more productive than is advocating new treaties.

  15. Assessing Proposals for New Global Health Treaties: An Analytic Framework

    PubMed Central

    Røttingen, John-Arne; Frenk, Julio

    2015-01-01

    We have presented an analytic framework and 4 criteria for assessing when global health treaties have reasonable prospects of yielding net positive effects. First, there must be a significant transnational dimension to the problem being addressed. Second, the goals should justify the coercive nature of treaties. Third, proposed global health treaties should have a reasonable chance of achieving benefits. Fourth, treaties should be the best commitment mechanism among the many competing alternatives. Applying this analytic framework to 9 recent calls for new global health treaties revealed that none fully meet the 4 criteria. Efforts aiming to better use or revise existing international instruments may be more productive than is advocating new treaties. PMID:26066926

  16. Considerations regarding the validation of chromatographic mass spectrometric methods for the quantification of endogenous substances in forensics.

    PubMed

    Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra

    2018-02-01

    The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), and matrix effects and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Accurate analysis of parabens in human urine using isotope-dilution ultrahigh-performance liquid chromatography-high resolution mass spectrometry.

    PubMed

    Zhou, Hui-Ting; Chen, Hsin-Chang; Ding, Wang-Hsien

    2018-02-20

    An analytical method that utilizes isotope-dilution ultrahigh-performance liquid chromatography coupled with hybrid quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS, also called UHPLC-HRMS) was developed and validated to be highly precise and accurate for the detection of nine parabens (methyl-, ethyl-, propyl-, isopropyl-, butyl-, isobutyl-, pentyl-, hexyl-, and benzyl-parabens) in human urine samples. After sample preparation by ultrasound-assisted emulsification microextraction (USAEME), the extract was directly injected into the UHPLC-HRMS system. By using negative electrospray ionization in the multiple reaction monitoring (MRM) mode and measuring the peak area ratios of both the natural and the labeled analogues in the samples and calibration standards, the target analytes could be accurately identified and quantified. Another use for the labeled analogues was to correct for systematic errors associated with the analysis, such as the matrix effect and other variations. The limits of quantitation (LOQs) ranged from 0.3 to 0.6 ng/mL. High precision was obtained for both repeatability and reproducibility, ranging from 1 to 8%. High trueness (mean extraction recovery, also called accuracy) ranged from 93 to 107% at two concentration levels. According to preliminary results, the total concentrations of the four most frequently detected parabens (methyl-, ethyl-, propyl-, and butyl-) ranged from 0.5 to 79.1 ng/mL in male urine samples, and from 17 to 237 ng/mL in female urine samples. Interestingly, two infrequently detected parabens, pentyl- and hexyl-, were found in one of the male samples in this study. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Integrating research tools to support the management of social-ecological systems under climate change

    USGS Publications Warehouse

    Miller, Brian W.; Morisette, Jeffrey T.

    2014-01-01

    Developing resource management strategies in the face of climate change is complicated by the considerable uncertainty associated with projections of climate and its impacts and by the complex interactions between social and ecological variables. The broad, interconnected nature of this challenge has resulted in calls for analytical frameworks that integrate research tools and can support natural resource management decision making in the face of uncertainty and complex interactions. We respond to this call by first reviewing three methods that have proven useful for climate change research, but whose application and development have been largely isolated: species distribution modeling, scenario planning, and simulation modeling. Species distribution models provide data-driven estimates of the future distributions of species of interest, but they face several limitations and their output alone is not sufficient to guide complex decisions for how best to manage resources given social and economic considerations along with dynamic and uncertain future conditions. Researchers and managers are increasingly exploring potential futures of social-ecological systems through scenario planning, but this process often lacks quantitative response modeling and validation procedures. Simulation models are well placed to provide added rigor to scenario planning because of their ability to reproduce complex system dynamics, but the scenarios and management options explored in simulations are often not developed by stakeholders, and there is not a clear consensus on how to include climate model outputs. We see these strengths and weaknesses as complementarities and offer an analytical framework for integrating these three tools. We then describe the ways in which this framework can help shift climate change research from useful to usable.

  19. A physically-based analytical model to describe effective excess charge for streaming potential generation in saturated porous media

    NASA Astrophysics Data System (ADS)

    Jougnot, D.; Guarracino, L.

    2016-12-01

    The self-potential (SP) method is considered by most researchers to be the only geophysical method that is directly sensitive to groundwater flow. One source of SP signals, the so-called streaming potential, results from the presence of an electrical double layer at the mineral-pore water interface. When water flows through the pore space, it gives rise to a streaming current and a resulting measurable electrical voltage. Different approaches have been proposed to predict streaming potentials in porous media. One approach is based on the excess charge which is effectively dragged in the medium by the water flow. Following a recent theoretical framework, we developed a physically based analytical model to predict the effective excess charge in saturated porous media. In this study, the porous medium is described by a bundle of capillary tubes with a fractal pore-size distribution. First, an analytical relationship is derived to determine the effective excess charge for a single capillary tube as a function of the pore water salinity. Then, this relationship is used to obtain both exact and approximated expressions for the effective excess charge at the Representative Elementary Volume (REV) scale. The resulting analytical relationship allows the determination of the effective excess charge as a function of pore water salinity, fractal dimension, and hydraulic parameters like porosity and permeability, which are also obtained at the REV scale. This new model has been successfully tested against data from different sources in the literature. One of the main findings of this study is that it provides a mechanistic explanation for the empirical dependence between the effective excess charge and the permeability that has been found by various researchers. The proposed petrophysical relationship also contributes to understanding the role of porosity and water salinity in the effective excess charge and will help to push further the use of streaming potentials to monitor groundwater flow.
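
    A minimal sketch of REV-scale averaging over a fractal capillary bundle, with a placeholder single-tube excess-charge relation Q(r) ~ 1/r² standing in for the paper's salinity-dependent expression (the fractal exponent and radius bounds are likewise illustrative): the effective excess charge is the flux-weighted average over the pore-size distribution.

      import numpy as np
      from scipy.integrate import quad

      D, r_min, r_max = 1.6, 1e-7, 1e-4       # fractal dimension and radius bounds (m)

      pdf = lambda r: r ** (-D - 1.0)          # fractal pore-size number density (unnormalized)
      flux = lambda r: r ** 4                  # Poiseuille volumetric flux per tube
      q_single = lambda r: 1.0 / r ** 2        # placeholder single-tube excess charge

      num, _ = quad(lambda r: q_single(r) * flux(r) * pdf(r), r_min, r_max)
      den, _ = quad(lambda r: flux(r) * pdf(r), r_min, r_max)
      print(f"flux-weighted effective excess charge (arb. units): {num / den:.3e}")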

  20. Profile fitting in crowded astronomical images

    NASA Astrophysics Data System (ADS)

    Manish, Raja

    Around 18,000 known objects currently populate near-Earth space. These constitute active space assets as well as space debris objects. The tracking and cataloging of such objects relies on observations, most of which are ground based. Also, because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, usually CCD detectors are used. The information to be extracted from the frames is each individual object's astrometric position. In order to do so, the center of the object's image on the CCD frame has to be found. However, the observation frames that are read out of the detector are subject to noise from three different sources: celestial background sources, the object signal itself, and the sensor noise. The noise statistics are usually modeled as Gaussian or Poisson distributed, or their combined distribution. In order to achieve near real-time processing, computationally fast and reliable methods for the so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Though the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least-squares (LS) error between the observed data and the theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated. The newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image handling procedures to separate closely packed objects. Subsequent analysis on real celestial images corroborates the effectiveness of the approach.
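
    A minimal sketch of least-squares centroiding on a synthetic star image, assuming a Gaussian point-spread function and Poisson shot noise; the thesis develops an analytic minimizer, whereas this sketch hands the same weighted LS objective to a generic numerical optimizer for comparison.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(2)
      size, true_xy, fwhm, flux, sky = 21, (10.3, 9.6), 4.0, 5e3, 50.0
      sigma = fwhm / 2.3548
      y, x = np.mgrid[0:size, 0:size]

      def model(p):
          amp, x0, y0, s, b = p   # Gaussian PSF plus constant sky background
          return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2)) + b

      truth = model([flux / (2 * np.pi * sigma**2), *true_xy, sigma, sky])
      image = rng.poisson(truth).astype(float)     # Poisson shot noise

      w = 1.0 / np.sqrt(np.maximum(image, 1.0))    # weights ~ 1 / per-pixel sigma
      residual = lambda p: (w * (model(p) - image)).ravel()
      fit = least_squares(residual,
                          x0=[image.max() - sky, size / 2, size / 2, 2.0, np.median(image)])
      print(f"recovered centroid: ({fit.x[1]:.3f}, {fit.x[2]:.3f}) vs true {true_xy}")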

  1. What's in a name: what analyst and patient call each other.

    PubMed

    Barron, Grace Caroline

    2006-01-01

    Awkward moments often arise between patient and analyst involving the question, "What do we call each other?" The manner in which the dyad address each other contains material central to the patient's inner life. Names, like dreams, deserve a privileged status as providing a royal road into the paradoxical analytic relationship and the unconscious conflicts that feed it. Whether an analyst addresses the patient formally, informally, or not at all, awareness of the issues surrounding names is important.

  2. Considerations in detecting CDC select agents under field conditions

    NASA Astrophysics Data System (ADS)

    Spinelli, Charles; Soelberg, Scott; Swanson, Nathaneal; Furlong, Clement; Baker, Paul

    2008-04-01

    Surface Plasmon Resonance (SPR) has become a widely accepted technique for real-time detection of interactions between receptor molecules and ligands. An antibody may serve as receptor and can be attached to the gold surface of the SPR device, while candidate analyte fluids contact the detecting antibody. Minute, but detectable, changes in refractive indices (RI) indicate that analyte has bound to the antibody. A decade ago, an inexpensive, robust, miniature and fully integrated SPR chip, called SPREETA, was developed. University of Washington (UW) researchers subsequently developed a portable, temperature-regulated instrument, called SPIRIT, to simultaneously use eight of these three-channel SPREETA chips. A SPIRIT prototype instrument was tested in the field, coupled to a remote reporting system on a surrogate unmanned aerial vehicle (UAV). Two target protein analytes were released sequentially as aerosols with low analyte concentration during each of three flights and were successfully detected and verified. Laboratory experimentation with a more advanced SPIRIT instrument demonstrated detection of very low levels of several select biological agents that might be employed by bioterrorists. Agent detection under field-like conditions is more challenging, especially as analyte concentrations are reduced and complex matrices are introduced. Two different sample preconditioning protocols have been developed for select agents in complex matrices. Use of these preconditioning techniques has allowed laboratory detection in spiked heavy mud of Francisella tularensis at 10³ CFU/ml, Bacillus anthracis spores at 10³ CFU/ml, Staphylococcal enterotoxin B (SEB) at 1 ng/ml, and Vaccinia virus (a smallpox simulant) at 10⁵ PFU/ml. Ongoing experiments are aimed at simultaneous detection of multiple agents in spiked heavy mud, using a multiplex preconditioning protocol.

  3. Numerical and Experimental Studies on Impact Loaded Concrete Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo

    2006-07-01

    An experimental set-up has been constructed for medium scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario where an aircraft impacts against a nuclear power plant. One goal is to develop and put into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). Loading and structural behaviour, such as the collapse mechanism and the damage grade, will be predicted by simple analytical methods and using the non-linear FE method. In the so-called Riera method, the behavior of the missile material is assumed to be rigid plastic or rigid visco-plastic. Using elastic-plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, typically, the impact force time history, the velocity of the missile rear end and the missile shortening during the impact were recorded for comparisons. (authors)

  4. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of a composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived by using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle, subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized. Various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.
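
    A minimal sketch of the optimization step under toy assumptions: first-order Arrhenius cure kinetics with made-up constants and a two-stage hold-temperature profile, solved with a sequential quadratic programming routine in the same spirit as the recursive quadratic programming algorithm named above.

      import numpy as np
      from scipy.optimize import minimize

      A, E, R = 1.5e5, 6.8e4, 8.314   # toy Arrhenius constants (1/s, J/mol, J/mol/K)
      stage_time = 1800.0             # two 30-minute hold stages

      def final_cure(temps):
          alpha = 0.0
          for T in temps:             # exact update for first-order kinetics
              rate = A * np.exp(-E / (R * T))
              alpha = 1.0 - (1.0 - alpha) * np.exp(-rate * stage_time)
          return alpha

      objective = lambda temps: temps.mean()   # proxy: cooler holds cost less energy
      cure_ok = {"type": "ineq", "fun": lambda temps: final_cure(temps) - 0.90}
      res = minimize(objective, x0=np.array([420.0, 440.0]), method="SLSQP",
                     bounds=[(360.0, 460.0)] * 2, constraints=[cure_ok])
      print(f"hold temperatures (K): {res.x.round(1)}, cure: {final_cure(res.x):.3f}")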

  5. Fuel management optimization using genetic algorithms and code independence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1994-12-31

    Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
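
    A minimal sketch of these operators on a toy problem, assuming a 1D "core" whose loading pattern should flatten a neighbor-sum power proxy (a stand-in for the reactor physics code that evaluates a real pattern): tournament selection, order crossover for permutations, and swap mutation.

      import numpy as np

      rng = np.random.default_rng(3)
      react = np.linspace(0.5, 1.5, 16)          # toy assembly reactivities
      pop_size, gens, p_mut = 60, 120, 0.2

      def fitness(perm):
          local = react[perm] + np.roll(react[perm], 1)   # neighbor-pair power proxy
          return -local.max()                             # flatter is fitter

      def order_crossover(a, b):
          n = len(a); i, j = sorted(rng.choice(n, 2, replace=False))
          child = -np.ones(n, dtype=int); child[i:j] = a[i:j]
          fill = [g for g in b if g not in child[i:j]]    # keep b's gene order
          child[child < 0] = fill
          return child

      pop = [rng.permutation(react.size) for _ in range(pop_size)]
      for _ in range(gens):
          new = []
          for _ in range(pop_size):
              # tournament selection: best of 3 random candidates becomes a parent
              pa = max(rng.choice(pop_size, 3, replace=False), key=lambda m: fitness(pop[m]))
              pb = max(rng.choice(pop_size, 3, replace=False), key=lambda m: fitness(pop[m]))
              child = order_crossover(pop[pa], pop[pb])
              if rng.random() < p_mut:                    # swap mutation
                  i, j = rng.choice(len(child), 2, replace=False)
                  child[i], child[j] = child[j], child[i]
              new.append(child)
          pop = new
      best = max(pop, key=fitness)
      print(f"best pattern: {best}, peak proxy: {-fitness(best):.3f}")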

  6. Method for characterization of low molecular weight organic acids in atmospheric aerosols using ion chromatography mass spectrometry.

    PubMed

    Brent, Lacey C; Reiner, Jessica L; Dickerson, Russell R; Sander, Lane C

    2014-08-05

    The structural composition of PM2.5 monitored in the atmosphere is usually divided, by analysis, into organic carbon, black (also called elemental) carbon, and inorganic salts. The characterization of the chemical composition of aerosols represents a significant challenge to analysts, and studies are frequently limited to the determination of aerosol bulk properties. To better understand the potential health effects and combined interactions of components in aerosols, a variety of measurement techniques for individual analytes in PM2.5 need to be implemented. The method developed here for the measurement of organic acids achieves class separation of aliphatic monoacids, aliphatic diacids, aromatic acids, and polyacids. The selected ion monitoring capability of a triple quadrupole mass analyzer was frequently capable of overcoming instances of incomplete separation. Standard Reference Material (SRM) 1649b Urban Dust was characterized; 34 organic acids were qualitatively identified, and 6 organic acids were quantified.

  7. Theoretical model for Sub-Doppler Cooling with EIT System

    NASA Astrophysics Data System (ADS)

    He, Peiru; Tengdin, Phoebe; Anderson, Dana; Rey, Ana Maria; Holland, Murray

    2016-05-01

    We propose a sub-Doppler cooling mechanism that takes advantage of the unique spectral features and extreme dispersion generated by the so-called Electromagnetically Induced Transparency (EIT) effect, a destructive quantum interference phenomenon experienced by atoms with Lambda-shaped energy levels when illuminated by two light fields with appropriate frequencies. By detuning the probe lasers slightly from the "dark resonance", we observe that atoms can be significantly cooled by the strong viscous force within the transparency window, while being only slightly heated by the diffusion caused by the small absorption near resonance. In contrast to polarization gradient cooling or EIT sideband cooling, no external magnetic field or external confining potential is required. Using a semi-classical method, analytical expressions, and numerical simulations, we demonstrate that the proposed EIT cooling method can lead to temperatures well below the Doppler limit. This work is supported by NSF and NIST.

  8. Numerical method for computing Maass cusp forms on triply punctured two-sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, K. T.; Kamari, H. M.; Zainuddin, H.

    2014-03-05

    A quantum mechanical system on a punctured surface modeled on hyperbolic space has always been an important subject of research in mathematics and physics. The corresponding quantum system is governed by the Schrödinger equation, whose solutions are the Maass waveforms. Spectral studies of these Maass waveforms are known to yield both continuous and discrete eigenvalues. The discrete eigenfunctions are usually called the Maass Cusp Forms (MCF), and their discrete eigenvalues are not known analytically. We introduce a numerical method based on the Hejhal and Then algorithm, using GridMathematica, for computing MCF on a punctured surface with three cusps, namely the triply punctured two-sphere. We also report on a pullback algorithm for the punctured surface and a point locater algorithm to facilitate the complete pullback, which are essential parts of the main algorithm.

  9. Dynamic Analysis of Large In-Space Deployable Membrane Antennas

    NASA Technical Reports Server (NTRS)

    Fang, Houfei; Yang, Bingen; Ding, Hongli; Hah, John; Quijano, Ubaldo; Huang, John

    2006-01-01

    This paper presents a vibration analysis of an eight-meter diameter membrane reflectarray antenna, which is composed of a thin membrane and a deployable frame. This analysis process has two main steps. In the first step, a two-variable-parameter (2-VP) membrane model is developed to determine the in-plane stress distribution of the membrane due to pre-tensioning, which eventually yields the differential stiffness of the membrane. In the second step, the obtained differential stiffness is incorporated in a dynamic equation governing the transverse vibration of the membrane-frame assembly. This dynamic equation is then solved by a semi-analytical method, called the Distributed Transfer Function Method (DTFM), which produces the natural frequencies and mode shapes of the antenna. The combination of the 2-VP model and the DTFM provides an accurate prediction of the in-plane stress distribution and modes of vibration for the antenna.

  10. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations in cases where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling processes. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid the necessity of arbitrarily defining the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.

  11. The second Eshelby problem and its solvability

    NASA Astrophysics Data System (ADS)

    Zou, Wen-Nan; Zheng, Quan-Shui

    2012-10-01

    It is still a challenge to clarify the dependence of the overall elastic properties of heterogeneous materials on the microstructures of non-ellipsoidal inhomogeneities (cracks, pores, foreign particles). From the theory of elasticity, the formulation of the perturbance elastic fields, coming from a non-ellipsoidal inhomogeneity embedded in an infinitely extended material with remote constant loading, inevitably involves one or more integral equations. Up to now, due to the mathematical difficulty, almost no explicit analytical solutions have been obtained except for the ellipsoidal inhomogeneity. In this paper, we point out the impossibility of transforming this inhomogeneity problem into a conventional Eshelby problem by the equivalent inclusion method, even if the eigenstrain is chosen to be non-uniform. We also build up an equivalent model, called the second Eshelby problem, to investigate the perturbance stress. It is probably a better template for making use of the profound methods and results of conventional Eshelby problems of non-ellipsoidal inclusions.

  12. Pressure Self-focusing Effect and Novel Methods for Increasing the Maximum Pressure in Traditional and Rotational Diamond Anvil Cells.

    PubMed

    Feng, Biao; Levitas, Valery I

    2017-04-21

    The main principles of producing a region with a very high pressure gradient, and consequently very high pressure, near the center of a sample compressed in a diamond anvil cell (DAC) are predicted theoretically. The revealed phenomenon of generating an extremely high pressure gradient is called the pressure self-focusing effect. Initial analytical predictions utilized a generalization of a simplified equilibrium equation. The results are then refined using our recent advanced model for elastoplastic material under high pressures in finite element method (FEM) simulations. The main points in producing the pressure self-focusing effect are to use beveled anvils and to reach a very small sample thickness at the center. We find that the superposition of torsion in a rotational DAC (RDAC) drastically enhances the pressure self-focusing effect and allows one to reach the same pressure under a much lower force and deformation of the anvils.

  13. Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L. J.; Yu, K. X.; Zhang, M., E-mail: zhangming@hust.edu.cn

    2016-01-15

    In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.

  14. Comparing optical test methods for a lightweight primary mirror of a space-borne Cassegrain telescope

    NASA Astrophysics Data System (ADS)

    Lin, Wei-Cheng; Chang, Shenq-Tsong; Yu, Zong-Ru; Lin, Yu-Chuan; Ho, Cheng-Fong; Huang, Ting-Ming; Chen, Cheng-Huan

    2014-09-01

    A Cassegrain telescope with a 450 mm clear aperture was developed for use in a spaceborne optical remote-sensing instrument. Self-weight deformation and thermal distortion were considered; to this end, Zerodur was used to manufacture the primary mirror. The lightweight scheme adopted a hexagonal cell structure yielding a lightweight ratio of 50%. In general, optical testing of a lightweight mirror is a critical technique during both the manufacturing and assembly processes. To prevent unexpected measurement errors that cause erroneous judgment, this paper proposes a novel and reliable analytical method for optical testing, called the bench test. The proposed algorithm was used to distinguish the manufacturing form error from surface deformation caused by mounting, support, and gravity effects during optical testing. The performance of the proposed bench test was compared with a conventional vertical setup for optical testing during the manufacturing process of the lightweight mirror.

  15. Subcarrier intensity modulation for MIMO visible light communications

    NASA Astrophysics Data System (ADS)

    Celik, Yasin; Akan, Aydin

    2018-04-01

    In this paper, subcarrier intensity modulation (SIM) is investigated for multiple-input multiple-output (MIMO) visible light communication (VLC) systems. A new modulation scheme called DC-aid SIM (DCA-SIM) is proposed for the spatial modulation (SM) transmission plan. DCA-SIM is then extended to the multiple-subcarrier case, called DC-aid Multiple Subcarrier Modulation (DCA-MSM). Bit error rate (BER) performances of the considered system are analyzed for different MIMO schemes. The power efficiencies of DCA-SIM and DCA-MSM are shown in correlated MIMO VLC channels. The upper-bound BER performances of the proposed models are obtained analytically for PSK and QAM modulation types in order to validate the simulation results. Additionally, the effect of the power imbalance method on the performance of SIM is studied, and remarkable power gains are obtained compared to the non-power-imbalanced cases. In this work, pulse amplitude modulation (PAM) and MSM-Index are used as benchmarks for the single-carrier and multiple-carrier cases, respectively. The results show that the proposed schemes outperform PAM and MSM-Index for the considered single-carrier and multiple-carrier communication scenarios.

  16. Combined Numerical/Analytical Perturbation Solutions of the Navier-Stokes Equations for Aerodynamic Ejector/Mixer Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DeChant, Lawrence Justin

    1998-01-01

    In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development is that of reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary layer approaches but provides important two-dimensional information not available using quasi-1-D approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast that it may be used either as a subroutine or called by a design optimization routine. The models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from the open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M. These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.

  17. Application of fast Fourier transform cross-correlation and mass spectrometry data for accurate alignment of chromatograms.

    PubMed

    Zheng, Yi-Bao; Zhang, Zhi-Min; Liang, Yi-Zeng; Zhan, De-Jian; Huang, Jian-Hua; Yun, Yong-Huan; Xie, Hua-Lin

    2013-04-19

    Chromatography has been established as one of the most important analytical methods in the modern analytical laboratory. However, preprocessing of chromatograms, especially peak alignment, is usually a time-consuming task prior to extracting useful information from the datasets, because of small unavoidable differences in the experimental conditions caused by minor changes and drift. Most alignment algorithms are performed on reduced datasets using only the detected peaks in the chromatograms, which means a loss of data and introduces the problem of extracting peak data from the chromatographic profiles. These disadvantages can be overcome by using the full chromatographic information that is generated by hyphenated chromatographic instruments. A new alignment algorithm called CAMS (Chromatogram Alignment via Mass Spectra) is presented here to correct the retention time shifts among chromatograms accurately and rapidly. In this report, peaks of each chromatogram were detected based on the Continuous Wavelet Transform (CWT) with the Haar wavelet and were aligned against the reference chromatogram via the correlation of mass spectra. The alignment procedure was accelerated by fast Fourier transform cross-correlation (FFT cross-correlation). This approach has been compared with several well-known alignment methods on real chromatographic datasets, which demonstrates that CAMS can preserve the shape of peaks and achieve a high-quality alignment result. Furthermore, the CAMS method was implemented in the Matlab language and is available as an open source package at http://www.github.com/matchcoder/CAMS. Copyright © 2013. Published by Elsevier B.V.
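
    A minimal sketch of the FFT cross-correlation step named above, reduced to a single synthetic total-ion trace (CAMS itself correlates the mass spectra around CWT-detected peaks): the estimated lag is the shift that best aligns the sample chromatogram to the reference.

      import numpy as np

      def fft_shift(reference, sample):
          """Return shift s such that np.roll(sample, s) best matches reference."""
          n = len(reference) + len(sample) - 1       # zero-pad to avoid wrap-around
          corr = np.fft.irfft(np.fft.rfft(reference, n) * np.conj(np.fft.rfft(sample, n)), n)
          lags = np.concatenate([np.arange(0, len(reference)),
                                 np.arange(-len(sample) + 1, 0)])
          return lags[np.argmax(corr)]

      t = np.linspace(0.0, 10.0, 1000)
      ref = (np.exp(-0.5 * ((t - 4.0) / 0.10) ** 2)
             + np.exp(-0.5 * ((t - 7.0) / 0.15) ** 2))   # two chromatographic peaks
      shifted = np.roll(ref, 25)                         # simulate retention drift
      print(f"estimated shift: {fft_shift(ref, shifted)} scans")  # -25: roll back by 25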

  18. Modeling landslide recurrence in Seattle, Washington, USA

    USGS Publications Warehouse

    Salciarini, Diana; Godt, Jonathan W.; Savage, William Z.; Baum, Rex L.; Conversini, Pietro

    2008-01-01

    To manage the hazard associated with shallow landslides, decision makers need an understanding of where and when landslides may occur. A variety of approaches have been used to estimate the hazard from shallow, rainfall-triggered landslides, such as empirical rainfall threshold methods or probabilistic methods based on historical records. The wide availability of Geographic Information Systems (GIS) and digital topographic data has led to the development of analytic methods for landslide hazard estimation that couple steady-state hydrological models with slope stability calculations. Because these methods typically neglect the transient effects of infiltration on slope stability, their results cannot be linked with historical or forecasted rainfall sequences. Estimates of the frequency of conditions likely to cause landslides are critical for quantitative risk and hazard assessments. We present results to demonstrate how a transient infiltration model coupled with an infinite slope stability calculation may be used to assess shallow landslide frequency in the City of Seattle, Washington, USA. A module called CRF (Critical RainFall) for estimating deterministic rainfall thresholds has been integrated into the TRIGRS (Transient Rainfall Infiltration and Grid-based Slope-Stability) model, which combines a transient, one-dimensional analytic solution for pore-pressure response to rainfall infiltration with an infinite slope stability calculation. Input data for the extended model include topographic slope, colluvial thickness, initial water-table depth, material properties, and rainfall durations. This approach is combined with a statistical treatment of rainfall using a GEV (General Extreme Value) probabilistic distribution to produce maps showing, on a spatially distributed basis, the shallow landslide recurrence induced as a function of rainfall duration and hillslope characteristics.
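
    A minimal sketch of the infinite-slope stability calculation at the core of TRIGRS-type models, with illustrative soil parameters: the factor of safety FS = tan(phi)/tan(theta) + [c - psi*gamma_w*tan(phi)] / (gamma_s*d*sin(theta)*cos(theta)) drops toward failure (FS < 1) as the transient pore-pressure head psi rises during a storm; the CRF module searches for the rainfall conditions that drive FS to unity.

      import numpy as np

      def factor_of_safety(slope_deg, depth, psi, c=4.0e3, phi_deg=33.0,
                           gamma_s=2.0e4, gamma_w=9.81e3):
          # slope_deg: hillslope angle; depth: slip depth (m); psi: pressure head (m)
          th, ph = np.radians(slope_deg), np.radians(phi_deg)
          return (np.tan(ph) / np.tan(th)
                  + (c - psi * gamma_w * np.tan(ph))
                  / (gamma_s * depth * np.sin(th) * np.cos(th)))

      # Rising pore pressure during a storm pushes FS below 1 (failure).
      for psi in (0.0, 0.5, 1.0, 1.5):
          print(f"psi = {psi:.1f} m -> FS = {factor_of_safety(35.0, 2.0, psi):.2f}")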

  19. Modal Decomposition of TTV: Inferring Planet Masses and Eccentricities

    NASA Astrophysics Data System (ADS)

    Linial, Itai; Gilbaum, Shmuel; Sari, Re’em

    2018-06-01

    Transit timing variations (TTVs) are a powerful tool for characterizing the properties of transiting exoplanets. However, inferring planet properties from the observed timing variations is a challenging task, which is usually addressed by extensive numerical searches. We propose a new, computationally inexpensive method for inverting TTV signals in a planetary system of two transiting planets. To the lowest order in planetary masses and eccentricities, TTVs can be expressed as a linear combination of three functions, which we call the TTV modes. These functions depend only on the planets’ linear ephemerides, and can be either constructed analytically, or by performing three orbital integrations of the three-body system. Given a TTV signal, the underlying physical parameters are found by decomposing the data as a sum of the TTV modes. We demonstrate the use of this method by inferring the mass and eccentricity of six Kepler planets that were previously characterized in other studies. Finally we discuss the implications and future prospects of our new method.
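
    A minimal sketch of the decomposition step with placeholder modes (sinusoids at an assumed superperiod stand in for the true mode functions, which depend on the pair's linear ephemerides): once the three modes are tabulated, the inversion is a single linear least-squares solve.

      import numpy as np

      rng = np.random.default_rng(4)
      n_transits = 60
      epoch = np.arange(n_transits)
      super_phase = 2.0 * np.pi * epoch / 19.0   # assumed superperiod, for illustration

      # placeholder TTV modes (the real ones come analytically or from three
      # orbital integrations of the three-body system)
      modes = np.column_stack([np.sin(super_phase), np.cos(super_phase),
                               np.sin(2.0 * super_phase)])

      true_coeffs = np.array([3.0, -1.2, 0.4])   # minutes; map to mass/eccentricity
      ttv = modes @ true_coeffs + rng.normal(0.0, 0.5, n_transits)

      coeffs, *_ = np.linalg.lstsq(modes, ttv, rcond=None)
      print(f"recovered mode amplitudes: {coeffs.round(2)} (truth {true_coeffs})")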

  20. Non-Gradient Blue Native Polyacrylamide Gel Electrophoresis.

    PubMed

    Luo, Xiaoting; Wu, Jinzi; Jin, Zhen; Yan, Liang-Jun

    2017-02-02

    Gradient blue native polyacrylamide gel electrophoresis (BN-PAGE) is a well established and widely used technique for activity analysis of high-molecular-weight proteins, protein complexes, and protein-protein interactions. Since its inception in the early 1990s, a variety of minor modifications have been made to this gradient gel analytical method. Here we provide a major modification of the method, which we call non-gradient BN-PAGE. The procedure, similar to that of non-gradient SDS-PAGE, is simple because there is no expensive gradient maker involved. The non-gradient BN-PAGE protocols presented herein provide guidelines on the analysis of mitochondrial protein complexes, in particular, dihydrolipoamide dehydrogenase (DLDH) and those in the electron transport chain. Protocols for the analysis of blood esterases or mitochondrial esterases are also presented. The non-gradient BN-PAGE method may be tailored for analysis of specific proteins according to their molecular weight regardless of whether the target proteins are hydrophobic or hydrophilic. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.

  1. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  2. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2011-04-01 2011-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  3. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  4. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2012-04-01 2012-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  5. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  6. Towards automated human gait disease classification using phase space representation of intrinsic mode functions

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Patra, Sayantani; Pratiher, Souvik

    2017-06-01

    A novel analytical methodology for distinguishing healthy from neurologically disordered gait patterns is proposed by employing a set of oscillating components called intrinsic mode functions (IMFs). These IMFs are generated by empirical mode decomposition of the gait time series, and the Hilbert-transformed analytic signal representation forms the complex-plane trace of the elliptically shaped analytic IMFs. The area measure and the relative change in the centroid position of the polygon formed by the convex hull of these analytic IMFs are taken as the discriminative features. A classification accuracy of 79.31% with an ensemble-learning-based AdaBoost classifier validates the adequacy of the proposed methodology for a computer-aided diagnostic (CAD) system for gait pattern identification. The efficacy of several other potential biomarkers, such as the bandwidths of the amplitude-modulation and frequency-modulation IMFs and their mean frequencies from the Fourier-Bessel expansion of each analytic IMF, is also discussed for its potency in the diagnosis and classification of gait patterns.
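
    A minimal sketch of the feature extraction described above, with a synthetic amplitude-modulated oscillation standing in for a true EMD-derived IMF: the Hilbert transform gives the analytic signal, whose complex-plane trace yields convex-hull area and centroid features.

      import numpy as np
      from scipy.signal import hilbert
      from scipy.spatial import ConvexHull

      t = np.linspace(0.0, 2.0, 2000)
      imf = (1.0 + 0.3 * np.sin(2 * np.pi * 1.5 * t)) * np.sin(2 * np.pi * 12.0 * t)

      analytic = hilbert(imf)                        # complex analytic signal
      points = np.column_stack([analytic.real, analytic.imag])
      hull = ConvexHull(points)

      area = hull.volume                             # in 2D, `volume` is the area
      centroid = points[hull.vertices].mean(axis=0)  # hull-vertex centroid (simple proxy)
      print(f"hull area: {area:.3f}, hull centroid: {centroid.round(3)}")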

  7. Analytic information processing style in epilepsy patients.

    PubMed

    Buonfiglio, Marzia; Di Sabato, Francesco; Mandillo, Silvia; Albini, Mariarita; Di Bonaventura, Carlo; Giallonardo, Annateresa; Avanzini, Giuliano

    2017-08-01

    Relevant to the study of epileptogenesis is learning processing, given the pivotal role that neuroplasticity plays in both mechanisms. Recently, evoked potential analyses showed a link between analytic cognitive style and altered neural excitability in both migraine and healthy subjects, regardless of cognitive impairment or psychological disorders. In this study we evaluated the analytic/global and visual/auditory perceptual dimensions of cognitive style in patients with epilepsy. Twenty-five cryptogenic temporal lobe epilepsy (TLE) patients, matched with 25 idiopathic generalized epilepsy (IGE) sufferers and 25 healthy volunteers, were recruited and participated in three cognitive style tests: the "Sternberg-Wagner Self-Assessment Inventory", the C. Cornoldi test series called AMOS, and the Mariani Learning Style Questionnaire. Our results demonstrate a significant association between analytic cognitive style and both IGE and TLE, with a predominantly auditory analytic style in IGE and a predominantly visual analytic style in TLE, respectively (ANOVA: p values < 0.0001). These findings should encourage further research to investigate information processing style and its neurophysiological correlates in epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Izacard, Olivier

    2016-08-01

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal distribution function, and a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As the main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.
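
    The flavor of such kinetic corrections can be reproduced numerically. The sketch below is a generic illustration, not the paper's method: the distributions are left unnormalized and all prefactors are computed numerically, so no closed-form constants are assumed. It compares speed moments of a Kappa distribution against a Maxwellian, showing how super-thermal tails enhance higher-order moments:

        import numpy as np
        from scipy.integrate import quad

        # Unnormalized isotropic speed distributions.
        def maxwellian(v, vth=1.0):
            return np.exp(-(v / vth) ** 2)

        def kappa_dist(v, kappa=3.0, vth=1.0):
            return (1.0 + v**2 / (kappa * vth**2)) ** (-(kappa + 1.0))

        def moment(f, n):
            # <v^n> = Int v^(n+2) f(v) dv / Int v^2 f(v) dv for an isotropic f
            num, _ = quad(lambda v: v ** (n + 2) * f(v), 0.0, np.inf)
            den, _ = quad(lambda v: v**2 * f(v), 0.0, np.inf)
            return num / den

        # Ratio of Kappa to Maxwellian moments: super-thermal tails enhance the
        # higher-order moments that drive e.g. secondary electron emission.
        for n in (1, 2, 3):
            print(n, moment(kappa_dist, n) / moment(maxwellian, n))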

  9. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier, E-mail: izacard@llnl.gov

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal distribution function, and a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As the main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.

  10. SAM Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation

  11. The contribution of lot-to-lot variation to the measurement uncertainty of an LC-MS-based multi-mycotoxin assay.

    PubMed

    Stadler, David; Sulyok, Michael; Schuhmacher, Rainer; Berthiller, Franz; Krska, Rudolf

    2018-05-01

    Multi-mycotoxin determination by LC-MS is commonly based on external solvent-based or matrix-matched calibration and, if necessary, correction for the method bias. In everyday practice, the method bias (expressed as apparent recovery R_A), which may be caused by losses during the recovery process and/or signal suppression/enhancement, is evaluated by replicate analysis of a single spiked lot of a matrix. However, R_A may vary between different lots of the same matrix (lot-to-lot variation), which can result in a higher relative expanded measurement uncertainty (U_r). We applied a straightforward procedure for the calculation of U_r from the within-laboratory reproducibility, also called intermediate precision, and the uncertainty of R_A (u_r,RA). To estimate the contribution of the lot-to-lot variation to U_r, the measurement results of one replicate of seven different lots of figs and maize and of seven replicates of a single lot of these matrices, respectively, were used to calculate U_r. The lot-to-lot variation contributed to u_r,RA, and thus to U_r, for the majority of the 66 evaluated analytes in both figs and maize. The major contributions of the lot-to-lot variation to u_r,RA were differences in analyte recovery in figs and relative matrix effects in maize. A U_r of 58% was estimated from long-term participation in proficiency test schemes. Provided proper validation, a fit-for-purpose U_r of 50% was proposed for measurement results obtained by an LC-MS-based multi-mycotoxin assay, independent of the concentration of the analytes.
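
    The uncertainty budget described above reduces to simple error propagation. A minimal sketch (illustrative numbers, not values from the study) combines the relative intermediate precision with the relative uncertainty of the apparent recovery into a relative expanded uncertainty U_r at coverage factor k = 2:

        import math

        # Illustrative numbers, not values from the study.
        s_ip = 0.12     # relative within-laboratory reproducibility (intermediate precision)
        u_r_ra = 0.15   # relative uncertainty of the apparent recovery R_A,
                        # here dominated by lot-to-lot variation

        u_c = math.sqrt(s_ip**2 + u_r_ra**2)   # combined relative standard uncertainty
        U_r = 2.0 * u_c                        # expanded uncertainty, coverage factor k = 2
        print(f"U_r = {100.0 * U_r:.0f} %")    # -> 38 %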

  12. Improved DNA hybridization parameters by Twisted Intercalating Nucleic Acid (TINA).

    PubMed

    Schneider, Uffe Vest

    2012-01-01

    This thesis establishes oligonucleotide design rules and applications for a novel group of DNA-stabilizing molecules collectively called Twisted Intercalating Nucleic Acid - TINA. Three peer-reviewed publications form the basis for the thesis. One publication describes an improved and rapid method for determination of DNA melting points, and two publications describe the effects of positioning TINA molecules in parallel triplex-helix- and antiparallel duplex-helix-forming DNA structures. The third publication establishes that oligonucleotides containing TINA molecules improve the analytical sensitivity of an antiparallel duplex-hybridization-based capture assay compared to conventional DNA oligonucleotides. Clinical microbiology is traditionally based on culture of pathogenic microorganisms and serological tests. The introduction of DNA target amplification methods like PCR has improved the analytical sensitivity and total turnaround time involved in clinical diagnostics of infections. Due to the relatively weak hybridization between the two strands of double-stranded DNA, a number of nucleic acid stabilizing molecules have been developed to improve the sensitivity of DNA-based diagnostics through superior binding properties. A short introduction is given to Watson-Crick and Hoogsteen based DNA binding and the derived DNA structures. A number of other nucleic acid stabilizing molecules are described. The stabilizing effect of TINA molecules on different DNA structures is discussed and considered in relation to other nucleic acid stabilizing molecules and in relation to future use of TINA-containing oligonucleotides in clinical diagnostics and therapy. In conclusion, design of TINA-modified oligonucleotides for antiparallel duplex helixes and parallel triplex helixes follows simple purpose-dependent rules. TINA molecules are well suited for improving multiplex PCR assays and can be used as part of novel technologies. Future research should test whether combinations of TINA molecules and other nucleic acid stabilizing molecules can increase analytical sensitivity whilst maintaining nucleobase mismatch discrimination in triplex-helix-based diagnostic assays.

  13. Adaptive Variable Bias Magnetic Bearing Control

    NASA Technical Reports Server (NTRS)

    Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.

    1998-01-01

    Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. Because of the bias current, there is always some power consumption, even under no-load conditions. In aerospace applications, power consumption becomes an important concern. In response to this concern, an alternative magnetic bearing control method, called Adaptive Variable Bias Control (AVBC), has been developed and its performance examined. The AVBC operates primarily as a proportional-derivative controller with a relatively slow, bias-current-dependent, time-varying gain. The AVBC is shown to reduce electrical power loss, be nominally stable, and provide control performance similar to conventional bias control. Analytical, computer simulation, and experimental results are presented in this paper.

  14. Compact high order schemes with gradient-direction derivatives for absorbing boundary conditions

    NASA Astrophysics Data System (ADS)

    Gordon, Dan; Gordon, Rachel; Turkel, Eli

    2015-09-01

    We consider several compact high order absorbing boundary conditions (ABCs) for the Helmholtz equation in three dimensions. A technique called "the gradient method" (GM) for ABCs is also introduced and combined with the high order ABCs. GM is based on the principle of using directional derivatives in the direction of the wavefront propagation. The new ABCs are used together with the recently introduced compact sixth order finite difference scheme for variable wave numbers. Experiments on problems with known analytic solutions produced very accurate results, demonstrating the efficacy of the high order schemes, particularly when combined with GM. The new ABCs are then applied to the SEG/EAGE Salt model, showing the advantages of the new schemes.

  15. Isotope Inversion Experiment evaluating the suitability of calibration in surrogate matrix for quantification via LC-MS/MS-Exemplary application for a steroid multi-method.

    PubMed

    Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H

    2016-05-30

    For quotable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts with respect to their functions, i.e. the SIL compound is the analyte and the nonlabelled substance is employed as internal standard. As a consequence, both surrogate and authentic matrix are analyte-free regarding the SIL analytes, which allows a comparison of both matrices. We called this approach the Isotope Inversion Experiment. As a figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application an LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment in the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were overall very satisfactory. As a consequence, the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated. The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
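
    A minimal numeric sketch of the role inversion (all peak areas and concentrations below are invented for illustration): a calibration line for the SIL "analyte" is built in surrogate matrix and used to back-calculate inverse quality controls spiked into authentic matrix, with their accuracy as the figure of merit:

        import numpy as np

        # Surrogate-matrix calibration for the SIL "analyte" (illustrative values).
        cal_conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])         # ng/mL
        cal_ratio = np.array([0.021, 0.102, 0.199, 1.010, 2.020])  # area(SIL)/area(IS)
        slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

        # Inverse quality controls spiked into authentic (analyte-free) matrix.
        qc_nominal = np.array([8.0, 40.0, 80.0])
        qc_ratio = np.array([0.165, 0.795, 1.655])
        qc_found = (qc_ratio - intercept) / slope

        accuracy = 100.0 * qc_found / qc_nominal   # the figure of merit
        print(np.round(accuracy, 1))               # should lie near 100 %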

  16. SAM Pathogen Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target pathogen analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select pathogens.

  17. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, and in combination with false negative errors they can lead to severe biases in conclusions about ecological systems. We present results of a field experiment in which observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine whether targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in ability from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors, and on average 8.1% of recorded detections in the experiment were false positives. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls, with a broad confidence interval overlapping 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work demonstrating that false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to report only detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor of among-observer variation in observation error rates.

  18. Smartphone-based low light detection for bioluminescence application

    USDA-ARS?s Scientific Manuscript database

    We report a smartphone-based device and an associated image-processing algorithm that maximize the sensitivity of standard smartphone cameras, enabling detection of single-digit picowatts of radiant flux. The proposed hardware and software, called bioluminescent-based analyte quantitation ...

  19. Gradient-based multiconfiguration Shepard interpolation for generating potential energy surfaces for polyatomic reactions.

    PubMed

    Tishchenko, Oksana; Truhlar, Donald G

    2010-02-28

    This paper describes and illustrates a way to construct multidimensional representations of reactive potential energy surfaces (PESs) by a multiconfiguration Shepard interpolation (MCSI) method based only on gradient information, that is, without using any Hessian information from electronic structure calculations. MCSI, which is called multiconfiguration molecular mechanics (MCMM) in previous articles, is a semiautomated method designed for constructing full-dimensional PESs for subsequent dynamics calculations (classical trajectories, full quantum dynamics, or variational transition state theory with multidimensional tunneling). The MCSI method is based on Shepard interpolation of Taylor series expansions of the coupling term of a 2 x 2 electronically diabatic Hamiltonian matrix, with the diagonal elements representing nonreactive analytical PESs for reactants and products. In contrast to the previously developed method, these expansions are truncated in the present version at first order, and therefore no input of electronic structure Hessians is required. The accuracy of the interpolated energies is evaluated for two test reactions, namely the reaction OH + H2 -> H2O + H and the hydrogen atom abstraction from a model of alpha-tocopherol by methyl radical. The latter reaction involves 38 atoms and a 108-dimensional PES. The mean unsigned errors averaged over a wide range of representative nuclear configurations (corresponding to an energy range of 19.5 kcal/mol in the former case and 32 kcal/mol in the latter) are found to be within 1 kcal/mol for both reactions, based on 13 gradients in one case and 11 in the other. The gradient-based MCMM method can be applied for efficient representations of multidimensional PESs in cases where analytical electronic structure Hessians are too expensive or unavailable, and it provides new opportunities to employ high-level electronic structure calculations for dynamics at an affordable cost.
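
    The interpolation core of the method can be sketched as follows: gradient-only Shepard interpolation of first-order Taylor expansions, with inverse-distance weights. The 2 x 2 diabatic-Hamiltonian construction of full MCSI is omitted, and the quadratic test function is purely illustrative:

        import numpy as np

        def shepard_taylor(x, centers, energies, gradients, p=4):
            """Weighted sum of gradient-only Taylor expansions
            T_i(x) = E_i + g_i . (x - x_i), with weights w_i = |x - x_i|^(-p)."""
            x = np.asarray(x, float)
            d2 = np.sum((centers - x) ** 2, axis=1)
            if np.any(d2 == 0):                      # exactly on a data point
                return float(energies[int(np.argmin(d2))])
            w = d2 ** (-p / 2.0)
            taylor = energies + np.einsum('ij,ij->i', gradients, x - centers)
            return float(np.dot(w, taylor) / w.sum())

        # Illustrative 2D test on f(x) = x0^2 + x1^2 using only energies and gradients:
        rng = np.random.default_rng(1)
        centers = rng.uniform(-1, 1, size=(12, 2))
        energies = np.sum(centers**2, axis=1)
        gradients = 2.0 * centers
        print(shepard_taylor([0.3, -0.2], centers, energies, gradients))  # close to 0.13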

  20. Estimation of chromophoric dissolved organic matter in the Mississippi and Atchafalaya river plume regions using above-surface hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Zhu, Weining; Yu, Qian; Tian, Yong Q.; Chen, Robert F.; Gardner, G. Bernard

    2011-02-01

    A method for the inversion of hyperspectral remote sensing was developed to determine the absorption coefficient of chromophoric dissolved organic matter (CDOM) in the Mississippi and Atchafalaya river plume regions and the northern Gulf of Mexico, where water types vary from Case 1 to turbid Case 2. Above-surface hyperspectral remote sensing data were measured by a ship-mounted spectroradiometer and then used to estimate CDOM. Simultaneously, water absorption and attenuation coefficients, CDOM and chlorophyll fluorescence, turbidities, and other related water properties were also measured at very high resolution (0.5-2 m) using in situ, underwater, and flow-through (shipboard, pumped) optical sensors. We separate a_g, the CDOM absorption coefficient, from a_dg (the combined absorption of CDOM and nonalgal particles) based on two absorption-backscattering relationships. The first is between a_d (absorption of nonalgal particles) and b_bp (the total particulate backscattering coefficient), and the second is between a_p (absorption of total particles) and b_bp. These two relationships are referred to as the a_d-based and a_p-based methods, respectively. Consequently, based on Lee's quasi-analytical algorithm (QAA), we developed the so-called Extended Quasi-Analytical Algorithm (QAA-E) to decompose a_dg, using both the a_d-based and a_p-based methods. The absorption-backscattering relationships and the QAA-E were tested using synthetic and in situ data from the International Ocean-Colour Coordinating Group (IOCCG) as well as our own field data. The results indicate that the a_d-based method performs somewhat better than the a_p-based method. The accuracy of CDOM estimation is significantly improved by separating a_g from a_dg (R^2 = 0.81 and 0.65 for synthetic and in situ data, respectively). The sensitivities of the newly introduced coefficients were also analyzed to ensure that QAA-E is robust.
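
    A heavily simplified sketch of the a_d-based separation step (the functional form and coefficients of the empirical a_d-b_bp relationship below are hypothetical placeholders, not the regression of the paper):

        import numpy as np

        def decompose_adg(a_dg, b_bp, s1=0.5, s2=1.0):
            # Hypothetical empirical relationship a_d = s1 * b_bp**s2 (placeholder
            # coefficients, not the paper's regression); then a_g = a_dg - a_d.
            a_d = s1 * b_bp ** s2
            a_g = np.clip(a_dg - a_d, 0.0, None)   # absorption cannot be negative
            return a_g, a_d

        a_dg = np.array([0.30, 0.55, 0.80])   # retrieved CDOM + nonalgal absorption, 1/m
        b_bp = np.array([0.05, 0.20, 0.40])   # particulate backscattering, 1/m
        print(decompose_adg(a_dg, b_bp))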

  1. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...

  2. SAM Biotoxin Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select biotoxins.

  3. SAM Chemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery

  4. On Connectivity of Wireless Sensor Networks with Directional Antennas

    PubMed Central

    Wang, Qiu; Dai, Hong-Ning; Zheng, Zibin; Imran, Muhammad; Vasilakos, Athanasios V.

    2017-01-01

    In this paper, we investigate the network connectivity of wireless sensor networks with directional antennas. In particular, we establish a general framework to analyze the network connectivity while considering various antenna models and the channel randomness. Since existing directional antenna models have their pros and cons in the accuracy of reflecting realistic antennas and the computational complexity, we propose a new analytical directional antenna model called the iris model to balance the accuracy against the complexity. We conduct extensive simulations to evaluate the analytical framework. Our results show that our proposed analytical model on the network connectivity is accurate, and our iris antenna model can provide a better approximation to realistic directional antennas than other existing antenna models. PMID:28085081
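
    A generic Monte Carlo sketch of the quantity being analyzed (this simulates a plain sector-antenna model on a unit square, not the iris model or the paper's channel model): nodes form a bidirectional link when they are within range and inside each other's beam, and the connected fraction is estimated over random deployments:

        import numpy as np

        def connected_fraction(n=50, side=1.0, r=0.25, beamwidth=np.pi / 2,
                               trials=200, seed=0):
            rng = np.random.default_rng(seed)
            connected = 0
            for _ in range(trials):
                pos = rng.uniform(0, side, (n, 2))
                heading = rng.uniform(0, 2 * np.pi, n)
                d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
                ang = np.arctan2(pos[None, :, 1] - pos[:, None, 1],
                                 pos[None, :, 0] - pos[:, None, 0])
                off = np.abs((ang - heading[:, None] + np.pi) % (2 * np.pi) - np.pi)
                in_beam = off <= beamwidth / 2        # j lies inside i's sector
                adj = (d < r) & in_beam & in_beam.T   # bidirectional links only
                np.fill_diagonal(adj, False)
                seen, stack = {0}, [0]                # breadth-first search from node 0
                while stack:
                    i = stack.pop()
                    for j in np.flatnonzero(adj[i]):
                        if j not in seen:
                            seen.add(j); stack.append(j)
                connected += (len(seen) == n)
            return connected / trials

        print(connected_fraction())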

  5. Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, I. I.; Turaev, D. V.

    2017-01-01

    We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Hénon-like diffeomorphisms.

  6. Time-dependent structural transformation analysis to high-level Petri net model with active state transition diagram

    PubMed Central

    2010-01-01

    Background: With the accumulation of in silico data obtained by simulating large-scale biological networks, a new research interest is emerging in elucidating how living organisms function over time in cells. Investigating the dynamic features of current computational models promises a deeper understanding of complex cellular processes. This leads us to develop a method that utilizes structural properties of the model over all simulation time steps. Further, user-friendly overviews of dynamic behaviors can be considered to provide great help in understanding the variations of system mechanisms. Results: We propose a novel method for constructing and analyzing a so-called active state transition diagram (ASTD) by using time-course simulation data of a high-level Petri net. Our method includes two new algorithms. The first algorithm extracts a series of subnets (called temporal subnets) reflecting biological components contributing to the dynamics, while retaining positive mathematical qualities. The second one creates an ASTD composed of unique temporal subnets. An ASTD provides users with concise information allowing them to grasp and trace how a key regulatory subnet and/or a network changes with time. The applicability of our method is demonstrated by the analysis of the underlying model for circadian rhythms in Drosophila. Conclusions: Building an ASTD is a useful means of converting a hybrid model dealing with discrete, continuous, and more complicated events to finite time-dependent states. Based on ASTD, various analytical approaches can be applied to obtain new insights into not only systematic mechanisms but also dynamics. PMID:20356411
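
    The second algorithm's core idea can be sketched in a few lines (a simplified illustration with hypothetical transition names; the structural Petri-net checks of the real method are omitted): each time step is reduced to its set of active transitions, duplicates are merged into unique states, and edges record the observed state successions:

        # Build a small active-state-transition-diagram-like graph from a
        # time course of active transition sets ("temporal subnets").
        def build_astd(active_sets):
            states = []            # unique temporal subnets, in order of discovery
            index = {}             # frozen subnet -> state id
            edges = set()
            prev = None
            for subnet in active_sets:
                key = frozenset(subnet)
                if key not in index:
                    index[key] = len(states)
                    states.append(key)
                cur = index[key]
                if prev is not None and prev != cur:
                    edges.add((prev, cur))
                prev = cur
            return states, sorted(edges)

        # Illustrative time course of active transitions (hypothetical names):
        trace = [{"t1", "t2"}, {"t1", "t2"}, {"t2", "t3"}, {"t1", "t2"}, {"t2", "t3"}]
        states, edges = build_astd(trace)
        print(states)   # 2 unique states
        print(edges)    # [(0, 1), (1, 0)]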

  7. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  8. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  9. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  10. Kinematic synthesis of adjustable robotic mechanisms

    NASA Astrophysics Data System (ADS)

    Chuenchom, Thatchai

    1993-01-01

    Conventional hard automation, such as a linkage-based or a cam-driven system, provides high-speed capability and repeatability but not the flexibility required in many industrial applications. The conventional mechanisms, which are typically single-degree-of-freedom systems, are increasingly being replaced by multi-degree-of-freedom, multi-actuator systems driven by logic controllers. Although this new trend in sophistication provides greatly enhanced flexibility, there are many instances where the flexibility needs are exaggerated and the associated complexity is unnecessary. Traditional mechanism-based hard automation, on the other hand, can neither fulfill multi-task requirements nor be cost-effective, mainly due to the lack of methods and tools to design in flexibility. This dissertation attempts to bridge this technological gap by developing Adjustable Robotic Mechanisms (ARMs) or 'programmable mechanisms' as a middle ground between high-speed hard automation and expensive serial jointed-arm robots. This research introduces the concept of adjustable robotic mechanisms towards cost-effective manufacturing automation. A generalized analytical synthesis technique has been developed to support the computational design of ARMs, laying the theoretical foundation for synthesis of adjustable mechanisms. The synthesis method developed in this dissertation, called generalized adjustable dyad and triad synthesis, advances the well-known Burmester theory in kinematics to a new level. While this method provides planar solutions, a novel patented scheme is utilized for converting prescribed three-dimensional motion specifications into sets of planar projections. This provides an analytical and a computational tool for designing adjustable mechanisms that satisfy multiple sets of three-dimensional motion specifications. Several design issues were addressed, including adjustable parameter identification, branching defects, and mechanical errors. An efficient mathematical scheme for identification of the adjustable members was also developed. The analytical synthesis techniques developed in this dissertation were successfully implemented in a graphics-intensive, user-friendly computer program. A physical prototype of a general purpose adjustable robotic mechanism has been constructed to serve as a proof-of-concept model.

  11. Simultaneous Genotype Calling and Haplotype Phasing Improves Genotype Accuracy and Reduces False-Positive Associations for Genome-wide Association Studies

    PubMed Central

    Browning, Brian L.; Yu, Zhaoxia

    2009-01-01

    We present a novel method for simultaneous genotype calling and haplotype-phase inference. Our method employs the computationally efficient BEAGLE haplotype-frequency model, which can be applied to large-scale studies with millions of markers and thousands of samples. We compare genotype calls made with our method to genotype calls made with the BIRDSEED, CHIAMO, GenCall, and ILLUMINUS genotype-calling methods, using genotype data from the Illumina 550K and Affymetrix 500K arrays. We show that our method has higher genotype-call accuracy and yields fewer uncalled genotypes than competing methods. We perform single-marker analysis of data from the Wellcome Trust Case Control Consortium bipolar disorder and type 2 diabetes studies. For bipolar disorder, the genotype calls in the original study yield 25 markers with apparent false-positive association with bipolar disorder at a p < 10^-7 significance level, whereas genotype calls made with our method yield no associated markers at this significance threshold. Conversely, for markers with replicated association with type 2 diabetes, there is good concordance between genotype calls used in the original study and calls made by our method. Results from single-marker and haplotypic analysis of our method's genotype calls for the bipolar disorder study indicate that our method is highly effective at eliminating genotyping artifacts that cause false-positive associations in genome-wide association studies. Our new genotype-calling methods are implemented in the BEAGLE and BEAGLECALL software packages. PMID:19931040

  12. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    ERIC Educational Resources Information Center

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  13. Young doctors' problem solving strategies on call may be improved.

    PubMed

    Michelsen, Jens; Malchow-Møller, Axel; Charles, Peder; Eika, Berit

    2013-03-01

    The first year following graduation from medical school is challenging as learning from books changes to workplace-based learning. Analysis and reflection on experience may ease this transition. We used Significant Event Analysis (SEA) as a tool to explore what pre-registration house officers (PRHOs) consider successful and problematic events, and to identify what problem-solving strategies they employ. A senior house officer systematically led the PRHO through the SEA of one successful and one problematic event following a night call. The PRHO wrote answers to questions about diagnosis, what happened, how he or she contributed and what knowledge-gaining activities the PRHO would prioritise before the next call. By using an inductive, thematic data analysis, we identified five problem-solving strategies: non-analytical reasoning, analytical reasoning, communication with patients, communication with colleagues and professional behaviour. On average, 1.5 strategies were used in the successful events and 1.2 strategies in the problematic events. Most PRHOs were unable to suggest activities other than reading textbooks. SEA was valuable for the identification of PRHOs' problem-solving strategies in a natural setting. PRHOs should be assisted in increasing their repertoire of strategies, and they should also be helped to "learn to learn" as they were largely unable to point to new learning strategies.

  14. Personal exposure assessment to particulate metals using a paper-based analytical device

    NASA Astrophysics Data System (ADS)

    Cate, David; Volckens, John; Henry, Charles

    2013-03-01

    The development of a paper-based analytical device (PAD) for assessing personal exposure to particulate metals will be presented. Human exposure to metal aerosols, such as those that occur in the mining, construction, and manufacturing industries, has a significant impact on the health of our workforce, costing an estimated $10B in the U.S. and causing approximately 425,000 premature deaths worldwide each year. Occupational exposure to particulate metals affects millions of individuals in manufacturing, construction (welding, cutting, blasting), and transportation (combustion, utility maintenance, and repair services) industries. Despite these effects, individual workers are rarely assessed for their exposure to particulate metals, due mainly to the high cost and effort associated with personal exposure measurement. Current exposure assessment methods for particulate metals call for an 8-hour filter sample, after which the filter sample is transported to a laboratory and analyzed by inductively-coupled plasma (ICP). The time from sample collection to reporting is typically weeks, and the analysis costs several hundred dollars per sample. To exacerbate the issue, method detection limits suffer because of sample dilution during digestion. The lack of sensitivity hampers task-based exposure assessment, for which sampling times may be tens of minutes. To address these problems, and as a first step towards using microfluidics for personal exposure assessment, we have developed PADs for measurement of Pb, Cd, Cr, Fe, Ni, and Cu in aerosolized particulate matter.

  15. Best-Matched Internal Standard Normalization in Liquid Chromatography-Mass Spectrometry Metabolomics Applied to Environmental Samples.

    PubMed

    Boysen, Angela K; Heal, Katherine R; Carlson, Laura T; Ingalls, Anitra E

    2018-01-16

    The goal of metabolomics is to measure the entire range of small organic molecules in biological samples. In liquid chromatography-mass spectrometry-based metabolomics, formidable analytical challenges remain in removing the nonbiological factors that affect chromatographic peak areas. These factors include sample matrix-induced ion suppression, chromatographic quality, and analytical drift. The combination of these factors is referred to as obscuring variation. Some metabolomics samples can exhibit intense obscuring variation due to matrix-induced ion suppression, rendering large amounts of data unreliable and difficult to interpret. Existing normalization techniques have limited applicability to these sample types. Here we present a data normalization method to minimize the effects of obscuring variation. We normalize peak areas using a batch-specific normalization process, which matches measured metabolites with isotope-labeled internal standards that behave similarly during the analysis. This method, called best-matched internal standard (B-MIS) normalization, can be applied to targeted or untargeted metabolomics data sets and yields relative concentrations. We evaluate and demonstrate the utility of B-MIS normalization using marine environmental samples and laboratory grown cultures of phytoplankton. In untargeted analyses, B-MIS normalization allowed for inclusion of mass features in downstream analyses that would have been considered unreliable without normalization due to obscuring variation. B-MIS normalization for targeted or untargeted metabolomics is freely available at https://github.com/IngallsLabUW/B-MIS-normalization.
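
    A simplified sketch of the matching idea (illustrative data; the actual B-MIS procedure in the linked repository differs in detail): for one mass feature, pick the internal standard whose feature-to-standard ratio is most stable across replicate injections, then normalize by it:

        import numpy as np

        def best_matched_is(feature_areas, is_areas):
            """Pick the internal standard with the lowest relative standard
            deviation of the feature/standard ratio, then normalize by it."""
            rsd = []
            for s in range(is_areas.shape[0]):
                ratio = feature_areas / is_areas[s]
                rsd.append(np.std(ratio) / np.mean(ratio))
            best = int(np.argmin(rsd))
            norm = feature_areas / is_areas[best]
            return best, norm * np.mean(is_areas[best])   # rescale to area-like units

        # Rows = candidate isotope-labeled standards, columns = replicate injections.
        is_areas = np.array([[1.0, 1.2, 0.8, 1.1],
                             [1.0, 1.0, 1.0, 1.0]])
        feature = np.array([2.0, 2.4, 1.6, 2.2])          # co-varies with standard 0
        best, normalized = best_matched_is(feature, is_areas)
        print(best, normalized)                           # picks standard 0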

  16. Extracting Effective Higgs Couplings in the Golden Channel

    DOE PAGES

    Chen, Yi; Vega-Morales, Roberto

    2014-04-08

    Kinematic distributions in Higgs decays to four charged leptons, the so-called 'golden channel', are a powerful probe of the tensor structure of its couplings to neutral electroweak gauge bosons. In this study we construct the first part of a comprehensive analysis framework designed to maximize the information contained in this channel in order to perform direct extraction of the various possible Higgs couplings. We first complete an earlier analytic calculation of the leading-order fully differential cross sections for the golden channel signal and background to include the 4e and 4μ final states with interference between identical final states. We also examine the relative fractions of the different possible combinations of scalar-tensor couplings by integrating the fully differential cross section over all kinematic variables, as well as show various doubly differential spectra for both the signal and background. From these analytic expressions we then construct a 'generator level' analysis framework based on the maximum likelihood method. We then demonstrate the ability of our framework to perform multi-parameter extractions of all the possible effective couplings of a spin-0 scalar to pairs of neutral electroweak gauge bosons, including any correlations. Furthermore, this framework provides a powerful method for studying these couplings and can be readily adapted to include the relevant detector and systematic effects, which we demonstrate in an accompanying study to follow.

  17. Nanosatellite constellation deployment using on-board magnetic torquer interaction with space plasma

    NASA Astrophysics Data System (ADS)

    Park, Ji Hyun; Matsuzawa, Shinji; Inamori, Takaya; Jeung, In-Seuck

    2018-04-01

    One of the advantages that drive nanosatellite development is the potential for multi-point observation through constellation operation. However, constellation deployment of nanosatellites has been a challenge, as thruster operations for orbit maneuvers are limited by mass, volume, and power. Recently, a de-orbiting mechanism using magnetic torquer interaction with space plasma, so-called plasma drag, has been introduced. As no additional hardware or propellant is required, plasma drag has potential as a constellation deployment method. In this research, a novel constellation deployment method using plasma drag is proposed. The orbit decay rate of the satellites in a constellation is controlled using plasma drag in order to achieve a desired phase angle and phase angle rate. A simplified 1D problem is formulated for an elementary analysis of the constellation deployment time. Numerical simulations are further performed for assessment of the analytical analysis and for sensitivity analysis. Analytical analysis and numerical simulation results both agree that the constellation deployment time is proportional to the inverse square root of the magnetic moment, the square root of the desired phase angle, and the square root of the satellite mass. CubeSats ranging from 1 to 3 U (1-3 kg nanosatellites) are examined in order to investigate the feasibility of plasma drag constellation deployment on nanosatellite systems. The feasibility analysis results show that plasma drag constellation deployment is feasible on CubeSats, which opens up the possibility of CubeSat constellation missions.
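
    The reported proportionalities follow from a constant-force phase-drift argument: a drag force proportional to the magnetic moment gives a phase angle that grows quadratically in time, so the deployment time scales as sqrt(delta_phi * mass / moment). A minimal encoding (the prefactor k lumps orbit and plasma parameters and is a hypothetical placeholder):

        import numpy as np

        def deployment_time(delta_phi, mass, moment, k=1.0):
            # t is proportional to sqrt(delta_phi) * sqrt(mass) / sqrt(moment),
            # matching the scalings reported in the abstract.
            return k * np.sqrt(delta_phi * mass / moment)

        # Doubling the magnetic moment shortens deployment by a factor sqrt(2):
        print(deployment_time(np.pi, 2.0, 1.0) / deployment_time(np.pi, 2.0, 2.0))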

  18. Propulsion System Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Tai, Jimmy C. M.; McClure, Erin K.; Mavris, Dimitri N.; Burg, Cecile

    2002-01-01

    The Aerospace Systems Design Laboratory at the School of Aerospace Engineering in Georgia Institute of Technology has developed a core competency that enables propulsion technology managers to make technology investment decisions substantiated by propulsion and airframe technology system studies. This method assists the designer/manager in selecting appropriate technology concepts while accounting for the presence of risk and uncertainty as well as interactions between disciplines. This capability is incorporated into a single design simulation system that is described in this paper. This propulsion system design environment is created with commercially available software called iSIGHT, which is a generic computational framework, and with analysis programs for engine cycle, engine flowpath, mission, and economic analyses. iSIGHT is used to integrate these analysis tools within a single computer platform and facilitate information transfer amongst the various codes. The resulting modeling and simulation (M&S) environment, in conjunction with the response surface method, provides the designer/decision-maker an analytical means to examine the entire design space from either a subsystem and/or system perspective. The results of this paper will enable managers to analytically play what-if games to gain insight into the benefits (and/or degradation) of changing engine cycle design parameters. Furthermore, the propulsion design space will be explored probabilistically to show the feasibility and viability of the propulsion system integrated with a vehicle.

  19. FastChem: A computer program for efficient complex chemical equilibrium calculations in the neutral/ionized gas phase with applications to stellar and planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin

    2018-06-01

    For the calculation of complex neutral/ionized gas-phase chemical equilibria, we present a versatile and efficient semi-analytical computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations in many variables, namely the law of mass action and the element conservation equations including charge balance. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies, and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which have sometimes posed extreme challenges for previous algorithms.
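
    The decomposition strategy can be illustrated on the smallest possible example (illustrative numbers, not FastChem data): for a pure hydrogen gas with H2 <-> 2H, the law of mass action and element conservation combine into a single quadratic with a closed-form root, mirroring the "solved analytically whenever feasible" step:

        import numpy as np

        def hydrogen_equilibrium(K, n_tot):
            # Law of mass action: n_H2 = K * n_H**2
            # Element conservation: n_H + 2 * n_H2 = n_tot
            # => 2K * n_H**2 + n_H - n_tot = 0; take the positive root.
            n_h = (-1.0 + np.sqrt(1.0 + 8.0 * K * n_tot)) / (4.0 * K)
            n_h2 = K * n_h**2
            return n_h, n_h2

        n_h, n_h2 = hydrogen_equilibrium(K=1e-2, n_tot=1.0)
        print(n_h, n_h2, n_h + 2 * n_h2)   # the last value recovers n_tot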

  20. Puget Sound sediment-trap data: 1980-1985. Data report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, A.J.; Baker, E.T.; Feely, R.A.

    1991-12-01

    In 1979, scientists at the Pacific Marine Environmental Laboratory began investigating the sources, transformation, transport and fate of pollutants in Puget Sound and its watershed under Sec. 202 of the Marine Protection, Research and Sanctuaries Act of 1971 (P.L. 92-532), which called in part for '...a comprehensive and continuing program of research with respect to the possible long range effects of pollution, overfishing, and man-induced changes of ocean ecosystems...' The effort was called the Long-Range Effects Research Program (L-RERP) after language in the Act and was later called the PMEL Marine Environmental Quality Program. The Long-Range Effects Research Program consisted of (1) sampling dissolved and particulate constituents in the water column by bottle sampling, (2) sampling settling particles by sediment trap, and (3) sampling sediments by grab, box, gravity and Kasten corers. In this data report, a variety of data from particles collected in 104 traps deployed on 34 moorings in open waters between 1980 and 1985 are presented. The text of the data report begins with the sampling and analytical methods, with the accompanying quality control/quality assurance data. The data sections summarize the available data and the published literature in which the data are interpreted, along with a catalogue of the data available in the Appendix (on microfiche located in the back pocket of the data report).

  1. Prediction of dynamical systems by symbolic regression

    NASA Astrophysics Data System (ADS)

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
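
    A minimal sketch in the spirit of the generalized-linear-regression variant (plain least squares over a small hand-picked basis; fast function extraction proper uses pathwise regularized learning over a much richer basis), recovering the harmonic-oscillator law x'' = -omega^2 x from synthetic measurements:

        import numpy as np

        t = np.linspace(0, 10, 400)
        x = np.cos(2 * t)                # synthetic measurement, omega = 2
        dxdt = np.gradient(x, t)
        d2x = np.gradient(dxdt, t)       # regression target: expect x'' = -4 x

        library = np.column_stack([np.ones_like(x), x, x**2, dxdt])  # candidate terms
        names = ["1", "x", "x^2", "x'"]
        coef, *_ = np.linalg.lstsq(library, d2x, rcond=None)

        for name, c in zip(names, coef):
            if abs(c) > 0.1:             # crude sparsification threshold
                print(f"x'' = {c:+.2f} * {name}")   # expect roughly -4.00 * x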

  2. Multi-domain boundary element method for axi-symmetric layered linear acoustic systems

    NASA Astrophysics Data System (ADS)

    Reiter, Paul; Ziegelwanger, Harald

    2017-12-01

    Homogeneous porous materials like rock wool or synthetic foam are the main tool for acoustic absorption. The conventional absorbing structure for sound-proofing consists of one or multiple absorbers placed in front of a rigid wall, with or without air gaps in between. Various models exist to describe these so-called multi-layered acoustic systems mathematically for incoming plane waves. However, there is no efficient method to calculate the sound field in a half space above a multi-layered acoustic system for an incoming spherical wave. In this work, an axi-symmetric multi-domain boundary element method (BEM) for absorbing multi-layered acoustic systems and incoming spherical waves is introduced. In the proposed BEM formulation, a complex wave number is used to model absorbing materials as a fluid, and a coordinate transformation is introduced which simplifies the singular integrals of the conventional BEM to non-singular radial and angular integrals. The radial and angular parts are integrated analytically and numerically, respectively. The output of the method can be interpreted as a numerical half-space Green's function for grounds consisting of layered materials.

  3. HIGH-PRECISION ASTROMETRIC MILLIMETER VERY LONG BASELINE INTERFEROMETRY USING A NEW METHOD FOR ATMOSPHERIC CALIBRATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rioja, M.; Dodson, R., E-mail: maria.rioja@icrar.org

    2011-04-15

    We describe a new method which achieves high-precision very long baseline interferometry (VLBI) astrometry in observations at millimeter (mm) wavelengths. It combines fast frequency-switching observations, to correct for the dominant non-dispersive tropospheric fluctuations, with slow source-switching observations, for the remaining ionospheric dispersive terms. We call this method source-frequency phase referencing. Provided that the switching cycles match the properties of the propagation media, one can recover the source astrometry. We present an analytic description of the two-step calibration strategy, along with an error analysis to characterize its performance. We also provide observational demonstrations of a successful application, using the Very Long Baseline Array at 86 GHz, on the source pairs 3C274/3C273 and 1308+326/1308+328 under various conditions. We conclude that this method is widely applicable to mm-VLBI observations of many target sources, and unique in providing bona fide astrometrically registered images and high-precision relative astrometric measurements in mm-VLBI using existing and newly built instruments, including space VLBI.
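
    The non-dispersive correction step can be sketched numerically (synthetic phases; the frequency pair is an example, and the ionospheric term handled by source switching is not modelled): scaling the low-frequency phase by the frequency ratio R and subtracting removes the tropospheric term:

        import numpy as np

        nu_low, nu_high = 43.0, 86.0       # GHz; example frequency pair
        R = nu_high / nu_low               # must be an integer ratio in practice

        rng = np.random.default_rng(3)
        tropo_low = rng.normal(0.0, 0.5, 100)   # tropospheric phase at nu_low (rad)
        astrometric_signal = 0.3                # relative-position phase we want

        phase_high = astrometric_signal + R * tropo_low  # troposphere is non-dispersive
        corrected = phase_high - R * tropo_low           # SFPR-style subtraction
        print(corrected.mean())                          # -> 0.3, troposphere removed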

  4. Analytical analysis and implementation of a low-speed high-torque permanent magnet vernier in-wheel motor for electric vehicle

    NASA Astrophysics Data System (ADS)

    Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili

    2012-04-01

    In this paper, an analytical analysis of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified by using the time-stepping finite element method, and the performance of the PMV machine is quantitatively compared with the analytical predictions. The analytical results agree well with the finite element method results. Finally, experimental results are given to further show the validity of the analysis.

  5. The National Shipbuilding Research Program, Analytical Quality Circles

    DTIC Science & Technology

    1986-09-01

    ...standard tools for quality control; in English, see "Guide to Quality Control" by Dr. Kaoru Ishikawa, Asian Productivity Organization, Aoyama Dai-ichi... How factors affect work evaluation is shown schematically by Characteristic-Factor Diagrams (also called Fishbone or Ishikawa Diagrams); see Figure 2-5

  6. Visualizing Qualitative Information

    ERIC Educational Resources Information Center

    Slone, Debra J.

    2009-01-01

    The abundance of qualitative data in today's society and the need to easily scrutinize, digest, and share this information calls for effective visualization and analysis tools. Yet, no existing qualitative tools have the analytic power, visual effectiveness, and universality of familiar quantitative instruments like bar charts, scatter-plots, and…

  7. 46 CFR 535.704 - Filing of minutes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... traffic; discussion of revenues, losses, or earnings; or discussion or agreement on service contract... compilation, analytical study, survey, or other work distributed, discussed, or exchanged at the meeting... discussions involve minor operational matters that have little or no impact on the frequency of vessel calls...

  8. Computer programs for calculating potential flow in propulsion system inlets

    NASA Technical Reports Server (NTRS)

    Stockman, N. O.; Button, S. L.

    1973-01-01

    In the course of designing inlets, particularly for VTOL and STOL propulsion systems, a calculational procedure utilizing three computer programs evolved. The chief program is the Douglas axisymmetric potential flow program called EOD, which calculates the incompressible potential flow about arbitrary axisymmetric bodies. The other two programs, original with Lewis, are called SCIRCL and COMBYN. Program SCIRCL generates input for EOD from various specified analytic shapes for the inlet components. Program COMBYN takes basic solutions output by EOD, combines them into solutions of interest, and applies a compressibility correction.

  9. A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.

    PubMed

    Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J

    2015-05-01

    In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of the analytes into the matrix crystals, the sample solutions were prepared without matrix, and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers and lipids show that detection of analyte ions that were completely suppressed using the conventional dried-droplet method could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes into the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Acoustic imaging of a duct spinning mode by the use of an in-duct circular microphone array.

    PubMed

    Wei, Qingkai; Huang, Xun; Peers, Edward

    2013-06-01

    An imaging method for acoustic spinning modes propagating within a circular duct, using only surface pressure information, is introduced in this paper. The proposed method is developed theoretically and demonstrated on a numerical simulation case. At present, in-duct measurements have to be conducted with an in-duct microphone array, which cannot provide the complete acoustic solution across the test section. The proposed method can estimate the immeasurable information by forming a so-called observer. The fundamental idea behind the method was originally developed in control theory for ordinary differential equations; spinning mode propagation, however, is governed by partial differential equations. A finite difference technique is therefore used to reduce the associated partial differential equations to a classical state-space form, after which the observer method can be applied straightforwardly. The algorithm is recursive and can thus operate in real time. A numerical simulation for a straight circular duct is conducted. The acoustic solutions on the test section can be reconstructed with good agreement with analytical solutions. The results suggest the potential and applications of the proposed method.
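
    The observer idea is easy to sketch for the simplest rotating-wave analogue: a disturbance convecting around a periodic (azimuthal) grid with a single wall sensor. The Python fragment below is a minimal sketch under these assumptions, not the paper's finite-difference formulation; it applies a Luenberger-style output-injection correction at the measured node each step, and the estimation error decays geometrically as the wave sweeps past the sensor.

        import numpy as np

        N, g, steps = 64, 0.8, 640     # grid points, observer gain, time steps
        m = 10                          # index of the measured node (wall sensor)

        x = np.sin(2 * np.pi * 3 * np.arange(N) / N)   # "true" spinning-mode state
        xh = np.zeros(N)                                # observer starts from zero

        def shift(v):
            # Upwind advection on a periodic grid with CFL number 1.
            return np.roll(v, 1)

        for k in range(steps):
            y = x[m]                    # wall-pressure measurement
            xh[m] += g * (y - xh[m])    # output-injection correction
            x, xh = shift(x), shift(xh) # both propagate one step

        print("remaining estimation error:", np.linalg.norm(x - xh))

    Each pass of the wave through the sensor node scales the local error by (1 - g), so after ten revolutions the state estimate has converged to machine-level agreement with the true field.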

  11. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method based on Bayesian analysis for time series data from an agent-based simulation of the international emissions trading market, and compares it with a method based on the Discrete Fourier transform (DFT). The purpose is to demonstrate analytical methods that map time series data, such as market prices, into a lower-dimensional representation. These analytical methods revealed the following results: (1) the classification methods express the time series data as distances in the mapped space, which are easier to understand and reason about than the raw time series; (2) the methods can analyze uncertain time series data from the agent-based simulation, including both stationary and non-stationary processes, using these distances; and (3) the Bayesian method can resolve a 1% difference in the agents' emission reduction targets.
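
    For the Fourier side of the comparison, a minimal sketch (with hypothetical random-walk price trajectories standing in for the simulation output) of mapping time series to DFT-magnitude features and measuring distances between them:

        import numpy as np

        def dft_features(series, n_coeffs=8):
            """Map a time series to the magnitudes of its first DFT coefficients."""
            spec = np.fft.rfft(series - np.mean(series))
            return np.abs(spec[1:n_coeffs + 1])

        rng = np.random.default_rng(1)
        # Hypothetical market-price trajectories from three simulation runs.
        t = np.arange(256)
        runs = [np.cumsum(rng.normal(0, 1, t.size)) for _ in range(3)]

        feats = np.array([dft_features(r) for r in runs])
        # Pairwise distances between the mapped series.
        for i in range(3):
            for j in range(i + 1, 3):
                print(i, j, np.linalg.norm(feats[i] - feats[j]))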

  12. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  13. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  14. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  15. 77 FR 41336 - Analytical Methods Used in Periodic Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-13

    ... Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission. ACTION: Notice of filing. SUMMARY... proceeding to consider changes in analytical methods used in periodic reporting. This notice addresses... informal rulemaking proceeding to consider changes in the analytical methods approved for use in periodic...

  16. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  17. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  18. Recently published analytical methods for determining alcohol in body materials : alcohol countermeasures literature review

    DOT National Transportation Integrated Search

    1974-10-01

    The author has brought the review of published analytical methods for determining alcohol in body materials up to date. The review deals with analytical methods for alcohol in blood and other body fluids and tissues; breath alcohol methods; factors ...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, must be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very CPU-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) efficiently model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understanding of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal, or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.

  20. Modeling the chemistry of complex petroleum mixtures.

    PubMed Central

    Quann, R J

    1998-01-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule, or a set of closely related isomers, as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
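
    A minimal sketch of the vector bookkeeping behind structure-oriented lumping, with a tiny illustrative group basis (the published method uses a larger, specific set of structural groups, so the names and reactions here are only indicative):

        import numpy as np

        # Structural-group basis (a tiny illustrative subset).
        GROUPS = ["A6", "N6", "R", "me"]   # aromatic 6-ring, naphthenic 6-ring,
                                           # chain carbon, methyl branch

        def molecule(**counts):
            """Represent a molecule (or isomer set) as a vector of group counts."""
            return np.array([counts.get(g, 0) for g in GROUPS])

        toluene = molecule(A6=1, me=1)
        decalin = molecule(N6=2)

        # A reaction becomes arithmetic on the vectors; e.g. hydrogenation of
        # the aromatic ring converts an A6 increment into an N6 increment:
        hydrogenated_toluene = toluene + molecule(A6=-1, N6=1)
        print({g: int(v) for g, v in zip(GROUPS, hydrogenated_toluene)})

    Because each component is just a vector, building reaction networks over thousands of components reduces to applying such increment rules wherever the required groups are present.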

  1. Effects of high intensity exercise on isoelectric profiles and SDS-PAGE mobility of erythropoietin.

    PubMed

    Voss, S; Lüdke, A; Romberg, S; Schänzer, E; Flenker, U; deMarees, M; Achtzehn, S; Mester, J; Schänzer, W

    2010-06-01

    Exercise-induced proteinuria is a common phenomenon in high performance sports. Based on the appearance of so-called "effort urines" in routine doping analysis, the purpose of this study was to investigate the influence of exercise-induced proteinuria on IEF profiles and SDS-PAGE relative mobility values (rMVs) of endogenous human erythropoietin (EPO). Twenty healthy subjects performed cycle-ergometer exercise until exhaustion. VO2max, blood lactate, urinary proteins and urinary creatinine were analysed to evaluate the exercise performance and proteinuria. IEF and SDS-PAGE analyses were performed to test for differences in electrophoretic behaviour of the endogenous EPO before and after exercise. All subjects showed increased levels of the protein/creatinine ratio after performance (from 8.8 ± 5.2 to 26.1 ± 14.4). IEF analysis demonstrated an elevation of the relative amount of basic band areas (from 13.9 ± 11.3 to 36.4 ± 12.6). Using SDS-PAGE analysis we observed a decrease in rMVs after exercise and no shift in the direction of the recombinant human EPO (rhEPO) region (from 0.543 ± 0.013 to 0.535 ± 0.012). Following the identification criteria of the World Anti-Doping Agency (WADA), all samples were negative. The implementation of the SDS-PAGE method represents a good solution to distinguish between results influenced by so-called effort urines and results of rhEPO abuse. Thus this method can be used to confirm adverse analytical findings.

  2. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  3. 40 CFR 141.25 - Analytical methods for radioactivity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods for radioactivity... § 141.25 Analytical methods for radioactivity. (a) Analysis for the following contaminants shall be conducted to determine compliance with § 141.66 (radioactivity) in accordance with the methods in the...

  4. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  5. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  6. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  7. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  8. Transforming the Enterprise of Acquiring Public Sector Complex Systems

    DTIC Science & Technology

    2006-04-30

    analytic equation that determines the value of a compound call option (see Geske, 1979; Cassimon et al., 2004). Another approach that is more...Park, MD: University of Maryland, Center for Public Policy and Private Enterprise. Geske, R. (1979). The valuation of compound options. Journal of

  9. Generation of gas-phase ions from charged clusters: an important ionization step causing suppression of matrix and analyte ions in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Lou, Xianwen; van Dongen, Joost L J; Milroy, Lech-Gustav; Meijer, E W

    2016-12-30

    Ionization in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a very complicated process. It has been reported that quaternary ammonium salts show extremely strong matrix and analyte suppression effects which cannot satisfactorily be explained by charge transfer reactions. Further investigation of the causes of these effects can improve our understanding of the MALDI process. The dried-droplet and modified thin-layer methods were used as sample preparation methods. In the dried-droplet method, analytes were co-crystallized with matrix, whereas in the modified thin-layer method analytes were deposited on the surface of matrix crystals. Model compounds, tetrabutylammonium iodide ([N(Bu)4]I), cesium iodide (CsI), trihexylamine (THA) and polyethylene glycol 600 (PEG 600), were selected as the test analytes given their ability to generate exclusively pre-formed ions, protonated ions and metal ion adducts, respectively, in MALDI. The strong matrix suppression effect (MSE) observed using the dried-droplet method might disappear using the modified thin-layer method, which suggests that the incorporation of analytes in matrix crystals contributes to the MSE. By depositing analytes on the matrix surface instead of incorporating them in the matrix crystals, the competition for evaporation/ionization from charged matrix/analyte clusters could be weakened, resulting in reduced MSE. Further supporting evidence for this inference was found by studying the analyte suppression effect using the same two sample deposition methods. By comparing differences between the mass spectra obtained via the two sample preparation methods, we present evidence suggesting that the generation of gas-phase ions from charged matrix/analyte clusters may induce significant suppression of matrix and analyte ions. The results suggest that the generation of gas-phase ions from charged matrix/analyte clusters is an important ionization step in MALDI-MS. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Selected Analytical Methods for Environmental Remediation and Recovery (SAM) - Home

    EPA Pesticide Factsheets

    The SAM Home page provides access to all information provided in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), and includes a query function allowing users to search methods by analyte, sample type and instrumentation.

  11. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  12. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  13. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  14. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  15. 75 FR 49930 - Stakeholder Meeting Regarding Re-Evaluation of Currently Approved Total Coliform Analytical Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-16

    ... Currently Approved Total Coliform Analytical Methods AGENCY: Environmental Protection Agency (EPA). ACTION... of currently approved Total Coliform Rule (TCR) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential elements of a method re-evaluation study, such as...

  16. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  17. The Matsu Wheel: A Cloud-Based Framework for Efficient Analysis and Reanalysis of Earth Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Patterson, Maria T.; Anderson, Nicholas; Bennett, Collin; Bruggemann, Jacob; Grossman, Robert L.; Handy, Matthew; Ly, Vuong; Mandl, Daniel J.; Pederson, Shane; Pivarski, James; hide

    2016-01-01

    Project Matsu is a collaboration between the Open Commons Consortium and NASA focused on developing open source technology for cloud-based processing of Earth satellite imagery, with practical applications to aid in natural disaster detection and relief. Project Matsu has developed an open source cloud-based infrastructure to process, analyze, and reanalyze large collections of hyperspectral satellite image data using OpenStack, Hadoop, MapReduce and related technologies. We describe a framework for efficient analysis of large amounts of data called the Matsu "Wheel." The Matsu Wheel is currently used to process incoming hyperspectral satellite data produced daily by NASA's Earth Observing-1 (EO-1) satellite. The framework allows batches of analytics, scanning for new data, to be applied to data as it flows in. In the Matsu Wheel, the data only need to be accessed and preprocessed once, regardless of the number or types of analytics, which can easily be slotted into the existing framework. The Matsu Wheel system provides a significantly more efficient use of computational resources than alternative methods when the data are large, have high-volume throughput, may require heavy preprocessing, and are typically used for many types of analysis. We also describe our preliminary Wheel analytics, including an anomaly detector for rare spectral signatures or thermal anomalies in hyperspectral data and a land cover classifier that can be used for water and flood detection. Each of these analytics can generate visual reports accessible via the web for the public and interested decision makers. The result products of the analytics are also made accessible through an Open Geospatial Consortium (OGC)-compliant Web Map Service (WMS) for further distribution. The Matsu Wheel allows many shared data services to be performed together, making efficient use of resources for processing hyperspectral satellite image data and other large datasets, e.g., environmental datasets that may be analyzed for many purposes.
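
    The scanning pattern itself is simple to sketch. The following Python fragment is illustrative only (the function and analytic names are placeholders, not Project Matsu's API): each incoming scene is preprocessed once and every registered analytic is then applied to the shared result.

        # A minimal sketch of the "wheel" pattern: preprocess once per scene,
        # then run all registered analytics on the shared result.

        def preprocess(scene):
            # e.g. radiometric correction, band selection; done once per scene
            return {"id": scene["id"], "bands": scene["bands"]}

        def anomaly_detector(data):
            return f"anomaly report for {data['id']}"

        def land_cover_classifier(data):
            return f"land-cover report for {data['id']}"

        ANALYTICS = [anomaly_detector, land_cover_classifier]

        def wheel(incoming_scenes):
            """Apply all analytics to each scene with one shared preprocess."""
            for scene in incoming_scenes:
                data = preprocess(scene)             # accessed/preprocessed once
                yield [a(data) for a in ANALYTICS]   # analytics reuse the result

        for reports in wheel([{"id": "EO1-001", "bands": []}]):
            print(reports)

    New analytics slot in by appending to the list; the per-scene preprocessing cost is paid only once regardless of how many analytics run.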

  18. ATTIRE (analytical tools for thermal infrared engineering): A sensor simulation and modeling package

    NASA Astrophysics Data System (ADS)

    Jaggi, S.

    1993-02-01

    The Advanced Sensor Development Laboratory (ASDL) at the Stennis Space Center develops, maintains and calibrates remote sensing instruments for the National Aeronautics & Space Administration (NASA). To perform system design trade-offs and analyses and to establish system parameters, ASDL has developed a software package for analytical simulation of sensor systems. This package, called Analytical Tools for Thermal InfraRed Engineering (ATTIRE), simulates the various components of a sensor system. The software allows each subsystem of the sensor to be analyzed independently for its performance. These performance parameters are then integrated to obtain system-level information such as the Signal-to-Noise Ratio (SNR), Noise Equivalent Radiance (NER), and Noise Equivalent Temperature Difference (NETD). This paper describes the uses of the package and the physics used to derive the performance parameters.
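
    As an example of the kind of subsystem-to-system roll-up such a package performs, here is a hedged single-wavelength sketch of deriving NETD from a given noise-equivalent radiance via the temperature derivative of the Planck radiance; the band centre, scene temperature and NER value are illustrative, not ATTIRE's internals.

        import math

        h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI physical constants

        def dL_dT(lam, T):
            """d(Planck spectral radiance)/dT in W/(m^2 sr m K)."""
            x = h * c / (lam * k * T)
            L = 2 * h * c**2 / lam**5 / math.expm1(x)
            return L * (x / T) * math.exp(x) / math.expm1(x)

        # Single-wavelength approximation at the band centre: NETD ~ NER / (dL/dT)
        lam_c, T_scene = 10e-6, 300.0   # 10 um thermal band, 300 K scene
        NER = 8.0e3                     # W/(m^2 sr m); illustrative value
        print("NETD ~ %.3f K" % (NER / dL_dT(lam_c, T_scene)))

    With these illustrative numbers the result is about 0.05 K; a full band calculation would integrate dL/dT over the spectral response instead of sampling the band centre.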

  19. The HVT technique and the 'uncertainty' relation for central potentials

    NASA Astrophysics Data System (ADS)

    Grypeos, M. E.; Koutroulos, C. G.; Oyewumi, K. J.; Petridou, Th

    2004-08-01

    The quantum mechanical hypervirial theorems (HVT) technique is used to treat the so-called 'uncertainty' relation for quite a general class of central potential wells, including the (reduced) Pöschl-Teller and the Gaussian one. It is shown that this technique is quite suitable for deriving an approximate analytic expression, in the form of a truncated power series expansion, for the dimensionless product P_{nl} ≡ ⟨r²⟩_{nl} ⟨p²⟩_{nl} / ℏ², for every (deeply) bound state of a particle moving non-relativistically in the well, provided that a (dimensionless) parameter s is sufficiently small. Attention is also paid to a number of cases, among the limited existing ones, in which exact analytic or semi-analytic expressions for P_{nl} can be derived. Finally, numerical results are given and discussed.

  20. Improving mapping and SNP-calling performance in multiplexed targeted next-generation sequencing

    PubMed Central

    2012-01-01

    Background Compared to classical genotyping, targeted next-generation sequencing (tNGS) can be custom-designed to interrogate entire genomic regions of interest, in order to detect novel as well as known variants. To bring down the per-sample cost, one approach is to pool barcoded NGS libraries before sample enrichment. Still, we lack a complete understanding of how this multiplexed tNGS approach and the varying performance of the ever-evolving analytical tools can affect the quality of variant discovery. Therefore, we evaluated the impact of different software tools and analytical approaches on the discovery of single nucleotide polymorphisms (SNPs) in multiplexed tNGS data. To generate our own test model, we combined a sequence capture method with NGS in three experimental stages of increasing complexity (E. coli genes, multiplexed E. coli, and multiplexed HapMap BRCA1/2 regions). Results We successfully enriched barcoded NGS libraries instead of genomic DNA, achieving reproducible coverage profiles (Pearson correlation coefficients of up to 0.99) across multiplexed samples, with <10% strand bias. However, the SNP calling quality was substantially affected by the choice of tools and mapping strategy. With the aim of reducing computational requirements, we compared conventional whole-genome mapping and SNP-calling with a new faster approach: target-region mapping with subsequent ‘read-backmapping’ to the whole genome to reduce the false detection rate. Consequently, we developed a combined mapping pipeline, which includes standard tools (BWA, SAMtools, etc.), and tested it on public HiSeq2000 exome data from the 1000 Genomes Project. Our pipeline saved 12 hours of run time per Hiseq2000 exome sample and detected ~5% more SNPs than the conventional whole genome approach. This suggests that more potential novel SNPs may be discovered using both approaches than with just the conventional approach. Conclusions We recommend applying our general ‘two-step’ mapping approach for more efficient SNP discovery in tNGS. Our study has also shown the benefit of computing inter-sample SNP-concordances and inspecting read alignments in order to attain more confident results. PMID:22913592
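
    The two-step mapping idea can be sketched as a pipeline; the file names below are placeholders, the bwa mem and samtools fastq -F 4 invocations are standard usage, and the published pipeline's exact filtering steps differ from this minimal sketch.

        import subprocess

        # Step 1: fast mapping against the target regions only.
        subprocess.run("bwa mem target_regions.fa reads.fq > on_target.sam",
                       shell=True, check=True)

        # Keep only reads that mapped to the target (SAM flag 4 = unmapped).
        subprocess.run("samtools fastq -F 4 on_target.sam > on_target.fq",
                       shell=True, check=True)

        # Step 2: 'read-backmapping', i.e. remap the on-target reads to the whole
        # genome so spuriously captured reads can be recognized and filtered
        # before SNP calling, reducing the false detection rate.
        subprocess.run("bwa mem whole_genome.fa on_target.fq > backmapped.sam",
                       shell=True, check=True)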

  1. The effect of viscoelasticity on the stress distribution of adhesively single-lap joint with an internal break in the composite adherends

    NASA Astrophysics Data System (ADS)

    Reza, Arash; Shishesaz, Mohammad

    2017-09-01

    The aim of this research is to study the effect of a break in the laminated composite adherends on the stress distribution in an adhesively bonded single-lap joint with a viscoelastic adhesive and matrix. The proposed model involves two adherends with E-glass fibers and a poly-methyl-methacrylate matrix that have been bonded to each other with a phenolic-epoxy resin. The equilibrium equations, based on shear-lag theory, and the governing differential equations of the model have been derived analytically in the Laplace domain. A numerical inverse Laplace transform, the Gaver-Stehfest method, has been used to obtain the desired results in the time domain. The results obtained at the initial time match the elastic solution exactly. A comparison between the results of the analytical and finite element models also shows relatively good agreement. The results show that viscoelastic behavior decreases the stress peak near the break. Finally, the effect of the size and location of the break, as well as the volume fraction of fibers, on the stress distribution in the adhesive layer is fully investigated.
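
    The Gaver-Stehfest inversion is compact enough to sketch. The following minimal Python implementation (illustrative, not the authors' code) checks itself against the known transform pair F(s) = 1/(s+1), f(t) = exp(-t):

        import math

        def stehfest_coefficients(N):
            """Gaver-Stehfest weights V_k for an even number of terms N."""
            V = []
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, N // 2) + 1):
                    s += (j ** (N // 2) * math.factorial(2 * j)
                          / (math.factorial(N // 2 - j) * math.factorial(j)
                             * math.factorial(j - 1) * math.factorial(k - j)
                             * math.factorial(2 * j - k)))
                V.append((-1) ** (k + N // 2) * s)
            return V

        def invert(F, t, N=12):
            """Numerically invert the Laplace transform F(s) at time t."""
            ln2t = math.log(2.0) / t
            V = stehfest_coefficients(N)
            return ln2t * sum(V[k - 1] * F(k * ln2t) for k in range(1, N + 1))

        # Check against a known pair: F(s) = 1/(s+1)  <=>  f(t) = exp(-t)
        print(invert(lambda s: 1.0 / (s + 1.0), t=1.0), math.exp(-1.0))

    N must be even; N = 12 is a common choice in double precision, since larger N amplifies rounding error in the alternating weights.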

  2. Cloud-based interactive analytics for terabytes of genomic variants data.

    PubMed

    Pan, Cuiping; McInnes, Gregory; Deflaux, Nicole; Snyder, Michael; Bingham, Jonathan; Datta, Somalee; Tsao, Philip S

    2017-12-01

    Large scale genomic sequencing is now widely used to decipher questions in diverse realms such as biological function, human diseases, evolution, ecosystems, and agriculture. Given the quantity and diversity of the data these studies harbor, a robust and scalable data handling and analysis solution is desired. We present interactive analytics using a cloud-based columnar database built on Dremel to perform information compression, comprehensive quality controls, and biological information retrieval in large volumes of genomic data. We demonstrate that such Big Data computing paradigms can provide orders of magnitude faster turnaround for common genomic analyses, transforming long-running batch jobs submitted via a Linux shell into questions that can be asked from a web browser in seconds. Using this method, we assessed a study population of 475 deeply sequenced human genomes for genomic call rate, genotype and allele frequency distribution, variant density across the genome, and pharmacogenomic information. Our analysis framework is implemented in Google Cloud Platform and BigQuery. Codes are available at https://github.com/StanfordBioinformatics/mvp_aaa_codelabs. Contact: cuiping@stanford.edu or ptsao@stanford.edu. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
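
    As a schematic of this interactive-query style (the project, dataset, table and column names below are hypothetical placeholders, not the paper's dataset; the google-cloud-bigquery client calls are standard):

        from google.cloud import bigquery   # pip install google-cloud-bigquery

        client = bigquery.Client()

        # Hypothetical variants table in a BigQuery-friendly schema.
        sql = """
        SELECT reference_name, COUNT(1) AS variant_count
        FROM `my_project.my_dataset.genome_variants`
        GROUP BY reference_name
        ORDER BY variant_count DESC
        """

        # The query scans the columnar store server-side and returns in seconds.
        for row in client.query(sql).result():
            print(row.reference_name, row.variant_count)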

  3. Analytical and Experimental Investigation of Process Loads on Incremental Severe Plastic Deformation

    NASA Astrophysics Data System (ADS)

    Okan Görtan, Mehmet

    2017-05-01

    From the processing point of view, friction is a major problem in severe plastic deformation (SPD) using the equal channel angular pressing (ECAP) process. Incremental ECAP can be used to mitigate frictional effects during SPD. A new incremental ECAP process has been proposed recently. This new process, called equal channel angular swaging (ECAS), combines conventional ECAP with the incremental bulk metal forming method of rotary swaging. The ECAS tool system consists of two dies with an angled channel that contains two shear zones. During the ECAS process, two forming tool halves, arranged concentrically around the workpiece, perform high-frequency radial movements with short strokes while samples are pushed through them. The oscillation direction nearly coincides with the shearing direction in the workpiece. The most important advantages in comparison to conventional ECAP are a significant reduction of the forces in the material feeding direction and the potential for extension to continuous processing. In the current study, the mechanics of the ECAS process are investigated using the slip-line field approach. An analytical model is developed to predict process loads. The proposed model is validated using experiments and FE simulations.

  4. Factor structure of the Halstead-Reitan Neuropsychological Battery for children: a brief report supplement.

    PubMed

    Ross, Sylvia An; Allen, Daniel N; Goldstein, Gerald

    2014-01-01

    The Halstead-Reitan Neuropsychological Battery (HRNB) is the first factor-analyzed neuropsychological battery and consists of three batteries for young children, older children, and adults. Halstead's original factor analysis extracted four factors from the adult version of the battery, which were the basis for his theory of biological intelligence. These factors were called Central Integrative Field, Abstraction, Power, and Directional. Since this original analysis, Reitan's additions to the battery, and the development of the child versions of the test, factor-analytic research has continued. An introduction and the adult literature are reviewed in Ross, Allen, and Goldstein (in press). In this supplemental article, factor-analytic studies of the HRNB with children are reviewed. It is concluded that factor analysis of the HRNB or Reitan-Indiana Neuropsychological Battery with children does not replicate the extensiveness of the adult literature, although there is some evidence that when the traditional battery for older children is used, the factor structure is similar to what is found in adult studies. Reitan's changes to the battery appear to have added factors, including language and sensory-perceptual factors. When other tests and scoring methods are used in addition to the core battery, differing solutions are produced.

  5. Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Köcher, S. S.; Institute of Energy and Climate Research; Heydenreich, T.

    Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.

  6. Analytical Modelling of a Refractive Index Sensor Based on an Intrinsic Micro Fabry-Perot Interferometer

    PubMed Central

    Vargas-Rodriguez, Everardo; Guzman-Chavez, Ana D.; Cano-Contreras, Martin; Gallegos-Arellano, Eloisa; Jauregui-Vazquez, Daniel; Hernández-García, Juan C.; Estudillo-Ayala, Julian M.; Rojas-Laguna, Roberto

    2015-01-01

    In this work a refractive index sensor based on a combination of the non-dispersive sensing (NDS) and Tunable Laser Spectroscopy (TLS) principles is presented. Here, in order to have one reference channel and one measurement channel, a single-beam dual-path configuration is used to implement the NDS principle. These channels are monitored with a pair of identical optical detectors, which are correlated to calculate the overall sensor response, called here the depth of modulation. It is shown that this is useful to minimize drifting errors due to source power variations. Furthermore, a comprehensive analysis of a refractive index sensing setup based on an intrinsic micro Fabry-Perot Interferometer (FPI) is described. Here, the changes in the FPI pattern as the exit refractive index is varied are analytically modelled using the characteristic matrix method. Additionally, our simulated results are supported by experimental measurements, which are also provided. Finally, it is shown that by using this principle a simple refractive index sensor with a resolution on the order of 2.15 × 10−4 RIU can be implemented using a pair of standard and low cost photodetectors. PMID:26501277
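
    A minimal sketch of the characteristic (transfer) matrix calculation for a single-layer cavity at normal incidence is given below; the cavity length, fiber index and analyte index values are illustrative placeholders, not the sensor's actual parameters.

        import numpy as np

        def fpi_reflectance(wavelengths, n_cavity, d, n_in=1.45, n_out=1.0):
            """Reflectance of a single-layer Fabry-Perot cavity via the
            characteristic matrix method, normal incidence."""
            R = np.empty_like(wavelengths, dtype=float)
            for i, lam in enumerate(wavelengths):
                delta = 2 * np.pi * n_cavity * d / lam   # phase thickness
                M = np.array([[np.cos(delta), 1j * np.sin(delta) / n_cavity],
                              [1j * n_cavity * np.sin(delta), np.cos(delta)]])
                B, C = M @ np.array([1.0, n_out])        # [B, C]^T = M [1, n_exit]^T
                r = (n_in * B - C) / (n_in * B + C)
                R[i] = abs(r) ** 2
            return R

        lam = np.linspace(1.50e-6, 1.60e-6, 2000)        # wavelength sweep, metres
        for n_ext in (1.3330, 1.3340):                   # exit (analyte) index step
            R = fpi_reflectance(lam, n_cavity=1.0, d=60e-6, n_out=n_ext)
            print(n_ext, "fringe depth:", round(float(R.max() - R.min()), 6))

    Varying the exit index changes the fringe depth rather than the fringe positions, which is consistent with reading the sensor out through a depth-of-modulation quantity instead of a wavelength shift.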

  7. Analytical modelling of a refractive index sensor based on an intrinsic micro Fabry-Perot interferometer.

    PubMed

    Vargas-Rodriguez, Everardo; Guzman-Chavez, Ana D; Cano-Contreras, Martin; Gallegos-Arellano, Eloisa; Jauregui-Vazquez, Daniel; Hernández-García, Juan C; Estudillo-Ayala, Julian M; Rojas-Laguna, Roberto

    2015-10-15

    In this work a refractive index sensor based on a combination of the non-dispersive sensing (NDS) and Tunable Laser Spectroscopy (TLS) principles is presented. Here, in order to have one reference channel and one measurement channel, a single-beam dual-path configuration is used to implement the NDS principle. These channels are monitored with a pair of identical optical detectors, which are correlated to calculate the overall sensor response, called here the depth of modulation. It is shown that this is useful to minimize drifting errors due to source power variations. Furthermore, a comprehensive analysis of a refractive index sensing setup based on an intrinsic micro Fabry-Perot Interferometer (FPI) is described. Here, the changes in the FPI pattern as the exit refractive index is varied are analytically modelled using the characteristic matrix method. Additionally, our simulated results are supported by experimental measurements, which are also provided. Finally, it is shown that by using this principle a simple refractive index sensor with a resolution on the order of 2.15 × 10⁻⁴ RIU can be implemented using a pair of standard and low cost photodetectors.

  8. Wind Tunnel Database Development using Modern Experiment Design and Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2003-01-01

    A wind tunnel experiment for characterizing the aerodynamic and propulsion forces and moments acting on a research model airplane is described. The model airplane, called the Free-flying Airplane for Sub-scale Experimental Research (FASER), is a modified off-the-shelf radio-controlled model airplane with a 7 ft wingspan, a tractor propeller driven by an electric motor, and aerobatic capability. FASER was tested in the NASA Langley 12-foot Low-Speed Wind Tunnel, using a combination of traditional sweeps and modern experiment design. Power level was included as an independent variable in the wind tunnel test, to allow characterization of power effects on aerodynamic forces and moments. A modeling technique that employs multivariate orthogonal functions was used to develop accurate analytic models for the aerodynamic and propulsion force and moment coefficient dependencies from the wind tunnel data. Efficient methods for generating orthogonal modeling functions, expanding the orthogonal modeling functions in terms of ordinary polynomial functions, and analytical orthogonal blocking were developed and discussed. The resulting models comprise a set of smooth, differentiable functions for the non-dimensional aerodynamic force and moment coefficients in terms of ordinary polynomials in the independent variables, suitable for nonlinear aircraft simulation.
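
    A minimal sketch of the orthogonalization-and-expansion step follows, using a QR factorization in place of explicit Gram-Schmidt and synthetic data in place of wind tunnel measurements; the regressor set and coefficients are illustrative, not FASER's model structure.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical test points: angle of attack, sideslip, power setting.
        n = 500
        alpha = rng.uniform(-10, 20, n)
        beta, pwr = rng.uniform(-10, 10, n), rng.uniform(0, 1, n)
        CL = 0.1 + 0.08 * alpha - 0.001 * alpha**2 + 0.05 * pwr \
             + rng.normal(0, 0.01, n)

        # Candidate ordinary-polynomial regressors (low order for brevity).
        X = np.column_stack([np.ones(n), alpha, beta, pwr, alpha**2, alpha * pwr])

        # QR orthogonalizes the candidates, so each orthogonal function's
        # contribution to the fit (a_i**2) is decoupled from the others.
        Q, R = np.linalg.qr(X)
        a = Q.T @ CL
        print("RSS reduction per orthogonal function:", np.round(a**2, 3))

        # Expand the retained orthogonal model back to polynomial coefficients.
        coeffs = np.linalg.solve(R, a)
        print("ordinary polynomial coefficients:", np.round(coeffs, 4))

    The decoupled contributions make it easy to keep only the orthogonal functions that measurably reduce the residual, before expanding back to an ordinary polynomial for simulation use.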

  9. 8D likelihood effective Higgs couplings extraction framework in h → 4ℓ

    DOE PAGES

    Chen, Yi; Di Marco, Emanuele; Lykken, Joe; ...

    2015-01-23

    We present an overview of a comprehensive analysis framework aimed at performing direct extraction of all possible effective Higgs couplings to neutral electroweak gauge bosons in the decay to electrons and muons, the so-called 'golden channel'. Our framework is based primarily on a maximum likelihood method constructed from analytic expressions of the fully differential cross sections for h → 4ℓ and for the dominant irreducible qq̄ → 4ℓ background, where 4ℓ = 2e2μ, 4e, 4μ. Detector effects are included by an explicit convolution of these analytic expressions with the appropriate transfer function over all center of mass variables. Utilizing the full set of observables, we construct an unbinned detector-level likelihood which is continuous in the effective couplings. We consider possible ZZ, Zγ, and γγ couplings simultaneously, allowing for general CP odd/even admixtures. A broad overview is given of how the convolution is performed and we discuss the principles and theoretical basis of the framework. This framework can be used in a variety of ways to study Higgs couplings in the golden channel using data obtained at the LHC and other future colliders.

  10. Cloud-based interactive analytics for terabytes of genomic variants data

    PubMed Central

    Pan, Cuiping; McInnes, Gregory; Deflaux, Nicole; Snyder, Michael; Bingham, Jonathan; Datta, Somalee; Tsao, Philip S

    2017-01-01

    Motivation: Large scale genomic sequencing is now widely used to decipher questions in diverse realms such as biological function, human diseases, evolution, ecosystems, and agriculture. With the quantity and diversity these data harbor, a robust and scalable data handling and analysis solution is desired. Results: We present interactive analytics using a cloud-based columnar database built on Dremel to perform information compression, comprehensive quality controls, and biological information retrieval in large volumes of genomic data. We demonstrate such Big Data computing paradigms can provide orders of magnitude faster turnaround for common genomic analyses, transforming long-running batch jobs submitted via a Linux shell into questions that can be asked from a web browser in seconds. Using this method, we assessed a study population of 475 deeply sequenced human genomes for genomic call rate, genotype and allele frequency distribution, variant density across the genome, and pharmacogenomic information. Availability and implementation: Our analysis framework is implemented in Google Cloud Platform and BigQuery. Codes are available at https://github.com/StanfordBioinformatics/mvp_aaa_codelabs. Contact: cuiping@stanford.edu or ptsao@stanford.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28961771

  11. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
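
    To make the idea concrete, the following Monte Carlo sketch constructs generalized pivotal quantities for a normal measurement model and a lower confidence bound on the proportion of results within a total-error limit. The data, limits and acceptance threshold are hypothetical, and the paper's actual test statistic may differ from this generic construction.

        import numpy as np
        from scipy import stats

        # Hypothetical validation data: n runs on a reference sample with known
        # true value T; delta is the allowed total-error limit.
        n, xbar, s, T, delta = 20, 100.4, 1.1, 100.0, 3.0

        rng = np.random.default_rng(0)
        B = 100_000
        # Generalized pivotal quantities for sigma and mu of a normal model.
        sigma_g = s * np.sqrt(n - 1) / np.sqrt(rng.chisquare(n - 1, B))
        mu_g = xbar - rng.standard_normal(B) * sigma_g / np.sqrt(n)

        # GPQ for the proportion of future results within +/- delta of truth.
        p_g = (stats.norm.cdf((delta - (mu_g - T)) / sigma_g)
               - stats.norm.cdf((-delta - (mu_g - T)) / sigma_g))

        lower_bound = np.quantile(p_g, 0.05)   # 95% lower confidence bound
        verdict = "accept method" if lower_bound >= 0.90 else "reject method"
        print(verdict, round(float(lower_bound), 4))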

  12. Lab-on-chip systems for integrated bioanalyses

    PubMed Central

    Madaboosi, Narayanan; Soares, Ruben R.G.; Fernandes, João Tiago S.; Novo, Pedro; Moulas, Geraud; Chu, Virginia

    2016-01-01

    Biomolecular detection systems based on microfluidics are often called lab-on-chip systems. To fully benefit from the miniaturization resulting from microfluidics, one aims to develop ‘from sample-to-answer’ analytical systems, in which the input is a raw or minimally processed biological, food/feed or environmental sample and the output is a quantitative or qualitative assessment of one or more analytes of interest. In general, such systems will require the integration of several steps or operations to perform their function. This review will discuss these stages of operation, including fluidic handling, which assures that the desired fluid arrives at a specific location at the right time and under the appropriate flow conditions; molecular recognition, which allows the capture of specific analytes at precise locations on the chip; transduction of the molecular recognition event into a measurable signal; sample preparation upstream from analyte capture; and signal amplification procedures to increase sensitivity. Seamless integration of the different stages is required to achieve a point-of-care/point-of-use lab-on-chip device that allows analyte detection at the relevant sensitivity ranges, with a competitive analysis time and cost. PMID:27365042

  13. A Practical Application of Value of Information and Prospective Payback of Research to Prioritize Evaluative Research.

    PubMed

    Andronis, Lazaros; Billingham, Lucinda J; Bryan, Stirling; James, Nicholas D; Barton, Pelham M

    2016-04-01

    Efforts to ensure that funded research represents "value for money" have led to increasing calls for the use of analytic methods in research prioritization. A number of analytic approaches have been proposed to assist research funding decisions, the most prominent of which are value of information (VOI) and prospective payback of research (PPoR). Despite the increasing interest in the topic, there are insufficient VOI and PPoR applications on the same case study to contrast their methods and compare their outcomes. We undertook VOI and PPoR analyses to determine the value of conducting 2 proposed research programs. The application served as a vehicle for identifying differences and similarities between the methods, provided insight into the assumptions and practical requirements of undertaking prospective analyses for research prioritization, and highlighted areas for future research. VOI and PPoR were applied to case studies representing proposals for clinical trials in advanced non-small-cell lung cancer and prostate cancer. Decision models were built to synthesize the evidence available prior to the funding decision. VOI (expected value of perfect and sample information) and PPoR (PATHS model) analyses were undertaken using the developed models. VOI and PPoR results agreed in direction, suggesting that the proposed trials would be cost-effective investments. However, results differed in magnitude, largely due to the way each method conceptualizes the possible outcomes of further research and the implementation of research results in practice. Compared with VOI, PPoR is less complex but requires more assumptions. Although the approaches are not free from limitations, they can provide useful input for research funding decisions. © The Author(s) 2015.
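
    The core VOI quantity is easy to state: the expected value of perfect information (EVPI) is the expected payoff of deciding with the uncertainty resolved, minus the payoff of the best decision under current uncertainty. Below is a minimal Monte Carlo sketch for a hypothetical two-option decision; the net-benefit distribution and population size are illustrative, not taken from the paper's case studies.

        import numpy as np

        rng = np.random.default_rng(0)
        B = 200_000

        # Hypothetical incremental net benefit of 'new treatment' vs 'standard'
        # under parameter uncertainty; standard care is the zero baseline.
        inb_new = rng.normal(500.0, 2000.0, B)           # can be negative
        nb = np.column_stack([np.zeros(B), inb_new])     # net benefit per option

        # EVPI = E[max over options] - max over options of E[net benefit]
        evpi_per_patient = nb.max(axis=1).mean() - nb.mean(axis=0).max()

        population = 10_000     # patients affected by the decision, illustrative
        print("population EVPI ~", round(evpi_per_patient * population))

    Comparing such a population EVPI with the cost of the proposed trial is the standard screening step: research whose cost exceeds the EVPI cannot be worthwhile, however well designed.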

  14. Next Generation Sequence Analysis and Computational Genomics Using Graphical Pipeline Workflows

    PubMed Central

    Torri, Federica; Dinov, Ivo D.; Zamanyan, Alen; Hobel, Sam; Genco, Alex; Petrosyan, Petros; Clark, Andrew P.; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Knowles, James A.; Ames, Joseph; Kesselman, Carl; Toga, Arthur W.; Potkin, Steven G.; Vawter, Marquis P.; Macciardi, Fabio

    2012-01-01

    Whole-genome and exome sequencing have already proven to be essential and powerful methods to identify genes responsible for simple Mendelian inherited disorders. These methods can be applied to complex disorders as well, and have been adopted as one of the current mainstream approaches in population genetics. These achievements have been made possible by next generation sequencing (NGS) technologies, which require substantial bioinformatics resources to analyze the dense and complex sequence data. The huge analytical burden of data from genome sequencing might be seen as a bottleneck slowing the publication of NGS papers at this time, especially in psychiatric genetics. We review the existing methods for processing NGS data, to place into context the rationale for the design of a computational resource. We describe our method, the Graphical Pipeline for Computational Genomics (GPCG), to perform the computational steps required to analyze NGS data. The GPCG implements flexible workflows for basic sequence alignment, sequence data quality control, single nucleotide polymorphism analysis, copy number variant identification, annotation, and visualization of results. These workflows cover all the analytical steps required for NGS data, from processing the raw reads to variant calling and annotation. The current version of the pipeline is freely available at http://pipeline.loni.ucla.edu. These applications of NGS analysis may gain clinical utility in the near future (e.g., identifying miRNA signatures in diseases) when the bioinformatics approach is made feasible. Taken together, the annotation tools and strategies that have been developed to retrieve information and test hypotheses about the functional role of variants present in the human genome will help to pinpoint the genetic risk factors for psychiatric disorders. PMID:23139896

  15. Vibration band gaps for elastic metamaterial rods using wave finite element method

    NASA Astrophysics Data System (ADS)

    Nobrega, E. D.; Gautier, F.; Pelat, A.; Dos Santos, J. M. C.

    2016-10-01

    Band gaps in elastic metamaterial rods with a spatially periodic distribution and periodically attached local resonators are investigated. Recent techniques for analyzing metamaterial systems combine an analytical or numerical method with wave propagation theory. One of them, called here the wave spectral element method (WSEM), combines the spectral element method (SEM) with Floquet-Bloch's theorem. A more recent methodology called the wave finite element method (WFEM), developed to calculate the dynamic behavior of periodic acoustic and structural systems, takes a similar approach with SEM replaced by the conventional finite element method (FEM). In this paper, WFEM is used to calculate band gaps in elastic metamaterial rods with a spatially periodic distribution and periodically attached local resonators of multiple degrees of freedom (M-DOF). Simulated examples with band gaps generated by Bragg scattering and local resonators are calculated by WFEM and verified with WSEM, which is used as a reference method. Results are presented in the form of the attenuation constant, vibration transmittance and frequency response function (FRF). For all cases, WFEM and WSEM results are in agreement, provided that the number of elements used in WFEM is sufficient for convergence. An experimental test was conducted on a real elastic metamaterial rod, manufactured in plastic on a 3D printer, without the local-resonance effect. The experimental results for the metamaterial rod with band gaps generated by Bragg scattering are compared with the simulated ones. Both numerical methods (WSEM and WFEM) locate the band gap position and width very close to the experimental results. A hybrid approach combining WFEM with the commercial finite element software ANSYS is proposed to model complex metamaterial systems. Two examples illustrating its efficiency and accuracy in modeling an elastic metamaterial rod unit cell, using a 1D simple rod element and a 3D solid element, are demonstrated, and the results show good agreement with the experimental data.
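
    The 1D Floquet-Bloch computation behind such attenuation-constant plots can be sketched with exact rod transfer matrices. This is the transfer-matrix analogue of the approach, not the paper's finite element implementation, and the material properties and cell geometry below are illustrative.

        import numpy as np

        def rod_T(E, rho, A, L, w):
            """Exact transfer matrix of a uniform rod segment, state [u, N]."""
            k = w * np.sqrt(rho / E)
            c, s = np.cos(k * L), np.sin(k * L)
            return np.array([[c, s / (E * A * k)],
                             [-E * A * k * s, c]])

        # Bi-material unit cell (illustrative steel-like / plastic-like values).
        cell = [(210e9, 7800.0, 1e-4, 0.05),   # E, rho, A, L of segment 1
                (3e9,   1200.0, 1e-4, 0.05)]   # segment 2

        for f in np.linspace(10.0, 20e3, 2000)[::400]:
            w = 2 * np.pi * f
            T = np.eye(2)
            for (E, rho, A, L) in cell:
                T = rod_T(E, rho, A, L, w) @ T
            # Bloch condition: state(x + L_cell) = lam * state(x).
            lam = np.linalg.eigvals(T).astype(complex)
            mu = np.log(lam[np.argmin(abs(lam))])   # decaying Bloch wave
            print(f"{f:8.1f} Hz  attenuation per cell = {abs(mu.real):.3f}")

    In pass bands the eigenvalues sit on the unit circle and the attenuation is essentially zero; inside a Bragg gap one eigenvalue moves off the circle and the printed attenuation per cell becomes positive.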

  16. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  17. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  18. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  19. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, J.P.

    The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.

  1. Etude analytique et numérique de la réponse en vibration à hautes fréquences d'éprouvettes de fatigue vibratoire des métaux. Application aux aciers

    NASA Astrophysics Data System (ADS)

    Ben Aich, A.; El Kihel, B.; Kifani, A.; Sahban, F.

    1994-07-01

    In the present paper, so-called "ultrasonic fatigue", or fatigue at very high frequency, has been studied for the case of elastic material behaviour, neglecting the thermal effects that can influence the mechanical fields. The mechanical fields and the specimen resonance length have been determined both analytically and numerically. The numerical method used for this calculation is the finite element method (FEM). A martensitic steel ("Soleil A2") and an austenitic steel of type 18-10 ("ICL 472 BC") have been considered in order to compare the two methods (analytical and numerical). It is shown that a perfect agreement is obtained between the two solutions.

  2. 75 FR 58416 - Statement of Organization, Functions and Delegations of Authority

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-24

    ..., customer service and system and analytical support. Chapter RU--Bureau of Clinician Recruitment and Service..., coordinates and evaluates Bureau-wide management activities; (3) maintains effective relationships within HRSA... including but not limited to the Bureau Web site, BCRS Call Center and customer service portal, and...

  3. The Work of High School Counselors' Leadership for Social Justice: An Analytic Autoethnography

    ERIC Educational Resources Information Center

    Griffin, Ramona H.

    2009-01-01

    With the American School Counselor Association's (ASCA) adoption of the National Model, school counselors are called to align their work with educational reform initiatives and provide leadership in public schools (Dollarhide, 2003). School counseling literature supporting leadership for social justice is frequently reiterated (Hatch & Bowers,…

  4. A Methodological Framework to Analyze Stakeholder Preferences and Propose Strategic Pathways for a Sustainable University

    ERIC Educational Resources Information Center

    Turan, Fikret Korhan; Cetinkaya, Saadet; Ustun, Ceyda

    2016-01-01

    Building sustainable universities calls for participative management and collaboration among stakeholders. Combining analytic hierarchy and network processes (AHP/ANP) with statistical analysis, this research proposes a framework that can be used in higher education institutions for integrating stakeholder preferences into strategic decisions. The…

  5. Analytic Hierarchy Process for Personalising Environmental Information

    ERIC Educational Resources Information Center

    Kabassi, Katerina

    2014-01-01

    This paper presents how a Geographical Information System (GIS) can be incorporated in an intelligent learning software system for environmental matters. The system is called ALGIS and incorporates the GIS in order to present effectively information about the physical and anthropogenic environment of Greece in a more interactive way. The system…

  6. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  7. Post-Secularism, Religious Knowledge and Religious Education

    ERIC Educational Resources Information Center

    Carr, David

    2012-01-01

    Post-secularism seems to follow in the wake of other (what are here called) "postal" perspectives--post-structuralism, postmodernism, post-empiricism, post-positivism, post-analytical philosophy, post-foundationalism and so on--in questioning or repudiating what it takes to be the epistemic assumptions of "modernism." To be sure, post-secularism…

  8. Individual Difference Relations in Psychometric and Experimental Cognitive Tasks

    DTIC Science & Technology

    1980-04-01

    underrepresented in the factor-analytic and correlational studies done to date. One such process is what is commonly called encoding (the process…

  9. Engaging Business Students with Data Mining

    ERIC Educational Resources Information Center

    Brandon, Dan

    2016-01-01

    The Economist calls it "a golden vein", and many business experts now say it is the new science of winning. Businesspeople and technologists have many names for this new science: "business intelligence" (BI), "data analytics," and "data mining" are among the most common. The job market for people skilled in this…

  10. Corporate Multiculturalism, Diversity Management, and Positive Interculturalism in Irish Schools and Society

    ERIC Educational Resources Information Center

    Bryan, Audrey

    2010-01-01

    This article offers an empirical critique of recent social and educational policy responses to cultural diversity in an Irish context, with a particular focus on anti-racism, integration and intercultural education policies developed during the so-called "Celtic Tiger" era. Combining ethnographic and discourse analytic techniques, I…

  11. Games and Students: Creating Innovative Professionals

    ERIC Educational Resources Information Center

    Davis, Jason Stratton

    2011-01-01

    To create professionals for the future, who will be innovative and internationally competitive, we need to change the learning environment. The current traditional delivery systems of education do not develop the necessary interpersonal, analytical and creative skills to deal with the new knowledge economy. Baer (2005), in calling for a new model…

  12. Electron-Impact Total Ionization Cross Sections of Fluorine Compounds

    NASA Astrophysics Data System (ADS)

    Kim, Y.-K.; Ali, M. A.; Rudd, M. E.

    1997-10-01

    A theoretical method called the Binary-Encounter-Bethe (BEB) model (M. A. Ali, Y.-K. Kim, H. Hwang, N. M. Weinberger, and M. E. Rudd, J. Chem. Phys. 106, 9602 (1997), and references therein), which combines the Mott cross section at low incident energies T with the Bethe cross section at high T, was applied to fluorine compounds of interest to plasma processing of semiconductors (CF_4, CHF_3, C_2F_6, C_4F_8, etc.). The theory provides total ionization cross sections in an analytic form from the threshold to a few keV in T, making it convenient to use the theory for modeling. The theory is particularly effective for closed-shell molecules. The theoretical cross sections are compared to available experimental data.
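
    The BEB cross section itself is a closed-form expression, which is what makes it convenient for modeling. A minimal Python sketch of the standard per-orbital BEB formula is given below; the orbital constants (binding energy B, orbital kinetic energy U, occupation number N) must come from a quantum-chemistry calculation for the molecule of interest, and the triples used here are placeholders, not data for any real fluorine compound.

        import math

        A0 = 0.529177e-8   # Bohr radius in cm
        RYD = 13.6057      # Rydberg energy in eV

        def beb_orbital(T, B, U, N):
            """BEB ionization cross section (cm^2) of one molecular orbital.
            T: incident electron energy (eV), B: orbital binding energy (eV),
            U: orbital kinetic energy (eV), N: orbital occupation number."""
            t = T / B
            if t <= 1.0:
                return 0.0          # below the ionization threshold of this orbital
            u = U / B
            S = 4.0 * math.pi * A0**2 * N * (RYD / B)**2
            lnt = math.log(t)
            return (S / (t + u + 1.0)) * (0.5 * lnt * (1.0 - 1.0 / t**2)
                                          + 1.0 - 1.0 / t - lnt / (t + 1.0))

        def beb_total(T, orbitals):
            """Total cross section: sum of per-orbital contributions."""
            return sum(beb_orbital(T, B, U, N) for (B, U, N) in orbitals)

        # Illustrative only: these (B, U, N) values are made up.
        example_orbitals = [(15.7, 25.0, 2), (17.9, 40.0, 4)]
        print(beb_total(100.0, example_orbitals))  # cross section at T = 100 eV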

  13. Modular workcells: modern methods for laboratory automation.

    PubMed

    Felder, R A

    1998-12-01

    Laboratory automation is beginning to become an indispensable survival tool for laboratories facing difficult market competition. However, estimates suggest that only 8% of laboratories will be able to afford total laboratory automation systems. Therefore, automation vendors have developed alternative hardware configurations called 'modular automation', to fit the smaller laboratory. Modular automation consists of consolidated analyzers, integrated analyzers, modular workcells, and pre- and post-analytical automation. These terms will be defined in this paper. Using a modular automation model, the automated core laboratory will become a site where laboratory data is evaluated by trained professionals to provide diagnostic information to practising physicians. Modern software information management and process control tools will complement modular hardware. Proper standardization that will allow vendor-independent modular configurations will assure success of this revolutionary new technology.

  14. Characterization of photomultiplier tubes with a realistic model through GPU-boosted simulation

    NASA Astrophysics Data System (ADS)

    Anthony, M.; Aprile, E.; Grandi, L.; Lin, Q.; Saldanha, R.

    2018-02-01

    The accurate characterization of a photomultiplier tube (PMT) is crucial in a wide variety of applications. However, current methods do not give fully accurate representations of the response of a PMT, especially at very low light levels. In this work, we present a new and more realistic model of the response of a PMT, called the cascade model, and use it to characterize two different PMTs at various voltages and light levels. The cascade model is shown to outperform the more common Gaussian model in almost all circumstances and to agree well with a newly introduced model independent approach. The technical and computational challenges of this model are also presented along with the employed solution of developing a robust GPU-based analysis framework for this and other non-analytical models.
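
    The paper's cascade model tracks the full dynode multiplication chain; the contrast with the common Gaussian model can be illustrated with a deliberately simplified Monte Carlo sketch like the one below, in which only the first dynode's Poisson statistics are retained. All gains, noise widths and light levels here are made-up illustrative values, not parameters from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def gaussian_model(n_events, mu, q1=1.0, sigma1=0.4, sigma0=0.05):
            # Common Gaussian model: Poisson number of photoelectrons, total
            # charge smeared by a Gaussian of width sigma1 per photoelectron.
            npe = rng.poisson(mu, n_events)
            charge = rng.normal(npe * q1, sigma1 * np.sqrt(np.maximum(npe, 1e-12)))
            return charge + rng.normal(0.0, sigma0, n_events)  # pedestal noise

        def simple_cascade_model(n_events, mu, g1=4.0, sigma0=0.05):
            # Simplified cascade: each photoelectron produces a Poisson number
            # of secondaries at the first dynode (mean g1); later stages are
            # treated as deterministic.  This reproduces the sub-single-
            # photoelectron charges that a single Gaussian peak cannot.
            npe = rng.poisson(mu, n_events)
            secondaries = rng.poisson(g1 * npe)
            return secondaries / g1 + rng.normal(0.0, sigma0, n_events)

        # Charge spectra at a low light level (mu = 0.1 photoelectrons/pulse):
        print(gaussian_model(5, 0.1))
        print(simple_cascade_model(5, 0.1))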

  15. Telling Moments and Everyday Experience: Multiple Methods Research on Couple Relationships and Personal Lives

    PubMed Central

    Gabb, Jacqui; Fink, Janet

    2015-01-01

    Everyday moments and ordinary gestures create the texture of long-term couple relationships. In this article we demonstrate how, by refining our research tools and conceptual imagination, we can better understand these vibrant and visceral relationships. The ‘moments approach’ that we propose provides a lens through which to focus in on couples’ everyday experiences, to gain insight on processes, meanings and cross-cutting analytical themes whilst ensuring that feelings and emotionality remain firmly attached. Calling attention to everyday relationship practices, we draw on empirical research to illustrate and advance our conceptual and methodological argument. The Enduring Love? study included an online survey (n = 5445) and multi-sensory qualitative research with couples (n = 50) to interrogate how they experience, understand and sustain their long-term relationships. PMID:26456983

  16. Shortcuts to adiabaticity. Suppression of pair production in driven Dirac dynamics

    DOE PAGES

    Deffner, Sebastian

    2015-12-21

    Achieving effectively adiabatic dynamics in finite time is a ubiquitous goal in virtually all areas of modern physics. So-called shortcuts to adiabaticity refer to a set of methods and techniques that allow us to produce in a short time the same final state that would result from an adiabatic, infinitely slow process. In this paper we generalize one of these methods, the fast-forward technique, to driven Dirac dynamics. Our main result is that shortcuts to adiabaticity for the (1+1)-dimensional Dirac equation are facilitated by a combination of both scalar and pseudoscalar potentials. Our findings are illustrated for two analytically solvable examples, namely charged particles driven in spatially homogeneous and linear vector fields.

  17. Visualizing Big Data Outliers through Distributed Aggregation.

    PubMed

    Wilkinson, Leland

    2017-08-29

    Visualizing outliers in massive datasets requires statistical pre-processing in order to reduce the scale of the problem to a size amenable to rendering systems like D3, Plotly or analytic systems like R or SAS. This paper presents a new algorithm, called hdoutliers, for detecting multidimensional outliers. It is unique for a) dealing with a mixture of categorical and continuous variables, b) dealing with big-p (many columns of data), c) dealing with big-n (many rows of data), d) dealing with outliers that mask other outliers, and e) dealing consistently with unidimensional and multidimensional datasets. Unlike ad hoc methods found in many machine learning papers, hdoutliers is based on a distributional model that allows outliers to be tagged with a probability. This critical feature reduces the likelihood of false discoveries.
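
    The published hdoutliers algorithm adds Leader-style exemplar reduction, column normalization and random projections to cope with big-n and big-p data; its core probabilistic idea, fitting a distribution to nearest-neighbour distances and tagging as outliers the points whose distances are improbably large, can be sketched as follows. This is a simplified illustration under those assumptions, not the full algorithm.

        import numpy as np
        from scipy.spatial import cKDTree

        def nn_outlier_probs(X, alpha=0.05):
            """Distance-based outlier tagging in the spirit of hdoutliers:
            compute each row's nearest-neighbour distance, fit an exponential
            distribution to those distances by maximum likelihood, and flag
            points whose distance is improbably large under the fit."""
            X = (X - X.mean(0)) / X.std(0)            # crude normalization
            d, _ = cKDTree(X).query(X, k=2)           # k=2: self + nearest other
            nn = d[:, 1]
            rate = 1.0 / nn.mean()                    # exponential MLE
            p_tail = np.exp(-rate * nn)               # survival probability
            return nn, p_tail, p_tail < alpha / len(X)  # Bonferroni-style cut

        X = np.vstack([np.random.default_rng(1).normal(size=(200, 3)),
                       [[8.0, 8.0, 8.0]]])            # one planted outlier
        _, _, flags = nn_outlier_probs(X)
        print(np.nonzero(flags)[0])                   # should report index 200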

  18. An Analytical Assessment of NASA's N(+)1 Subsonic Fixed Wing Project Noise Goal

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Envia, Edmane; Burley, Casey L.

    2010-01-01

    The Subsonic Fixed Wing Project of NASA's Fundamental Aeronautics Program has adopted a noise reduction goal for new, subsonic, single-aisle, civil aircraft expected to replace current 737 and A320 airplanes. These so-called "N+1" aircraft--designated in NASA vernacular as such since they will follow the current, in-service, "N" airplanes--are hoped to achieve certification noise levels 32 cumulative EPNdB below current Stage 4 noise regulations. A notional, N+1, single-aisle, twinjet transport with ultrahigh bypass ratio turbofan engines is analyzed in this study using NASA software and methods. Several advanced noise-reduction technologies are empirically applied to the propulsion system and airframe. Certification noise levels are predicted and compared with the NASA goal.

  19. Developing an analytical tool for evaluating EMS system design changes and their impact on cardiac arrest outcomes: combining geographic information systems with register data on survival rates

    PubMed Central

    2013-01-01

    Background: Out-of-hospital cardiac arrest (OHCA) is a frequent and acute medical condition that requires immediate care. We estimate survival rates from OHCA in the area of Stockholm, through developing an analytical tool for evaluating Emergency Medical Services (EMS) system design changes. The study is also an attempt to validate the proposed model used to generate the outcome measures. Methods and results: This was done by combining a geographic information systems (GIS) simulation of driving times with register data on survival rates. The emergency resources comprised ambulance alone and ambulance plus fire services. The simulation model predicted a baseline survival rate of 3.9 per cent, and reducing the ambulance response time by one minute increased survival to 4.6 per cent. Adding the fire services as first responders (dual dispatch) increased survival to 6.2 per cent from the baseline level. The model predictions were validated using empirical data. Conclusion: We have presented an analytical tool that easily can be generalized to other regions or countries. The model can be used to predict outcomes of cardiac arrest prior to investment in EMS design changes that affect the alarm process, e.g. (1) static changes such as trimming the emergency call handling time or (2) dynamic changes such as location of emergency resources or which resources should carry a defibrillator. PMID:23415045

  20. Rational Selection, Criticality Assessment, and Tiering of Quality Attributes and Test Methods for Analytical Similarity Evaluation of Biosimilars.

    PubMed

    Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette

    2018-05-10

    Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.

  1. Simultaneous Spectrophotometric Determination of Rifampicin, Isoniazid and Pyrazinamide in a Single Step

    PubMed Central

    Asadpour-Zeynali, Karim; Saeb, Elhameh

    2016-01-01

    Three antituberculosis medications are investigated in this work: rifampicin, isoniazid and pyrazinamide. The ultraviolet (UV) spectra of these compounds overlap, so suitable chemometric methods are helpful for their simultaneous spectrophotometric determination. A generalized version of the net analyte signal standard addition method (GNASSAM) was used for determination of the three antituberculosis medications as a model system. In the generalized net analyte signal standard addition method only one standard solution is prepared for all analytes. This standard solution contains a mixture of all analytes of interest, and its addition to the sample causes increases in the net analyte signal of each analyte that are proportional to the concentrations of the analytes in the added standard solution. For the determination of the concentration of each analyte in some synthetic mixtures, the UV spectra of the pure analytes and of each sample were recorded in the range of 210-550 nm. The standard addition procedure was performed for each sample, the UV spectrum was recorded after each addition, and the results were analyzed by the net analyte signal method. The obtained concentrations show acceptable performance of GNASSAM in these cases. PMID:28243267
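
    The net analyte signal is the part of a measured spectrum orthogonal to the subspace spanned by the interferents' spectra, and standard addition then regresses its norm against the added concentration. A minimal numpy sketch of that idea is given below on synthetic Gaussian-band spectra; the band shapes, concentrations and noise level are all invented for illustration, and the full GNASSAM procedure differs in detail (one mixed standard for all analytes).

        import numpy as np

        def nas_vector(r, S_others):
            """Net analyte signal: part of spectrum r orthogonal to the space
            spanned by the interferents' spectra (columns of S_others)."""
            P = S_others @ np.linalg.pinv(S_others)   # projector onto interferents
            return r - P @ r

        # Standard-addition step for one analyte on synthetic data: NAS norms
        # are regressed against added concentration and the unknown is the
        # magnitude of the x-intercept, as in classical standard addition.
        rng = np.random.default_rng(2)
        wl = np.linspace(0, 1, 100)
        s_a = np.exp(-((wl - 0.4) / 0.08) ** 2)       # pure spectrum, analyte
        s_b = np.exp(-((wl - 0.6) / 0.10) ** 2)       # pure spectrum, interferent
        c_unknown, c_added = 2.0, np.array([0.0, 1.0, 2.0, 3.0])
        R = np.array([(c_unknown + c) * s_a + 1.5 * s_b
                      + rng.normal(0, 1e-3, wl.size) for c in c_added])
        norms = [np.linalg.norm(nas_vector(r, s_b[:, None])) for r in R]
        slope, intercept = np.polyfit(c_added, norms, 1)
        print(intercept / slope)                      # estimate of c_unknown ~ 2.0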

  2. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE PAGES

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    2017-01-30

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻ which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.
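
    Methods of this family compute real bound-state energies at several strengths λ of an attractive stabilizing potential and then continue a fitted rational function to the physical (unbound) value of λ, where the result becomes complex and yields the Siegert energy. A generic least-squares Padé fit that can be evaluated at complex argument is sketched below; which variable to fit (e.g. the bound-state momentum versus the energy) and where to place the continuation point follow the analytic-continuation literature and are not specified by this sketch.

        import numpy as np

        def pade_fit(x, y, m, n):
            # Least-squares Pade approximant y ~ P_m(x) / Q_n(x), with Q
            # normalized so that its constant term is 1.  Linearized as
            # P(x_i) - y_i * (Q(x_i) - 1) = y_i and solved by lstsq.
            Vp = np.vander(x, m + 1, increasing=True)
            Vq = np.vander(x, n + 1, increasing=True)[:, 1:]
            A = np.hstack([Vp, -y[:, None] * Vq])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            p = coef[:m + 1]
            q = np.concatenate([[1.0], coef[m + 1:]])
            # np.polyval wants highest degree first; it accepts complex z.
            return lambda z: np.polyval(p[::-1], z) / np.polyval(q[::-1], z)

        # Toy check: recover f(x) = (1 + 2x) / (1 - 0.5x) from noisy samples
        # and evaluate the fit at a complex point, as continuation requires.
        x = np.linspace(0.0, 1.0, 20)
        y = (1 + 2 * x) / (1 - 0.5 * x) + np.random.default_rng(6).normal(0, 1e-6, 20)
        f = pade_fit(x, y, 1, 1)
        print(f(0.3 + 0.2j))   # close to (1 + 2z)/(1 - 0.5z) at z = 0.3 + 0.2j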

  3. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻ which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.

  4. A pressure-affected headspace-gas chromatography method for determining calcium carbonate content in paper sample.

    PubMed

    Dai, Yi; Yu, Zhen-Hua; Zhan, Jian-Bo; Chai, Xin-Sheng; Zhang, Shu-Xin; Xie, Wei-Qi; He, Liang

    2017-07-21

    The present work reports on the development of a pressure-affected headspace (HS) analytical technique for the determination of calcium carbonate content in paper samples. By acidification, the carbonate in the sample was converted to CO2 and released into the headspace of a closed vial, and then measured by gas chromatography (GC). When the amount of carbonate in the sample is significant, the pressure created by the CO2 affects the accuracy of the method. However, the pressure also causes a change in the O2 signal in the HS-GC measurement, a change that can be used as an indirect measure of the carbonate in the sample. The results show that the present method has good precision (relative standard deviation < 2.32%) and good accuracy (the relative difference compared to a reference method was < 5.76%). Because the method is also simple, rapid, and accurate, it is suitable for a variety of applications that call for the analysis of high carbonate content in paper samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. [Morphometry of pulmonary tissue: From manual to high throughput automation].

    PubMed

    Sallon, C; Soulet, D; Tremblay, Y

    2017-12-01

    Weibel's research has shown that any alteration of the pulmonary structure has effects on function. This demonstration required a quantitative analysis of lung structures called morphometry. This is possible thanks to stereology, a set of methods based on principles of geometry and statistics. His work has helped to better understand the morphological harmony of the lung, which is essential for its proper functioning. An imbalance leads to pathophysiology such as chronic obstructive pulmonary disease in adults and bronchopulmonary dysplasia in neonates. It is by studying this imbalance that new therapeutic approaches can be developed. These advances are achievable only through morphometric analytical methods, which are increasingly precise and focused, in particular thanks to the high-throughput automation of these methods. This review compares an automated method that we developed in the laboratory with semi-manual methods of morphometric analysis. The automation of morphometric measurements is a fundamental asset in the study of pulmonary pathophysiology because it is an assurance of robustness, reproducibility and speed. This tool will thus contribute significantly to the acceleration of the race for the development of new drugs. Copyright © 2017 SPLF. Published by Elsevier Masson SAS. All rights reserved.

  6. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  7. Method for reduction of selected ion intensities in confined ion beams

    DOEpatents

    Eiden, Gregory C.; Barinaga, Charles J.; Koppenaal, David W.

    1998-01-01

    A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer.

  8. Method for reduction of selected ion intensities in confined ion beams

    DOEpatents

    Eiden, G.C.; Barinaga, C.J.; Koppenaal, D.W.

    1998-06-16

    A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer. 7 figs.

  9. Method for Operating a Sensor to Differentiate Between Analytes in a Sample

    DOEpatents

    Kunt, Tekin; Cavicchi, Richard E; Semancik, Stephen; McAvoy, Thomas J

    1998-07-28

    Disclosed is a method for operating a sensor to differentiate between first and second analytes in a sample. The method comprises the steps of determining an input profile for the sensor which will enhance the difference in the output profiles of the sensor as between the first analyte and the second analyte; determining a first analyte output profile as observed when the input profile is applied to the sensor; determining a second analyte output profile as observed when the input profile is applied to the sensor; introducing the sensor to the sample while applying the input profile to the sensor, thereby obtaining a sample output profile; and evaluating the sample output profile as against the first and second analyte output profiles to thereby determine which of the analytes is present in the sample.
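
    Operationally, the last step amounts to matching the measured output profile against the stored per-analyte reference profiles recorded under the same input profile. A minimal sketch of that comparison (nearest reference in the least-squares sense) is shown below; the profile shapes and names are hypothetical.

        import numpy as np

        def identify_analyte(sample_profile, reference_profiles):
            """Match the output profile measured while the chosen input
            (e.g. temperature) profile is applied against stored per-analyte
            reference profiles; the nearest profile in the least-squares
            sense wins."""
            names = list(reference_profiles)
            d = [np.sum((sample_profile - reference_profiles[n]) ** 2)
                 for n in names]
            return names[int(np.argmin(d))]

        # Hypothetical reference profiles recorded under the same input profile.
        t = np.linspace(0, 1, 50)
        refs = {"analyte_A": np.sin(2 * np.pi * t),
                "analyte_B": np.cos(2 * np.pi * t)}
        sample = np.sin(2 * np.pi * t) + np.random.default_rng(3).normal(0, 0.1, t.size)
        print(identify_analyte(sample, refs))        # -> analyte_A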

  10. On remembering: the notion of memory without recollection.

    PubMed

    Botella, César

    2014-10-01

    The author begins by attempting to evaluate the notions of memory and remembering, taking into account their evolution in Freud's work and the current debates on their relative importance in conducting an analytic treatment. This leads the author to develop an extension of the theory which none the less remains Freudian, by introducing a series of notions (the main ones being the work of figurability, regredience, state of session, negative of trauma, and memory without recollection), and arguing in favour of a principle of convergence-coherence governing mental life. His thesis is the following: analytic practice contains a dimension of an archaeological order, as Freud described it, as well as - thanks to the contribution of contemporary practice denouncing its insufficiency - the complementary need for the analyst to work in a particular way in the session - that is to say, one that involves what he calls a regredience of his or her thought processes, allowing him or her to gain access to early psychic zones beyond the zone of represented memories. This is what he calls transformational psychoanalysis, complementary to archaeological psychoanalysis. The author's theoretical and practical developments are backed up by a personal schema of mental functioning, an extension of Freud's schema of 1900, and the detailed description of an analytic treatment, in particular, the central session which played a crucial role in the success of this analysis. Copyright © 2014 Institute of Psychoanalysis.

  11. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE PAGES

    Izacard, Olivier

    2016-08-02

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. In conclusion, the latter demystifies the Maxwell's demon by statistically describing non-isolated systems.

  12. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of selected carbamate pesticides in water by high-performance liquid chromatography

    USGS Publications Warehouse

    Werner, S.L.; Johnson, S.M.

    1994-01-01

    As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.

  13. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, N.B.; Walker, J.F.

    1990-01-01

    The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.

  14. PESTICIDE ANALYTICAL METHODS TO SUPPORT DUPLICATE-DIET HUMAN EXPOSURE MEASUREMENTS

    EPA Science Inventory

    Historically, analytical methods for determination of pesticides in foods have been developed in support of regulatory programs and are specific to food items or food groups. Most of the available methods have been developed, tested and validated for relatively few analytes an...

  15. Validating Analytical Methods

    ERIC Educational Resources Information Center

    Ember, Lois R.

    1977-01-01

    The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)

  16. Analytical solutions to compartmental indoor air quality models with application to environmental tobacco smoke concentrations measured in a house.

    PubMed

    Ott, Wayne R; Klepeis, Neil E; Switzer, Paul

    2003-08-01

    This paper derives the analytical solutions to multi-compartment indoor air quality models for predicting indoor air pollutant concentrations in the home and evaluates the solutions using experimental measurements in the rooms of a single-story residence. The model uses Laplace transform methods to solve the mass balance equations for two interconnected compartments, obtaining analytical solutions that can be applied without a computer. Environmental tobacco smoke (ETS) sources such as the cigarette typically emit pollutants for relatively short times (7-11 min) and are represented mathematically by a "rectangular" source emission time function, or approximated by a short-duration source called an "impulse" time function. Other time-varying indoor sources also can be represented by Laplace transforms. The two-compartment model is more complicated than the single-compartment model and has more parameters, including the cigarette or combustion source emission rate as a function of time, room volumes, compartmental air change rates, and interzonal air flow factors expressed as dimensionless ratios. This paper provides analytical solutions for the impulse, step (Heaviside), and rectangular source emission time functions. It evaluates the indoor model in an unoccupied two-bedroom home using cigars and cigarettes as sources with continuous measurements of carbon monoxide (CO), respirable suspended particles (RSP), and particulate polycyclic aromatic hydrocarbons (PPAH). Fine particle mass concentrations (RSP or PM3.5) are measured using real-time monitors. In our experiments, simultaneous measurements of concentrations at three heights in a bedroom confirm an important assumption of the model: spatial uniformity of mixing. The parameter values of the two-compartment model were obtained using a "grid search" optimization method, and the predicted solutions agreed well with the measured concentration time series in the rooms of the home. The door and window positions in each room had considerable effect on the pollutant concentrations observed in the home. Because of the small volumes and low air change rates of most homes, indoor pollutant concentrations from smoking activity in a home can be very high and can persist at measurable levels indoors for many hours.
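
    Although the paper's closed-form solutions come from Laplace transforms, the same two-compartment mass balance can be written as a linear ODE system dC/dt = A C + s(t) and evaluated exactly with matrix exponentials, which is convenient for checking a rectangular (finite-duration) source. The sketch below assumes this formulation; the room volumes, flows and emission rate are made-up illustrative values, not the paper's measured parameters.

        import numpy as np
        from scipy.linalg import expm

        def rectangular_source_response(A, s, C0, t_on, t_grid):
            """Concentrations for dC/dt = A C + s(t), with s(t) = s during
            0 <= t < t_on (the 'rectangular' emission) and 0 afterwards.
            Uses the exact matrix-exponential solution piecewise."""
            Ainv_s = np.linalg.solve(A, s)
            C_end = None
            out = []
            for t in t_grid:
                if t <= t_on:
                    C = expm(A * t) @ (C0 + Ainv_s) - Ainv_s
                else:
                    if C_end is None:   # state at the moment the source stops
                        C_end = expm(A * t_on) @ (C0 + Ainv_s) - Ainv_s
                    C = expm(A * (t - t_on)) @ C_end
                out.append(C)
            return np.array(out)

        # Illustrative two-room example (all rates are made-up assumptions):
        V1, V2 = 30.0, 50.0              # room volumes, m^3
        q, a = 20.0, 10.0                # interzonal and outdoor flows, m^3/h
        A = np.array([[-(q + a) / V1,  q / V1],
                      [ q / V2,       -(q + a) / V2]])
        s = np.array([100.0 / V1, 0.0])  # source emits 100 mg/h into room 1
        t = np.linspace(0, 3, 7)         # hours
        print(rectangular_source_response(A, s, np.zeros(2), 0.15, t))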

  17. Major advances in testing of dairy products: milk component and dairy product attribute testing.

    PubMed

    Barbano, D M; Lynch, J M

    2006-04-01

    Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. Therefore, there is high incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance combined with significant improvements in both the chemical and instrumental methods have allowed achievement of improved analytical performance for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.

  18. 77 FR 56176 - Analytical Methods Used in Periodic Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... POSTAL REGULATORY COMMISSION 39 CFR Part 3001 [Docket No. RM2012-7; Order No. 1459] Analytical Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission. ACTION: Notice of proposed... analytical methods approved for use in periodic reporting.\\1\\ \\1\\ Petition of the United States Postal...

  19. Introduction to Validation of Analytical Methods: Potentiometric Determination of CO2

    ERIC Educational Resources Information Center

    Hipólito-Nájera, A. Ricardo; Moya-Hernandez, M. Rosario; Gomez-Balderas, Rodolfo; Rojas-Hernandez, Alberto; Romero-Romo, Mario

    2017-01-01

    Validation of analytical methods is a fundamental subject for chemical analysts working in chemical industries. These methods are also relevant for pharmaceutical enterprises, biotechnology firms, analytical service laboratories, government departments, and regulatory agencies. Therefore, for undergraduate students enrolled in majors in the field…

  20. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience of the difference between performing an external standardization and a standard addition.
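
    The matrix-effect point can be made numerically: an external calibration prepared in a clean matrix is biased when the sample matrix changes the sensitivity, while standard addition, which calibrates in the sample's own matrix, is not. A small illustrative sketch with invented slopes and concentrations:

        import numpy as np

        # External standardization: calibrate with standards in a clean matrix,
        # then read the unknown off the line; a matrix effect that changes the
        # slope goes undetected and biases the result.
        c_std = np.array([0.0, 2.0, 4.0, 6.0])
        s_std = 0.50 * c_std                       # sensitivity in clean matrix
        slope, icpt = np.polyfit(c_std, s_std, 1)

        true_c, matrix_slope = 3.0, 0.40           # sample matrix suppresses signal
        s_sample = matrix_slope * true_c
        print((s_sample - icpt) / slope)           # external standard: biased, 2.4

        # Standard addition: spike the sample itself, so the calibration carries
        # the sample's own matrix; the concentration is the magnitude of the
        # x-intercept of the addition line.
        c_add = np.array([0.0, 1.0, 2.0, 3.0])
        s_add = matrix_slope * (true_c + c_add)
        m, b = np.polyfit(c_add, s_add, 1)
        print(b / m)                               # standard addition: 3.0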

  1. Analysis of Big Data in Gait Biomechanics: Current Trends and Future Directions.

    PubMed

    Phinyomark, Angkoon; Petri, Giovanni; Ibáñez-Marcelo, Esther; Osis, Sean T; Ferber, Reed

    2018-01-01

    The increasing amount of data in biomechanics research has greatly increased the importance of developing advanced multivariate analysis and machine learning techniques, which are better able to handle "big data". Consequently, advances in data science methods will expand the knowledge for testing new hypotheses about biomechanical risk factors associated with walking and running gait-related musculoskeletal injury. This paper begins with a brief introduction to an automated three-dimensional (3D) biomechanical gait data collection system, 3D GAIT, followed by a discussion of how studies in the field of gait biomechanics fit the 5 V's definition of big data: volume, velocity, variety, veracity, and value. Next, we provide a review of recent research and development in multivariate and machine learning methods-based gait analysis that can be applied to big data analytics. These modern biomechanical gait analysis methods include several main modules such as initial input features, dimensionality reduction (feature selection and extraction), and learning algorithms (classification and clustering). Finally, a promising big data exploration tool called "topological data analysis" and directions for future research are outlined and discussed.

  2. A statistical method to estimate low-energy hadronic cross sections

    NASA Astrophysics Data System (ADS)

    Balassa, Gábor; Kovács, Péter; Wolf, György

    2018-02-01

    In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that, when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities we use a phase space description extended with quark combinatorial factors and the possibility of more than one fireball formation. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest. In the latter case we used a numerical method to calculate the more complicated final state probabilities. Additionally, we examined the formation of strange and charmed mesons as well, where we used existing data to fit the relevant model parameters.

  3. Indetermination of particle sizing by laser diffraction in the anomalous size ranges

    NASA Astrophysics Data System (ADS)

    Pan, Linchao; Ge, Baozhen; Zhang, Fugen

    2017-09-01

    The laser diffraction method is widely used to measure particle size distributions. It is generally accepted that the scattering angle becomes smaller with increasing particle size, and that the location of the main peak of the scattered energy distribution in laser diffraction instruments accordingly shifts to smaller angles. This specific principle forms the foundation of the laser diffraction method. However, this principle is not entirely correct for non-absorbing particles in certain size ranges, and these particle size ranges are called anomalous size ranges. Here, we derive the analytical formulae for the bounds of the anomalous size ranges and discuss the influence of the width of the size segments on the signature of the Mie scattering kernel. This anomalous signature of the Mie scattering kernel will result in an indetermination of the particle size distribution when measured by laser diffraction instruments in the anomalous size ranges. By using the singular-value decomposition method we interpret the mechanism of occurrence of this indetermination in detail and then validate its existence by using inversion simulations.
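
    The ordinary (non-anomalous) behaviour is easy to reproduce in the large-particle Fraunhofer limit, where the pattern is I ∝ (2 J1(x)/x)² with x = πD sin(θ)/λ and the first minimum sits at sin(θ) ≈ 1.22 λ/D; the anomalous ranges discussed in the paper are precisely the deviations of the full Mie kernel from this monotone size-angle relation for non-absorbing particles. A short numerical check of the Fraunhofer scaling (the wavelength and diameters are arbitrary choices):

        import numpy as np
        from scipy.special import j1

        lam = 0.6328e-6                      # He-Ne laser wavelength, m

        def fraunhofer_intensity(theta, D):
            """Fraunhofer (large-particle limit) diffraction pattern of a
            particle of diameter D: I ~ (2 J1(x)/x)^2, x = pi D sin(theta)/lam."""
            x = np.pi * D * np.sin(theta) / lam
            x = np.where(x == 0, 1e-12, x)   # avoid division by zero
            return (2.0 * j1(x) / x) ** 2

        theta = np.linspace(1e-5, 0.2, 20000)
        for D in (5e-6, 50e-6):              # 5 um vs 50 um particles
            I = fraunhofer_intensity(theta, D)
            i = np.argmax(np.diff(I) > 0)    # first upturn = first minimum
            print(f"D={D*1e6:.0f} um: first minimum at {theta[i]:.4f} rad "
                  f"(1.22*lam/D = {1.22*lam/D:.4f} rad)")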

  4. No Impact of the Analytical Method Used for Determining Cystatin C on Estimating Glomerular Filtration Rate in Children.

    PubMed

    Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T

    2017-01-01

    Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is, on the other hand, easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given that different analytical methods for the determination of creatinine and Cys C were used to validate the existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated with different GFR formulas, using Scr and Cys C values determined either by the analytical method originally employed to validate each formula or by an alternative analytical method, to evaluate any possible effects on performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR with equations for which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations with Cys C values derived from a possibly incompatible analytical method did not result in a significant difference in the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared to CrCl. Although clinicians should be aware of applying a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.

  5. General point dipole theory for periodic metasurfaces: magnetoelectric scattering lattices coupled to planar photonic structures.

    PubMed

    Chen, Yuntian; Zhang, Yan; Femius Koenderink, A

    2017-09-04

    We study semi-analytically the light emission and absorption properties of arbitrary stratified photonic structures with embedded two-dimensional magnetoelectric point scattering lattices, as used in recent plasmon-enhanced LEDs and solar cells. By employing dyadic Green's function for the layered structure in combination with the Ewald lattice summation to deal with the particle lattice, we develop an efficient method to study the coupling between planar 2D scattering lattices of plasmonic, or metamaterial point particles, coupled to layered structures. Using the 'array scanning method' we deal with localized sources. Firstly, we apply our method to light emission enhancement of dipole emitters in slab waveguides, mediated by plasmonic lattices. We benchmark the array scanning method against a reciprocity-based approach to find that the calculated radiative rate enhancement in k-space below the light cone shows excellent agreement. Secondly, we apply our method to study absorption-enhancement in thin-film solar cells mediated by periodic Ag nanoparticle arrays. Lastly, we study the emission distribution in k-space of a coupled waveguide-lattice system. In particular, we explore the dark mode excitation on the plasmonic lattice using the so-called array scanning method. Our method could be useful for simulating a broad range of complex nanophotonic structures, i.e., metasurfaces, plasmon-enhanced light emitting systems and photovoltaics.

  6. Inferring hidden causal relations between pathway members using reduced Google matrix of directed biological networks

    PubMed Central

    2018-01-01

    Signaling pathways represent parts of the global biological molecular network which connects them into a seamless whole through complex direct and indirect (hidden) crosstalk whose structure can change during development or in pathological conditions. We suggest a novel methodology, called Googlomics, for the structural analysis of directed biological networks using spectral analysis of their Google matrices, using parallels with quantum scattering theory, developed for nuclear and mesoscopic physics and quantum chaos. We introduce the analytical “reduced Google matrix” method for the analysis of biological network structure. The method allows inferring hidden causal relations between the members of a signaling pathway or a functionally related group of genes. We investigate how the structure of hidden causal relations can be reprogrammed as a result of changes in the transcriptional network layer during carcinogenesis. The suggested Googlomics approach rigorously characterizes complex systemic changes in the wiring of large causal biological networks in a computationally efficient way. PMID:29370181
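
    The reduced Google matrix of a node subset r combines the direct transitions inside r with all indirect pathways through the complementary subset s, G_R = G_rr + G_rs (1 - G_ss)^{-1} G_sr; the second term is what encodes the hidden crosstalk. A compact numpy sketch on a random directed network (the adjacency matrix and subset are arbitrary):

        import numpy as np

        def google_matrix(adj, alpha=0.85):
            """Column-stochastic Google matrix of a directed network;
            dangling columns are replaced by uniform transitions."""
            A = adj.astype(float)
            cols = A.sum(0)
            S = np.where(cols > 0, A / np.where(cols == 0, 1, cols), 1.0 / len(A))
            n = len(A)
            return alpha * S + (1 - alpha) / n * np.ones((n, n))

        def reduced_google_matrix(G, r):
            """G_R = G_rr + G_rs (I - G_ss)^{-1} G_sr over node subset r:
            direct transitions within r plus all hidden pathways through
            the rest of the network."""
            r = np.asarray(r)
            s = np.setdiff1d(np.arange(len(G)), r)
            G_rr, G_rs = G[np.ix_(r, r)], G[np.ix_(r, s)]
            G_sr, G_ss = G[np.ix_(s, r)], G[np.ix_(s, s)]
            return G_rr + G_rs @ np.linalg.solve(np.eye(len(s)) - G_ss, G_sr)

        adj = (np.random.default_rng(4).random((50, 50)) < 0.1).astype(int)
        G = google_matrix(adj)
        # Column sums are 1: the reduced matrix is itself column-stochastic.
        print(reduced_google_matrix(G, [0, 1, 2, 3, 4]).sum(0))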

  7. Bayesian analysis of multimethod ego-depletion studies favours the null hypothesis.

    PubMed

    Etherton, Joseph L; Osborne, Randall; Stephenson, Katelyn; Grace, Morgan; Jones, Chas; De Nadai, Alessandro S

    2018-04-01

    Ego-depletion refers to the purported decrease in performance on a task requiring self-control after engaging in a previous task involving self-control, with self-control proposed to be a limited resource. Despite many published studies consistent with this hypothesis, recurrent null findings within our laboratory and indications of publication bias have called into question the validity of the depletion effect. This project used three depletion protocols involving three different depleting initial tasks followed by three different self-control tasks as dependent measures (total n = 840). For each method, effect sizes were not significantly different from zero. When data were aggregated across the three different methods and examined meta-analytically, the pooled effect size was not significantly different from zero (for all priors evaluated, Hedges' g = 0.10 with 95% credibility interval of [-0.05, 0.24]) and Bayes factors reflected strong support for the null hypothesis (Bayes factor > 25 for all priors evaluated). © 2018 The British Psychological Society.
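
    How a Bayes factor can quantify support for the null given a pooled effect estimate can be illustrated with a normal-normal Savage-Dickey computation: pool the per-method effects by inverse-variance weighting, then take the ratio of posterior to prior density at zero. The per-method effect sizes, standard errors and prior width below are invented for illustration and are not the study's data; note that the resulting BF depends on the chosen prior width.

        import numpy as np
        from scipy.stats import norm

        def fixed_effect_pool(g, se):
            """Inverse-variance pooled effect size and its standard error."""
            w = 1.0 / se**2
            return (w * g).sum() / w.sum(), np.sqrt(1.0 / w.sum())

        def bf01_savage_dickey(g_hat, se, prior_sd=0.5):
            """BF in favour of the null via the Savage-Dickey ratio with a
            N(0, prior_sd^2) prior on the effect: posterior density at 0
            over prior density at 0 (normal-normal conjugacy)."""
            post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
            post_mean = post_var * g_hat / se**2
            return (norm.pdf(0, post_mean, np.sqrt(post_var))
                    / norm.pdf(0, 0, prior_sd))

        # Made-up per-method effects, for illustration only:
        g = np.array([0.12, 0.05, 0.13])
        se = np.array([0.12, 0.13, 0.12])
        g_pool, se_pool = fixed_effect_pool(g, se)
        print(g_pool, bf01_savage_dickey(g_pool, se_pool))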

  8. Propeller noise prediction

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.

    1983-01-01

    Analytic propeller noise prediction involves a sequence of computations culminating in the application of acoustic equations. The prediction sequence currently used by NASA in its ANOPP (aircraft noise prediction) program is described. The elements of the sequence are called program modules. The first group of modules analyzes the propeller geometry, the aerodynamics, including both potential and boundary layer flow, the propeller performance, and the surface loading distribution. This group of modules is based entirely on aerodynamic strip theory. The next group of modules deals with the actual noise prediction, based on data from the first group. Deterministic predictions of periodic thickness and loading noise are made using Farassat's time-domain methods. Broadband noise is predicted by the semi-empirical Schlinker-Amiet method. Near-field predictions of fuselage surface pressures include the effects of boundary layer refraction and (for a cylinder) scattering. Far-field predictions include atmospheric and ground effects. Predictions are compared with experimental data from subsonic and transonic propellers, and NASA's future directions in propeller noise technology development are indicated.

  9. On the importance of mathematical methods for analysis of MALDI-imaging mass spectrometry data.

    PubMed

    Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore

    2012-03-21

    In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and was successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular, for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we overview some computational methods for analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.
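
    The preprocessing steps named here (normalization, baseline removal, peak picking) can each be prototyped in a few lines; the sketch below uses TIC normalization, a running-minimum baseline and simple local-maximum peak picking with a crude SNR-style threshold. Real MALDI-imaging pipelines use more careful estimators (e.g. local noise models), so treat this purely as an illustration of the chain.

        import numpy as np

        def preprocess_spectrum(mz, intensity, baseline_win=101):
            """TIC normalization, running-minimum baseline removal, and
            local-maximum peak picking for one spectrum."""
            y = intensity / intensity.sum()                    # TIC normalization
            half = baseline_win // 2
            pad = np.pad(y, half, mode="edge")
            baseline = np.array([pad[i:i + baseline_win].min() # running minimum
                                 for i in range(len(y))])
            y = y - baseline
            peaks = np.flatnonzero((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])) + 1
            noise = np.median(y[y > 0])                        # crude noise scale
            peaks = peaks[y[peaks] > 25.0 * noise]
            return y, mz[peaks]

        # Synthetic demo: one peak on a sloping baseline with noise.
        mz = np.linspace(1000, 2000, 5000)
        rng = np.random.default_rng(7)
        raw = (50 * np.exp(-0.5 * ((mz - 1500) / 0.5) ** 2)
               + 5 + 0.002 * mz
               + rng.gamma(1.0, 0.3, mz.size))
        y, peak_mz = preprocess_spectrum(mz, raw)
        print(peak_mz)                                         # near m/z 1500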

  10. On the Importance of Mathematical Methods for Analysis of MALDI-Imaging Mass Spectrometry Data.

    PubMed

    Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore

    2012-03-01

    In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and was successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular, for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we overview some computational methods for analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.

  11. Burstiness and tie activation strategies in time-varying social networks.

    PubMed

    Ubaldi, Enrico; Vezzani, Alessandro; Karsai, Márton; Perra, Nicola; Burioni, Raffaella

    2017-04-13

    The recent developments in the field of social networks shifted the focus from static to dynamical representations, calling for new methods for their analysis and modelling. Observations in real social systems have identified two main mechanisms that play a primary role in networks' evolution and influence ongoing spreading processes: the strategies individuals adopt when selecting between new or old social ties, and the bursty nature of the social activity setting the pace of these choices. We introduce a time-varying network model accounting for both tie selection and burstiness, and we analytically study its phase diagram. The interplay of the two effects is non-trivial and, interestingly, the effects of burstiness might be suppressed in regimes where individuals exhibit a strong preference towards previously activated ties. The results are tested against numerical simulations and compared with two empirical datasets, with very good agreement. Consequently, the framework provides a principled method to classify the temporal features of real networks, and thus yields new insights to elucidate the effects of social dynamics on spreading processes.
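
    A toy version of these two ingredients can be simulated directly, as sketched below: activations are spaced by heavy-tailed (bursty) waiting times, and on each activation the agent either explores a new tie or reinforces an old one according to a memory rule. The functional forms and parameter values are common modelling choices invented for this sketch, not necessarily the paper's exact ones.

```python
# Toy simulation of the two ingredients discussed above: bursty activation
# (heavy-tailed inter-event times) and a memory rule for choosing between
# old and new ties. Functional forms and parameters are invented here.
import random

N = 1000             # agents
STEPS = 20000        # activation events
BETA, C = 1.0, 1.0   # memory rule: p(new tie) = (1 + k / C) ** -BETA
ALPHA = 1.5          # Pareto exponent of the inter-event times

contacts = {i: set() for i in range(N)}
next_active = {i: random.paretovariate(ALPHA) for i in range(N)}

for _ in range(STEPS):
    i = min(next_active, key=next_active.get)        # next agent to activate
    k = len(contacts[i])
    if random.random() < (1.0 + k / C) ** -BETA:     # explore a new tie
        j = random.randrange(N)
        while j == i:
            j = random.randrange(N)
    else:                                            # reinforce an old tie
        j = random.choice(tuple(contacts[i]))
    contacts[i].add(j)
    contacts[j].add(i)
    next_active[i] += random.paretovariate(ALPHA)    # bursty waiting time

degrees = sorted(len(s) for s in contacts.values())
print("median degree:", degrees[N // 2], "max degree:", degrees[-1])
```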

  12. Intrinsic ethics regarding integrated assessment models for climate management.

    PubMed

    Schienke, Erich W; Baum, Seth D; Tuana, Nancy; Davis, Kenneth J; Keller, Klaus

    2011-09-01

    In this essay we develop and argue for the adoption of a more comprehensive model of research ethics than is included within current conceptions of responsible conduct of research (RCR). We argue that our model, which we label the ethical dimensions of scientific research (EDSR), is a more comprehensive approach to encouraging ethically responsible scientific research than the approach typically adopted in current RCR training. This essay focuses on developing a pedagogical approach that enables scientists to better understand and appreciate one important component of this model, what we call intrinsic ethics. Intrinsic ethical issues arise when values and ethical assumptions are embedded within scientific findings and analytical methods. Through a close examination of a case study and its application in teaching, namely the evaluation of climate change integrated assessment models, this paper develops a method and a case for including intrinsic ethics within research ethics training to provide scientists with a comprehensive understanding and appreciation of the critical role of values and ethical choices in the production of research outcomes.

  13. A stabilized element-based finite volume method for poroelastic problems

    NASA Astrophysics Data System (ADS)

    Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo

    2018-07-01

    The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are numerically solved. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used on three-dimensional unstructured grids composed of elements of different types. One of the difficulties in solving these equations is the numerical pressure instability that can arise when undrained conditions take place. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that also accounts for the pressure gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as realistic three-dimensional cases, are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.
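
    One classical analytical benchmark of the kind the abstract alludes to is Terzaghi's one-dimensional consolidation problem, whose series solution is cheap to evaluate and is commonly used to verify poroelastic solvers. The sketch below evaluates that series; whether this exact case is among the paper's test problems is an assumption.

```python
# Terzaghi's one-dimensional consolidation series solution, a classical
# analytical benchmark for poroelastic codes. Whether this exact case is
# among the paper's verification problems is an assumption of this sketch.
import math

def terzaghi_pressure(z, t, H, c_v, p0, n_terms=200):
    """Excess pore pressure at depth z and time t in a layer of thickness H,
    drained at z = 0, with consolidation coefficient c_v and initial load p0."""
    total = 0.0
    for m in range(n_terms):
        k = (2 * m + 1) * math.pi / (2.0 * H)
        total += (4.0 / math.pi) / (2 * m + 1) * math.sin(k * z) \
                 * math.exp(-k * k * c_v * t)
    return p0 * total

# Shortly after loading (nearly undrained), the pressure is still close to
# p0; as consolidation proceeds it decays toward zero.
for t in (1e-6, 1.0, 10.0):
    print(f"t = {t:8.1e}  ->  p = {terzaghi_pressure(0.5, t, 1.0, 1e-2, 1.0):.4f}")
```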

  14. Model reduction by trimming for a class of semi-Markov reliability models and the corresponding error bound

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Palumbo, Daniel L.

    1991-01-01

    Semi-Markov processes have proved to be an effective and convenient tool for constructing models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs have been written to help the reliability analyst produce models of complex systems. Trimming is easy to implement and its error bound easy to compute; hence, the method lends itself to inclusion in an automatic model generator.
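
    The flavor of the approach can be seen in a small sketch: solve a reliability model exactly, then solve a trimmed version in which deep, low-probability states are conservatively mapped to failure, and compare the two failure probabilities. The sketch uses a pure Markov special case and an invented trimming rule, so it illustrates the idea of bounded model reduction rather than the paper's actual algorithm.

```python
# Sketch of the idea behind trimming: solve a small continuous-time Markov
# reliability model, then a trimmed version where any fault after the first
# recovery is conservatively counted as failure. Model and trimming rule
# are invented for illustration, not the paper's method.
import numpy as np
from scipy.linalg import expm

LAM, MU, T = 1e-4, 1.0, 10.0   # fault rate, recovery rate, mission time

# Full model states: good, fault-active, recovered, second-fault-active, failed.
Q_full = np.array([
    [-LAM,       LAM,   0.0,       0.0,  0.0],
    [ 0.0, -(LAM+MU),    MU,       0.0,  LAM],
    [ 0.0,       0.0,  -LAM,       LAM,  0.0],
    [ 0.0,       0.0,    MU, -(LAM+MU),  LAM],
    [ 0.0,       0.0,   0.0,       0.0,  0.0],
])

# Trimmed model: the deep "second-fault-active" state is removed and its
# probability mass is routed directly to failure (a conservative bound).
Q_trim = np.array([
    [-LAM,       LAM,   0.0,  0.0],
    [ 0.0, -(LAM+MU),    MU,  LAM],
    [ 0.0,       0.0,  -LAM,  LAM],
    [ 0.0,       0.0,   0.0,  0.0],
])

p_full = expm(Q_full * T)[0, -1]   # P(failed by T), full model
p_trim = expm(Q_trim * T)[0, -1]   # conservative (upper-bound) estimate
print(f"full: {p_full:.3e}  trimmed: {p_trim:.3e}  gap: {p_trim - p_full:.3e}")
```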

  15. A simplified method of evaluating the stress wave environment of internal equipment

    NASA Technical Reports Server (NTRS)

    Colton, J. D.; Desmond, T. P.

    1979-01-01

    A simplified method called the transfer function technique (TFT) was devised for evaluating the stress wave environment in a structure containing internal equipment. The TFT consists of following the initial in-plane stress wave that propagates through a structure subjected to a dynamic load and characterizing how the wave is altered as it is transmitted through intersections of structural members. As a basis for evaluating the TFT, impact experiments and detailed stress wave analyses were performed for structures with two, three, or more members. Transfer functions that relate the wave transmitted through an intersection to the incident wave were deduced from the predicted wave response. By sequentially applying these transfer functions to a structure with several intersections, it was found that the environment produced by the initial stress wave propagating through the structure can be approximated well. The TFT can be used as a design tool, or as an analytical tool to determine whether a more detailed wave analysis is warranted.
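
    The sequential character of the technique is easy to sketch: represent the incident wave by its spectrum and multiply it by one transfer function per intersection it crosses. The transfer functions below are invented low-pass factors used purely for illustration, not values deduced from the experiments.

```python
# Sketch of sequentially applying intersection transfer functions to an
# incident stress wave, in the spirit of the TFT. The transfer functions
# here are made-up frequency-domain attenuations, purely illustrative.
import numpy as np

freqs = np.linspace(100.0, 10000.0, 200)    # Hz
incident = 1.0 / (1.0 + freqs / 2000.0)     # incident wave spectrum (arbitrary)

def joint_transfer(cutoff_hz):
    """A simple low-pass-like transmission factor for one intersection."""
    return 1.0 / np.sqrt(1.0 + (freqs / cutoff_hz) ** 2)

# The wave passes through three intersections on its way to the equipment.
transmitted = incident.copy()
for cutoff in (4000.0, 3000.0, 5000.0):
    transmitted = transmitted * joint_transfer(cutoff)

print("peak transmitted amplitude:", transmitted.max())
```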

  16. Collective motion in prolate γ-rigid nuclei within minimal length concept via a quantum perturbation method

    NASA Astrophysics Data System (ADS)

    Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.

    2018-05-01

    Based on the minimal length concept inspired by Heisenberg algebra, a closed analytical formula is derived for the energy spectrum of the prolate γ-rigid Bohr-Mottelson Hamiltonian of nuclei, within a quantum perturbation method (QPM), by considering a scaled Davidson potential in the β shape variable. In the resulting solution, called X(3)-D-ML, the ground state and the first β-band are studied as functions of the free parameters. Introducing the minimal length concept together with a QPM makes the model very flexible and a powerful approach for describing the nuclear collective excitations of a variety of vibrational-like nuclei. The introduction of scaling parameters in the Davidson potential enables us to obtain a physical minimum of this potential, in contrast with previous works. The analysis of the corrected wave function, as well as the probability density distribution, shows that the minimal length parameter has a physical upper bound limit.
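
    The perturbative machinery itself is generic and can be illustrated numerically: diagonalize an unperturbed Hamiltonian containing a Davidson-type potential on a grid and evaluate the first-order energy shift ⟨ψ|H′|ψ⟩ for a small correction term. The sketch below does exactly that; the perturbation chosen is a stand-in, and none of this reproduces the paper's closed X(3)-D-ML formula.

```python
# Generic illustration of a quantum perturbation step: diagonalize a 1-D
# Hamiltonian with a Davidson-type potential V(b) = b^2 + b0^4 / b^2 on a
# grid, then compute the first-order shift <psi|H'|psi> for a small
# perturbation. A stand-in for the QPM machinery only, not X(3)-D-ML.
import numpy as np

n, b_max, b0 = 2000, 10.0, 1.0
b = np.linspace(1e-3, b_max, n)
h = b[1] - b[0]

V = b**2 + b0**4 / b**2                  # Davidson potential
main = 2.0 / h**2 + V                    # finite-difference kinetic + potential
off = -1.0 / h**2 * np.ones(n - 1)       # (units with hbar = 2m = 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)
ground = psi[:, 0] / np.sqrt(h)          # grid-normalized ground state

# A small perturbation, standing in for a minimal-length correction term.
eps = 1e-3
H_prime = eps * b**4
E1 = h * np.sum(ground**2 * H_prime)     # first-order shift <psi|H'|psi>
print(f"E0 = {E[0]:.4f}, first-order shift = {E1:.6f}")
```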

  17. Multivariate longitudinal data analysis with censored and intermittent missing responses.

    PubMed

    Lin, Tsung-I; Lachos, Victor H; Wang, Wan-Lun

    2018-05-08

    The multivariate linear mixed model (MLMM) has emerged as an important analytical tool for longitudinal data with multiple outcomes. However, the analysis of multivariate longitudinal data can be complicated by the presence of censored measurements, caused by a detection limit of the assay, in combination with unavoidable missing values arising when subjects intermittently miss some of their scheduled visits. This paper presents a generalization of the MLMM approach, called the MLMM-CM, for a joint analysis of multivariate longitudinal data with censored and intermittently missing responses. A computationally feasible expectation-maximization-based procedure is developed to carry out maximum likelihood estimation within the MLMM-CM framework. Moreover, the asymptotic standard errors of the fixed effects are explicitly obtained via the information-based method. We illustrate our methodology using simulated data and a case study from an AIDS clinical trial. Experimental results reveal that the proposed method provides more satisfactory performance than the traditional MLMM approach. Copyright © 2018 John Wiley & Sons, Ltd.
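
    A univariate toy version conveys the EM idea behind such methods: when a value falls below the detection limit, the E-step replaces it by the moments of a normal distribution truncated at that limit. The sketch below fits a mean and variance this way; the actual MLMM-CM operates on full multivariate mixed models, so this is only an analogy.

```python
# Toy univariate analogue of the EM idea behind censored-data likelihood
# methods: fit a normal mean/variance when some values are left-censored
# at a detection limit. The real MLMM-CM handles multivariate mixed
# models; this sketch only shows the truncated-normal E-step.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_mu, true_sigma, limit = 2.0, 1.0, 1.5
x = rng.normal(true_mu, true_sigma, 500)
censored = x < limit                 # below detection limit: value unknown
obs = x[~censored]
n, n_cens = x.size, censored.sum()

mu, sigma = obs.mean(), obs.std()    # crude start, biased upward
for _ in range(200):
    # E-step: moments of a normal truncated above at the detection limit.
    alpha = (limit - mu) / sigma
    lam = norm.pdf(alpha) / norm.cdf(alpha)
    e_x = mu - sigma * lam                               # E[X | X < limit]
    var_trunc = sigma**2 * (1 - alpha * lam - lam**2)    # Var[X | X < limit]
    e_x2 = var_trunc + e_x**2                            # E[X^2 | X < limit]
    # M-step: complete-data maximum likelihood updates.
    mu_new = (obs.sum() + n_cens * e_x) / n
    var_new = (np.sum((obs - mu_new) ** 2)
               + n_cens * (e_x2 - 2 * mu_new * e_x + mu_new**2)) / n
    mu, sigma = mu_new, np.sqrt(var_new)

print(f"EM estimates: mu = {mu:.3f}, sigma = {sigma:.3f} (truth: 2.0, 1.0)")
```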

  18. Burstiness and tie activation strategies in time-varying social networks

    NASA Astrophysics Data System (ADS)

    Ubaldi, Enrico; Vezzani, Alessandro; Karsai, Márton; Perra, Nicola; Burioni, Raffaella

    2017-04-01

    The recent developments in the field of social networks shifted the focus from static to dynamical representations, calling for new methods for their analysis and modelling. Observations in real social systems have identified two main mechanisms that play a primary role in networks' evolution and influence ongoing spreading processes: the strategies individuals adopt when selecting between new or old social ties, and the bursty nature of the social activity setting the pace of these choices. We introduce a time-varying network model accounting for both tie selection and burstiness, and we analytically study its phase diagram. The interplay of the two effects is non-trivial and, interestingly, the effects of burstiness might be suppressed in regimes where individuals exhibit a strong preference towards previously activated ties. The results are tested against numerical simulations and compared with two empirical datasets, with very good agreement. Consequently, the framework provides a principled method to classify the temporal features of real networks, and thus yields new insights to elucidate the effects of social dynamics on spreading processes.

  19. A new numerical approximation of the fractal ordinary differential equation

    NASA Astrophysics Data System (ADS)

    Atangana, Abdon; Jain, Sonal

    2018-02-01

    The concept of a fractal medium is present in several real-world problems, for instance in the geological formations that constitute the well-known bodies of subsurface water called aquifers. However, little attention has been devoted to modelling, for instance, the flow of a fluid within these media. We deem it important to remind the reader that the concept of the fractal derivative is not meant to represent fractal shapes but to describe the movement of a fluid within these media. Since this class of ordinary differential equations is highly complex to solve analytically, we present a novel numerical scheme that allows one to solve fractal ordinary differential equations. An error analysis of the method is also presented. The application of the method and its numerical approximation are presented for a fractal-order differential equation. The stability and the convergence of the numerical schemes are investigated in detail. Some exact solutions of fractal-order differential equations are also presented, and finally some numerical simulations are shown.
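
    Under the common definition of the fractal derivative, df/dt^α = (1/(α t^(α−1))) df/dt, a fractal ODE dy/dt^α = g(t, y) can be rewritten as the ordinary ODE dy/dt = α t^(α−1) g(t, y) and handed to any classical integrator. The sketch below does this for dy/dt^α = λy, whose exact solution y₀ exp(λ t^α) makes the check easy; the paper's own scheme and error analysis are more sophisticated than this illustration.

```python
# One standard route to a fractal ODE dy/dt^alpha = g(t, y): with the
# common definition of the fractal derivative it becomes the ordinary ODE
# dy/dt = alpha * t**(alpha - 1) * g(t, y), integrated here with classical
# RK4. This is only the simplest illustration, not the paper's scheme.
import math

alpha, lam = 0.8, -1.0
g = lambda t, y: lam * y                 # dy/dt^alpha = lam * y

def rhs(t, y):
    return alpha * t ** (alpha - 1.0) * g(t, y)

# Classical fourth-order Runge-Kutta, starting just off t = 0 to avoid
# the singular t**(alpha - 1) factor at the origin.
t, y, h = 1e-6, 1.0, 1e-3
while t < 1.0:
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, y + h * k1 / 2)
    k3 = rhs(t + h / 2, y + h * k2 / 2)
    k4 = rhs(t + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h

exact = math.exp(lam * t ** alpha)       # exact solution y0 * exp(lam * t^alpha)
print(f"numerical: {y:.6f}, exact: {exact:.6f}")
```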

  20. Systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a service representative

    DOEpatents

    Harris, Scott H.; Johnson, Joel A.; Neiswanger, Jeffery R.; Twitchell, Kevin E.

    2004-03-09

    The present invention includes systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a customer service representative. In one embodiment of the invention, a system configured to distribute a telephone call within a network includes a distributor adapted to connect with a telephone system, the distributor being configured to connect a telephone call using the telephone system and to output the telephone call and the associated data of the telephone call; and a plurality of customer service representative terminals connected with the distributor, a selected customer service representative terminal being configured to receive the telephone call and the associated data, the distributor and the selected customer service representative terminal being configured to synchronize application of the telephone call and associated data from the distributor to the selected customer service representative terminal.
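
    The routing idea reads naturally as a small piece of software: a distributor selects a representative terminal and delivers the call together with its associated data so that both arrive in sync. The sketch below is an invented, drastically simplified rendering of that idea; the class names, the round-robin selection rule, and the data fields are all assumptions, not the patented system.

```python
# Minimal sketch of the routing idea in the abstract: the distributor hands
# a call and its associated data to a selected representative terminal as
# one synchronized unit. All names and the round-robin selection rule are
# invented for illustration; this is not the patented system.
import itertools
from dataclasses import dataclass

@dataclass
class Call:
    caller_id: str
    associated_data: dict      # e.g. the account record for a screen pop

class RepresentativeTerminal:
    def __init__(self, name):
        self.name = name

    def receive(self, call):
        # Voice and data arrive together, so the screen pop stays in sync.
        print(f"{self.name} <- call from {call.caller_id}, "
              f"data = {call.associated_data}")

class Distributor:
    def __init__(self, terminals):
        self._terminals = itertools.cycle(terminals)   # simple selection rule

    def route(self, call):
        next(self._terminals).receive(call)

distributor = Distributor([RepresentativeTerminal("rep-1"),
                           RepresentativeTerminal("rep-2")])
distributor.route(Call("555-0100", {"account": "A-17"}))
distributor.route(Call("555-0101", {"account": "B-42"}))
```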
